- Model Context Protocol
- a standard protocol for connecting tools and context to AI applications (like a USB-C port for AI apps)
- custom integrations of your choice
Architecture (client-server)
- MCP host (e.g. Cursor), which starts an MCP client process
- MCP client makes requests to the MCP server and receives responses
- MCP servers
- Prompts
- Resources (data/content interaction)
- Tools (functions, APIs, image processing)
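To make the client-server exchange concrete, here is a sketch of the JSON-RPC messages involved when a client asks a server which tools it offers. The method name `tools/list` and the `inputSchema` field follow the MCP spec; the `get_weather` tool itself is a made-up example.

```python
# Hypothetical MCP tool-discovery exchange, written out as plain dicts.
# The client sends a JSON-RPC request over the transport (stdio or HTTP):
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# The server replies with the tools it exposes and their input schemas,
# which the client can later hand to the LLM as a tool manifest:
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_weather",
                "description": "Get the current weather for a location",
                "inputSchema": {
                    "type": "object",
                    "properties": {"location": {"type": "string"}},
                    "required": ["location"],
                },
            }
        ]
    },
}
```

Prompts and resources are discovered the same way, via their own list methods.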
1. MCP Client
🔹 What it is:
The MCP client is the part that interacts with the LLM (e.g., GPT) directly.
🔹 Responsibilities:
- Sends the model a tool manifest (i.e., what tools are available and how to use them).
- Receives tool calls from the model and forwards them to the MCP server.
- For example:
  - “Here’s a `weather` tool. You can call `get_weather(location)`.”
  - If the model then decides to call `get_weather("London")`, the client forwards this to the MCP server.
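The forwarding step can be sketched as a small helper on the client side. The `tools/call` method name matches the MCP spec; the function name and `request_id` default are illustrative.

```python
def forward_tool_call(name: str, arguments: dict, request_id: int = 2) -> dict:
    """Wrap a tool call emitted by the model as an MCP JSON-RPC request
    that the client can send on to the MCP server."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# The model decided to call get_weather("London"); the client just relays it:
req = forward_tool_call("get_weather", {"location": "London"})
```

The client never runs the tool itself; it only translates the model's decision into a protocol message.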
2. MCP Server
🔹 What it is:
The MCP server is a proxy or translator that connects the model’s tool calls to real external services/APIs.
🔹 Responsibilities:
- Accepts tool invocation requests from the MCP client.
- Maps those requests to actual HTTP calls to the external service.
- May also validate inputs, authenticate, or enrich the request.
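A minimal sketch of that server-side dispatch, assuming a hypothetical weather API. The HTTP layer is injected as a plain function so the example runs without a network; in a real server it would be an actual HTTP client call.

```python
def handle_tools_call(params: dict, http_get) -> dict:
    """Map an incoming MCP tools/call request onto a real external service."""
    name = params["name"]
    args = params["arguments"]
    if name == "get_weather":
        # Validate inputs before touching the external API.
        if "location" not in args:
            raise ValueError("missing required argument: location")
        # Translate the tool call into an HTTP request (URL is a stand-in).
        body = http_get(f"https://api.example.com/weather?q={args['location']}")
        # MCP tool results are returned as a list of content items.
        return {"content": [{"type": "text", "text": body}]}
    raise ValueError(f"unknown tool: {name}")

# Stubbed HTTP layer so the sketch is self-contained:
result = handle_tools_call(
    {"name": "get_weather", "arguments": {"location": "London"}},
    http_get=lambda url: "London: 12C, cloudy",
)
```

Authentication and request enrichment would slot in before the `http_get` call.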
Steps:
- Client fetches tools from MCP server and stores them
- User enters prompt
- Client sends prompt + tools list to LLM
- LLM responds with tool call(s) and arguments
- Client sends tool call(s) to MCP server
- MCP server invokes real external service
- Response returns to client → to LLM
- LLM generates final answer for the user
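The steps above can be sketched end to end. Every component here is a stub (the LLM, the server, and the external service are fakes); only the message flow between them mirrors the real loop.

```python
def fake_llm(prompt, tools, tool_result=None):
    """Stand-in for the model: first pass requests a tool,
    second pass (once a tool result is available) answers the user."""
    if tool_result is None:
        return {"tool_call": {"name": "get_weather",
                              "arguments": {"location": "London"}}}
    return {"answer": f"The weather in London: {tool_result}"}

def fake_server(call):
    """Stand-in for the MCP server invoking the real external service."""
    return "12C, cloudy"

tools = [{"name": "get_weather"}]          # 1. client fetched tools earlier
prompt = "What's the weather in London?"   # 2. user enters prompt
step = fake_llm(prompt, tools)             # 3-4. LLM replies with a tool call
result = fake_server(step["tool_call"])    # 5-6. client -> server -> service
final = fake_llm(prompt, tools, result)    # 7-8. result returns to the LLM,
                                           #      which produces the final answer
```

The important point the loop shows: the LLM is called twice, once to decide on the tool and once to turn the tool result into an answer.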