
Commit 401fe22 (1 parent: 910e837)

[FA-25] Adding documentation on client-server interaction (#13)

* docs: adding documentation on client-server interaction
* docs: updating main readme
* docs: adding documentation for writing a client with FastMCP
* feat: adding missing step to query processing
* docs: updating title
* chore: fixing pre-commit issues

6 files changed: +330 −6 lines changed

README.md

Lines changed: 20 additions & 6 deletions

````diff
@@ -4,7 +4,7 @@

 An LLM agent built using Model Context Protocol to play online games

-# Pre-requisites
+## Pre-requisites

 - `uv` installable via brew.
 - [Claude Desktop](https://claude.ai/download)
@@ -22,9 +22,9 @@ Set up project:
 make project-setup
 ```

-# Quick Start
+## Quick Start

-1. Install server in Claude Desktop:
+### 1. Install server in Claude Desktop:

 ```bash
 cd agent_uno
@@ -40,13 +40,13 @@ uv run mcp install server.py
 > [!NOTE]
 > If you have Claude Desktop open when you run the above command, you will need to restart it for the server to be available.

-2. Interact with MCP server with Claude Desktop.
+### 2. Interact with MCP server with Claude Desktop.

-:chess_pawn: Agent vs. Stockfish Bot :robot::
+#### :chess_pawn: Agent vs. Stockfish Bot :robot::

 > Can you please log into the Chess API and then create a game against an AI. Once the game has been created the opponent will make the first move. Can you use the previous moves and the layout of the board to determine what an optimal next move will be and then make your own move playing continuously back and forth until completion? Please use the UCI chess standard for your moves, e.g., e2e4.

-:chess_pawn: Agent vs. User :adult::
+#### :chess_pawn: Agent vs. User :adult::

 1. Ask agent to login and create a game against a user:

@@ -70,3 +70,17 @@ Application tests for the MCP server can be run with the following command:
 ```bash
 uv run application_tests/tests.py
 ```
+
+## Documentation
+
+Documentation for this project can be found in the [docs](docs) directory. The following documentation is available:
+
+* [direct_execution.md](docs/direct_execution.md): Documentation on how to run the server directly with `sse` transport and HTTP requests.
+* [client-server-interaction.md](docs/client-server-interaction.md): Documentation on how MCP clients and servers interact with each other and how clients process user queries.
+
+## Useful Links
+* [Model Context Protocol](https://modelcontextprotocol.io/)
+* [Client Development Docs](https://modelcontextprotocol.io/quickstart/client)
+* [Server Development Docs](https://modelcontextprotocol.io/quickstart/server)
+* [FastMCP](https://github.com/modelcontextprotocol/python-sdk)
+* [Claude Messages API](https://github.com/anthropics/anthropic-sdk-python/blob/8b244157a7d03766bec645b0e1dc213c6d462165/src/anthropic/resources/messages/messages.py)
````

docs/client-server-interaction.md

Lines changed: 310 additions & 0 deletions
# Client Server Interaction

## How do clients and servers communicate?

1. The client sends an initialisation request to the server and receives the protocol version and capabilities from the server.

![mcp-initialisation-request](imgs/mcp-initialisation-request.png)

2. The client sends an initialised notification to the server; this acts as an acknowledgement of the server's response.

![mcp-initialised-notification](imgs/mcp-initialised-notification.png)

3. Normal message exchange begins. The client and server can send messages to each other using the request-response pattern or notifications.

![mcp-message-exchange](imgs/mcp-message-exchange.png)
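On the wire, these three steps are plain JSON-RPC 2.0 messages. The following sketch shows their rough shape; the payload values (protocol version date, client info, request ids) are illustrative, not taken from this project:

```python
# Sketch of the JSON-RPC 2.0 messages exchanged during initialisation.
# Values are illustrative only; the real capability payloads depend on
# the client and server.

# 1. Client -> server: initialisation request (has an "id", so the
#    server replies with its protocol version and capabilities).
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# 2. Client -> server: initialised notification. A notification carries
#    no "id", so the server sends no response.
initialized_notification = {
    "jsonrpc": "2.0",
    "method": "notifications/initialized",
}

# 3. Normal message exchange begins, e.g. a request to list tools.
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/list",
}
```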
The MCP documentation for [Client Developers](https://modelcontextprotocol.io/quickstart/client) describes how to connect to a server as a client. The following code snippet demonstrates how to implement this with the `stdio` transport method:

```python
async def connect_to_server(self, server_script_path: str):
    """Connect to an MCP server

    Args:
        server_script_path: Path to the server script (.py or .js)
    """
    is_python = server_script_path.endswith('.py')
    is_js = server_script_path.endswith('.js')
    if not (is_python or is_js):
        raise ValueError("Server script must be a .py or .js file")

    command = "python" if is_python else "node"
    server_params = StdioServerParameters(
        command=command,
        args=[server_script_path],
        env=None
    )

    stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
    self.stdio, self.write = stdio_transport
    self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))

    await self.session.initialize()

    # List available tools
    response = await self.session.list_tools()
    tools = response.tools
    print("\nConnected to server with tools:", [tool.name for tool in tools])
```
Breaking this down, we have:

1. Create server parameters for handling `stdio` transport:

    ```python
    is_python = server_script_path.endswith('.py')
    is_js = server_script_path.endswith('.js')
    if not (is_python or is_js):
        raise ValueError("Server script must be a .py or .js file")

    command = "python" if is_python else "node"
    server_params = StdioServerParameters(
        command=command,
        args=[server_script_path],
        env=None
    )
    ```

2. Create a `stdio` client session:

    ```python
    stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
    self.stdio, self.write = stdio_transport
    self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))
    ```

3. Perform initialisation with the server:

    ```python
    await self.session.initialize()
    ```

4. Begin message exchange with the server:

    ```python
    # List available tools
    response = await self.session.list_tools()
    tools = response.tools
    print("\nConnected to server with tools:", [tool.name for tool in tools])
    ```
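The method above belongs to a client class that is assumed to hold an `AsyncExitStack` and the session state (hence `self.exit_stack` and `self.session`). A minimal sketch of that surrounding class, with illustrative names (the MCP quickstart uses a similar `MCPClient` class):

```python
import asyncio
from contextlib import AsyncExitStack


class MCPClient:
    """Hypothetical holder for the state used by connect_to_server."""

    def __init__(self):
        self.session = None
        self.stdio = None
        self.write = None
        # The AsyncExitStack keeps the stdio transport and ClientSession
        # context managers open until cleanup() is called.
        self.exit_stack = AsyncExitStack()

    async def cleanup(self):
        # Closes the session and transport in reverse order of entry.
        await self.exit_stack.aclose()
```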
## How does the client process user queries?

The MCP documentation for [Client Developers](https://modelcontextprotocol.io/quickstart/client) describes how to process user queries during message exchange, using the LLM to decide which tools should be called to fulfil the user's request. The following code snippet demonstrates how to implement this:
```python
async def process_query(self, query: str) -> str:
    """Process a query using Claude and available tools"""
    messages = [
        {
            "role": "user",
            "content": query
        }
    ]

    response = await self.session.list_tools()
    available_tools = [{
        "name": tool.name,
        "description": tool.description,
        "input_schema": tool.inputSchema
    } for tool in response.tools]

    # Initial Claude API call
    response = self.anthropic.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1000,
        messages=messages,
        tools=available_tools
    )

    # Process response and handle tool calls
    final_text = []

    assistant_message_content = []
    for content in response.content:
        if content.type == 'text':
            final_text.append(content.text)
            assistant_message_content.append(content)
        elif content.type == 'tool_use':
            tool_name = content.name
            tool_args = content.input

            # Execute tool call
            result = await self.session.call_tool(tool_name, tool_args)
            final_text.append(f"[Calling tool {tool_name} with args {tool_args}]")

            assistant_message_content.append(content)
            messages.append({
                "role": "assistant",
                "content": assistant_message_content
            })
            messages.append({
                "role": "user",
                "content": [
                    {
                        "type": "tool_result",
                        "tool_use_id": content.id,
                        "content": result.content
                    }
                ]
            })

            # Get next response from Claude
            response = self.anthropic.messages.create(
                model="claude-3-5-sonnet-20241022",
                max_tokens=1000,
                messages=messages,
                tools=available_tools
            )

            final_text.append(response.content[0].text)

    return "\n".join(final_text)
```
Breaking this down, we have:

1. Format the user message:

    ```python
    messages = [
        {
            "role": "user",
            "content": query
        }
    ]
    ```

    > The `messages` object stores the message history, allowing the LLM to access prior context.

2. List the available tools:

    ```python
    response = await self.session.list_tools()
    ```

    > This is the same as the HTTP request to list tools in the [direct execution documentation](direct_execution.md).

3. Format the available tools to be handed to the Anthropic API:

    ```python
    available_tools = [{
        "name": tool.name,
        "description": tool.description,
        "input_schema": tool.inputSchema
    } for tool in response.tools]
    ```

4. Make a request to the Anthropic API, passing the user message and available tools objects:

    ```python
    response = self.anthropic.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1000,
        messages=messages,
        tools=available_tools
    )
    ```

5. Iterate through the response and process either the `text` or `tool_use` content type:

    ```python
    for content in response.content:
        if content.type == 'text':
            ...
        elif content.type == 'tool_use':
            ...
    ```

6. If the response content type is `tool_use`, execute the tool:

    ```python
    elif content.type == 'tool_use':
        tool_name = content.name
        tool_args = content.input

        # Execute tool call
        result = await self.session.call_tool(tool_name, tool_args)
        final_text.append(f"[Calling tool {tool_name} with args {tool_args}]")
    ```

7. Update the `messages` object with the next context:

    ```python
    # assistant response message
    messages.append({
        "role": "assistant",
        "content": assistant_message_content
    })
    # user message with tool result
    messages.append({
        "role": "user",
        "content": [
            {
                "type": "tool_result",
                "tool_use_id": content.id,
                "content": result.content
            }
        ]
    })
    ```

8. Finally, re-prompt the LLM with the updated `messages` object to get a final response for the user:

    ```python
    response = self.anthropic.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1000,
        messages=messages,
        tools=available_tools
    )

    final_text.append(response.content[0].text)
    ```
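Putting steps 1 and 7 together: after a single tool call, the `messages` history has roughly the following shape. All names and values here are made up purely for illustration:

```python
# Illustrative shape of the messages list after one tool-call round trip.
# The query, tool name, id, and result content are hypothetical examples.
messages = [
    # Step 1: the original user query
    {"role": "user", "content": "Log in to the Chess API"},
    # Step 7a: the assistant turn, containing text and tool_use blocks
    {"role": "assistant", "content": [
        {"type": "text", "text": "I'll log in now."},
        {"type": "tool_use", "id": "toolu_01", "name": "login", "input": {}},
    ]},
    # Step 7b: the tool result, sent back as a user turn
    {"role": "user", "content": [
        {"type": "tool_result", "tool_use_id": "toolu_01", "content": "ok"},
    ]},
]
```

Note how the `tool_use_id` in the result references the `id` of the preceding `tool_use` block; this is how the API pairs each result with the call that produced it.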
The following diagram illustrates the end-to-end process of asking the agent to log in to the Lichess API, from user query to response, using the MCP server and LLM:

![mcp-message-exchange-e2e](imgs/mcp-message-exchange-e2e.png)
## Simplified Client Connection using FastMCP

The [FastMCP documentation](https://github.com/modelcontextprotocol/python-sdk?tab=readme-ov-file#writing-mcp-clients) outlines how the above two code snippets can be performed in a single block:

```python
from mcp import ClientSession, StdioServerParameters, types
from mcp.client.stdio import stdio_client

# Create server parameters for stdio connection
server_params = StdioServerParameters(
    command="python",  # Executable
    args=["example_server.py"],  # Optional command line arguments
    env=None,  # Optional environment variables
)


async def run():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialize the connection
            await session.initialize()

            # List available prompts
            prompts = await session.list_prompts()

            # Get a prompt
            prompt = await session.get_prompt(
                "example-prompt", arguments={"arg1": "value"}
            )

            # List available resources
            resources = await session.list_resources()

            # List available tools
            tools = await session.list_tools()

            # Read a resource
            content, mime_type = await session.read_resource("file://some/path")

            # Call a tool
            result = await session.call_tool("tool-name", arguments={"arg1": "value"})


if __name__ == "__main__":
    import asyncio

    asyncio.run(run())
```
Four binary image files added under `docs/imgs/` (101 KB, 118 KB, 350 KB, and `mcp-message-exchange.png`, 132 KB).
