As proposed in #10, we aim to support multiple LLM providers (OpenAI, Anthropic, Ollama, etc.) under a unified interface for `ros-mcp-client`.
Approach 1: Custom Base + Provider Classes (Current Plan)
Structure:
```
clients/
├── base/        # Defines BaseLLMClient
├── openai/
├── anthropic/
├── ollama/
└── gemini/
```
How it works:
- `BaseLLMClient` defines high-level methods like `find_tools()`, `process_query()`, `main_loop()`.
- Each provider (e.g., `OpenAIClient`) implements low-level methods: `send_message()`, `stream_events()`, `tool_call()`.
- The base client accepts a provider instance via composition.
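To make the composition concrete, here is a minimal sketch of the split described above. The method names (`send_message()`, `process_query()`) come from this proposal; `LLMProvider` and `EchoProvider` are illustrative names, and a real `OpenAIClient` would call the provider SDK instead of echoing.

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Low-level interface each backend (OpenAI, Anthropic, ...) implements."""

    @abstractmethod
    def send_message(self, messages: list[dict]) -> str: ...


class BaseLLMClient:
    """High-level client; a provider instance is injected via composition."""

    def __init__(self, provider: LLMProvider):
        self.provider = provider

    def process_query(self, query: str) -> str:
        # High-level flow: wrap the user query, delegate to the provider.
        messages = [{"role": "user", "content": query}]
        return self.provider.send_message(messages)


class EchoProvider(LLMProvider):
    """Stand-in provider for local testing; no API calls."""

    def send_message(self, messages: list[dict]) -> str:
        return f"echo: {messages[-1]['content']}"


client = BaseLLMClient(EchoProvider())
print(client.process_query("list ROS topics"))  # → echo: list ROS topics
```

Swapping providers is then a constructor argument, with no change to the high-level loop.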
Pros:
- Full control over interface design.
- Lightweight dependency footprint.
- Easy to debug and extend incrementally.
Cons:
- Each provider must be manually implemented and maintained.
- Requires duplicated setup (auth, configuration, etc.) for every provider.
- Slower onboarding for new LLM backends.
Approach 2: Using LangGraph + LangChain-MCP-Adapters
Overview:
Leverage LangChain MCP Adapters to unify LLMs and tools automatically under LangGraph.
How it works:
- LangGraph provides built-in integration for multiple LLM providers (OpenAI, Anthropic, etc.).
- The MCP adapter layer automatically exposes MCP-compatible tools as LangChain tools.
- The `ros-mcp-client` simply registers MCP endpoints; LangGraph handles orchestration and provider switching via configuration.
Example Setup (illustrative; exact APIs vary by version — this assumes `langchain-mcp-adapters`' `MultiServerMCPClient` and LangGraph's prebuilt ReAct agent, and the `ros-mcp-server` launch command is a placeholder):

```python
import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


async def main():
    # Create the provider dynamically; switching providers is a one-line change
    llm = ChatOpenAI(model="gpt-4o")

    # Connect to the ros-mcp-server and expose its MCP tools as LangChain tools
    client = MultiServerMCPClient(
        {"ros": {"command": "ros-mcp-server", "args": [], "transport": "stdio"}}
    )
    ros_tools = await client.get_tools()

    # Build the agent graph and run a query
    agent = create_react_agent(llm, ros_tools)
    await agent.ainvoke({"messages": [("user", "move robot arm to position A")]})


asyncio.run(main())
```
Pros:
- Provider-agnostic: switch providers via config (no new class needed).
- Built-in streaming, session, and error management.
- Compatible with both MCP servers and standard LangChain tools.
- Future-proof: supports function-calling, memory, and orchestration.
Cons:
- Higher dependency overhead (LangGraph, LangChain).
- Less fine-grained control compared to custom implementation.
- Tighter coupling with LangChain’s evolving APIs.
Suggested Path
Start with Approach 1 for core architectural clarity,
then add Approach 2 as a plug-in alternative once stability is ensured.
This would allow developers to:
- Run pure MCP-native clients (Approach 1)
- Or opt for LangGraph-backed orchestration for advanced agent setups (Approach 2)
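A selection like this could live behind a small factory keyed on configuration. The sketch below is hypothetical — `NativeMCPClient` and `LangGraphClient` are placeholder names for the two backends, not existing modules:

```python
class NativeMCPClient:
    """Approach 1: MCP-native client with hand-written provider classes."""

    def __init__(self, provider: str):
        self.provider = provider


class LangGraphClient:
    """Approach 2: thin wrapper delegating orchestration to LangGraph."""

    def __init__(self, model: str):
        self.model = model


def make_client(config: dict):
    """Pick a backend at startup from config; defaults to the native client."""
    backend = config.get("backend", "native")
    if backend == "native":
        return NativeMCPClient(provider=config.get("provider", "openai"))
    if backend == "langgraph":
        return LangGraphClient(model=config.get("model", "gpt-4o"))
    raise ValueError(f"unknown backend: {backend}")


print(type(make_client({"backend": "langgraph"})).__name__)  # → LangGraphClient
```

Keeping the choice in config means neither path leaks into the other's code.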