LangChain supports MCP through the langchain-mcp-adapters library, letting you add Toolcog’s API capabilities to LangChain agents and LangGraph workflows.
```bash
pip install langchain langchain-mcp-adapters langgraph
```

The langchain-mcp-adapters library provides MultiServerMCPClient for connecting to MCP servers:
```python
from langchain_mcp_adapters.client import MultiServerMCPClient

# Connect to Toolcog
client = MultiServerMCPClient({
    "toolcog": {
        "url": "https://mcp.toolcog.com",
        "transport": "sse"
    }
})

# Get tools in a LangChain-compatible format
async with client:
    tools = await client.get_tools()
    print(f"Available tools: {[t.name for t in tools]}")
```

These tools plug directly into a LangGraph ReAct agent:

```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


async def run_agent():
    client = MultiServerMCPClient({
        "toolcog": {
            "url": "https://mcp.toolcog.com",
            "transport": "sse"
        }
    })

    async with client:
        tools = await client.get_tools()

        # Create a ReAct agent with Toolcog tools
        agent = create_react_agent(
            ChatOpenAI(model="gpt-4o"),
            tools
        )

        # Run the agent
        result = await agent.ainvoke({
            "messages": [
                {"role": "user", "content": "Find GitHub operations for creating issues"}
            ]
        })

        return result["messages"][-1].content
```

By default, MultiServerMCPClient is stateless—each tool invocation creates a fresh session:
```python
# Stateless (default) - good for most cases
client = MultiServerMCPClient({
    "toolcog": {
        "url": "https://mcp.toolcog.com",
        "transport": "sse"
    }
})
```

For workflows that need persistent state across tool calls:

```python
# Stateful - maintains session across calls
async with client.session("toolcog") as session:
    # All tool calls in this block share the same session
    result1 = await session.call_tool("find_api", {"intent": "..."})
    result2 = await session.call_tool("call_api", {"operation": "..."})
```
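If you want the agent's LangChain tools bound to that persistent session, rather than calling call_tool yourself, the library's load_mcp_tools helper can convert the session's tools. A minimal sketch, assuming the client configured above and that the session yielded by client.session() is accepted by load_mcp_tools:

```python
from langchain_mcp_adapters.tools import load_mcp_tools
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

async with client.session("toolcog") as session:
    # Convert the session's MCP tools into LangChain tools; every call the
    # agent makes inside this block reuses the same Toolcog session.
    tools = await load_mcp_tools(session)
    agent = create_react_agent(ChatOpenAI(model="gpt-4o"), tools)
    result = await agent.ainvoke({
        "messages": [{"role": "user", "content": "Find GitHub operations for creating issues"}]
    })
```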
Putting it all together, here is a complete agent that discovers and executes API operations:

```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


async def api_agent(user_request: str) -> str:
    """An agent that can discover and execute API operations."""
    client = MultiServerMCPClient({
        "toolcog": {
            "url": "https://mcp.toolcog.com",
            "transport": "sse"
        }
    })

    async with client:
        tools = await client.get_tools()

        agent = create_react_agent(
            ChatOpenAI(model="gpt-4o"),
            tools,
            state_modifier="""You are a helpful assistant with access to API tools.

To accomplish tasks:
1. Use find_api to discover relevant operations
2. Use learn_api if you need to understand the interface
3. Use call_api to execute operations

When authorization is needed, inform the user of the URL to visit."""
        )

        result = await agent.ainvoke({
            "messages": [{"role": "user", "content": user_request}]
        })

        return result["messages"][-1].content


# Example usage
import asyncio

result = asyncio.run(api_agent("Create a Stripe customer with email test@example.com"))
print(result)
```

Toolcog enables agents to work across multiple services in a single workflow:
```python
async def cross_service_workflow():
    client = MultiServerMCPClient({
        "toolcog": {
            "url": "https://mcp.toolcog.com",
            "transport": "sse"
        }
    })

    async with client:
        tools = await client.get_tools()

        agent = create_react_agent(
            ChatOpenAI(model="gpt-4o"),
            tools
        )

        # The agent can discover and use operations across services
        result = await agent.ainvoke({
            "messages": [{
                "role": "user",
                "content": """
                1. Find my latest GitHub issues
                2. Create a Notion page summarizing them
                3. Send a Slack message with the link
                """
            }]
        })

        return result
```

Interceptors give you middleware-like control over tool calls:
```python
from langchain_mcp_adapters.client import MultiServerMCPClient


def logging_interceptor(tool_name, args):
    print(f"Calling {tool_name} with {args}")
    return None  # Continue with the call


def auth_interceptor(tool_name, args):
    # Add custom headers or modify requests
    if tool_name == "call_api":
        args["headers"] = {"X-Custom-Header": "value"}
    return None


client = MultiServerMCPClient(
    {"toolcog": {"url": "https://mcp.toolcog.com", "transport": "sse"}},
    interceptors=[logging_interceptor, auth_interceptor]
)
```

Connect to a specific catalog:
```python
client = MultiServerMCPClient({
    "toolcog-internal": {
        "url": "https://mcp.toolcog.com/mycompany/internal-apis",
        "transport": "sse"
    }
})
```

Combine Toolcog with other MCP servers:
```python
client = MultiServerMCPClient({
    "toolcog": {
        "url": "https://mcp.toolcog.com",
        "transport": "sse"
    },
    "local-tools": {
        "command": "python",
        "args": ["./my_mcp_server.py"],
        "transport": "stdio"
    }
})
```

A few best practices when using Toolcog with LangChain:

- **Use async context managers** — Always use `async with client:` to ensure proper cleanup.
- **Let the agent iterate** — ReAct agents may need multiple tool calls; don't limit iterations unnecessarily (see the sketch after this list).
- **Handle auth gracefully** — When authorization is needed, the agent receives a URL. Instruct it to present this URL to the user.
- **Monitor tool calls** — Use interceptors to log and monitor tool usage in production.
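When a longer find_api / learn_api / call_api chain bumps into LangGraph's step limit, you can raise the recursion limit at invocation time rather than capping the agent. A minimal sketch, assuming the agent built in the examples above; the limit value shown is illustrative, not prescriptive:

```python
# Allow more agent/tool iterations for multi-step workflows by raising
# LangGraph's per-run recursion limit via the standard config dict.
result = await agent.ainvoke(
    {"messages": [{"role": "user", "content": "Create a Stripe customer with email test@example.com"}]},
    config={"recursion_limit": 50},
)
```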