LangChain

LangChain supports MCP through the langchain-mcp-adapters library, letting you add Toolcog’s API capabilities to LangChain agents and LangGraph workflows.

Prerequisites

Installation

```sh
pip install langchain langchain-mcp-adapters langgraph
```

Connecting to Toolcog

The langchain-mcp-adapters library provides MultiServerMCPClient for connecting to MCP servers:

```python
from langchain_mcp_adapters.client import MultiServerMCPClient

# Connect to Toolcog
client = MultiServerMCPClient({
    "toolcog": {
        "url": "https://mcp.toolcog.com",
        "transport": "sse"
    }
})

# Get tools in a LangChain-compatible format
async with client:
    tools = await client.get_tools()
    print(f"Available tools: {[t.name for t in tools]}")
```
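The objects returned by get_tools() are ordinary LangChain tools, so you can inspect or filter them by name before handing them to an agent. Below is a minimal offline sketch of that pattern, using stand-in objects in place of real Toolcog tools (find_api, learn_api, and call_api are the tool names used elsewhere in this guide):

```python
from types import SimpleNamespace

# Stand-ins for the tool objects client.get_tools() would return;
# only the .name attribute matters for this sketch.
tools = [
    SimpleNamespace(name="find_api"),
    SimpleNamespace(name="learn_api"),
    SimpleNamespace(name="call_api"),
]

# Keep only the tools this particular agent needs.
wanted = {"find_api", "call_api"}
selected = [t for t in tools if t.name in wanted]

print([t.name for t in selected])
```

A smaller toolset keeps prompts shorter and gives the model fewer ways to go wrong.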

Using with LangGraph Agents

```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

async def run_agent():
    client = MultiServerMCPClient({
        "toolcog": {
            "url": "https://mcp.toolcog.com",
            "transport": "sse"
        }
    })
    async with client:
        tools = await client.get_tools()

        # Create a ReAct agent with Toolcog tools
        agent = create_react_agent(
            ChatOpenAI(model="gpt-4o"),
            tools
        )

        # Run the agent
        result = await agent.ainvoke({
            "messages": [
                {"role": "user", "content": "Find GitHub operations for creating issues"}
            ]
        })
        return result["messages"][-1].content
```

Stateless vs Stateful Connections

By default, MultiServerMCPClient is stateless—each tool invocation creates a fresh session:

```python
# Stateless (default) - good for most cases
client = MultiServerMCPClient({
    "toolcog": {
        "url": "https://mcp.toolcog.com",
        "transport": "sse"
    }
})
```

For workflows that need persistent state across tool calls:

```python
# Stateful - maintains session across calls
async with client.session("toolcog") as session:
    # All tool calls in this block share the same session
    result1 = await session.call_tool("find_api", {"intent": "..."})
    result2 = await session.call_tool("call_api", {"operation": "..."})
```
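The value of a shared session is that state established by one call is visible to the next. The toy model below is purely illustrative (a stand-in class, not the real MCP session API): it shows a find_api result being remembered so a later call_api in the same session can rely on it.

```python
class ToySession:
    """Illustrative stand-in for a stateful session; not the real MCP API."""

    def __init__(self):
        self.discovered = {}

    def call_tool(self, name, args):
        if name == "find_api":
            # Remember what was discovered for later calls in this session.
            self.discovered["operation"] = "github.issues.create"
            return {"operation": self.discovered["operation"]}
        if name == "call_api":
            # Sees the state only because find_api ran in this same session.
            return {"executed": self.discovered.get("operation")}

session = ToySession()
r1 = session.call_tool("find_api", {"intent": "create an issue"})
r2 = session.call_tool("call_api", {"operation": r1["operation"]})
print(r2)
```

With a stateless client, each call would start from a blank slate instead.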

Building a Complete Agent

```python
import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

async def api_agent(user_request: str) -> str:
    """An agent that can discover and execute API operations."""
    client = MultiServerMCPClient({
        "toolcog": {
            "url": "https://mcp.toolcog.com",
            "transport": "sse"
        }
    })
    async with client:
        tools = await client.get_tools()
        agent = create_react_agent(
            ChatOpenAI(model="gpt-4o"),
            tools,
            state_modifier="""You are a helpful assistant with access to API tools.

To accomplish tasks:
1. Use find_api to discover relevant operations
2. Use learn_api if you need to understand the interface
3. Use call_api to execute operations

When authorization is needed, inform the user of the URL to visit."""
        )
        result = await agent.ainvoke({
            "messages": [{"role": "user", "content": user_request}]
        })
        return result["messages"][-1].content

# Example usage
result = asyncio.run(api_agent("Create a Stripe customer with email test@example.com"))
print(result)
```

Cross-Service Workflows

Toolcog enables agents to work across multiple services in a single workflow:

```python
async def cross_service_workflow():
    client = MultiServerMCPClient({
        "toolcog": {
            "url": "https://mcp.toolcog.com",
            "transport": "sse"
        }
    })
    async with client:
        tools = await client.get_tools()
        agent = create_react_agent(
            ChatOpenAI(model="gpt-4o"),
            tools
        )
        # The agent can discover and use operations across services
        result = await agent.ainvoke({
            "messages": [{
                "role": "user",
                "content": """
1. Find my latest GitHub issues
2. Create a Notion page summarizing them
3. Send a Slack message with the link
"""
            }]
        })
        return result
```

Using Interceptors

Interceptors give you middleware-like control over tool calls:

```python
from langchain_mcp_adapters.client import MultiServerMCPClient

def logging_interceptor(tool_name, args):
    print(f"Calling {tool_name} with {args}")
    return None  # Continue with the call

def auth_interceptor(tool_name, args):
    # Add custom headers or modify requests
    if tool_name == "call_api":
        args["headers"] = {"X-Custom-Header": "value"}
    return None

client = MultiServerMCPClient(
    {"toolcog": {"url": "https://mcp.toolcog.com", "transport": "sse"}},
    interceptors=[logging_interceptor, auth_interceptor]
)
```
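The return-None-to-continue convention can be modeled as a plain dispatch loop. The sketch below is an illustrative model of the middleware pattern, not the library's internals; in particular, the short-circuit behavior (a non-None return value replacing the tool call) is an assumption made here for the example.

```python
def run_with_interceptors(interceptors, tool_name, args, call):
    # Give each interceptor a chance to observe or mutate the call.
    # Returning None continues the chain; a non-None value is assumed
    # to short-circuit and replace the tool call's result.
    for interceptor in interceptors:
        result = interceptor(tool_name, args)
        if result is not None:
            return result
    return call(tool_name, args)

def block_deletes(tool_name, args):
    if tool_name == "call_api" and args.get("method") == "DELETE":
        return {"error": "DELETE operations are disabled"}
    return None

def add_header(tool_name, args):
    args.setdefault("headers", {})["X-Custom-Header"] = "value"
    return None

def fake_call(tool_name, args):
    # Stand-in for the real tool invocation.
    return {"ok": True, "tool": tool_name, "args": args}

print(run_with_interceptors([block_deletes, add_header], "call_api",
                            {"method": "GET"}, fake_call))
```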

Catalogs

Connect to a specific catalog:

```python
client = MultiServerMCPClient({
    "toolcog-internal": {
        "url": "https://mcp.toolcog.com/mycompany/internal-apis",
        "transport": "sse"
    }
})
```

Multiple MCP Servers

Combine Toolcog with other MCP servers:

```python
client = MultiServerMCPClient({
    "toolcog": {
        "url": "https://mcp.toolcog.com",
        "transport": "sse"
    },
    "local-tools": {
        "command": "python",
        "args": ["./my_mcp_server.py"],
        "transport": "stdio"
    }
})
```

Best Practices

  1. Use async context managers — Always use async with client: to ensure proper cleanup.

  2. Let the agent iterate — ReAct agents may need multiple tool calls. Don’t limit iterations unnecessarily.

  3. Handle auth gracefully — When authorization is needed, the agent receives a URL. Instruct it to present this to users.

  4. Monitor tool calls — Use interceptors to log and monitor tool usage in production.
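Points 3 and 4 often pair with retry logic for transient network failures. Here is a minimal sketch using only the standard library, with a stand-in async tool call rather than a live MCP session:

```python
import asyncio

async def call_with_retry(tool_call, *args, attempts=3, delay=0.01):
    # Retry transient failures with a simple linear backoff.
    for attempt in range(1, attempts + 1):
        try:
            return await tool_call(*args)
        except ConnectionError:
            if attempt == attempts:
                raise
            await asyncio.sleep(delay * attempt)

# Stand-in tool call that fails twice, then succeeds.
state = {"calls": 0}

async def flaky_tool(intent):
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("transient network error")
    return {"intent": intent, "status": "ok"}

result = asyncio.run(call_with_retry(flaky_tool, "create issue"))
print(result, state["calls"])
```

In production you would wrap the real session.call_tool the same way, and log each attempt via an interceptor.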

Next Steps