LlamaIndex supports MCP through the `llama-index-tools-mcp` package, letting you add Toolcog's API capabilities to LlamaIndex agents.
```shell
pip install llama-index llama-index-tools-mcp
```

The `McpToolSpec` connects to MCP servers and exposes their tools:
```python
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

# Connect to Toolcog
mcp_client = BasicMCPClient("https://mcp.toolcog.com/sse")
mcp_tool_spec = McpToolSpec(client=mcp_client)

# Get tools as LlamaIndex FunctionTools
tools = mcp_tool_spec.to_tool_list()
print(f"Available tools: {[t.metadata.name for t in tools]}")
```

The resulting tools plug into any LlamaIndex agent, such as a ReAct agent:

```python
from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

# Connect to Toolcog
mcp_client = BasicMCPClient("https://mcp.toolcog.com/sse")
tool_spec = McpToolSpec(client=mcp_client)
tools = tool_spec.to_tool_list()

# Create a ReAct agent
agent = ReActAgent.from_tools(
    tools,
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
)

# Run the agent
response = agent.chat("Find GitHub operations for creating issues")
print(response)
```

LlamaIndex also provides a convenience function:
```python
from llama_index.tools.mcp import get_tools_from_mcp_url

# Get tools directly from a URL
tools = await get_tools_from_mcp_url("https://mcp.toolcog.com/sse")

# Use with any agent
agent = ReActAgent.from_tools(tools, llm=OpenAI(model="gpt-4o"))
```

A complete example wraps the setup in a factory function:

```python
from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

def create_api_agent():
    """Create an agent that can discover and execute API operations."""
    # Connect to Toolcog
    mcp_client = BasicMCPClient("https://mcp.toolcog.com/sse")
    tool_spec = McpToolSpec(client=mcp_client)
    tools = tool_spec.to_tool_list()

    # Create agent with a custom system prompt
    agent = ReActAgent.from_tools(
        tools,
        llm=OpenAI(model="gpt-4o"),
        system_prompt="""You are a helpful assistant with access to API tools.

To accomplish tasks:
1. Use find_api to discover relevant operations
2. Use learn_api if you need to understand the interface
3. Use call_api to execute operations

When authorization is needed, inform the user of the URL to visit.""",
        verbose=True,
    )

    return agent

# Example usage
agent = create_api_agent()
response = agent.chat("Create a new customer in Stripe with email test@example.com")
print(response)
```

LlamaIndex agents maintain conversation history:
```python
agent = create_api_agent()

# First turn: discover
response1 = agent.chat("What Stripe operations are available for customers?")
print(response1)

# Second turn: execute (the agent remembers the context)
response2 = agent.chat("Create a customer with email user@example.com")
print(response2)

# Third turn: follow up
response3 = agent.chat("Now add a payment method to that customer")
print(response3)
```

Toolcog works with any LLM supported by LlamaIndex:
```python
from llama_index.llms.anthropic import Anthropic
from llama_index.llms.openai import OpenAI

# With Claude
agent = ReActAgent.from_tools(
    tools,
    llm=Anthropic(model="claude-sonnet-4-20250514"),
)

# With GPT-4o
agent = ReActAgent.from_tools(
    tools,
    llm=OpenAI(model="gpt-4o"),
)

# With local models via Ollama
from llama_index.llms.ollama import Ollama

agent = ReActAgent.from_tools(
    tools,
    llm=Ollama(model="llama3.1"),
)
```

Connect to a specific catalog by using its URL:

```python
mcp_client = BasicMCPClient("https://mcp.toolcog.com/mycompany/internal-apis/sse")
tool_spec = McpToolSpec(client=mcp_client)
tools = tool_spec.to_tool_list()
```

Toolcog tools work alongside other LlamaIndex tools:
```python
from llama_index.core.tools import FunctionTool
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

# Toolcog tools
mcp_client = BasicMCPClient("https://mcp.toolcog.com/sse")
tool_spec = McpToolSpec(client=mcp_client)
toolcog_tools = tool_spec.to_tool_list()

# Custom local tools
def calculate_price(quantity: int, unit_price: float) -> float:
    """Calculate total price."""
    return quantity * unit_price

custom_tool = FunctionTool.from_defaults(fn=calculate_price)

# Combine all tools
all_tools = toolcog_tools + [custom_tool]

agent = ReActAgent.from_tools(
    all_tools,
    llm=OpenAI(model="gpt-4o"),
)
```

For async applications:
```python
import asyncio

from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

async def async_agent():
    mcp_client = BasicMCPClient("https://mcp.toolcog.com/sse")
    tool_spec = McpToolSpec(client=mcp_client)
    tools = tool_spec.to_tool_list()

    agent = ReActAgent.from_tools(
        tools,
        llm=OpenAI(model="gpt-4o"),
    )

    response = await agent.achat("Find Notion operations for creating pages")
    return response

# Run the async entry point
result = asyncio.run(async_agent())
print(result)
```

A few tips for working with Toolcog in LlamaIndex:

- **Enable verbose mode:** Set `verbose=True` during development to see the agent's reasoning.
- **Use appropriate LLMs:** Larger models (GPT-4, Claude) handle complex tool sequences better.
- **Handle auth in prompts:** Include instructions about authorization in your system prompt.
- **Reuse connections:** Create the MCP client once and reuse it for multiple agent interactions.