LlamaIndex

LlamaIndex supports MCP through the llama-index-tools-mcp package, letting you add Toolcog’s API capabilities to LlamaIndex agents.

Prerequisites

Installation

pip install llama-index llama-index-tools-mcp

Connecting to Toolcog

The McpToolSpec connects to MCP servers and exposes their tools:

from llama_index.tools.mcp import BasicMCPClient, McpToolSpec
# Connect to Toolcog
mcp_client = BasicMCPClient("https://mcp.toolcog.com/sse")
mcp_tool_spec = McpToolSpec(client=mcp_client)
# Get tools as LlamaIndex FunctionTools
tools = mcp_tool_spec.to_tool_list()
print(f"Available tools: {[t.metadata.name for t in tools]}")

Using with LlamaIndex Agents

from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec
# Connect to Toolcog
mcp_client = BasicMCPClient("https://mcp.toolcog.com/sse")
tool_spec = McpToolSpec(client=mcp_client)
tools = tool_spec.to_tool_list()
# Create a ReAct agent
agent = ReActAgent.from_tools(
    tools,
    llm=OpenAI(model="gpt-4o"),
    verbose=True
)
# Run the agent
response = agent.chat("Find GitHub operations for creating issues")
print(response)
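
If you want output as it is generated rather than all at once, ReActAgent also has a streaming chat method; a minimal sketch (token streaming also depends on the underlying LLM supporting it):

# Stream the final answer instead of waiting for the full response
streaming_response = agent.stream_chat("Find GitHub operations for creating issues")
streaming_response.print_response_stream()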

One-Line Tool Loading

LlamaIndex also provides a convenience function:

from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI
from llama_index.tools.mcp import get_tools_from_mcp_url

# Get tools directly from URL (an async variant, aget_tools_from_mcp_url, is also available)
tools = get_tools_from_mcp_url("https://mcp.toolcog.com/sse")

# Use with any agent
agent = ReActAgent.from_tools(tools, llm=OpenAI(model="gpt-4o"))
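
Tool loading makes a network round trip, so it can fail when the server is unreachable. A minimal guard, using only plain Python (no Toolcog-specific exception types are assumed):

try:
    tools = get_tools_from_mcp_url("https://mcp.toolcog.com/sse")
except Exception as exc:
    # Connection, timeout, and protocol errors all surface here
    raise RuntimeError(f"Could not load tools from Toolcog: {exc}") from exc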

Building a Complete Agent

from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

def create_api_agent():
    """Create an agent that can discover and execute API operations."""
    # Connect to Toolcog
    mcp_client = BasicMCPClient("https://mcp.toolcog.com/sse")
    tool_spec = McpToolSpec(client=mcp_client)
    tools = tool_spec.to_tool_list()
    # Create agent with custom system prompt
    agent = ReActAgent.from_tools(
        tools,
        llm=OpenAI(model="gpt-4o"),
        system_prompt="""You are a helpful assistant with access to API tools.
To accomplish tasks:
1. Use find_api to discover relevant operations
2. Use learn_api if you need to understand the interface
3. Use call_api to execute operations
When authorization is needed, inform the user of the URL to visit.""",
        verbose=True
    )
    return agent

# Example usage
agent = create_api_agent()
response = agent.chat("Create a new customer in Stripe with email test@example.com")
print(response)
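
The system prompt mirrors Toolcog's discover-learn-execute workflow (find_api, learn_api, call_api), steering the ReAct loop to find an operation, inspect its interface, and only then call it, rather than guessing at operation names.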

Multi-Turn Conversations

LlamaIndex agents maintain conversation history:

agent = create_api_agent()
# First turn: discover
response1 = agent.chat("What Stripe operations are available for customers?")
print(response1)
# Second turn: execute (agent remembers the context)
response2 = agent.chat("Create a customer with email user@example.com")
print(response2)
# Third turn: follow up
response3 = agent.chat("Now add a payment method to that customer")
print(response3)
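
History lives in the agent's memory buffer, so starting a fresh conversation does not require rebuilding the agent; reset() clears it:

# Drop the accumulated chat history; tools and LLM stay in place
agent.reset()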

Using with Different LLMs

Toolcog works with any LLM supported by LlamaIndex:

from llama_index.llms.anthropic import Anthropic
from llama_index.llms.openai import OpenAI

# With Claude
agent = ReActAgent.from_tools(
    tools,
    llm=Anthropic(model="claude-sonnet-4-20250514")
)

# With GPT-4
agent = ReActAgent.from_tools(
    tools,
    llm=OpenAI(model="gpt-4o")
)

# With local models via Ollama
from llama_index.llms.ollama import Ollama

agent = ReActAgent.from_tools(
    tools,
    llm=Ollama(model="llama3.1")
)
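
Each LLM integration ships as its own package, so install the ones you plan to use:

pip install llama-index-llms-anthropic llama-index-llms-ollama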

Catalogs

Connect to a specific catalog:

mcp_client = BasicMCPClient("https://mcp.toolcog.com/mycompany/internal-apis/sse")
tool_spec = McpToolSpec(client=mcp_client)
tools = tool_spec.to_tool_list()
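
Nothing prevents loading several catalogs into one agent; a sketch that concatenates the tool lists of the public catalog and the catalog above:

from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

catalog_urls = [
    "https://mcp.toolcog.com/sse",
    "https://mcp.toolcog.com/mycompany/internal-apis/sse",
]

tools = []
for url in catalog_urls:
    spec = McpToolSpec(client=BasicMCPClient(url))
    tools.extend(spec.to_tool_list())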

Combining with Other Tools

Toolcog tools work alongside other LlamaIndex tools:

from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

# Toolcog tools
mcp_client = BasicMCPClient("https://mcp.toolcog.com/sse")
tool_spec = McpToolSpec(client=mcp_client)
toolcog_tools = tool_spec.to_tool_list()

# Custom local tools
def calculate_price(quantity: int, unit_price: float) -> float:
    """Calculate total price."""
    return quantity * unit_price

custom_tool = FunctionTool.from_defaults(fn=calculate_price)

# Combine all tools
all_tools = toolcog_tools + [custom_tool]
agent = ReActAgent.from_tools(
    all_tools,
    llm=OpenAI(model="gpt-4o")
)
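
The ReAct loop treats local and remote tools uniformly, so a single prompt can chain them; an illustrative request:

response = agent.chat(
    "Find the Stripe operation for listing prices, then use calculate_price to total 12 units at 4.99 each"
)
print(response)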

Async Support

For async applications:

import asyncio
from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec
async def async_agent():
    mcp_client = BasicMCPClient("https://mcp.toolcog.com/sse")
    tool_spec = McpToolSpec(client=mcp_client)
    tools = tool_spec.to_tool_list()
    agent = ReActAgent.from_tools(
        tools,
        llm=OpenAI(model="gpt-4o")
    )
    response = await agent.achat("Find Notion operations for creating pages")
    return response

# Run async
result = asyncio.run(async_agent())
print(result)
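
One caveat: to_tool_list() fetches tool definitions synchronously, which blocks the event loop. A sketch of the fully async path, assuming your installed version provides the async variant:

tools = await tool_spec.to_tool_list_async()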

Best Practices

  1. Enable verbose mode — Set verbose=True during development to see the agent’s reasoning.

  2. Use appropriate LLMs — Larger models (GPT-4, Claude) handle complex tool sequences better.

  3. Handle auth in prompts — Include instructions about authorization in your system prompt.

  4. Reuse connections — Create the MCP client once and reuse it for multiple agent interactions; see the sketch below.
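
A minimal sketch of practice 4: create the client and tool list once at module scope and share them across agents.

from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

# Created once at import time, reused by every agent below
_MCP_CLIENT = BasicMCPClient("https://mcp.toolcog.com/sse")
_TOOLS = McpToolSpec(client=_MCP_CLIENT).to_tool_list()

def new_agent(model: str = "gpt-4o") -> ReActAgent:
    """Build a fresh agent (with its own memory) over the shared tool list."""
    return ReActAgent.from_tools(_TOOLS, llm=OpenAI(model=model))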

Next Steps