OpenAI SDK

OpenAI’s Agents SDK supports MCP (Model Context Protocol), enabling GPT models to use Toolcog for API discovery and execution.

Prerequisites

Installation

```sh
pip install openai-agents
```

Connecting to Toolcog

The OpenAI Agents SDK supports remote MCP servers via SSE transport.

```python
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerSse

# Connect to Toolcog as an MCP server
toolcog = MCPServerSse(
    name="toolcog",
    params={"url": "https://mcp.toolcog.com"},
)

# Create an agent with Toolcog tools
agent = Agent(
    name="API Agent",
    instructions="""You are a helpful assistant that can discover and use APIs.
    Use find_api to search for operations, learn_api to understand interfaces,
    and call_api to execute operations.""",
    mcp_servers=[toolcog],
)

# Run the agent
async def main():
    async with toolcog:
        result = await Runner.run(
            agent,
            "Find Stripe operations for creating customers",
        )
        print(result.final_output)

asyncio.run(main())
```

Using Toolcog’s Tools

Once connected, your agent has access to three API meta-tools:

| Tool | Purpose |
| --- | --- |
| `find_api` | Discover operations by describing what you want to do |
| `learn_api` | Get TypeScript types for an operation's interface |
| `call_api` | Execute an operation with arguments |

The agent automatically uses these tools based on the conversation.
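To make the discover → learn → call loop concrete, here is a sketch with stubbed tool responses. The operation names, interface text, and return values below are hypothetical illustrations; in practice the model invokes these tools over MCP rather than calling Python functions:

```python
# Hypothetical stubs illustrating what each meta-tool contributes to the loop.
def find_api(query):
    # Discover candidate operations from a natural-language description
    return [{"operation": "stripe.customers.create", "summary": "Create a customer"}]

def learn_api(operation):
    # Fetch the TypeScript interface for an operation
    return "interface CreateCustomer { email: string; name?: string }"

def call_api(operation, arguments):
    # Execute the operation with the supplied arguments
    return {"id": "cus_123", "email": arguments["email"]}

ops = find_api("create a Stripe customer")
schema = learn_api(ops[0]["operation"])
result = call_api(ops[0]["operation"], {"email": "user@example.com"})
print(result["email"])  # user@example.com
```

The agent decides at each step whether it needs the next tool; simple requests may skip `learn_api` entirely.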

Example: Multi-Step Workflow

```python
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerSse

async def run_workflow():
    toolcog = MCPServerSse(
        name="toolcog",
        params={"url": "https://mcp.toolcog.com"},
    )
    agent = Agent(
        name="Workflow Agent",
        instructions="""You help users accomplish tasks by discovering
        and executing API operations. When asked to do something:
        1. Use find_api to discover relevant operations
        2. Use learn_api if you need to understand the interface
        3. Use call_api to execute the operation
        4. Report the results clearly""",
        mcp_servers=[toolcog],
    )
    async with toolcog:
        # The agent will discover, learn, and execute
        result = await Runner.run(
            agent,
            "Create a new customer in Stripe with email user@example.com",
        )
        print(result.final_output)

asyncio.run(run_workflow())
```

Handling Authentication

When Toolcog needs authorization, the call_api response includes an authorization URL. The agent should present this to the user:

```python
agent = Agent(
    name="Auth-Aware Agent",
    instructions="""When an operation requires authorization,
    you'll receive an authorization URL. Present this to the user
    and ask them to complete authorization before retrying.""",
    mcp_servers=[toolcog],
)
```

The agent handles this naturally: when it receives an authorization requirement, it informs the user and can retry once authorization is complete.

Catalogs

To use a specific catalog:

```python
toolcog = MCPServerSse(
    name="toolcog-internal",
    params={"url": "https://mcp.toolcog.com/mycompany/internal-apis"},
)
```

Multiple Agents with Shared Tools

You can create multiple specialized agents that share access to Toolcog:

```python
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerSse

async def multi_agent_workflow():
    toolcog = MCPServerSse(name="toolcog", params={"url": "https://mcp.toolcog.com"})

    # Discovery agent finds relevant operations
    discovery_agent = Agent(
        name="Discovery",
        instructions="Find API operations relevant to the user's request.",
        mcp_servers=[toolcog],
    )

    # Execution agent performs operations
    execution_agent = Agent(
        name="Executor",
        instructions="Execute API operations based on discovered interfaces.",
        mcp_servers=[toolcog],
    )

    async with toolcog:
        # First, discover operations
        discovery_result = await Runner.run(
            discovery_agent,
            "What GitHub operations can create issues?",
        )
        # Then execute with context
        execution_result = await Runner.run(
            execution_agent,
            f"Based on this discovery: {discovery_result.final_output}\n"
            f"Create an issue titled 'Bug fix needed' in the acme/project repo",
        )
        print(execution_result.final_output)

asyncio.run(multi_agent_workflow())
```

Streaming Responses

For long-running operations, use streaming:

```python
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerSse
from openai.types.responses import ResponseTextDeltaEvent

async def stream_response():
    toolcog = MCPServerSse(name="toolcog", params={"url": "https://mcp.toolcog.com"})
    agent = Agent(
        name="Streaming Agent",
        instructions="Help users with API operations.",
        mcp_servers=[toolcog],
    )
    async with toolcog:
        result = Runner.run_streamed(
            agent,
            "List all my GitHub repositories",
        )
        async for event in result.stream_events():
            # Print text deltas as they arrive
            if event.type == "raw_response_event" and isinstance(
                event.data, ResponseTextDeltaEvent
            ):
                print(event.data.delta, end="", flush=True)

asyncio.run(stream_response())
```

Best Practices

  1. Context management — The Agents SDK maintains conversation context within a single run. For multi-turn interactions, pass `result.to_input_list()` from the previous run, plus the new user message, as the input to the next run.

  2. Error handling — Wrap agent runs in try/except to handle network errors and API failures gracefully.

  3. Tool validation — The agent validates tool inputs before execution. Invalid inputs are reported back for correction.

  4. Parallel execution — For independent operations, you can run multiple agents concurrently using `asyncio.gather()`.
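Practices 2 and 4 combine naturally: wrap each run in its own error handler, then fan independent runs out with `asyncio.gather()`. The sketch below uses a stub in place of `Runner.run` so the pattern is visible without a live connection; swap in the real call in your application:

```python
import asyncio

async def run_agent(prompt: str) -> str:
    # Stand-in for `(await Runner.run(agent, prompt)).final_output`.
    if not prompt:
        raise ValueError("empty prompt")
    return f"done: {prompt}"

async def safe_run(prompt: str) -> str:
    # Practice 2: catch per-run failures so one error doesn't sink the batch
    try:
        return await run_agent(prompt)
    except Exception as exc:
        return f"failed: {exc}"

async def main() -> list[str]:
    # Practice 4: independent operations run concurrently
    return await asyncio.gather(
        safe_run("List GitHub repositories"),
        safe_run("List Stripe customers"),
        safe_run(""),  # deliberately fails to show error isolation
    )

results = asyncio.run(main())
print(results)
```

Because each coroutine returns a string even on failure, the batch always completes and you can inspect partial results.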

Next Steps