API Tools

Every API operation becomes a tool that AI can discover and call. Upload an OpenAPI spec, and each operation—stripe_customers_create, github_repos_get, slack_chat_postMessage—becomes available to any connected agent.

Native Mode: Operations as Tools

In native mode (?tools=native), each operation is an individual MCP tool. When an agent lists available tools, it sees the operations themselves. When it calls github_issues_create, that tool executes directly.

Most operations use deferred loading—available but not in the agent’s context until discovered. The agent uses find_api to search, which returns tool references that expand to full definitions. Pinned operations skip discovery and are always in context.

claude mcp add --transport http toolcog-github 'https://mcp.toolcog.com/github?tools=native'

Meta Mode: Indirect Access

In meta mode (?tools=meta, the default), three tools provide indirect access to all operations:

- `find_api` — semantic search for relevant operations
- `learn_api` — expanded type declarations for an operation
- `call_api` — executes an operation with structured arguments

This mode works with any MCP client, regardless of tool search support.

claude mcp add --transport http toolcog-github 'https://mcp.toolcog.com/github'

Discovery: find_api

AI describes what it wants to do, and find_api returns matching operations via semantic search.

Input:

intent: "create an issue on GitHub"

Output (meta mode):

Found 12 relevant operations:

// Create an issue
function "github/issues/create"(args: {
  owner: string;
  repo: string;
  body: {
    title: string | number;
    body?: string;
    labels?: ({ id?: number; name?: string; } | string)[];
    assignees?: string[];
  };
}): IssuesCreateResponse;

Also found:

- `github/issues/update`
- `github/pulls/create`
- `github/issues/create-comment`

Use the `learn_api` tool to expand types. Use the `call_api` tool to execute.

In native mode, find_api returns tool_reference blocks instead. The client expands these to full tool definitions, and the agent calls operations directly by name.
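The exact shape of a tool reference is client-dependent; as a rough illustration only (this is an assumption, not a documented schema), an expandable reference in the search result might look like:

```json
{ "type": "tool_reference", "tool_name": "github_issues_create" }
```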

The search is semantic, not keyword-based. “Bill the client” matches createInvoice. “What’s in my cart?” matches getCheckoutSession. Every operation is indexed with intent phrases—natural language descriptions of what it accomplishes.
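The ranking principle can be sketched as cosine similarity between a query embedding and each operation's intent-phrase embeddings. This is a toy illustration, not Toolcog's index: the operations, phrases, and three-dimensional vectors below are made up, and a real deployment would get vectors from an embedding model.

```typescript
// Each operation is indexed under one or more intent phrases, each with an
// embedding vector (toy 3-dimensional vectors here for illustration).
type Indexed = { operation: string; intent: string; vector: number[] };

const index: Indexed[] = [
  { operation: "stripe/invoices/create", intent: "bill the client", vector: [0.9, 0.1, 0.0] },
  { operation: "stripe/checkout/sessions/get", intent: "what's in my cart", vector: [0.1, 0.9, 0.0] },
  { operation: "github/issues/create", intent: "report a bug", vector: [0.0, 0.1, 0.9] },
];

// Cosine similarity: direction match between two vectors, ignoring magnitude.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const normA = Math.sqrt(a.reduce((sum, x) => sum + x * x, 0));
  const normB = Math.sqrt(b.reduce((sum, x) => sum + x * x, 0));
  return dot / (normA * normB);
}

// Rank operations by similarity to the query vector; no keyword overlap
// between the query and the operation name is required.
function findApi(queryVector: number[], k = 2): string[] {
  return [...index]
    .sort((a, b) => cosine(queryVector, b.vector) - cosine(queryVector, a.vector))
    .slice(0, k)
    .map((entry) => entry.operation);
}
```

A query vector close to the "bill the client" embedding surfaces `stripe/invoices/create` first, even though the words "bill" and "invoice" share no characters.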

Type Information: learn_api

When an operation has complex types or the agent needs to understand response structure, learn_api provides expanded declarations.

Input:

operation: "github/issues/create"
response: true

Output:

interface IssuesCreateRequest {
  owner: string;
  repo: string;
  body: {
    title: string | number;
    body?: string;
    labels?: ({ id?: number; name?: string; } | string)[];
    assignees?: string[];
  };
}

interface IssuesCreateResponse201 {
  status: "201";
  body: Issue;
}

interface Issue {
  id: number;
  node_id: string;
  url: string;
  html_url: string;
  number: number;
  state: string;
  title: string;
  body?: string | null;
  // ...
}

Types are generated from the OpenAPI spec—required fields show as required, optional fields have ?, enums become unions, responses are discriminated by status code.
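That generation rule can be sketched in a few lines. This is an illustrative reimplementation, not Toolcog's actual generator, and the `Schema` type covers only the fragment of JSON Schema needed to show the rule: required fields stay required, optional fields get `?`, and enums become unions.

```typescript
// A minimal slice of a JSON Schema object, enough to demonstrate the mapping.
type Schema = {
  type: "object" | "string" | "number" | "boolean";
  properties?: Record<string, Schema>;
  required?: string[];
  enum?: string[];
};

function schemaToTs(schema: Schema): string {
  // Enums become literal unions: ["open", "closed"] -> "open" | "closed"
  if (schema.enum) return schema.enum.map((v) => JSON.stringify(v)).join(" | ");
  if (schema.type === "object") {
    const required = new Set(schema.required ?? []);
    // Fields listed in `required` stay required; everything else gets `?`.
    const fields = Object.entries(schema.properties ?? {}).map(
      ([name, prop]) => `${name}${required.has(name) ? "" : "?"}: ${schemaToTs(prop)};`
    );
    return `{ ${fields.join(" ")} }`;
  }
  return schema.type; // primitives map directly
}
```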

Execution

In native mode, agents call operations directly:

github_issues_create({ owner: "anthropics", repo: "claude-code", body: { title: "Bug report" } })

In meta mode, agents use call_api:

{
  "operation": "github/issues/create",
  "arguments": {
    "owner": "anthropics",
    "repo": "claude-code",
    "body": { "title": "Bug report" }
  }
}

Either way, the execution engine bridges two worlds. LLMs operate in semantic space—structured objects, typed fields. APIs operate in protocol space—URL-encoded query strings, multipart boundaries, RFC headers. The agent provides semantic arguments; the engine transforms them into a valid HTTP request.

This transformation handles the full complexity of OpenAPI: parameter styles, explode modes, content encodings, and all their permutations. Path parameters are interpolated, query strings serialized according to their style, headers formatted per spec, bodies encoded by content type. The agent provides structured data; the engine handles encoding.
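As one concrete case, here is a sketch of form-style query serialization, OpenAPI's default for query parameters. This is an illustration of a single style/explode pair, not Toolcog's engine, which covers every permutation.

```typescript
// Form-style query serialization per the OpenAPI spec:
//   explode: true  -> labels=bug&labels=p1   (one pair per array element)
//   explode: false -> labels=bug,p1          (comma-joined values)
function serializeFormQuery(
  name: string,
  value: string | string[],
  explode: boolean
): string {
  if (Array.isArray(value)) {
    return explode
      ? value.map((v) => `${name}=${encodeURIComponent(v)}`).join("&")
      : `${name}=${value.map(encodeURIComponent).join(",")}`;
  }
  return `${name}=${encodeURIComponent(value)}`;
}
```

The agent never sees this distinction: it passes `labels: ["bug", "p1"]` either way, and the spec's declared style decides the wire format.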

Credentials are applied according to the operation’s security scheme—OAuth tokens in Authorization headers, API keys in headers or query parameters, Basic auth properly encoded. Credentials are retrieved encrypted, decrypted for this request only, applied, and discarded immediately.
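The application step can be sketched as below. This is illustrative, not Toolcog's internals; the `SecurityScheme` shape loosely mirrors OpenAPI's securityScheme object, and the function names are assumptions.

```typescript
// A slice of OpenAPI's securityScheme object, covering the three cases above.
type SecurityScheme =
  | { type: "http"; scheme: "bearer" | "basic" }
  | { type: "apiKey"; in: "header" | "query"; name: string };

// Apply an already-decrypted secret to the outgoing request. The secret is a
// local parameter: nothing retains it after this function returns.
function applyCredential(
  headers: Record<string, string>,
  query: URLSearchParams,
  scheme: SecurityScheme,
  secret: string
): void {
  if (scheme.type === "http" && scheme.scheme === "bearer") {
    headers["Authorization"] = `Bearer ${secret}`;
  } else if (scheme.type === "http" && scheme.scheme === "basic") {
    // For Basic auth, secret is "user:password" and must be base64-encoded.
    headers["Authorization"] = `Basic ${Buffer.from(secret).toString("base64")}`;
  } else if (scheme.type === "apiKey" && scheme.in === "header") {
    headers[scheme.name] = secret;
  } else if (scheme.type === "apiKey") {
    query.set(scheme.name, secret);
  }
}
```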

{
  "status": 201,
  "body": {
    "id": 1234567,
    "number": 42,
    "title": "Bug report",
    "html_url": "https://github.com/anthropics/claude-code/issues/42"
  }
}

When credentials are missing or expired, the response includes an authorization URL. AI presents the link to the user. After the user authorizes, they can ask AI to try again.
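The field names below are assumptions for illustration, not a documented schema, but an authorization-required response might look something like:

```json
{
  "status": 401,
  "error": "authorization_required",
  "authorization_url": "https://mcp.toolcog.com/github/authorize?..."
}
```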

Why This Architecture

Traditional MCP servers force a choice: which operations are important enough to include? The official GitHub MCP server exposes 35 operations. That’s 35 out of 1,088—someone decided which 3% of GitHub’s API matters. Need an operation they didn’t pick? Out of luck.

This isn’t arbitrary conservatism. Tool definitions consume context and degrade selection accuracy as they scale. A few dozen tools work fine. A few hundred overwhelm the model. Traditional servers stay small because they have to.

Toolcog’s GitHub bridge exposes all 1,088 operations. The difference is deferred loading: operations are available but not in context until discovered through semantic search. You get full API coverage without the cost. The agent’s context stays lean while having access to everything the API offers.

No one decides which operations matter. No one predicts what you'll need. Every operation is available.

New APIs work immediately after indexing. The agent discovers what it needs, when it needs it.

Composing Workflows

Because AI discovers operations dynamically, it composes workflows across services without predefined integrations:

“Find overdue invoices in Stripe and send reminder emails through SendGrid”

AI discovers the relevant operations in both services and chains them together. The workflow emerges from intent rather than hardcoded integrations.

Next Steps