Toolcog doesn’t give AI a list of specific tools. It gives AI three API meta-tools that unlock every API.
This is a fundamental architectural difference. Traditional AI tool systems register hundreds of operations upfront—stripe_create_customer, github_list_repos, slack_send_message—each with its own schema. Scaling this to thousands of APIs means millions of tool definitions. Toolcog inverts this: AI discovers, learns, and executes operations dynamically.
Every API interaction follows three steps: discover operations with `find_api`, expand their types with `learn_api`, and execute them with `call_api`.
Each step is a separate tool. Together, they unlock every indexed API.
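To make the inversion concrete, here is a hypothetical sketch of how the three meta-tools might be declared. The tool names come from this article; the declaration shape and parameter schemas are illustrative assumptions, not Toolcog's actual API.

```typescript
// Hypothetical tool-declaration shape (an assumption for illustration).
interface ToolDecl {
  name: string;
  description: string;
  parameters: Record<string, string>; // parameter name -> type hint
}

// The entire tool surface exposed to AI, regardless of how many APIs exist.
const metaTools: ToolDecl[] = [
  {
    name: "find_api",
    description: "Semantic search over all indexed operations",
    parameters: { intent: "string" },
  },
  {
    name: "learn_api",
    description: "Expand full TypeScript declarations for an operation",
    parameters: { operation: "string", response: "boolean?" },
  },
  {
    name: "call_api",
    description: "Execute an operation with structured arguments",
    parameters: { operation: "string", arguments: "object" },
  },
];

// Indexing another API adds entries to the catalog, not to this list.
console.log(metaTools.length);
```

The point of the sketch is the invariant: `metaTools.length` stays 3 whether the catalog holds one API or ten thousand.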
AI describes what it wants to do in natural language. The system performs semantic search across all indexed operations and returns matches with their TypeScript signatures.
Input:
```
intent: "create an issue on GitHub"
```

Output:
```
Found 12 relevant operations:

// Create an issue
function "github/issues/create"(args: {
  path: {
    // The account owner of the repository. The name is not case sensitive.
    owner: string;
    // The name of the repository without the `.git` extension. The name is not case sensitive.
    repo: string;
  };
  body: {
    // The title of the issue.
    title: string | number;
    // The contents of the issue.
    body?: string;
    // Labels to associate with this issue.
    labels?: ({ id?: number; name?: string; } | string)[];
    // Logins for Users to assign to this issue.
    assignees?: string[];
  };
}): IssuesCreateResponse;

// Create reaction for an issue
function "github/reactions/create-for-issue"(args: {
  path: { owner: string; repo: string; issue_number: number; };
  body: {
    content: "+1" | "-1" | "laugh" | "confused" | "heart" | "hooray" | "rocket" | "eyes";
  };
}): ReactionsCreateForIssueResponse;

Also found:
- `github/issues/update`
- `github/pulls/create`
- `github/issues/create-comment`
- `github/issues/add-labels`

Use the `learn_api` tool to expand types. Use the `call_api` tool to execute.
```

The search is semantic, not lexical. "Add a new person to my account" matches `createCustomer` even though the words don't overlap. This works because every operation is indexed with multiple intent phrases: natural-language descriptions of what the operation accomplishes.
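The intent-phrase index can be sketched as an embedding search. This is an assumed design, not Toolcog's implementation: `embed` here is a toy letter-frequency stand-in for a real embedding model, and the index entries are invented for illustration.

```typescript
// Sketch of intent-phrase matching: each operation is indexed under several
// natural-language phrases; the query embedding is compared against every
// phrase embedding and the best-scoring operation wins.

type Vec = number[];

// Toy stand-in for an embedding model: letter-frequency vector.
function embed(text: string): Vec {
  const v = new Array(26).fill(0);
  for (const ch of text.toLowerCase()) {
    const i = ch.charCodeAt(0) - 97;
    if (i >= 0 && i < 26) v[i] += 1;
  }
  return v;
}

function cosine(a: Vec, b: Vec): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Hypothetical index entries.
const index = [
  { operation: "stripe/CreateCustomer", phrases: ["create a customer", "add a new person to my account"] },
  { operation: "github/issues/create", phrases: ["create an issue", "report a bug on a repository"] },
];

function findApi(intent: string): string {
  const q = embed(intent);
  let best = { operation: "", score: -1 };
  for (const entry of index) {
    for (const phrase of entry.phrases) {
      const score = cosine(q, embed(phrase));
      if (score > best.score) best = { operation: entry.operation, score };
    }
  }
  return best.operation;
}
```

Because the match is against phrase meaning (here crudely approximated), `findApi("add a new person to my account")` resolves to `stripe/CreateCustomer` even though the query shares no words with "create a customer".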
The top matches include full function signatures with inline documentation. For many operations, AI can call directly without needing `learn_api`: the signature shows exactly what arguments `call_api` expects. Type references like `IssuesCreateResponse` can be expanded via `learn_api` when AI needs to understand response structure.
When an operation has complex types or AI needs to understand response structure, it requests expanded declarations. The system generates full TypeScript types that resolve all references.
Input:
```
operation: "github/issues/create"
response: true
```

Output:

```typescript
// Create an issue
//
// Any user with pull access to a repository can create an issue...
interface IssuesCreateRequest {
  path: {
    owner: string;
    repo: string;
  };
  body: IssuesCreateRequestBody;
}

interface IssuesCreateRequestBody {
  title: string | number;
  body?: string;
  labels?: ({ id?: number; name?: string; } | string)[];
  assignees?: string[];
  // ...
}

// Response
interface IssuesCreateResponse201 {
  status: "201";
  body: Issue;
}

// Issues are a great way to keep track of tasks, enhancements, and bugs...
interface Issue {
  id: number;
  node_id: string;
  url: string;
  html_url: string;
  number: number;
  state: string;
  state_reason?: "completed" | "reopened" | "not_planned" | "duplicate" | null;
  title: string;
  body?: string | null;
  user: SimpleUser | null;
  labels: ({ id?: number; name?: string; color?: string | null; } | string)[];
  // ... additional fields
}

interface SimpleUser {
  login: string;
  id: number;
  avatar_url: string;
  // ...
}

type IssuesCreateResponse = IssuesCreateResponse201 | IssuesCreateResponse400 | ...;
```

These types are generated from the operation's OpenAPI specification. They're not approximate: if a field is required, it shows as required; if it's optional, it's marked with `?`. Enums become union types. Response types are discriminated unions keyed by HTTP status code, so AI can handle success and error cases appropriately.
By default, `learn_api` returns request types. Response types are optional because AI often doesn't need them: the request signature from `find_api` is usually enough to construct valid arguments.
AI calls operations by providing the operation name and structured arguments. The system handles everything else: constructing the HTTP request, applying credentials, and returning the response.
Input:
```json
{
  "operation": "github/issues/create",
  "arguments": {
    "path": { "owner": "anthropics", "repo": "claude-code" },
    "body": {
      "title": "Bug in authentication flow",
      "body": "Steps to reproduce..."
    }
  }
}
```

Output:

```json
{
  "status": 201,
  "statusText": "Created",
  "body": {
    "id": 1234567,
    "number": 42,
    "title": "Bug in authentication flow",
    "html_url": "https://github.com/anthropics/claude-code/issues/42",
    "state": "open"
  }
}
```

Arguments are structured by location: `path` for URL parameters, `query` for query strings, `header` for HTTP headers, `body` for the request body. The signature from `find_api` shows exactly which location each parameter belongs to.
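The request-assembly step can be sketched as follows. The URL template, helper name, and return shape are assumptions for illustration; the point is how each argument location maps onto a part of the HTTP request.

```typescript
// Sketch: assembling an HTTP request from arguments grouped by location.
interface CallArgs {
  path?: Record<string, string>;   // substituted into the URL template
  query?: Record<string, string>;  // appended as a query string
  header?: Record<string, string>; // merged into HTTP headers
  body?: unknown;                  // serialized as the request body
}

function buildRequest(
  method: string,
  template: string, // e.g. "https://api.github.com/repos/{owner}/{repo}/issues"
  args: CallArgs,
): { method: string; url: string; headers: Record<string, string>; body?: string } {
  // Substitute path parameters into the URL template.
  let url = template.replace(/\{(\w+)\}/g, (_, name) =>
    encodeURIComponent(args.path?.[name] ?? ""),
  );
  // Append query parameters, if any.
  const qs = new URLSearchParams(args.query ?? {}).toString();
  if (qs) url += `?${qs}`;
  const headers = { "content-type": "application/json", ...args.header };
  return {
    method,
    url,
    headers,
    body: args.body !== undefined ? JSON.stringify(args.body) : undefined,
  };
}

const req = buildRequest("POST", "https://api.github.com/repos/{owner}/{repo}/issues", {
  path: { owner: "anthropics", repo: "claude-code" },
  body: { title: "Bug in authentication flow" },
});
// req.url -> "https://api.github.com/repos/anthropics/claude-code/issues"
```

AI only ever produces the `CallArgs` structure; everything after that is mechanical.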
Credentials never appear in the conversation. When an operation requires authentication, the system looks up the stored credential and applies it to the outgoing request, entirely outside the model's context.

AI constructs arguments. The system handles authentication invisibly.
When credentials are missing or expired, `call_api` returns an authorization URL along with the error response. AI presents this link to the user. After authorization, AI retries the operation. An auth error for one operation doesn't mean all operations are blocked: different operations may require different credentials or scopes.
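That recovery loop can be sketched in a few lines. The result shape and field names here are assumptions, not Toolcog's actual response format, and a real implementation would be asynchronous.

```typescript
// Sketch of the authorization-retry loop (assumed shapes, sync for brevity).
interface CallResult {
  status: number;
  body?: unknown;
  // Present when credentials are missing or expired.
  authorizationUrl?: string;
}

function callWithAuthRetry(
  call: () => CallResult,
  askUserToAuthorize: (url: string) => void,
): CallResult {
  const first = call();
  // Success, or an error unrelated to authorization: return as-is.
  if (!first.authorizationUrl) return first;
  // Surface the authorization link to the user, then retry once.
  askUserToAuthorize(first.authorizationUrl);
  return call();
}
```

Note that the retry is scoped to the one failing operation; other operations, possibly backed by other credentials, proceed untouched.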
Consider the alternative: giving AI 100 Stripe tools, 100 GitHub tools, 100 Salesforce tools. Now multiply across 10,000 APIs with 100 operations each. That’s a million tool definitions, all shipped with every AI deployment, all requiring synchronization when APIs change.
The meta-tool architecture reduces this to three tools regardless of API count:
| Approach | Tool Definitions | API Changes |
|---|---|---|
| Fixed tools | 1,000,000+ | Requires redeployment |
| Meta-tools | 3 | Automatic |
When a new API is indexed, AI can use it immediately. When an API updates, the new schema is available instantly. No code changes, no retraining, no synchronization.
Because AI discovers operations dynamically, it can compose workflows across services without predefined integrations:
“Find payment failures in Stripe and create a GitHub issue for each one”
AI discovers `stripe/GetPaymentIntents`, filters for failures, then discovers `github/issues/create` for each one. No one programmed this specific workflow. AI composed it from available operations.
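In code, the composed workflow reduces to a loop over two discovered operations. The operation names follow the article; `callApi`, the repo coordinates, and the payment fields are hypothetical stand-ins for illustration.

```typescript
// Sketch: a workflow AI composes at runtime from two discovered operations.
// callApi stands in for the call_api meta-tool.
type CallApi = (operation: string, args: unknown) => { status: number; body: any };

function fileIssuesForFailedPayments(callApi: CallApi): number {
  // Discovered via find_api("list payment intents in Stripe").
  const payments = callApi("stripe/GetPaymentIntents", { query: { limit: "100" } });
  const failures = payments.body.data.filter((p: any) => p.status === "failed");
  // Discovered via find_api("create an issue on GitHub").
  for (const p of failures) {
    callApi("github/issues/create", {
      path: { owner: "acme", repo: "billing" }, // hypothetical repository
      body: { title: `Payment ${p.id} failed`, body: `Amount: ${p.amount}` },
    });
  }
  return failures.length;
}
```

Nothing about Stripe or GitHub is hardcoded into the system; the pairing exists only in this one composed run.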
Cross-service operations work the same way:
“When someone stars my GitHub repo, add them as a Mailchimp subscriber”
AI finds the relevant operations in both services and chains them together. The ability to discover and execute arbitrary operations means workflows emerge from intent rather than from hardcoded integrations.
Here’s what happens when you ask AI to “create a GitHub issue for the bug we discussed”:
1. Discovery
AI calls `find_api` with intent "create an issue on GitHub" and gets back the signature for `github/issues/create` showing it needs `path.owner`, `path.repo`, `body.title`, and optionally `body.body`.
2. Execution
AI constructs the arguments and calls `call_api`. The system returns `{ status: 201, body: { number: 42, html_url: "..." } }`.

If GitHub isn't authorized, `call_api` returns an authorization URL with the error. AI presents the link, the user authorizes, and AI retries.
For simple operations, AI skips `learn_api` entirely; the signature from `find_api` contains everything needed. `learn_api` is for expanding type references when AI needs to understand complex nested structures or response formats.
When a new API is added to the catalog:
- `find_api` can discover its operations
- `learn_api` can expand its types
- `call_api` can execute its operations

No configuration required. No AI retraining. The new API works like every other API, because the meta-tools provide a universal interface to every API.
This is the core innovation: not three individual tools, but a systemic inversion of how AI interacts with APIs. The system handles discovery, learning, and authentication. AI focuses on intent and composition.