
The APIs Are Fine

Everyone’s talking about making APIs “AI-ready.” The APIs aren’t the problem.

A recent article making the rounds claims that “most APIs fail in AI systems” and proposes a six-dimension framework for measuring “API AI-readiness.” The premise is that APIs are designed for humans, not machines, and need to be fixed before AI can use them effectively.

This is backwards.

APIs are machine interfaces

That’s what API stands for: Application Programming Interface. The entire stack—HTTP, JSON, OpenAPI—exists for programmatic consumption. APIs have been serving machines reliably for decades. They’re not failing. They’re doing exactly what they were designed to do.

What’s designed for humans is the SDK wrapper. The language-native functions that developers call from code. The generated client libraries that translate between programming language idioms and wire protocols.

The article conflates “APIs” with “the way we’ve been trying to connect AI to APIs.” That’s where the failure is.

The static tool paradigm

Here’s how most AI tool systems work: take an API, generate a tool definition with a JSON Schema, register it at connection time, hope the model picks the right one. Every operation becomes a fixed function in the model’s context.
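For concreteness, here is roughly what one of those fixed definitions looks like, in the JSON Schema shape most function-calling APIs expect. The operation and field names below are illustrative, not taken from any particular platform.

```python
# A sketch of one statically registered tool definition. A real deployment
# registers one of these per operation, all up front, on every request.
create_event_tool = {
    "name": "calendar_create_event",
    "description": "Create a calendar event with a title, start time, and attendees.",
    "parameters": {
        "type": "object",
        "properties": {
            "title": {"type": "string", "description": "Event title"},
            "start": {"type": "string", "description": "RFC 3339 start time"},
            "attendees": {
                "type": "array",
                "items": {"type": "string", "description": "Attendee email"},
            },
        },
        "required": ["title", "start"],
    },
}
```

Multiply that by every operation on every connected service and the whole set has to ride along in the model’s context.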

This approach has fundamental scaling limits:

Context exhaustion. You can’t front-load thousands of operations. The tool definitions alone would consume the entire context window; the sketch after this list puts rough numbers on it.

Schema constraints. Every LLM vendor has different restrictions on tool schemas. Depth limits, property count limits, type subset restrictions. The space of all APIs is not a subset of what any single vendor allows.

Selection degradation. Give a model 500 tools and accuracy drops. Similar descriptions blur together. Rare operations get ignored. The paradigm assumes tools are scarce.
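To put a rough number on the context point, here is a back-of-the-envelope calculation. Both figures are assumptions, not measurements; plug in your own.

```python
# Back-of-the-envelope arithmetic for context exhaustion.
TOKENS_PER_TOOL_DEF = 350    # assumed average size of one JSON Schema tool definition
CONTEXT_WINDOW = 200_000     # a generous modern context window, in tokens

operations = 3_000           # a handful of real-world APIs reaches this quickly
tool_tokens = operations * TOKENS_PER_TOOL_DEF

print(f"{tool_tokens:,} tokens of tool definitions")     # 1,050,000 tokens
print("fits in context:", tool_tokens < CONTEXT_WINDOW)  # False
```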

When this fails, people blame the APIs. “The descriptions weren’t clear enough.” “The schemas were too complex.” “The spec wasn’t AI-ready.”

The APIs aren’t the problem. The paradigm is.

What actually works

Yesterday I used AI to work across eight platforms in a single conversation: Google Calendar, Google Slides, Gmail, Stripe, Asana, GitHub, ChatGPT, and Cloudflare. Calendar invites with Meet links. Slide decks with content. Subscription products with pricing tiers. Tasks created and completed. Issues filed. Emails sent.

The APIs worked perfectly.

Not because those APIs are unusually well-designed. Not because their OpenAPI specs score highly on some readiness framework. But because the infrastructure interpreting those specs handles the messiness that exists in the real world.

Authentication modeled as header parameters instead of security schemes? The bridge normalizes it. Complex encoding rules that permute in ways the spec doesn’t fully define? The bridge handles every combination. Schemas that need transformation for models to generate reliably? The bridge synthesizes appropriate interfaces.

The APIs don’t need to be perfect. The bridge needs to be robust.

The readiness gap is an artifact

The “AI-readiness gap” the article describes is real—but it’s an artifact of the wrong paradigm, not a property of APIs.

If your approach requires every API to have flawless descriptions, perfectly consistent schemas, and explicit semantic tagging before AI can use it, then yes, you’ll find a gap. You’ll measure APIs against an ideal they’ll never reach. You’ll sell consulting services to improve scores that don’t solve the underlying problem.

Or you can build infrastructure that handles APIs as they are.

We analyzed over 4,000 OpenAPI specs. 61% model authentication as regular parameters instead of security schemes. That’s a real problem—if your system can’t distinguish credentials from data. We built a normalizer that promotes auth parameters to proper security schemes automatically. The specs don’t need to change. The bridge handles it.
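As a minimal sketch of that kind of normalizer (not the production one), assume a simple name-based heuristic for spotting credential-shaped header parameters:

```python
# Minimal sketch: promote credential-looking header parameters into a proper
# OpenAPI securityScheme. The name heuristic and scheme naming are assumptions.
CREDENTIAL_NAMES = {"x-api-key", "api-key", "apikey", "authorization", "x-auth-token"}

def promote_auth_params(spec: dict) -> dict:
    schemes = spec.setdefault("components", {}).setdefault("securitySchemes", {})
    for path_item in spec.get("paths", {}).values():
        for op in path_item.values():
            if not isinstance(op, dict):
                continue  # skip path-level fields like "summary" or "parameters"
            kept = []
            for param in op.get("parameters", []):
                name = param.get("name", "")
                if param.get("in") == "header" and name.lower() in CREDENTIAL_NAMES:
                    # Register (or reuse) an apiKey scheme and require it on this operation.
                    schemes.setdefault(name, {"type": "apiKey", "in": "header", "name": name})
                    op.setdefault("security", []).append({name: []})
                else:
                    kept.append(param)
            op["parameters"] = kept
    return spec
```

A real normalizer presumably covers query and cookie credentials, OAuth flows, and collisions, but the point stands: the spec stays as published and the fix lives in the bridge.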

OpenAPI’s encoding model is genuinely complex—parameter styles, explode modes, content types, property encodings permuting in underspecified ways. That’s a real problem—if your system special-cases its way through. We built a unified execution engine that handles every combination through composable primitives. The specs don’t need to simplify. The bridge handles it.
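Here is a toy illustration (assumed, not their engine) of why this pushes you toward composable primitives: just two query styles crossed with the explode flag already yields distinct wire formats for the same value.

```python
from urllib.parse import urlencode

# Toy serializer for a few OpenAPI query-parameter combinations. Real specs add
# path/header/cookie locations, more styles, and content-based encodings.
def serialize_query(name, value, style="form", explode=True):
    if style == "form" and isinstance(value, dict):
        if explode:
            return urlencode(value)                          # a=1&b=2
        return urlencode({name: ",".join(f"{k},{v}" for k, v in value.items())})
    if style == "form" and isinstance(value, list):
        if explode:
            return urlencode([(name, v) for v in value])     # name=1&name=2
        return urlencode({name: ",".join(map(str, value))})
    if style == "deepObject" and isinstance(value, dict):
        return urlencode({f"{name}[{k}]": v for k, v in value.items()})
    raise NotImplementedError(f"style={style!r}, explode={explode}")

print(serialize_query("filter", {"a": 1, "b": 2}))                 # a=1&b=2
print(serialize_query("filter", {"a": 1, "b": 2}, explode=False))  # filter=a%2C1%2Cb%2C2
print(serialize_query("ids", [1, 2, 3], explode=False))            # ids=1%2C2%2C3
```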

The semantic content needed for discovery doesn’t exist in most API documentation. That’s a real problem—if your system can only embed what already exists. We generate intent phrases using AI that understands what operations are for, not just what they’re called. The specs don’t need better descriptions. The bridge creates what’s needed.
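A hedged sketch of what that generation step could look like follows; `call_llm` is a hypothetical placeholder for whatever model client you use, and the prompt is illustrative only.

```python
# Sketch: derive searchable intent phrases for an operation, then index those
# phrases (rather than the raw spec text) for discovery.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for whatever LLM client you use")

def intent_phrases(op_id: str, method: str, path: str, summary: str) -> list[str]:
    prompt = (
        "List five short phrases a user might say when they need this API operation. "
        f"Operation: {method.upper()} {path} ({op_id}). Summary: {summary or 'none'}."
    )
    return [line.strip("-• ").strip() for line in call_llm(prompt).splitlines() if line.strip()]

# e.g. POST /v1/subscriptions -> "start a recurring charge", "put a customer on a plan", ...
```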

You don’t need to wait

The article frames API improvement as a prerequisite for AI success. Fix your specs, improve your scores, then AI will work.

You don’t need to wait for that.

The APIs are fine. They’ve been fine. The protocols work, the specs exist, the operations execute correctly. What was missing was infrastructure that bridges the gap between what APIs provide and what AI systems need—without requiring the entire ecosystem to change first.

That infrastructure exists now.

If you want to see what AI can do when the bridge is right, try it. Not a carefully curated demo with hand-picked APIs. Your actual workflows. Your actual services. The messy reality of production systems.

The APIs will work. They always did.