The Case for Broken Standards

JSON Schema is a mess. OpenAPI is poorly designed. But the value of a standard isn't its elegance—it's its existence.

I’ve spent the last six months deep in the bowels of JSON Schema. I have opinions.

JSON Schema is a mess. The spec has evolved through multiple incompatible drafts. Draft 4, Draft 6, Draft 7, Draft 2019-09, Draft 2020-12—each with breaking changes, each with different keyword semantics. OpenAPI 3.0 uses its own dialect that’s almost-but-not-quite Draft 5. OpenAPI 3.1 claims JSON Schema compatibility but adds its own interpretations.

The nullable keyword alone has three different meanings depending on which dialect you’re in. type can be a string or an array, except when it can’t. $ref used to replace sibling keywords, now it doesn’t. The definitions keyword became $defs. Every parser has to handle all of this.
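To make the dialect mess concrete, here's a minimal sketch (with a hypothetical helper name) of the kind of normalization every parser ends up writing: OpenAPI 3.0 spells nullability as a sibling nullable keyword, while JSON Schema Draft 2020-12 folds null into the type union, and Draft 2019-09 renamed definitions to $defs along the way.

```python
def normalize_nullable(schema: dict) -> dict:
    """Rewrite an OpenAPI 3.0-style `nullable` into a Draft 2020-12 type union.

    A hypothetical sketch of dialect normalization, not any particular library.
    """
    out = dict(schema)
    if out.pop("nullable", False):
        t = out.get("type")
        if isinstance(t, str):
            out["type"] = [t, "null"]          # "string" -> ["string", "null"]
        elif isinstance(t, list) and "null" not in t:
            out["type"] = t + ["null"]
        else:
            out.setdefault("type", ["null"])   # nullable with no type at all
    # Draft 2019-09 renamed `definitions` to `$defs`; accept both spellings.
    if "definitions" in out and "$defs" not in out:
        out["$defs"] = out.pop("definitions")
    return out

print(normalize_nullable({"type": "string", "nullable": True}))
# → {'type': ['string', 'null']}
```

This handles exactly two of the divergences; a real parser needs a case like this for nearly every keyword.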

OpenAPI is worse. The spec is ambiguous in critical places. What happens when style: deepObject combines with allowReserved: true? The spec doesn’t say. How do property-level encodings interact with schema-level contentEncoding? Unclear. The security scheme inheritance model requires careful reading of scattered paragraphs to understand.
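As a non-normative sketch of the problem: the spec defines the bracketed deepObject form for one level of nesting, but what allowReserved: true means in that position, or how deeper nesting serializes, is left unsaid, so any implementation has to pick an interpretation. The function below is our guess at one, not the spec's answer.

```python
from urllib.parse import quote

def serialize_deep_object(name: str, obj: dict, allow_reserved: bool = False) -> str:
    """One possible reading of `style: deepObject` query serialization."""
    # Whether allowReserved exempts RFC 3986 reserved characters from
    # percent-encoding here is exactly the ambiguity: we assume it does.
    safe = ":/?#[]@!$&'()*+,;=" if allow_reserved else ""
    pairs = [f"{name}[{key}]={quote(str(value), safe=safe)}"
             for key, value in obj.items()]
    return "&".join(pairs)

print(serialize_deep_object("filter", {"color": "red", "size": 10}))
# → filter[color]=red&filter[size]=10
```

Note even this sketch makes a silent choice the spec never addresses: it leaves the brackets in the parameter name unencoded.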

I’ve read standards I’d call well-designed. RFC 6570 for URI templates is elegant—clear semantics, comprehensive, unambiguous. HTTP/1.1 is a masterpiece of practical protocol design. JSON Schema and OpenAPI are not in that category.

The temptation

It’s easy to design something better. I know because I’ve done it. Multiple times.

Early in this project, we had a template engine for transforming JSON responses into focused markdown. AI-generated templates extracted the relevant fields and presented them in human-readable form. It was elegant, and it worked well in demos: JSON isn't the ideal format for LLMs, and the templates made responses cleaner.

We also experimented with our own API description format. Cleaner than OpenAPI. No dialect confusion. Explicit semantics for every edge case. Everything JSON Schema does wrong, we’d do right.

These were technically superior approaches. We abandoned both.

The realization

For years I missed something obvious: the value of a standard isn’t its elegance. It’s its existence.

Silly Putty can form many more shapes than Lego bricks. But you don’t see Silly Putty Saturn V rockets flying off the shelf. The constraint is often the gift, even when it feels like the burden.

JSON Schema has tooling. Validators, generators, documentation tools, editor support. Thousands of developers understand it. Every API spec in the wild uses some dialect of it, or can be converted to it. When we parse a JSON Schema, we’re parsing something that actually exists at scale.

OpenAPI specs exist for nearly every major API. Not theoretical specs we’d like to exist—actual specs, published, maintained, used. GitHub, Stripe, Google, AWS, Salesforce. The coverage is remarkable. Imperfect, inconsistent, sometimes wrong—but real.

Our more elegant alternatives would start with zero ecosystem support and zero existing specs. We’d be building on foundations that exist only in our codebase.

Using standards in non-standard ways

Here’s what we did instead: we use OpenAPI—a documentation format—as an execution format. We parse the spec and use it to drive actual API calls. Every parameter style, every encoding rule, every security scheme becomes executable.
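In miniature, the idea looks like this (the spec fragment and helper are hypothetical, for illustration only): the OpenAPI document isn't read as documentation, it's read as instructions for constructing a concrete request.

```python
# A made-up spec fragment in OpenAPI shape, used only for this sketch.
spec_fragment = {
    "paths": {
        "/users/{id}": {
            "get": {
                "parameters": [
                    {"name": "id", "in": "path", "required": True},
                    {"name": "verbose", "in": "query"},
                ]
            }
        }
    }
}

def build_request(spec: dict, path: str, method: str, args: dict) -> str:
    """Turn an operation description plus caller arguments into a concrete URL."""
    op = spec["paths"][path][method]
    url, query = path, []
    for param in op.get("parameters", []):
        name = param["name"]
        if name not in args:
            if param.get("required"):
                raise ValueError(f"missing required parameter: {name}")
            continue
        if param["in"] == "path":
            url = url.replace("{" + name + "}", str(args[name]))
        elif param["in"] == "query":
            query.append(f"{name}={args[name]}")
    return url + ("?" + "&".join(query) if query else "")

print(build_request(spec_fragment, "/users/{id}", "get", {"id": 42, "verbose": "true"}))
# → /users/42?verbose=true
```

A real executor handles every parameter style, encoding rule, and security scheme the same way: as data driving behavior.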

We use JSON Schema—a validation format—as the intermediate representation for a type system. The schema is the AST. Keywords are the nodes. We traverse, transform, and generate from it like a compiler working on source code.
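A sketch of what "the schema is the AST" means in practice: a recursive walk where each keyword is a node, here lowering a schema to made-up type names (the target syntax is invented for illustration) the way a compiler lowers source code.

```python
def schema_to_type(schema: dict) -> str:
    """Recursively lower a JSON Schema node to a (hypothetical) type name."""
    if "$ref" in schema:
        return schema["$ref"].rsplit("/", 1)[-1]            # reference: a named node
    t = schema.get("type")
    if t == "array":
        return f"Array<{schema_to_type(schema.get('items', {}))}>"
    if t == "object":
        props = schema.get("properties", {})
        fields = ", ".join(f"{k}: {schema_to_type(v)}" for k, v in props.items())
        return "{" + fields + "}"
    return {"string": "String", "integer": "Int", "number": "Float",
            "boolean": "Bool"}.get(t, "Any")

print(schema_to_type({"type": "array", "items": {"type": "object",
      "properties": {"name": {"type": "string"}}}}))
# → Array<{name: String}>
```

The same traversal shape supports transformation and code generation, not just pretty-printing: swap the return values for new schema nodes and it's a rewrite pass.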

This isn’t what these standards were designed for. But because the standards exist, because specs exist, because tooling exists—we can build on them.

The current frustration

MCP is the protocol we’re using to connect to AI clients. It’s well designed, under active development, and has good momentum. But it doesn’t support dynamic tool refresh: you register tools at connection time, and if you want to change the tool set, you need a new connection.

This is a problem. Our whole approach depends on dynamic tool discovery. We can’t front-load thousands of operations into the initial tool set. We need to be able to change what’s available based on what the user is doing.

We’re working within this constraint. There may be creative solutions we haven’t found yet. The standard is what it is, and we build on what exists.

The lesson

Every standard is flawed. Perfection isn’t the goal. Standardization is.

A broken standard that exists beats a beautiful one that doesn’t. The mess of JSON Schema dialects is better than a clean format no one uses. The ambiguities in OpenAPI are better than clarity in a spec with no adoption.

I spent years working around standards that weren’t as clean as they could be. I designed better alternatives. None of them mattered because none of them existed in the way that matters—with adoption, tooling, and real-world artifacts.

The constraint is the gift. Even when it doesn’t feel like it.