Content Pipeline
Two efforts in parallel: schema normalization for native LLM tools, and completing the content pipeline with response decoding.
Schema Normalization
Native LLM tool generation requires transforming JSON Schemas into whatever subset each vendor accepts. No vendor documents which keywords they support. No two vendors support the same subset.
We’re building normalizers for each vendor’s quirks. Every hundred operations we test surfaces another valid construct that some vendor rejects: additionalProperties works in some contexts but not others; oneOf is supported by one vendor and ignored by another. The transformation pipeline keeps growing.
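The per-vendor stripping can be sketched as a recursive keyword filter. The vendor names and banned-keyword sets below are placeholders, since no vendor documents its subset, and a production normalizer would often rewrite a construct (say, collapsing a oneOf) rather than simply drop it:

```python
# Hypothetical per-vendor keyword denylists -- illustrative only.
UNSUPPORTED = {
    "vendor_a": {"oneOf", "patternProperties"},
    "vendor_b": {"additionalProperties", "format"},
}

def normalize(schema, vendor):
    """Return a copy of `schema` with keywords the vendor rejects removed."""
    banned = UNSUPPORTED[vendor]

    def walk(node):
        if isinstance(node, dict):
            return {k: walk(v) for k, v in node.items() if k not in banned}
        if isinstance(node, list):
            return [walk(v) for v in node]
        return node

    return walk(schema)
```

Because the filter rebuilds every container, the original schema stays intact for vendors that do accept the keyword.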
MCP dynamic tool updates would help—register tools on demand as agents discover operations. But no clients support it yet. We’re evaluating whether to wait for ecosystem support or find a different approach.
Response Decoding
The content pipeline needs its other half. Request encoding handles values going out; response decoding handles what comes back.
OpenAPI defines multiple possible responses per operation: exact success and error codes, ranges like 2XX, and a default fallback. Each response can declare multiple media types. The decoder matches status codes using OpenAPI precedence (exact code, then range, then default), selects media types with wildcard support, applies the appropriate decoder, and preserves the matched definitions for downstream use.
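The two matching steps can be sketched as follows; function and parameter names are illustrative, not the pipeline's actual API:

```python
def match_status(responses, status):
    """Resolve an OpenAPI responses map with exact > range > default precedence."""
    code = str(status)
    if code in responses:                 # exact match, e.g. "404"
        return responses[code]
    range_key = code[0] + "XX"            # range match, e.g. "4XX"
    if range_key in responses:
        return responses[range_key]
    return responses.get("default")       # declared fallback, if any

def match_media_type(content, actual):
    """Pick the most specific declared media type matching the actual one."""
    base = actual.split(";")[0].strip().lower()   # drop parameters like charset
    if base in content:                   # exact, e.g. "application/json"
        return base
    wildcard = base.partition("/")[0] + "/*"      # e.g. "application/*"
    if wildcard in content:
        return wildcard
    return "*/*" if "*/*" in content else None
```

For example, a 204 response against a map declaring only "200", "2XX", and "default" resolves to the "2XX" entry, and a `text/html; charset=utf-8` body matches a declared `text/*`.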
Same pluggable architecture as request encoding, inverted. The content pipeline is now symmetric.
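A minimal sketch of what that symmetry can look like: one registry of encode/decode pairs keyed by media type, so adding a codec extends both halves at once. The registry shape and names here are assumptions for illustration, not the project's actual design:

```python
import json

class Codec:
    """An encode/decode pair registered under a single media type."""
    def __init__(self, encode, decode):
        self.encode = encode   # value -> bytes (request side)
        self.decode = decode   # bytes -> value (response side)

CODECS = {
    "application/json": Codec(
        encode=lambda value: json.dumps(value).encode("utf-8"),
        decode=lambda raw: json.loads(raw.decode("utf-8")),
    ),
}

def encode_body(media_type, value):
    return CODECS[media_type].encode(value)

def decode_body(media_type, raw):
    return CODECS[media_type].decode(raw)
```

Registering the pair together makes the round-trip property easy to test: anything encoded for a request should decode back unchanged from a response.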