clevengermatt 2 days ago [-]
Hi HN. OpenBindings is an open spec for describing what a service does once and binding it to any protocol. You define operations with input/output schemas, then point at your existing OpenAPI doc, proto file, MCP server, or whatever else. The spec doesn't replace any of them. They're inputs.
The short version of why: programming languages have had interfaces and duck typing forever. You code to a shape, not an implementation. The web never got a successful equivalent at the network boundary. OpenBindings is an attempt at that.
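For comparison, the in-language version of the idea: Python's structural `Protocol` accepts any object whose shape matches, with no shared base class. The class and method names below are illustrative only, not anything from OpenBindings.

```python
from typing import Protocol

class MenuSource(Protocol):
    """Anything with a get_menu() returning a list of item names satisfies this."""
    def get_menu(self) -> list[str]: ...

# Two unrelated implementations; neither inherits from MenuSource.
class RestService:
    def get_menu(self) -> list[str]:
        return ["espresso", "latte"]

class GrpcService:
    def get_menu(self) -> list[str]:
        return ["espresso", "latte"]

def first_item(svc: MenuSource) -> str:
    # Coded against the shape, not a concrete class.
    return svc.get_menu()[0]

print(first_item(RestService()))  # espresso
print(first_item(GrpcService()))  # espresso
```

OpenBindings is pitched as this same move, made at the network boundary instead of inside one language runtime.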
Fastest way to try it:
brew install openbindings/tap/ob
ob demo
That starts a coffee shop service on six protocols. `ob op exec localhost:8080 getMenu` calls it. The CLI discovers the OBI (OpenBindings Interface) at /.well-known/openbindings and handles the rest.
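For a rough mental model, here is a toy sketch of the kind of document the CLI might find at /.well-known/openbindings. The field names (`operations`, `bindings`, `input`, `output`, and so on) are my guesses, not the spec's; see https://openbindings.com/spec for the real shape.

```python
# Hypothetical OBI-like document: operations with input/output schemas,
# plus bindings pointing at existing protocol descriptions.
obi = {
    "operations": {
        "getMenu": {
            "input": {"type": "object", "properties": {}},
            "output": {"type": "array", "items": {"type": "string"}},
        }
    },
    "bindings": [
        {"protocol": "http", "source": "openapi.yaml"},
        {"protocol": "grpc", "source": "coffee.proto"},
    ],
}

def operation_names(doc: dict) -> list[str]:
    """List the operations a client could discover and call."""
    return sorted(doc["operations"])

print(operation_names(obi))  # ['getMenu']
```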
Would love feedback on the spec design.
mindcrime 2 days ago [-]
Huh. This sounds really interesting. Will definitely give it a look later this evening. At first blush, this sounds like something I could use.
clevengermatt 2 days ago [-]
Thanks! Happy to answer any questions if you're interested. The `ob demo` is the fastest way to see it end to end. It starts a service on six protocols and lets you call it from the CLI.
aaomidi 1 day ago [-]
> The spec doesn't replace any of them. They're inputs.
Claude wrote this
clevengermatt 1 day ago [-]
Claude writes a lot these days. Sometimes, if it works, I don't change it. Good sense of smell on you.
riwsky 2 days ago [-]
The web IS the duck typing equivalent at the network boundary! That’s why plenty of alternative service providers can and do implement eg object storage APIs that work with aws s3 client libraries, or LLM APIs that work with Claude Code. The reasons these use cases are standardized (while others remain fragmented) are economic, not technical (lock-in isn’t as profitable for these alt services as raw adoption)—and so a purely technical solution like this is unlikely to address the crux of the problem.
Even purely on the technical level, this seemingly hasn't internalized the lessons of https://xkcd.com/927/
clevengermatt 1 day ago [-]
On the web being the duck typing equivalent. That's ad-hoc duck typing at the wire format level. S3-compatible APIs exist because companies reverse-engineered S3's surface and matched it. You have to replicate the paths, headers, and response shapes closely enough that the client libraries can't tell the difference, or they break. There's no spec that declares "I satisfy the S3 interface" and no way for a tool to verify compatibility without running requests.
OBI operates at the contract level instead. Two services with different wire formats can satisfy the same interface as long as their operation shapes match. The binding executors handle the wire differences. That's the duck typing analogy. Match the shape, not the implementation.
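A toy illustration of what contract-level matching could mean (my simplification, not the spec's actual comparison algorithm): two operations are compatible if their JSON-Schema-like shapes line up structurally, regardless of which wire protocol serves them.

```python
def schemas_match(a: dict, b: dict) -> bool:
    """Naive structural comparison of two JSON-Schema-like dicts.

    Ignores constraints like `required` or `format`; shape only.
    """
    if a.get("type") != b.get("type"):
        return False
    if a.get("type") == "object":
        props_a, props_b = a.get("properties", {}), b.get("properties", {})
        return set(props_a) == set(props_b) and all(
            schemas_match(props_a[k], props_b[k]) for k in props_a
        )
    if a.get("type") == "array":
        return schemas_match(a.get("items", {}), b.get("items", {}))
    return True

# An operation served over REST and one served over gRPC, same shape:
rest_op = {"type": "object", "properties": {"size": {"type": "string"}}}
grpc_op = {"type": "object", "properties": {"size": {"type": "string"}}}
print(schemas_match(rest_op, grpc_op))  # True
```

If both pass this kind of check against the same interface, the binding executors are what absorb the remaining wire-level differences.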
You're right that the drivers are partly economic. But technical standards and economics aren't isolated from each other. Terraform, Kubernetes, and OpenAPI itself are technical solutions that enabled economic behavior that wasn't viable before them. Lowering the cost of interop changes what's economically rational to pursue.
On the xkcd, the post addresses this. OBI structurally can't replace OpenAPI, gRPC, or MCP. An OBI without sources and bindings pointing to them is an unbound contract, not an actionable interface. The dependency runs one way. Those specs are inputs, not competitors.
rrgok 1 day ago [-]
All that song and dance about the advantages of programming languages, and you chose to use JSON?
Man I hate JSON so much.
clevengermatt 1 day ago [-]
JSON Schema is already the schema format inside OpenAPI, AsyncAPI, and MCP. Using it means OBI can reference those specs directly without translation. Any other choice would have made interop harder, not easier.
The programming language analogy is about structural compatibility between contracts, not about the wire format the contract is written in.
spicyusername 1 day ago [-]
Out of curiosity, what would have been better?
What are the limitations of json schema?
clevengermatt 1 day ago [-]
Fair questions.
OBI needs its schema format to be universally parseable, deterministic to compare, and rich enough to describe the portable subset of any binding format's type system. JSON Schema meets those requirements without tying the spec to any one ecosystem. Proto-level fidelity would center gRPC. A custom IDL would fragment tooling. JSON Schema is the neutral ground.
The limitations are real, especially with proto. For example, our own proto interface creator implementation maps 32-bit integers to JSON Schema `integer` and 64-bit integers to `string` to avoid JSON precision loss beyond 2^53, following Proto3's canonical JSON mapping. Proto enums are int-valued with names; JSON Schema enums are typically strings. Proto's oneof differs semantically from JSON Schema's oneOf. Maps with non-string keys can't be expressed directly.
These affect cross-service structural comparison, not execution. Executors handle source types natively on the wire, so int64 stays int64 when you actually make a call. Transforms bridge shape differences between operation schemas and binding sources at runtime. The comparison layer works at a coarser grain in v0.1 because that's what makes it deterministic and universally implementable. Future profile versions can tighten comparison precision as the ecosystem converges on the tradeoffs worth making.
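The 2^53 issue in one runnable snippet: JSON numbers are commonly decoded as IEEE-754 doubles, which is exactly why Proto3's canonical JSON mapping (and, per the comment above, this project's proto interface creator) carries int64 as a string.

```python
import json

big = 2**53 + 1  # 9007199254740993, not representable as a double

# Decoded as a float (what a double-based JSON parser would do), the value drifts:
print(float(big) == float(2**53))  # True: both collapse to 9007199254740992.0

# Carried as a string, it round-trips exactly:
wire = json.dumps({"id": str(big)})
print(int(json.loads(wire)["id"]) == big)  # True
```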
imtringued 1 day ago [-]
Well, he didn't choose JSON, he chose JSON Schema. And since the documentation seems to hide the existence of JSON Schema and its potential limitations when combined with e.g. gRPC, it's hard to trust the project when the schema is 99% of it.
clevengermatt 1 day ago [-]
That said, if the JSON Schema choice feels buried in the docs, I'll look at surfacing it more clearly.
You're right about the gRPC fidelity issues though. int64 precision, oneof vs oneOf semantics, enum value mapping, and well-known types all need careful handling when binding to Proto. The tradeoff is that JSON Schema is already the schema format inside OpenAPI, AsyncAPI, and MCP, so OBI can reference their schemas directly without translation. Proto would have given better fidelity for gRPC but required schema translation for every other binding. Picking JSON Schema prioritizes cross-binding reach over depth in any one protocol.
The fidelity limitations deserve clearer documentation. Adding that to the list. Thanks for pushing on this.
clevengermatt 1 day ago [-]
[dead]
quellhorst 2 days ago [-]
Shouldn't AI have made this less of a problem by now?
clevengermatt 2 days ago [-]
It's made it more tolerable and less visible, but not less real. An LLM can read docs and generate API calls, but it's guessing at structure that could be declared, parsing docs that could be machine-readable, and inferring equivalence that could be verified.
An OBI gives an AI agent typed operations and bindings it can use on the first try. No doc parsing, no guessing at endpoints. And an AI can generate OBIs from existing specs, or use the `ob` CLI to do it.
AI and structured contracts aren't competing concepts. Good interface design matters as much for AI as it does for us. Maybe more.
vivzkestrel 2 days ago [-]
this means AWS and GCP will have to implement your specification, right?
clevengermatt 1 day ago [-]
No, and that's the point. You can generate an OBI from AWS's or GCP's existing OpenAPI specs today with the ob CLI. An OBI is a unified primitive that tools can build on. You can generate OBIs ad hoc for your own consumption without the vendor knowing or caring. Vendor buy-in makes the ecosystem stronger, but it's not required to get value.
The longer game is vendor-published OBIs and shared interface roles (e.g. a neutral object-storage interface that S3 and GCS both satisfy), but one OBI, generated from specs that already exist, is still useful on day one.
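I won't guess at the real `ob` flags, but conceptually the generation step is a mechanical mapping. Something like this sketch, where the OBI-side structure is hypothetical and only the OpenAPI-side fields (`paths`, `operationId`) are standard:

```python
def openapi_to_operations(openapi: dict) -> dict:
    """Flatten OpenAPI path operations into an operationId-keyed map (sketch only)."""
    ops = {}
    for path, methods in openapi.get("paths", {}).items():
        for method, op in methods.items():
            ops[op["operationId"]] = {
                # Hypothetical OBI binding record pointing back at the HTTP surface.
                "binding": {"protocol": "http", "method": method.upper(), "path": path},
            }
    return ops

spec = {"paths": {"/menu": {"get": {"operationId": "getMenu"}}}}
print(openapi_to_operations(spec)["getMenu"]["binding"]["path"])  # /menu
```

The point is that the vendor's existing spec is the only input required; no cooperation from AWS or GCP is needed to produce a usable OBI.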
What's here today:
- The spec (v0.1.0): https://openbindings.com/spec
- ob CLI: https://github.com/openbindings/ob
- Go SDK: https://github.com/openbindings/openbindings-go
- TypeScript SDK: https://github.com/openbindings/openbindings-ts
- Binding executors for different protocols