You Hold the Keys

We manage the vault. You hold the only key. Why we built credential protection around mutual interest.

4 min read

We don’t want your credentials.

This isn’t a policy statement or a promise we’re asking you to trust. It’s an architectural fact. The system is designed so that we cannot access your credentials, even if we wanted to, even with full database access, even under legal compulsion.

Here’s why that matters, and how it works.

The alignment problem

Most credential systems are adversarial by design. The platform wants access so it can provide services. The user wants protection against misuse. These goals conflict, and the conflict is resolved by policies, contracts, promises. Trust.

We wanted to eliminate the conflict entirely. Design a system where our interests and yours are perfectly aligned. Where we don’t have access because we don’t want it and the architecture doesn’t allow it.

Think of a safe-deposit box. The bank manages the vault, maintains security, and controls physical access. But the bank doesn’t have your key. They couldn’t open your box if they wanted to. The architecture makes it impossible.

That’s what we built for credentials.

The encryption chain

Your credentials are protected by envelope encryption with four tiers:

  1. Your session token encrypts a user-specific secret we never store
  2. That secret derives a key encryption key (KEK) via HKDF
  3. The KEK wraps your vault’s data encryption key (DEK)
  4. The DEK encrypts individual credentials

To decrypt anything, you need the full chain. The chain starts with your session token. We don’t have your session token—we have a hash of it for validation, but hashes don’t decrypt.

Without your token, the chain cannot begin. Every credential remains encrypted, unreadable, inaccessible.
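Here is a concrete sketch of the chain in Python. Everything in it is illustrative: the names are hypothetical, the XOR wrap is a toy stand-in for the AES-GCM wrapping a real system would use, and the HKDF is a minimal RFC 5869 implementation included only so the sketch is self-contained.

```python
import hashlib
import hmac
import secrets

def hkdf_sha256(ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869, SHA-256): extract, then expand."""
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()
    okm, block = b"", b""
    for i in range((length + 31) // 32):
        block = hmac.new(prk, block + info + bytes([i + 1]), hashlib.sha256).digest()
        okm += block
    return okm[:length]

def toy_wrap(key: bytes, data: bytes) -> bytes:
    """XOR against an HKDF keystream. Toy stand-in for AES-GCM;
    symmetric, so the same call both wraps and unwraps."""
    stream = hkdf_sha256(key, b"keystream", len(data))
    return bytes(a ^ b for a, b in zip(stream, data))

# Tier 1: the session token (client-side only) protects a user secret.
session_token = secrets.token_bytes(32)
user_secret = secrets.token_bytes(32)                 # never stored in plaintext
stored_secret = toy_wrap(session_token, user_secret)  # what the server keeps

# Tier 2: the secret derives the key-encryption key via HKDF.
kek = hkdf_sha256(user_secret, b"kek")

# Tier 3: the KEK wraps the vault's data-encryption key.
dek = secrets.token_bytes(32)
wrapped_dek = toy_wrap(kek, dek)

# Tier 4: the DEK encrypts individual credentials.
credential = b"sk_live_hypothetical_example"
ciphertext = toy_wrap(dek, credential)

# Server-side state: a token hash (validates, never decrypts) plus blobs.
token_hash = hashlib.sha256(session_token).hexdigest()

# Decryption replays the chain, and only the client's token can start it.
secret = toy_wrap(session_token, stored_secret)
recovered = toy_wrap(toy_wrap(hkdf_sha256(secret, b"kek"), wrapped_dek), ciphertext)
assert recovered == credential
```

Note that the server-side state in the sketch (a token hash and three opaque blobs) is sufficient to validate sessions and serve the vault, yet none of it can begin the decryption without the client’s token.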

What we can see

We can see that you have credentials. The metadata—which APIs, when created, how often used—is visible to us. We need this for operational purposes: displaying the UI, rate limiting, debugging issues.

We cannot see the credentials themselves. The Stripe API key, the GitHub token, the OAuth refresh token—these are opaque blobs of encrypted data. We store them but cannot read them.

If you asked us what your Stripe API key is, we couldn’t tell you. If a court ordered us to produce it, we couldn’t comply. Not because of policy, but because we don’t have it.

Confidentiality over durability

Traditional secret management optimizes for durability. Don’t lose credentials. Back up the encryption keys. Have recovery procedures. The assumption is that credentials are precious and irreplaceable.

We optimize for confidentiality instead. Credentials are replaceable—you can generate a new Stripe API key. What’s not replaceable is trust, once violated.

So we made a tradeoff: if you lose your session and haven’t linked a persistent identity (Google, GitHub, email), your vault is unrecoverable. The encryption keys derived from your session are gone. Your credentials are encrypted data that no one can decrypt.

This is a feature, not a bug. The same property that makes your credentials unrecoverable by you makes them unrecoverable by anyone. The confidentiality guarantee is absolute.
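The unrecoverability follows directly from the derivation: the keys are a pure function of the token, so a lost token means keys that nothing else can reproduce. A minimal sketch, collapsing the four tiers into a single derivation for brevity (labels hypothetical, HKDF per RFC 5869):

```python
import hashlib
import hmac
import secrets

def hkdf_sha256(ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869, SHA-256)."""
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()
    okm, block = b"", b""
    for i in range((length + 31) // 32):
        block = hmac.new(prk, block + info + bytes([i + 1]), hashlib.sha256).digest()
        okm += block
    return okm[:length]

original_token = secrets.token_bytes(32)
kek = hkdf_sha256(original_token, b"kek")

# A new session produces a new token, and with it entirely different keys.
replacement_token = secrets.token_bytes(32)
assert hkdf_sha256(replacement_token, b"kek") != kek

# The stored token hash validates sessions but cannot reproduce the keys:
# hashing is one-way, and the derivation needs the token itself.
token_hash = hashlib.sha256(original_token).digest()
assert hkdf_sha256(token_hash, b"kek") != kek
```

The same two assertions are the confidentiality guarantee viewed from the other side: anything the server retains after the session is as useless to an attacker as it is to a locked-out user.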

The trust question

When you evaluate any credential management system, the question is: what am I trusting, and can that trust be violated?

With traditional platforms, you trust their policies, their security practices, their employees, their legal response. Any of these can fail. An employee goes rogue. A database is breached. A subpoena is served. The trust is in people and processes.

With envelope encryption, the trust shifts to a narrower, auditable question: is the cryptography sound? If the encryption is implemented correctly, the guarantees hold regardless of anything else. Policies can’t override mathematics.

We use standard primitives (AES-GCM, HKDF) with established implementations. The approach is auditable. You don’t have to trust our intentions—you can verify the mechanism.

How AI uses credentials

When AI makes an authenticated API call through Toolcog:

  1. AI provides operation name and parameters—never credentials
  2. The execution engine identifies the required authentication scheme
  3. The engine verifies that all requirements for using the credential are met
  4. Your session-derived keys decrypt the necessary credential
  5. The credential is held in memory momentarily
  6. The HTTP request is constructed with the credential applied
  7. The request executes
  8. The credential is discarded from memory
  9. AI receives the response
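The steps above can be sketched as a single function. All names, the vault shape, and the XOR “decrypt” are illustrative stand-ins, not Toolcog’s actual API:

```python
# Illustrative registry: which auth scheme each operation requires (step 2).
OPERATIONS = {"stripe.charges.create": {"auth": "stripe_api_key"}}

def toy_decrypt(session_keys: bytes, blob: bytes) -> bytes:
    """XOR stand-in for the real envelope-decryption chain."""
    return bytes(a ^ b for a, b in zip(session_keys, blob))

def execute(op, params, vault, session_keys, http=lambda req: {"status": 200}):
    scheme = OPERATIONS[op]["auth"]                        # 2. identify scheme
    if scheme not in vault:                                # 3. verify requirements
        raise PermissionError(f"no credential for {scheme}")
    credential = toy_decrypt(session_keys, vault[scheme])  # 4. decrypt
    try:                                                   # 5. held only in memory
        request = {"op": op, "params": params,             # 6. apply credential
                   "Authorization": f"Bearer {credential.decode()}"}
        return http(request)                               # 7. execute; 9. AI gets this
    finally:
        del credential                                     # 8. discard from memory

session_keys = bytes(16)  # zero key keeps the toy XOR a no-op
vault = {"stripe_api_key": toy_decrypt(session_keys, b"sk_test_example!")}
response = execute("stripe.charges.create", {"amount": 100}, vault, session_keys)
assert response == {"status": 200}
```

The structural point is the function signature: the model’s inputs (`op`, `params`) and its output (`response`) never carry the credential, which exists only inside the `try` block.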

At no point do credentials enter the conversation. The LLM processes your requests without any mechanism to access credential values. The narrow-waist architecture ensures AI never sees secrets.

The mutual interest

We designed this system because we don’t want the liability of holding your credentials. A breach that exposes credentials is catastrophic—for you and for us. By eliminating every exposure avenue we reasonably can, we’ve minimized the risk for both parties.

This is what aligned incentives look like in infrastructure design. Not “we promise to protect your data” but “we’ve structured the system so your data can’t be extracted.” Not trust in our goodwill but trust in architecture that makes goodwill irrelevant.

You hold the keys. We manage the vault. The system works because neither party needs to trust the other beyond what the architecture guarantees.

That’s how credential security should work.