Cloud launching May 2026. The library is MIT-licensed and shipping today.
kavachOS

00/AI agents

Every agent is a subject.
Treat it like one.

Most auth libraries were built when the only thing signing in was a person at a keyboard. kavachOS treats every agent as a first-class identity with its own scopes, parent, and audit trail. You can reason about what an agent is allowed to do without reading a prompt template.

01/Why this matters

Agents log in differently.

A user signs in with a password, a passkey, or a federated provider. An agent signs in with a delegation: the user grants a subset of their own capability, bound to one tool or one resource, for a short window.

Every primitive in kavachOS is built around this. The core library models delegation as the default case, not as a layer you bolt on later. You do not have to reinvent the scope reduction logic, or remember to add an aud claim, or build your own revocation tree.
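To make the delegation model concrete, here is a minimal sketch of minting a child token whose scopes are enforced as a subset of the parent's, with an `aud` binding and a short TTL. This is not kavachOS's API; the function and field names are illustrative, the HMAC signing stands in for real asymmetric signatures, and prefix wildcards like `tools:call:*` are elided from the subset check.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key"  # illustration only; a real issuer signs with an asymmetric key

def mint_delegation(parent: dict, child_sub: str, scopes: set,
                    aud: str, ttl_s: int = 300) -> dict:
    """Derive a child token whose scopes are provably a subset of the parent's."""
    parent_scopes = set(parent["scopes"])
    # Naive subset check; prefix wildcards like "tools:call:*" are elided here.
    if "*" not in parent_scopes and not scopes <= parent_scopes:
        raise PermissionError("child scopes must be a subset of the parent's")
    claims = {
        "sub": child_sub,
        "parent": parent["sub"],          # verifiable root of the chain
        "scopes": sorted(scopes),
        "aud": aud,                       # bound to one audience, no replay elsewhere
        "exp": int(time.time()) + ttl_s,  # minutes, never hours
    }
    body = json.dumps(claims, sort_keys=True).encode()
    claims["sig"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return claims

root = {"sub": "human:alice", "scopes": ["*"]}
agent = mint_delegation(root, "agent.researcher", {"tools:call:search"},
                        aud="notion.mcp", ttl_s=300)
```

The point of the sketch is the shape, not the crypto: the subset check and the `aud` claim happen at mint time, so there is no reduction logic to remember later.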

02/Signals

What a healthy agent token looks like.

When you look at a token in the audit log, you should see six things. If any one is missing, you have a design bug. kavachOS makes all six the default.

Parent subject

Every agent has a verifiable root.

Scope subset

Child scopes are provably a subset of the parent.

Audience bound

Tokens carry `aud`, so they cannot be replayed elsewhere.

Short TTL

Minutes by default, never hours.

Chain audit

Every hop is logged with a reason.

Kill the parent

Revoke a human and every agent downstream stops.
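The six signals above can be sketched as one linter over a decoded token. This is a conceptual sketch, not kavachOS's API: the field names (`parent`, `reason`), the revocation set, and the 15-minute ceiling are all illustrative assumptions.

```python
import time

MAX_TTL_S = 15 * 60  # "minutes, never hours" -- illustrative ceiling

def healthy(token: dict, parent: dict, revoked: set) -> list:
    """Return the list of signals this token fails, in checklist order."""
    problems = []
    if token.get("parent") != parent.get("sub"):
        problems.append("parent subject")          # no verifiable root
    parent_scopes = set(parent.get("scopes", []))
    if "*" not in parent_scopes and not set(token.get("scopes", [])) <= parent_scopes:
        problems.append("scope subset")            # child broader than parent
    if not token.get("aud"):
        problems.append("audience bound")          # replayable anywhere
    if token.get("exp", 0) - time.time() > MAX_TTL_S:
        problems.append("short TTL")               # hours, not minutes
    if not token.get("reason"):
        problems.append("chain audit")             # hop logged without a why
    if token.get("parent") in revoked:
        problems.append("kill the parent")         # upstream subject revoked
    return problems

now = int(time.time())
parent = {"sub": "human:alice", "scopes": ["*"]}
token = {"parent": "human:alice", "scopes": ["tools:call:search"],
         "aud": "notion.mcp", "exp": now + 300, "reason": "research run"}
```

A healthy token returns an empty list; anything else is the design bug, named.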

03/Example

A real chain, start to finish.

A human signs in, hands an orchestrator a narrow capability, which fans out to a researcher, which finally calls a specific tool against a named MCP server.

human (root, scopes: *)
  └── agent.orchestrator (scopes: tools:call:*, ttl 1h)
         └── agent.researcher (scopes: tools:call:search, ttl 15m)
                └── mcp.tool.call (aud=notion.mcp, ttl 5m)

Every edge is a signed assertion. The library verifies each hop offline, so the tool server does not need a network call to the control plane to trust the leaf token.
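The offline verification of that chain can be sketched as a root-to-leaf walk: check each hop's signature, its parent link, that scopes only narrow, and that expiries only shorten. Everything here is an illustrative assumption, not kavachOS's implementation; in particular, the shared HMAC key stands in for verifying the issuer's public key, and the root's 24-hour expiry is invented for the example.

```python
import hashlib
import hmac
import json
import time

ISSUER_KEY = b"demo-key"  # stand-in for the issuer's verification key

def sig_of(claims: dict) -> str:
    body = json.dumps({k: v for k, v in claims.items() if k != "sig"},
                      sort_keys=True).encode()
    return hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()

def covers(parent_scopes, scope: str) -> bool:
    """True if any parent scope grants `scope`, including prefix wildcards."""
    return any(p == "*" or p == scope or
               (p.endswith("*") and scope.startswith(p[:-1]))
               for p in parent_scopes)

def verify_chain(chain: list) -> bool:
    """Walk root-to-leaf: signatures, parent links, shrinking scope and TTL."""
    now = time.time()
    for i, h in enumerate(chain):
        if not hmac.compare_digest(h["sig"], sig_of(h)):
            return False                                   # tampered hop
        if h["exp"] < now:
            return False                                   # expired hop
        if i > 0:
            parent = chain[i - 1]
            if h["parent"] != parent["sub"]:
                return False                               # broken link
            if not all(covers(parent["scopes"], s) for s in h["scopes"]):
                return False                               # scope escalation
            if h["exp"] > parent["exp"]:
                return False                               # TTL escalation
    return True

def hop(sub, parent, scopes, ttl_s, aud=None):
    c = {"sub": sub, "parent": parent, "scopes": scopes,
         "exp": int(time.time()) + ttl_s}
    if aud:
        c["aud"] = aud
    c["sig"] = sig_of(c)
    return c

# The chain from the tree above (root expiry invented for the sketch).
chain = [
    hop("human:alice", None, ["*"], 86400),
    hop("agent.orchestrator", "human:alice", ["tools:call:*"], 3600),
    hop("agent.researcher", "agent.orchestrator", ["tools:call:search"], 900),
    hop("mcp.tool.call", "agent.researcher", ["tools:call:search"], 300,
        aud="notion.mcp"),
]
```

No network call appears anywhere in the walk: given the issuer's key, the tool server can trust the leaf from the chain alone.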

04/Do not do this

Four patterns we see in agent codebases that fail in production.

If your current agent auth looks like one of these, it is a problem to fix before the incident, not after it.

Ship the user's session cookie to the LLM.

Cookies carry every scope. Nothing reduces. One leaked prompt and the agent can do anything the user can.

Mint an API key per agent and store it in the LLM context.

API keys do not carry a parent. When the user leaves the company, the agent keeps running.

Use a JWT without an audience claim.

Unbound tokens can be replayed against any server that accepts the same signer. That is one compromised tool away from disaster.

Rely on the LLM to call the right tool.

Authorization is not a prompt. It is a cryptographic check at the tool server. Put it there.

If you cannot say what an agent is allowed to do without reading a prompt, you have not written auth yet.

Design principle
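What "a cryptographic check at the tool server" looks like can be sketched in a few lines. This is illustrative, not kavachOS's API: the HMAC key stands in for real signature verification, and the `tools:call:` scope prefix and `notion.mcp` audience are assumptions carried over from the example above.

```python
import hashlib
import hmac
import json
import time

ISSUER_KEY = b"demo-key"   # illustration; a real server verifies the issuer's signing key
SERVER_AUD = "notion.mcp"  # this tool server's own audience identifier

def authorize(token: dict, tool: str) -> bool:
    """The tool server decides from the token alone; the prompt never decides."""
    body = json.dumps({k: v for k, v in token.items() if k != "sig"},
                      sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(token.get("sig", ""), expected):
        return False                                  # forged or tampered
    if token.get("aud") != SERVER_AUD:
        return False                                  # minted for another server
    if token.get("exp", 0) < time.time():
        return False                                  # expired
    return f"tools:call:{tool}" in token.get("scopes", [])

# A leaf token scoped to one tool on this server.
claims = {"sub": "mcp.tool.call", "aud": "notion.mcp",
          "scopes": ["tools:call:search"], "exp": int(time.time()) + 300}
claims["sig"] = hmac.new(ISSUER_KEY,
                         json.dumps(claims, sort_keys=True).encode(),
                         hashlib.sha256).hexdigest()
```

Whatever the LLM decides to call, the check runs after the decision, on the server, against the token.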

05/Works with

The agent runtimes you already ship with.

kavachOS is runtime-agnostic. If it speaks Web Fetch and presents a bearer token, it plugs in.

Claude & Anthropic MCP
OpenAI Agents SDK
LangChain / LangGraph
LlamaIndex
Mastra
Vercel AI SDK
Temporal workflows
Your own loop
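The integration contract above is deliberately thin: any runtime that can set an `Authorization` header can present a leaf token. A minimal sketch with the standard library (the URL and token string are placeholders, not real endpoints):

```python
import urllib.request

def with_bearer(url: str, token: str) -> urllib.request.Request:
    """Attach a leaf token as a bearer credential -- the whole integration surface."""
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"})

# Hypothetical tool-server endpoint; nothing is sent until the request is opened.
req = with_bearer("https://notion.mcp.example/tools/search",
                  "leaf-token-placeholder")
```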

Ship agents with real auth.

Open source library today. Managed cloud in early access. The wedge is the identity model.