
CrewAI + Manav

A CrewAI crew is a fleet. Fleets need delegation chains, not shared service accounts. Here is how each crew member carries its own attestation back to the human at the top.

The crew model

CrewAI's mental model is a crew of role-specialized agents — researcher, writer, editor, executor — collaborating on a goal. Without identity, every crew member runs under the same service account. With Manav, the human delegates to a "captain" agent, which sub-delegates to crew members with narrower scopes.

The 16-line integration

import os

from crewai import Crew, Agent, Task
from manav.crewai import with_manav

# researcher, writer, editor and their tasks are ordinary CrewAI
# Agent/Task objects defined elsewhere in the module.
crew = with_manav(
    Crew(
        agents=[researcher, writer, editor],
        tasks=[research_task, write_task, edit_task],
    ),
    captain_did=os.getenv("MANAV_HUMAN_DID"),
)

# Each crew member auto-receives a sub-delegation
researcher.set_scope(["web:search:public"])
writer.set_scope(["docs:write:project-acme"])
editor.set_scope(["docs:write:project-acme:<=10-edits"])

crew.kickoff(inputs={"topic": "agent identity"})

Why this is different from M2M

The traditional CrewAI deployment uses one shared API key for the whole crew. Manav splits the authority: the captain agent has the parent delegation; each crew member's sub-delegation can never exceed it; revocation at the captain instantly propagates to every member; the audit log shows which specific crew member did what under whose authority.
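
The narrowing rule is the core invariant. Here is a minimal sketch of it in plain Python, making no assumptions about the shipped SDK's API:

# Conceptual sketch only: a sub-delegation's scopes must be a subset of the
# parent's, so a crew member can never hold authority the captain lacks.
def sub_delegate(parent_scopes: set, requested: set) -> set:
    excess = requested - parent_scopes
    if excess:
        raise PermissionError(f"scope exceeds parent delegation: {sorted(excess)}")
    return requested

captain_scopes = {"web:search:public", "docs:write:project-acme"}
writer_scopes = sub_delegate(captain_scopes, {"docs:write:project-acme"})  # narrows: allowed
sub_delegate(captain_scopes, {"billing:refund:any"})  # widens: raises PermissionError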

The fleet observability gain

With Manav, the crew's actions appear in the audit log as a tree: human → captain → researcher / writer / editor → tool calls. An incident response can isolate one crew member without taking down the whole crew. A compliance review can verify that the writer never exceeded its scope of "edit project-acme docs."
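
To make that concrete, here is the rough shape of the tree once it is flattened into audit rows, plus a per-member filter. The field names are illustrative, not the documented export schema:

# Illustrative audit rows: every action carries the actor, its parent in the
# delegation tree, and the root human. Field names are assumptions.
audit_log = [
    {"root": "did:manav:alice", "parent": "did:manav:captain",
     "actor": "did:manav:researcher", "tool": "web.search",
     "scope": "web:search:public"},
    {"root": "did:manav:alice", "parent": "did:manav:captain",
     "actor": "did:manav:writer", "tool": "docs.update",
     "scope": "docs:write:project-acme"},
]

# Incident response: isolate one crew member's activity without touching the rest.
writer_rows = [row for row in audit_log if row["actor"] == "did:manav:writer"]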

Multi-crew patterns

For multi-crew deployments (different crews for different goals), Manav supports a hierarchical model: human → org delegation → crew-captain delegations → crew-member delegations. Each level inherits constraints from the parent. Cross-crew agent collaboration uses signed message passing — receiver verifies the sender's delegation chain before honoring the request.
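
A sketch of the receive-side check, again in plain Python rather than the shipped SDK: walk the sender's chain from root human to leaf member, confirm each level only narrows its parent, and only then honor the request.

# Conceptual chain check for a cross-crew request. In the real protocol each
# link is also signature-verified; that step is omitted here.
def chain_covers(chain: list, requested_scope: str) -> bool:
    for parent, child in zip(chain, chain[1:]):
        if not child <= parent:  # each level may narrow, never widen
            return False
    return requested_scope in chain[-1]  # the leaf must cover the request

chain = [
    {"docs:write:project-acme", "web:search:public"},  # human -> org
    {"docs:write:project-acme"},                       # org -> crew captain
    {"docs:write:project-acme"},                       # captain -> crew member
]
assert chain_covers(chain, "docs:write:project-acme")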

Common objections

Engineers push back on three things. Latency: with a warm cache, verification runs at 18 µs on the hot path, fine for any production system. Vendor lock-in: the protocol is open, the spec is published, the reference implementation is forkable. Another auth dance to manage: the integration is twelve lines of middleware, not a new platform.

Frequently asked questions

What is the runtime cost? Microseconds per tool call when the verification cache is warm (the 18 µs hot path above); cold verification is 1–2 ms. Both numbers are small relative to the LLM round-trip the agent is already paying.

Does it work with our existing agent framework? Yes. The protocol is host-agnostic. SDKs ship for Python, Go, Node, Rust, and TypeScript; integrations exist for LangChain, CrewAI, AutoGen, and the Claude Agent SDK. Anything that calls a tool can present a delegation.

What happens to delegations when an engineer leaves? They die at the human's offboarding. The IdP de-provisions the human; the device key is rotated; every active delegation that human signed is invalidated within 200 ms. No service-account graveyard for the new owner to clean up six months later.

Where to start

Hands-on next: MCP + Identity in 12 lines ships in twelve lines; the LangChain human-in-the-loop guide adds the operational layer once you have the basics. Both link to working repos; clone, integrate, run the bench.

Adjacent reading

For the integration path, start with MCP + Identity in 12 lines, then the cross-platform reference architecture. For the operational surface, see webhooks not polls and performance at 100k RPS. Each of those is a working repo; the integration takes a coffee break, the production hardening takes a sprint.

What this changes for an existing CrewAI deployment

Three things, all reversible. The crew gets a per-run delegation token at kickoff. Every tool call inherits a derived authority that is no broader than its parent in the call graph. The audit log writes a structured record per tool call, exportable as a single CSV the regulator can read without an engineer.
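
The export step is plain glue. A hedged sketch, assuming generic column names rather than a documented schema:

import csv

# Flatten structured audit records into the single CSV a reviewer can read.
# Column names here are assumptions, not the shipped export format.
def export_audit_csv(records: list, path: str) -> None:
    columns = ["timestamp", "root_human", "actor", "tool", "scope", "result"]
    with open(path, "w", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=columns)
        writer.writeheader()
        for record in records:
            writer.writerow({key: record.get(key, "") for key in columns})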

What does not change: the agent definitions, the task DSL, the LLM provider. The integration is middleware, not a re-platform. A typical CrewAI consumer integrates in an afternoon; the production hardening (kill-switch drills, magnitude-cap calibration, scope vocabulary review) takes a sprint.

Where CrewAI's existing patterns map cleanly

CrewAI's crew/agent/task hierarchy maps onto Manav's human/delegation/action hierarchy with surprisingly little impedance mismatch. The crew's human owner becomes the Manav-bound human DID. Each agent within the crew receives a delegation with scope and magnitude derived from the agent's declared role. Each task the agent executes produces an audit row tied to the delegation. The mapping was not designed; it was discovered. CrewAI's authors built the framework around abstractions that anticipated the delegation question without naming it, which is why the integration ships in less than a hundred lines of glue code.

The wider lesson is that frameworks designed around clear separation of authority and execution map onto Manav cleanly. Frameworks that conflate the two, treating the agent as the actor with no notion of a human upstream, require structural surgery. CrewAI was on the correct side of the architectural choice. The integration is one of the cleanest in the ecosystem.
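
One way to picture the mapping: derive each member's scope from the role CrewAI already makes the author declare. The role-to-scope table below is an assumption for illustration, not a shipped default of either framework.

# Illustrative only: derive a delegation scope from a CrewAI agent's declared
# role. The table is an example policy, not part of CrewAI or Manav.
ROLE_SCOPES = {
    "researcher": ["web:search:public"],
    "writer": ["docs:write:project-acme"],
    "editor": ["docs:write:project-acme:<=10-edits"],
}

def scopes_for(agent) -> list:
    # CrewAI agents declare a role; unknown roles receive no authority.
    return ROLE_SCOPES.get(agent.role, [])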

One human. One delegation tree. Every crew member's authority traces back.