Manav.id
Developer · 4 min read

Zero-knowledge proofs for selective disclosure


A recruiter wants to verify "this candidate has shipped 200+ commits" without learning the candidate's name, GitHub handle, or employer. A regulator wants "this agent is acting under a verified human" without seeing the human's identity. Selective disclosure with zero-knowledge proofs is the math that makes both queries answerable.

The core idea, in one paragraph

A zero-knowledge proof lets one party (the prover) convince another party (the verifier) that a statement is true, without revealing anything beyond the truth of the statement. In identity, this means: prove a credential is signed by a trusted issuer, prove specific attributes satisfy a predicate, reveal nothing else. The credential stays on the prover's device; only the proof crosses the wire.
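The prover–verifier exchange described above can be seen end to end in the classic Schnorr protocol. This is not one of Manav's primitives, and the toy parameters below are far too small to be secure, but it is the smallest runnable illustration of convincing a verifier you know a secret without revealing it:

```python
import hashlib
import secrets

# Toy non-interactive Schnorr proof (Fiat-Shamir) over a small prime-order
# group. Illustrative only: real deployments use elliptic curves and vetted
# libraries, and these parameters are insecure.
p = 2039          # safe prime: p = 2q + 1
q = 1019          # prime order of the subgroup
g = 4             # generator of the order-q subgroup (2^2 mod p)

def prove(x):
    """Prove knowledge of x with y = g^x mod p, revealing nothing about x."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)
    t = pow(g, r, p)                                   # commitment
    c = int.from_bytes(hashlib.sha256(f"{t}:{y}".encode()).digest(), "big") % q
    s = (r + c * x) % q                                # response
    return y, t, s

def verify(y, t, s):
    c = int.from_bytes(hashlib.sha256(f"{t}:{y}".encode()).digest(), "big") % q
    return pow(g, s, p) == (t * pow(y, c, p)) % p      # g^s == t * y^c

secret = 777               # the prover's secret; never sent
y, t, s = prove(secret)
print(verify(y, t, s))     # True: statement verified, secret undisclosed
```

The verifier checks one equation and learns only that the prover knows the discrete log of y; the credential-style schemes below extend this same idea to vectors of signed attributes.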

The three primitives Manav uses

BBS+ signatures. A signature scheme designed for selective disclosure. The issuer signs a vector of attributes; the holder later presents a derived signature that proves possession of the credential plus the selected attributes. The verifier learns nothing about the unselected ones.

zk-SNARK predicates. For proofs over attribute values rather than direct disclosure — "the count is greater than 200," "the salary is in the 70th percentile" — a SNARK is generated against a circuit and verified in milliseconds.

Range proofs. A specialized SNARK pattern proving a value lies in a range without revealing the value, used for any "at least X" or "between A and B" claim.
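Real BBS+ needs a pairing-friendly curve library, but the selective-disclosure shape can be sketched with salted hashes (the pattern SD-JWT uses), with an HMAC standing in for the issuer's public-key signature. Unlike BBS+, this toy does not unlink presentations or support predicate proofs; it only shows reveal-some, hide-the-rest:

```python
import hashlib
import hmac
import os

ISSUER_KEY = os.urandom(32)   # stand-in for the issuer's signing key

def _digest(salt, name, value):
    return hashlib.sha256(salt + f"{name}={value}".encode()).hexdigest()

def issue(attributes):
    """Issuer: salt and hash each attribute, sign the digest list."""
    salts = {k: os.urandom(16) for k in attributes}
    digests = sorted(_digest(salts[k], k, v) for k, v in attributes.items())
    signature = hmac.new(ISSUER_KEY, "".join(digests).encode(), "sha256").hexdigest()
    return {"digests": digests, "signature": signature, "salts": salts}

def present(credential, attributes, reveal):
    """Holder: disclose only the selected attributes (salt + value)."""
    return {
        "digests": credential["digests"],
        "signature": credential["signature"],
        "disclosed": {k: (credential["salts"][k], attributes[k]) for k in reveal},
    }

def verify(presentation):
    """Verifier: check the signature, then check each disclosure's digest."""
    expected = hmac.new(ISSUER_KEY, "".join(presentation["digests"]).encode(),
                        "sha256").hexdigest()
    if not hmac.compare_digest(expected, presentation["signature"]):
        return False
    return all(_digest(salt, k, v) in presentation["digests"]
               for k, (salt, v) in presentation["disclosed"].items())

attrs = {"employer": "Acme", "role": "Senior Engineer", "commits_authored": 312}
cred = issue(attrs)
pres = present(cred, attrs, reveal=["role"])   # employer, commits stay hidden
print(verify(pres))                            # True
```

The undisclosed attributes are protected by their random salts; the verifier sees only opaque digests for them. BBS+ improves on this shape by making presentations unlinkable across verifiers and composable with the predicate proofs above.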

Worked example: a verified resume

The candidate holds a Manav credential signed by their employer with attributes {employer, role, start_date, end_date, commits_authored, deals_closed, performance_band}. A recruiter requests "shipped at a company you've heard of, seniority above mid, 200+ commits, performance band B or higher." The candidate's wallet generates a presentation that proves all four predicates and reveals zero attributes. The recruiter verifies in 8 ms. The candidate's name, employer, GitHub handle, and exact numbers never leave the device.
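A structural sketch of that exchange: the attribute names come from the example above, but the employer allow-list and seniority scale are invented, and the actual cryptographic proof bytes are elided. The point is what crosses the wire — predicate results, never attributes:

```python
# Hypothetical allow-list and seniority scale; not from the article.
KNOWN_EMPLOYERS = {"Acme", "Globex"}
SENIORITY = {"junior": 0, "mid": 1, "senior": 2, "staff": 3}

credential = {   # lives only in the candidate's wallet
    "employer": "Acme", "role": "senior",
    "start_date": "2019-03-01", "end_date": "2024-06-30",
    "commits_authored": 312, "deals_closed": 0, "performance_band": "A",
}

predicates = {   # the recruiter's four requested checks
    "known_employer": lambda c: c["employer"] in KNOWN_EMPLOYERS,
    "above_mid":      lambda c: SENIORITY[c["role"]] > SENIORITY["mid"],
    "commits_200":    lambda c: c["commits_authored"] >= 200,
    "band_b_plus":    lambda c: c["performance_band"] <= "B",   # band A or B
}

# The wallet evaluates locally; only booleans (plus, in a real flow,
# the proof bytes backing them) reach the recruiter.
presentation = {name: check(credential) for name, check in predicates.items()}
print(presentation)
```

In the real flow each boolean is backed by a BBS+ or SNARK proof, so the recruiter cannot be lied to; here the cryptography is stubbed out to keep the data shapes visible.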

What you can build with it

Privacy-preserving KYC: prove "older than 18, citizen of an OFAC-allowed country, not on sanctions list" without revealing the passport. Compliant analytics: prove "the agent acted on behalf of a verified human in jurisdiction X" without surfacing the human. Reputation portability: prove "Trust Score above 700, attested in healthcare, no incidents in 12 months" to a new employer without exporting your full history.

What you should not build with it

Do not build ZK proofs of "this agent is allowed to do anything." Selective disclosure works because the predicates are narrow and verifiable. "I am authorized" is not a predicate; it is a synonym for "trust me." Keep the proofs tight, the predicates explicit, and the verifier's circuit auditable.

Performance you can plan around

BBS+ presentation generation: 30–80 ms on a phone. Verification: 4–10 ms on a server. SNARK predicate generation: 200–800 ms; verification: 5–12 ms. Range proofs: 50–150 ms generation; 2–5 ms verification. None of these dominate the LLM call you are wrapping.
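Taking the worst case of each number above, a back-of-envelope budget shows why the proof layer stays out of the critical path. The 1.5 s LLM round-trip is an assumed figure for comparison, not from the article:

```python
# Worst-case figures from the paragraph above, in milliseconds.
prove_ms  = {"bbs": 80, "snark": 800, "range": 150}   # generation
verify_ms = {"bbs": 10, "snark": 12,  "range": 5}     # verification
llm_ms = 1500                                         # assumed LLM round-trip

# Hot path when proofs are precomputed: the verifier pays verification only.
hot_path = max(verify_ms.values())
# Cold path: generate and verify inline, per proof type.
cold_path = max(prove_ms[k] + verify_ms[k] for k in prove_ms)

print(f"hot {hot_path} ms ({hot_path / llm_ms:.1%} of the LLM call), "
      f"cold worst case {cold_path} ms")
```

Even the cold worst case (an inline SNARK) stays under the assumed LLM round-trip, and the precomputed hot path is under one percent of it.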

Where Manav exposes it

The Manav SDK ships credential.present(predicates=...) for BBS+ flows and credential.prove(circuit=...) for SNARK predicates. The reference circuits for the most common queries (age threshold, jurisdiction, work-history thresholds) are published, audited, and ready to use.
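A sketch of what calling the two documented entry points might look like in an integration. The method names present and prove come from this article; the Credential stub, predicate strings, circuit name, and return shapes are all hypothetical stand-ins, not the real SDK surface:

```python
from dataclasses import dataclass, field

@dataclass
class Credential:
    """Hypothetical stand-in for a credential object; the real SDK's types
    and predicate syntax may differ."""
    attributes: dict = field(default_factory=dict)

    def present(self, predicates):
        # BBS+ flow: would derive a signature proving the selected predicates.
        return {"kind": "bbs+", "predicates": list(predicates)}

    def prove(self, circuit):
        # SNARK flow: would run the named reference circuit over the attributes.
        return {"kind": "snark", "circuit": circuit}

cred = Credential({"commits_authored": 312, "birth_year": 1990})
p1 = cred.present(predicates=["commits_authored >= 200"])
p2 = cred.prove(circuit="age_threshold")   # e.g. one of the published circuits
print(p1["kind"], p2["circuit"])
```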

Common objections

Engineers push back on three things. Latency: with a warm cache, hot-path verification takes 18 µs, fine for any production system. Vendor lock-in: the protocol is open, the spec is published, and the reference implementation is forkable. Another auth dance: the integration is twelve lines and a middleware hook, not a new platform to manage.

Frequently asked questions

What is the runtime cost? Tens of microseconds per tool call when the verification cache is warm; a cold verification costs 1–2 ms. Both numbers are small relative to the LLM round-trip the agent is already paying.

Does it work with our existing agent framework? Yes. The protocol is host-agnostic. SDKs ship for Python, Go, Node, Rust, and TypeScript; integrations exist for LangChain, CrewAI, AutoGen, and the Claude Agent SDK. Anything that calls a tool can present a delegation.
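One way "anything that calls a tool can present a delegation" cashes out is a decorator that gates a tool behind presentation verification. Everything here is an illustrative stand-in; verify_presentation is a placeholder for whatever the SDK actually exposes:

```python
import functools

def verify_presentation(presentation):
    # Placeholder for the real verifier; assume it returns True for valid proofs.
    return presentation.get("valid", False)

def require_delegation(tool):
    """Wrap any tool so it refuses to run without a verified presentation."""
    @functools.wraps(tool)
    def wrapper(*args, presentation=None, **kwargs):
        if presentation is None or not verify_presentation(presentation):
            raise PermissionError(f"{tool.__name__}: no verified delegation")
        return tool(*args, **kwargs)
    return wrapper

@require_delegation
def send_invoice(customer_id, amount):
    return f"invoice sent to {customer_id} for ${amount}"

print(send_invoice("cus_42", 120, presentation={"valid": True}))
```

The same wrapper shape works whether the caller is LangChain, CrewAI, AutoGen, or a bare function: the tool never runs unless a presentation verifies.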

What happens to delegations when an engineer leaves? They die at the human's offboarding. The IdP de-provisions the human; the device key is rotated; every active delegation that human signed is invalidated within 200 ms. No service-account graveyard for the new owner to clean up six months later.

Where to start

Hands-on next: Delegation tokens explained gets you to a working integration in twelve lines; SSI without crypto headache adds the operational layer once you have the basics. Both link to working repos; clone, integrate, run the bench.

Why ZK proofs are not the slow part

The popular intuition treats ZK proofs as the heavy step in selective-disclosure flows, the cryptographic anchor everything else has to wait for. The benchmarks tell a different story. Modern ZK constructions — BBS+, Groth16, PLONK — verify in under a millisecond on commodity hardware. The slow part of selective disclosure is not proof verification; it is the prover's context-gathering and the verifier's downstream policy evaluation. Builders who assume the proof is the bottleneck and optimize it discover the latency budget was never under pressure. The lesson for integrators is that the proof-system choice is rarely the bottleneck unless the proof is generated in real time on a constrained device. Most production deployments precompute proofs and verify them at the relying party, where the millisecond budget is invisible to the user. ZK is not a luxury you cannot afford; it is a primitive whose cost sits well below the noise floor of modern integrations.

The point of cryptographic identity is not to hide. It is to disclose precisely.