The CISO's AI agent compliance stack

ISO 42001, EU AI Act, NIST AI RMF, India DPDPA. Four frameworks, one architecture. If you build to the strictest, you satisfy the rest.
The four frameworks, briefly
ISO/IEC 42001 (2023). AI Management System standard — the AI equivalent of ISO 27001. Audit-ready posture for any organization providing or deploying AI systems. Risk assessment, controls catalog, lifecycle management.
EU AI Act (2024, in force, with high-risk obligations phasing in). Risk-tiered regulation. Article 14 mandates human oversight for high-risk AI. Penalties up to €15M or 3% of global turnover for high-risk violations.
NIST AI RMF (1.0, January 2023; profiles ongoing). Voluntary US framework with four functions: Govern, Map, Measure, Manage. De facto standard for federal procurement.
India DPDPA (2023). Data protection act with consent-chain requirements that extend to AI processing. Critical for any global enterprise with an India footprint.
The overlap that matters
Each framework demands a variation on the same four primitives:
- Identity — verified human principals throughout the AI lifecycle.
- Authority — documented chains showing who authorized what.
- Action — auditable records of every decision the AI takes.
- Audit — tamper-evident, independently verifiable logs.
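A minimal sketch of how the four primitives might bind into a single record. The field names (`human_principal`, `delegation_chain`, `log_ref`) are illustrative assumptions, not a published schema:

```python
# Illustrative only: one record type carrying all four anchors.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AnchorRecord:
    human_principal: str                # Identity: the verified human behind the action
    delegation_chain: tuple[str, ...]   # Authority: who authorized whom, in order
    action: str                         # Action: what the AI actually did
    log_ref: str                        # Audit: pointer into the tamper-evident log
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

rec = AnchorRecord(
    human_principal="alice@example.com",
    delegation_chain=("alice@example.com", "agent-7"),
    action="export_quarterly_report",
    log_ref="sha256:ab12cd",
)
```

The point of the single record type: each framework's test becomes a query over one structure rather than a reconciliation across four parallel systems.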
Build to all four anchors and you satisfy each framework's specific test. This is HATI under a different vocabulary.
The unified architecture
One technical architecture, four certifications:
- Layer 1 — Verified human identity. Federate Manav with your IDP. Bind every authorized user to a cryptographic identity. Satisfies: ISO 42001 §6, AI Act Art 14, NIST Govern, DPDPA consent.
- Layer 2 — Delegation tokens for every AI action. Scope, cap, and TTL. Per-agent revocation under 200ms. Satisfies: ISO 42001 §8, AI Act Art 14, NIST Manage.
- Layer 3 — Work attestation. Every AI output stamped author/supervisor/director. Satisfies: ISO 42001 §9, AI Act Art 12 (record-keeping), NIST Measure, DPDPA processing logs.
- Layer 4 — Tamper-evident audit log. Merkle-tree backed, hash-chained, exportable. Satisfies all four.
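Layer 4's tamper evidence reduces to a familiar construction. A plain hash chain is enough to show the property (a Merkle tree adds efficient partial proofs on top); the record fields below are illustrative, not a real schema:

```python
# Sketch of a hash-chained audit log: any edit to a past entry breaks the chain.
import hashlib
import json

GENESIS = "0" * 64

def link_hash(entry: dict, prev_hash: str) -> str:
    # Canonical serialization + previous hash makes each link depend on all history.
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class AuditLog:
    def __init__(self):
        self.entries = []   # list of (entry, hash) pairs

    def append(self, entry: dict) -> str:
        prev = self.entries[-1][1] if self.entries else GENESIS
        h = link_hash(entry, prev)
        self.entries.append((entry, h))
        return h

    def verify(self) -> bool:
        # Recompute every link; a tampered entry fails to reproduce its hash.
        prev = GENESIS
        for entry, h in self.entries:
            if link_hash(entry, prev) != h:
                return False
            prev = h
        return True

log = AuditLog()
log.append({"actor": "agent-7", "action": "export_report", "supervisor": "alice"})
log.append({"actor": "agent-7", "action": "send_email", "supervisor": "alice"})
assert log.verify()
log.entries[0][0]["action"] = "delete_records"   # tamper with history...
assert not log.verify()                          # ...and verification fails
```

Exportability follows for free: the auditor needs only the entries and the recomputation rule, not trust in the system that produced them.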
The four-framework matrix
| Anchor | ISO 42001 | EU AI Act | NIST AI RMF | India DPDPA |
|---|---|---|---|---|
| Identity | §6.1, §6.2 | Art 14, Art 26 | Govern 1.4 | §7 consent |
| Authority | §8.2 | Art 14 | Govern 2.1 | §8 purpose |
| Action | §9.3 | Art 12 | Measure 2.4 | §9 retention |
| Audit | §9.1 | Art 12, Art 26 | Measure 4.3 | §10 access |
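The matrix is more useful when it is machine-readable — for generating per-framework evidence checklists, for instance. A sketch, with clause numbers copied from the table above (a starting point for counsel review, not legal advice):

```python
# The four-framework matrix as data. Keys mirror the table verbatim.
CONTROL_MAP = {
    "identity":  {"ISO 42001": ["§6.1", "§6.2"], "EU AI Act": ["Art 14", "Art 26"],
                  "NIST AI RMF": ["Govern 1.4"],  "DPDPA": ["§7 consent"]},
    "authority": {"ISO 42001": ["§8.2"],          "EU AI Act": ["Art 14"],
                  "NIST AI RMF": ["Govern 2.1"],  "DPDPA": ["§8 purpose"]},
    "action":    {"ISO 42001": ["§9.3"],          "EU AI Act": ["Art 12"],
                  "NIST AI RMF": ["Measure 2.4"], "DPDPA": ["§9 retention"]},
    "audit":     {"ISO 42001": ["§9.1"],          "EU AI Act": ["Art 12", "Art 26"],
                  "NIST AI RMF": ["Measure 4.3"], "DPDPA": ["§10 access"]},
}

def clauses_for(framework: str) -> dict:
    """For one framework, list the clauses each anchor satisfies."""
    return {anchor: row[framework] for anchor, row in CONTROL_MAP.items()}
```

`clauses_for("EU AI Act")` then yields the Act-specific checklist for an audit package, and the same call works unchanged for the other three frameworks.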
The 90-day implementation
Realistic for a Fortune 1000 company starting today:
- Days 1–30: inventory in-scope AI systems; map roles; pick one high-risk system as the pilot.
- Days 31–60: wire identity (Layer 1) federation; pilot delegation (Layer 2) on the chosen system; begin work attestation (Layer 3).
- Days 61–90: tabletop with internal audit; export sample audit logs; engage external auditor on the four-anchor format; document for ISO 42001 surveillance audit and AI Act technical file.
What you save
Most CISOs treat each framework as a separate project. Total cost: 3–5× one project. Building to a unified four-anchor architecture and mapping to each framework cuts that to roughly 1.3×. The map matters more than the build.
Common objections
Compliance teams push back with two reasonable concerns. Vendor lock-in — answered by the open-source protocol and forkable reference implementation. Audit acceptance — answered by the major auditors that have already approved the audit-trail format for SOC 2 evidence and the regulators who have reviewed the Article 14 mapping.
Frequently asked questions
What is the penalty exposure if we ignore this? Material. Under the EU AI Act, violations of high-risk obligations such as Article 14 carry fines up to €15M or 3% of global turnover, whichever is higher; the €35M / 7% tier is reserved for prohibited practices. SOC 2 audit failures jeopardize enterprise procurement. The cost of the audit-trail layer is small relative to either.
Do we need to be in the EU for this to matter? No. Article 14 applies to any AI system placed on the EU market, including non-EU vendors selling into the EU. Most US enterprises with European customers are in scope. The same controls satisfy emerging US sectoral rules and India's DPDPA.
How long does compliance take to set up? Two weeks for an instrumented stack. Most of the work is auditing the existing agent surface — what agents run, what they touch, who authorized them — not deploying the identity layer. The protocol integrates in twelve lines; the policy work takes longer.
Where to start
Pair this with the AI Act Article 14 playbook for the cross-jurisdictional view and the seven layers of trust for the audit artifact your auditors expect to see. The compliance projects we have seen succeed read all three together before scoping anything.
The integration order that keeps your CISO sane
The CISOs running the most successful Manav integrations follow the same sequence:
- Audit trail first — lowest integration cost, highest evidence-to-effort ratio, satisfies the most regulators with one shipment.
- Delegation infrastructure second — once the audit trail exists, the delegation chain becomes the natural next column.
- Witness federation third — only after the first two are in production and the team has internalized the model.
Skipping steps is the failure mode we see most often. CISOs who ship the witness federation before the audit trail end up with sophisticated cryptography that produces evidence the rest of the company cannot read. The order matters because each layer produces evidence the next layer needs. Done in sequence, the program ships in three quarters. Done in parallel, it slips into year two and the CISO loses the political coalition that funded it.
Different auditors. Different vocabulary. Same four anchors.