Manav.id
Compliance · 4 min read

NIST AI RMF — the four functions, mapped to controls


NIST AI RMF gives you four functions: Govern, Map, Measure, Manage. Here is what each looks like as concrete controls for an AI agent fleet.

Govern

Establish policies, accountability, and culture for AI risk. The agent-identity control that satisfies this: a verified human accountable for every agent, anchored in HATI L1 + L4.

Map

Identify context, capabilities, and limitations of each AI system. The control: documented scope and capabilities for every agent, enforced at HATI L2.

Measure

Analyze, assess, and benchmark AI risks. The controls: per-action telemetry and the audit log, produced by HATI L3 plus the audit trail.

Manage

Allocate resources to identified risks. The controls: revocation and a kill switch, enforced at HATI L2.

Why HATI satisfies all four

Each function asks for primitives that HATI already produces:

NIST function | Primitive needed                 | HATI layer
Govern        | Verified human accountability    | L1 + L4
Map           | Scope, capability documentation  | L2
Measure       | Action telemetry, audit log      | L3 + audit
Manage        | Revocation, kill switch          | L2 enforcement
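The mapping above can also be encoded as data, so a CI check fails the build whenever an RMF function loses coverage. A minimal Python sketch; the layer names come from the table, everything else is illustrative:

```python
# Illustrative only: the table above as a data structure, plus a check
# that every RMF function has at least one HATI layer mapped to it.
RMF_TO_HATI = {
    "Govern":  {"primitive": "verified human accountability", "layers": ["L1", "L4"]},
    "Map":     {"primitive": "scope, capability documentation", "layers": ["L2"]},
    "Measure": {"primitive": "action telemetry, audit log", "layers": ["L3", "audit"]},
    "Manage":  {"primitive": "revocation, kill switch", "layers": ["L2"]},
}

def uncovered_functions(mapping):
    """Return the RMF functions that have no HATI layer mapped."""
    return [fn for fn, ctl in mapping.items() if not ctl["layers"]]

print(uncovered_functions(RMF_TO_HATI))  # [] means all four functions are covered
```

An empty result is the property the table asserts: no RMF function without a concrete control behind it.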

The federal procurement angle

NIST AI RMF is voluntary in name but de facto required in practice for any AI system sold into US federal agencies. FedRAMP-aligned procurements increasingly cite RMF profiles. Building to the four functions early can shorten federal sales cycles by 6–12 months.

Generative AI Profile

NIST's Generative AI Profile (NIST AI 600-1) layers GenAI-specific risks onto the four functions. Most additions emphasize provenance, content authenticity, and human-in-the-loop — exactly what HATI Layer 3 (work attestation) provides. Building to HATI gives you the AI 600-1 controls for free.
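Work attestation in this sense can be sketched with standard-library primitives. HATI Layer 3's actual attestation format is not shown here, so the record shape and the shared-secret HMAC scheme below are stand-in assumptions, not the protocol's real design:

```python
import hashlib
import hmac
import json

# Hypothetical sketch: attest an agent action by MACing a canonical JSON
# encoding of it, so provenance and content authenticity can be re-verified.
def attest(action: dict, key: bytes) -> dict:
    payload = json.dumps(action, sort_keys=True, separators=(",", ":")).encode()
    return {"action": action,
            "mac": hmac.new(key, payload, hashlib.sha256).hexdigest()}

def verify(record: dict, key: bytes) -> bool:
    payload = json.dumps(record["action"], sort_keys=True,
                         separators=(",", ":")).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["mac"], expected)

rec = attest({"agent": "billing-bot", "tool": "refund", "human": "alice"}, b"demo-key")
print(verify(rec, b"demo-key"))  # True
```

Any edit to the action record after the fact breaks verification, which is the property AI 600-1's provenance additions are asking for.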

Common objections

Compliance teams push back with two reasonable concerns. Vendor lock-in: answered by the open-source protocol and the forkable reference implementation. Audit acceptance: answered by the major auditors that have already accepted the audit-trail format as SOC 2 evidence, and by the regulators who have reviewed the Article 14 mapping.

Frequently asked questions

What is the penalty exposure if we ignore this? Material. The EU AI Act's penalty regime reaches 7% of global revenue or €35M, whichever is higher, at its top tier, and Article 14 human-oversight failures sit within that regime. SOC 2 audit failures jeopardize enterprise procurement. The cost of the audit-trail layer is small relative to either.

Do we need to be in the EU for this to matter? No. Article 14 applies to any AI system placed on the EU market, including non-EU vendors selling into the EU. Most US enterprises with European customers are in scope. The same controls satisfy emerging US sectoral rules and India's DPDPA.

How long does compliance take to set up? Two weeks for an instrumented stack. Most of the work is auditing the existing agent surface — what agents run, what they touch, who authorized them — not deploying the identity layer. The protocol integrates in twelve lines; the policy work takes longer.
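For a sense of what an integration that small looks like, a wrapper of roughly that size can tag every tool call with the agent identity and the authorizing human. None of the names below are HATI's real API; this is an illustrative sketch under that assumption:

```python
import functools
import time

# Hypothetical sketch of a minimal integration: every decorated tool call
# appends an audit record naming the agent and the authorizing human.
AUDIT_LOG = []

def audited(agent_id: str, human: str):
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            AUDIT_LOG.append({"ts": time.time(), "agent": agent_id,
                              "human": human, "tool": fn.__name__})
            return fn(*args, **kwargs)
        return inner
    return wrap

@audited("billing-bot", "alice@example.com")
def issue_refund(order_id: str) -> str:
    return f"refunded {order_id}"

issue_refund("ord-42")
print(len(AUDIT_LOG))  # 1
```

The point of the sketch: the instrumentation is the easy part. Deciding which agents and humans go into those two decorator arguments is the policy work the paragraph above describes.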

Where to start

Pair this with the AI Act Article 14 playbook for the cross-jurisdictional view and the CISO compliance stack for the audit artifact your auditors expect to see. Most compliance projects we have seen succeed by reading those three together before scoping anything.

What RMF still does not require, and why it will

NIST's AI Risk Management Framework names the principles — accountability, transparency, oversight — but stops short of specifying the artifacts. That gap is intentional in version one and unsustainable in version two. Every regulator we have spoken to in the past quarter is converging on the same evidentiary expectation: a signed delegation chain naming the human upstream of every consequential agent action. The expectation is not yet in the framework. It will be. The version of RMF that ships in the next major revision is unlikely to mandate Manav by name, but it is virtually certain to require evidence that the Manav substrate produces by default. Builders who deploy ahead of the revision will find their frameworks already compliant; builders who wait will be retrofitting under deadline. The asymmetry favors the early movers, which is a category of regulatory tailwind we did not need to lobby for and have no intention of accelerating beyond what the framework itself produces.
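A signed delegation chain of that kind can be sketched as follows. The link format and the shared-secret HMAC keys are assumptions for illustration; a real deployment would use asymmetric signatures so the verifier needs no secrets:

```python
import hashlib
import hmac

# Illustrative sketch: a chain rooted at a human key, where each link
# names a principal and is signed under the previous link's MAC.
def sign_link(parent_key: bytes, principal: str) -> dict:
    mac = hmac.new(parent_key, principal.encode(), hashlib.sha256).hexdigest()
    return {"principal": principal, "mac": mac}

def verify_chain(root_key: bytes, chain: list) -> bool:
    """Walk the chain from the human root, re-deriving each link's MAC."""
    key = root_key
    for link in chain:
        expected = hmac.new(key, link["principal"].encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, link["mac"]):
            return False
        key = expected.encode()  # each link's MAC keys the next link
    return True

root = b"alice-root-key"  # stands in for the human's signing key
chain = [sign_link(root, "agent:planner")]
chain.append(sign_link(chain[0]["mac"].encode(), "agent:executor"))
print(verify_chain(root, chain))  # True
```

An auditor holding the root key can replay the whole chain; swapping any principal in the middle breaks every link downstream, which is exactly the evidentiary property described above.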

NIST gives you the four verbs. HATI gives you the cryptographic primitives that make them auditable.