HIPAA + AI agents

Clinical AI without human-in-the-loop attestation is uninsurable. HIPAA's framework — written for paper and phones — extends, with surprising cleanliness, to the agent age.
Where HIPAA already meets AI
HIPAA's Privacy and Security Rules govern who can access protected health information (PHI), under what conditions, with what audit. The Privacy Rule's "minimum necessary" standard has always been a kind of scope-restriction; the Security Rule's audit controls (45 CFR 164.312(b)) have always required reviewable records. The agent age does not require new HIPAA primitives — it requires that the existing primitives be enforced cryptographically rather than by policy and trust.
The four agent-era HIPAA gaps
Gap 1: minimum-necessary, by agent. An AI scribe with broad EHR scope violates minimum-necessary every time it reads more than the active note requires. The fix: per-agent delegation with chart-segment scope and per-action enforcement.
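Per-agent, per-action scope enforcement can be sketched as follows. This is an illustrative sketch only: the `DelegationToken` fields and chart-segment names are assumptions, not the HATI wire format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DelegationToken:
    """Hypothetical delegation issued by a named clinician to one agent."""
    agent_id: str
    clinician_npi: str             # the delegating clinician
    encounter_id: str              # scope: one patient encounter
    allowed_segments: frozenset    # scope: specific chart segments

def authorize_read(token: DelegationToken, encounter_id: str, segment: str) -> bool:
    """Minimum-necessary check, enforced per action: the agent may read a
    chart segment only if its delegation names this encounter and segment."""
    return token.encounter_id == encounter_id and segment in token.allowed_segments

# An AI scribe scoped to the active note and current meds of one encounter:
scribe = DelegationToken(
    agent_id="scribe-7",
    clinician_npi="1234567893",
    encounter_id="enc-001",
    allowed_segments=frozenset({"notes:active", "meds:current"}),
)

assert authorize_read(scribe, "enc-001", "notes:active")       # in scope
assert not authorize_read(scribe, "enc-001", "labs:historic")  # outside scope
```

The point of the per-action check is that a broad EHR session token never exists: every read is tested against the narrowest scope the clinician granted.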
Gap 2: audit attribution. HIPAA logs say "user_id 4129 read patient record." For agentic flows, the user_id is often the EHR vendor's service account. The chain back to the licensed clinician who authorized the agent's action is missing. The fix: HATI Layer 2 delegation in every agent call, Layer 3 attestation on every output (note draft, billing code, prior-auth submission).
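A minimal sketch of what a repaired audit record could look like. The field names are assumptions, not the HATI schema: the legacy service account still appears, but the event also carries the delegation back to a named clinician, with a digest for integrity.

```python
import hashlib
import json

def audit_event(service_account: str, agent_id: str,
                clinician_npi: str, action: str, resource: str) -> dict:
    """Agent-era audit record: keeps the service account that legacy logs
    recorded, and adds the chain back to the authorizing clinician."""
    event = {
        "actor": service_account,       # what HIPAA logs show today
        "agent": agent_id,              # Layer 2: which agent acted
        "delegated_by": clinician_npi,  # the missing attribution
        "action": action,
        "resource": resource,
    }
    # Content digest over the canonical JSON form of the event.
    event["digest"] = hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()
    ).hexdigest()
    return event

e = audit_event("svc-ehr-vendor", "scribe-7", "1234567893",
                "read", "enc-001/notes:active")
```

In a full implementation the digest would be a signature under the clinician's delegation key; a bare hash is shown here only to keep the sketch self-contained.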
Gap 3: BAA scope. Business Associate Agreements with AI vendors must now address agent-specific obligations: per-agent revocation, breach notification timelines for agent compromises, attestation of human supervision over outputs. Standard pre-agent BAA templates are silent on all three. Update the templates.
Gap 4: clinical responsibility. HIPAA does not specify clinical accountability for AI-assisted decisions, but the FDA's evolving guidance and medical-malpractice case law are converging on the same requirement: a licensed clinician must be cryptographically attested as responsible for the decision. Build the audit log to show this without ambiguity.
The clinical AI architecture
- Layer 1 — every clinician verified via Manav, federated with the hospital's IDP and credentialing system.
- Layer 2 — every clinical AI agent runs under a delegation token signed by a named clinician, scoped to the patient encounter.
- Layer 3 — every output (draft note, suggested order, billing code) carries author / supervisor / director attestation.
- Audit — tamper-evident log, exportable in HIPAA-required formats, with attestation chains visible.
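The tamper-evident audit layer can be sketched as a hash-chained log: each entry commits to the previous entry's hash, so any retroactive edit breaks verification from that point forward. This is a minimal illustration, not a HIPAA-mandated format; a production log would sign entries rather than merely hash them.

```python
import hashlib
import json

GENESIS = "0" * 64

class AuditLog:
    """Append-only, hash-chained audit log (tamper-evident sketch)."""

    def __init__(self):
        self.entries = []
        self._prev = GENESIS

    def append(self, event: dict) -> dict:
        entry = {"event": event, "prev": self._prev}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        self._prev = entry["hash"]
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any edited or reordered entry fails."""
        prev = GENESIS
        for e in self.entries:
            expect = hashlib.sha256(json.dumps(
                {"event": e["event"], "prev": e["prev"]},
                sort_keys=True,
            ).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expect:
                return False
            prev = e["hash"]
        return True
```

Exporting this structure to a HIPAA-reviewable format is then a serialization problem, not an evidence problem: the chain itself proves the log was not edited after the fact.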
The use cases that will drive adoption first
Three high-volume use cases will push hospitals toward HATI first: AI medical scribes (notes need clinician attestation to be billable), prior-authorization agents (payers increasingly demand attestation chains), and clinical decision support (medical-malpractice insurers reward documented human-in-the-loop, or HITL, review).
What insurers now ask
Cyber and medical-malpractice insurers today are quietly adding questionnaires that ask: how do AI agents in your organization carry human attribution? How fast can you revoke them? Insurers offering meaningfully better terms are conditioning them on cryptographic attestation chains. The actuarial argument is straightforward: provable HITL is provably underwritable.
What to do this quarter
- Inventory clinical AI agents by service line.
- Pilot HATI Layer 2 on AI scribe or prior-auth flows — smallest blast radius, clearest ROI.
- Update BAA templates with agent-specific clauses.
- Engage cyber and malpractice insurers on the new audit-log format.
Common objections
Compliance teams push back with two reasonable concerns. Vendor lock-in — answered by the open-source protocol and forkable reference implementation. Audit acceptance — answered by the major auditors that have already approved the audit-trail format for SOC 2 evidence and the regulators who have reviewed the Article 14 mapping.
Frequently asked questions
What is the penalty exposure if we ignore this? Material. EU AI Act penalties run to €35M or 7% of global annual turnover (whichever is higher) for the most serious violations, and up to €15M or 3% for non-compliance with high-risk obligations such as Article 14. SOC 2 audit failures jeopardize enterprise procurement. The cost of the audit-trail layer is small relative to either.
Do we need to be in the EU for this to matter? No. Article 14 applies to any AI system placed on the EU market, including non-EU vendors selling into the EU. Most US enterprises with European customers are in scope. The same controls satisfy emerging US sectoral rules and India's DPDPA.
How long does compliance take to set up? Two weeks for an instrumented stack. Most of the work is auditing the existing agent surface — what agents run, what they touch, who authorized them — not deploying the identity layer. The protocol integrates in twelve lines; the policy work takes longer.
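What "instrumenting an agent call" could look like is sketched below. This is hypothetical: the decorator, token shape, and log are illustrative stand-ins, not the HATI integration API.

```python
import functools
import time

def with_delegation(token: dict, log: list):
    """Hypothetical instrumentation: run an agent action under a
    delegation token and append one audit event per call."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            log.append({
                "agent": token["agent_id"],
                "delegated_by": token["clinician_npi"],
                "action": fn.__name__,
                "ts": time.time(),
            })
            return result
        return inner
    return wrap

# Usage: every call to the wrapped agent action is attributed.
events = []
token = {"agent_id": "scribe-7", "clinician_npi": "1234567893"}

@with_delegation(token, events)
def draft_note(transcript: str) -> str:
    return transcript.strip()

draft_note("  Patient presents with...  ")
```

The policy work (which clinicians may delegate what, to which agents) is the slow part; the wrapper itself is small.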
Where to start
Pair this with the AI Act Article 14 playbook for the cross-jurisdictional view and the agent identity in finance piece for the audit artifact your auditors expect to see. Most compliance projects we have seen succeed by reading all three (this one included) together before scoping anything.
Why HIPAA will not be re-written for AI
HIPAA was written before the Internet. It survived the cloud. It will survive AI agents. The reason is that HIPAA is not a technology rulebook; it is a liability rulebook. The covered entity is liable for use and disclosure regardless of the technology that performs the use. An AI agent processing PHI is not a new category in HIPAA's reading; it is a new technology under the existing category.
What changes with AI is not the rule but the evidence the rule requires. Auditors expect to see signed delegation chains, role declarations, and audit logs because the existing HIPAA standards already demand evidence of authorized access; the AI implementation simply makes that evidence harder to produce without infrastructure. Covered entities deploying AI without that infrastructure are not in violation of a new rule; they are in violation of an old one with a new implementation surface. HIPAA does not need to change. The implementation does.
HIPAA was written for paper. It survives the agent age — but only if the audit log is cryptographic.