SOC 2 for AI agents

SOC 2 reports that don't address AI agent identity will fail in the near term. Here is what auditors will ask, and how to be ready before they do.
The trust services criteria, refreshed for agents
SOC 2 evaluates against five trust services categories: Security, Availability, Processing Integrity, Confidentiality, and Privacy. Each maps to specific control points. The agent era extends each into territory the original criteria did not anticipate.
Security — agent access controls
Auditors will ask: how do you grant, monitor, and revoke access for non-human identities? The expectation: per-agent delegation tokens with scope, magnitude, and TTL. Service-account answers ("agent uses M2M token rotated quarterly") will draw deficiencies. Show: cryptographic delegation, sub-200ms revocation, scope-violation alerting.
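A minimal sketch of what that expectation looks like in code. This is a hypothetical illustration, not a real protocol: the HMAC key, token shape, and status strings are assumptions, and a production system would use asymmetric signatures and a distributed revocation list.

```python
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # hypothetical; real deployments use asymmetric keys

def issue_token(agent_id, scopes, ttl_seconds):
    """Mint a per-agent delegation token with explicit scope and TTL."""
    claims = {
        "sub": agent_id,
        "scopes": sorted(scopes),
        "exp": time.time() + ttl_seconds,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

REVOKED = set()  # revocation list consulted on every single check

def check(token, needed_scope):
    """Verify signature, revocation, expiry, and scope on each agent action."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(token["sig"], expected):
        return "bad-signature"
    if token["claims"]["sub"] in REVOKED:
        return "revoked"
    if time.time() > token["claims"]["exp"]:
        return "expired"
    if needed_scope not in token["claims"]["scopes"]:
        return "scope-violation"  # this is the event that should alert
    return "ok"

tok = issue_token("billing-agent-7", {"invoices:read"}, ttl_seconds=900)
print(check(tok, "invoices:read"))   # ok
print(check(tok, "invoices:write"))  # scope-violation
REVOKED.add("billing-agent-7")
print(check(tok, "invoices:read"))   # revoked
```

The point of the sketch is the check order: revocation is consulted on every call, so adding an agent to the revocation set takes effect on its next action rather than at the next token rotation.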
Availability — agent fleet observability
Can you observe agent health, detect anomalies, and respond? SOC 2 has always asked this for systems; now it extends to agent populations. Show: real-time agent telemetry, behavioral baselines, automated isolation on drift.
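One way to make "behavioral baselines, automated isolation on drift" concrete is a simple statistical gate on per-agent activity. The z-score threshold and the per-minute action counts below are illustrative assumptions, not prescribed values.

```python
import statistics

def baseline(history):
    """Mean and population stdev of an agent's per-minute action counts."""
    return statistics.mean(history), statistics.pstdev(history)

def should_isolate(history, current, z_threshold=3.0):
    """Flag an agent for automated isolation when activity drifts past baseline."""
    mean, sd = baseline(history)
    if sd == 0:
        return current != mean
    return abs(current - mean) / sd > z_threshold

normal = [12, 15, 11, 14, 13, 12, 16, 14]
print(should_isolate(normal, 15))  # False: within the behavioral baseline
print(should_isolate(normal, 90))  # True: isolate and alert
```

Real fleets would baseline per tool, per scope, and per time of day; the control an auditor looks for is that the isolation decision is automated and logged, not that the statistics are sophisticated.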
Processing integrity — author / supervisor / director
The newest territory. Auditors will ask whether AI-produced outputs in your service have provenance. The answer SOC 2 wants in the near term: every AI-influenced output carries an attestation indicating which human authored, supervised, or directed it (HATI Layer 3). Without this, processing integrity for agentic flows is unauditable.
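A sketch of what an author/supervisor/director attestation record might contain. The field names and the `OutputAttestation` type are hypothetical; the substance is that every AI-influenced output is hashed and bound to a named human in one of the three roles.

```python
import hashlib
import json
from dataclasses import asdict, dataclass

ROLES = {"author", "supervisor", "director"}

@dataclass
class OutputAttestation:
    output_hash: str  # content hash of the AI-influenced artifact
    role: str         # "author" | "supervisor" | "director"
    human_id: str     # the accountable person
    agent_id: str     # the agent that produced or assisted the output

def attest(output_text, role, human_id, agent_id):
    """Bind an output to a human in a named oversight role."""
    if role not in ROLES:
        raise ValueError(f"role must be one of {ROLES}")
    return OutputAttestation(
        output_hash=hashlib.sha256(output_text.encode()).hexdigest(),
        role=role,
        human_id=human_id,
        agent_id=agent_id,
    )

att = attest("Q3 variance summary ...", "supervisor", "jane@corp", "fin-agent-2")
print(json.dumps(asdict(att), indent=2))
```

An auditor sampling outputs can then verify provenance by recomputing the hash and resolving `human_id`, rather than interviewing the team about who was "in the loop."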
Confidentiality — selective disclosure
Customer data shared with agents must respect confidentiality classifications. The agentic gap: an agent with broad scope can leak across classifications without obvious failure. The control: selective disclosure at the delegation token, plus per-tool data classification enforcement.
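The per-tool enforcement described above can be sketched as a classification ceiling carried in the delegation token and applied before any record reaches the agent. The four-level lattice is an assumed example classification scheme.

```python
# Assumed classification lattice, lowest to highest sensitivity.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def may_disclose(token_max_class, record_class):
    """An agent may only see records at or below its token's ceiling."""
    return LEVELS[record_class] <= LEVELS[token_max_class]

def filter_for_agent(rows, token_max_class):
    """Per-tool enforcement: filter rows before the agent ever sees them."""
    return [r for r in rows if may_disclose(token_max_class, r["class"])]

rows = [
    {"id": 1, "class": "internal"},
    {"id": 2, "class": "restricted"},
]
print(filter_for_agent(rows, "confidential"))  # only id 1 survives
```

The design choice worth noting: filtering happens in the tool layer, not in the prompt, so a broad-scope agent cannot leak across classifications even if its instructions are wrong.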
Privacy — consent chains
Personal data processed by agents must remain within consent boundaries. Tied to DPDPA, GDPR, CCPA. The control: consent expressed as a delegation token, scope enforced per agent action, withdrawal propagated through revocation.
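Consent-as-delegation reduces to a small invariant: an agent action succeeds only while a purpose-scoped consent exists, and withdrawal removes it immediately. A minimal in-memory sketch, with hypothetical subject and purpose names:

```python
consents = {}  # data_subject -> set of permitted processing purposes

def grant(subject, purposes):
    """Record consent as a purpose-scoped delegation."""
    consents[subject] = set(purposes)

def withdraw(subject):
    """Withdrawal propagates as revocation: future agent actions fail closed."""
    consents.pop(subject, None)

def agent_may_process(subject, purpose):
    """Checked on every agent action touching the subject's personal data."""
    return purpose in consents.get(subject, set())

grant("user-42", {"support", "billing"})
print(agent_may_process("user-42", "billing"))    # True
print(agent_may_process("user-42", "marketing"))  # False: out of scope
withdraw("user-42")
print(agent_may_process("user-42", "billing"))    # False: revoked
```

The "fail closed" default is the part that matters for DPDPA/GDPR/CCPA withdrawal rights: absence of a consent record denies the action, rather than the action proceeding until a block list catches up.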
The new control points to add
- CC6.x — non-human identity lifecycle (creation, monitoring, decommissioning).
- CC7.x — agent action telemetry and alerting.
- CC8.x — delegation token issuance, scope, revocation.
- P1.x / P2.x — consent-as-delegation for personal data.
- PI1.x — author/supervisor/director attestation.
What "Type II" looks like with agents
Type II covers operational effectiveness over time. For agent flows: 6+ months of audit logs showing delegation issuance and revocation, sample population reviews, evidence of incident response (including agent-specific incidents), and walkthrough of the kill-switch test. If your kill switch has never been tested in production, the auditor will note it.
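The sample-population review above amounts to reconciling the delegation log: every issued token should terminate in a revocation or a recorded expiry. A toy reconciliation over a hypothetical log format:

```python
from datetime import datetime

# Hypothetical audit-log entries spanning the Type II review period.
log = [
    {"event": "issue",  "token": "t1", "at": datetime(2025, 1, 3)},
    {"event": "revoke", "token": "t1", "at": datetime(2025, 1, 3)},
    {"event": "issue",  "token": "t2", "at": datetime(2025, 2, 9)},
]

def unreconciled_tokens(entries):
    """Tokens issued with no matching revocation: auditor follow-up items."""
    issued = {e["token"] for e in entries if e["event"] == "issue"}
    revoked = {e["token"] for e in entries if e["event"] == "revoke"}
    return issued - revoked

print(unreconciled_tokens(log))  # {'t2'}
```

In a real engagement the auditor samples from this population rather than reviewing it exhaustively, but the reconciliation itself is what turns six months of logs into evidence.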
The 90-day prep
- Inventory agents and map to in-scope systems.
- Wire delegation tokens with revocation under 200ms.
- Begin Layer 3 work attestation for AI-influenced outputs.
- Conduct one tabletop kill-switch exercise; document in audit trail.
- Update control narratives. Have your auditor preview.
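The kill-switch exercise in the list above can be rehearsed with a timed drill that produces its own audit artifact. Everything here is a stand-in: the in-memory revocation set, the fleet size, and the drill function are assumptions for illustration, since a real drill measures propagation across live enforcement points.

```python
import time

def revoke(agent_id, revocation_set):
    """Stand-in for propagating a revocation to an enforcement point."""
    revocation_set.add(agent_id)

def timed_kill_switch_drill(agent_ids):
    """Revoke the whole fleet and record propagation latency for the audit trail."""
    revoked = set()
    start = time.perf_counter()
    for agent_id in agent_ids:
        revoke(agent_id, revoked)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return revoked, elapsed_ms

fleet = [f"agent-{i}" for i in range(1000)]
revoked, ms = timed_kill_switch_drill(fleet)
print(f"{len(revoked)} agents revoked in {ms:.2f} ms")
```

Recording the measured latency alongside the drill date is exactly the artifact that answers the auditor's "has the kill switch ever been tested" question.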
Common objections
Compliance teams push back with two reasonable concerns. Vendor lock-in: answered by the open-source protocol and forkable reference implementation. Audit acceptance: answered by the major auditors that have already accepted the audit-trail format as SOC 2 evidence and the regulators who have reviewed the Article 14 mapping.
Frequently asked questions
What is the penalty exposure if we ignore this? Material. The EU AI Act's penalty provisions allow fines of up to €35M or 7% of global annual turnover, whichever is higher, for the most serious violations, and Article 14's human-oversight requirements are part of the obligations those penalties back. SOC 2 audit failures jeopardize enterprise procurement. The cost of the audit-trail layer is small relative to either.
Do we need to be in the EU for this to matter? No. Article 14 applies to any AI system placed on the EU market, including non-EU vendors selling into the EU. Most US enterprises with European customers are in scope. The same controls satisfy emerging US sectoral rules and India's DPDPA.
How long does compliance take to set up? Two weeks for an instrumented stack. Most of the work is auditing the existing agent surface — what agents run, what they touch, who authorized them — not deploying the identity layer. The protocol integrates in twelve lines; the policy work takes longer.
Where to start
Pair this with the AI Act Article 14 playbook for the cross-jurisdictional view and the CISO compliance stack for the audit artifact your auditors expect to see. Most compliance projects we have seen succeed by reading all three together before scoping anything.
How auditors are reading agent activity now
SOC 2 Type II auditors are quietly retraining on agent activity. The training material is not yet public, but the pattern in fieldwork is consistent. Auditors are asking three new questions in the access-control section: who authorized the agent, what scope did it operate under, and where is the evidence? Organizations that answer with screenshots of dashboard configurations are getting follow-up questions. Organizations that answer with signed delegation chains are getting clean opinions. The audit is not yet failing for organizations without delegation chains, but the failure rate is rising quarter over quarter as auditors get more comfortable with the new question category. We expect SOC 2 attestations to silently begin requiring delegation evidence in the next major TSC revision, with no explicit framework change but a strong informal expectation. The buyers who deploy delegation infrastructure ahead of that shift produce evidence on day one. The buyers who wait produce evidence after a finding.
The old SOC 2 was about humans accessing systems. The new SOC 2 is about humans authorizing agents.