The State of Agent Identity

A preview of our annual report. The 60-page version ships in Q4. Here are the headline numbers — assembled from public research, design-partner telemetry, and an original survey of 312 security leaders.
The headline numbers
- 250,000 — average non-human identities per enterprise (NHI Reality Report).
- 78% — share of enterprise AI teams reporting at least one MCP-backed agent in production, up from 31% a year earlier.
- 91% — US hiring managers who encountered or suspected AI-generated interview answers.
- $501M — recorded recruitment-scam losses through this quarter.
- 10,000+ — active public MCP servers reported by Anthropic.
- 97M — monthly Python and TypeScript MCP SDK downloads.
- 337 → 130,000 — agents on ERC-8004 in the first ten weeks (a 385× increase).
- Today — EU AI Act Article 14 becomes enforceable.
- 78% — organizations without formal policies for creating or decommissioning AI agent identities.
- 91% — security professionals expecting explosive growth in AI-generated identities.
The five themes
1. The 100:1 inversion is real. Non-human identities now outnumber humans 100:1 in the average enterprise. Most IAM stacks were architected before this inversion and have not adapted.
2. MCP adoption inflected. Anthropic's donation of MCP to the Linux Foundation's Agentic AI Foundation — co-founded with Block, OpenAI, and supported by Google, Microsoft, AWS, Cloudflare — solidified the standard. Enterprise adoption followed the cross-vendor signal.
3. Worldcoin's AgentKit (today) validated the agent-identity thesis. The launch with Coinbase's x402 payment protocol gave the market its first widely shipped Layer 2 (delegation) primitive from a consumer-identity vendor. The category is no longer speculative.
4. Article 14 is the forcing function. EU AI Act Article 14, now enforceable, drives compliance budgets. Surveys of 312 security leaders show 64% adding line items specifically for "agent-identity controls" in current fiscal plans.
5. Hiring fraud is now infrastructure-scale. The DOJ's recent actions across 29 laptop farms in 16 US states, plus FBI documentation of 300+ companies that hired North Korean operatives using stolen identities, mark the transition from individual fraud to coordinated programs.
What the survey added
Our supplemental survey of 312 security leaders (conducted this quarter) found:
- 74% have experienced at least one incident attributable to "the agent did it" — meaning an action whose human authorization could not be cleanly traced.
- 62% are evaluating cross-platform delegation primitives (vs platform-locked).
- 41% have already inquired with their cyber insurer about agent-specific exclusions.
- Only 18% can produce, today, an audit log artifact that satisfies the four-anchor test.
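The four-anchor gap in that last finding can be made concrete. A minimal sketch, assuming for illustration that a traceable action must carry a human principal, an agent identity, a delegation scope, and a timestamp (the anchor names here are ours, chosen to match the "the agent did it" failure mode above, not a published standard):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical anchor fields; the names are illustrative, not a standard.
@dataclass
class AuditRecord:
    human_principal: Optional[str]   # who authorized the action
    agent_identity: Optional[str]    # which agent performed it
    delegation_scope: Optional[str]  # what the agent was allowed to do
    timestamp: Optional[str]         # when it happened (ISO 8601)

def passes_four_anchor_test(rec: AuditRecord) -> bool:
    """An action is cleanly traceable only if all four anchors are present."""
    return all([rec.human_principal, rec.agent_identity,
                rec.delegation_scope, rec.timestamp])

# A record missing any anchor is an untraceable action: "the agent did it."
complete = AuditRecord("alice@example.com", "agent-42",
                       "invoices:read", "2025-01-15T09:30:00Z")
orphaned = AuditRecord(None, "agent-42",
                       "invoices:read", "2025-01-15T09:30:00Z")
```

The point of the sketch is the predicate, not the field names: whatever the four anchors are in your environment, the 18% figure measures whether a log line exists on which that predicate returns true.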
What ships in Q4
The 60-page report adds vertical breakdowns (finance, healthcare, government, software), benchmark performance metrics across the top 10 agent-identity vendors, and a forecast for enforcement actions. Free download for design partners; gated for the broader market.
Common objections
We take two methodological objections seriously. Selection bias in the respondent pool, which we address by reporting the industry/size mix and weighting where appropriate. Vendor incentive to inflate the gap, which we address by publishing the raw data and source code so anyone can re-run the model with assumptions friendlier to inaction.
Frequently asked questions
How is the methodology auditable? The data, the analysis, and the code are published. Every chart can be reproduced from source. We name our partners (with their permission) and disclose every conflict of interest at the top of the report.
What are the confidence intervals on the headline numbers? Reported per metric in the gated PDF. The 4.6× year-over-year delta on hiring fraud, for instance, has a 95% CI of 3.8× to 5.4×; the median time-to-detection has a CI of 9.2 to 13.1 months.
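For readers who want to sanity-check interval claims like these against the published data, a percentile bootstrap is one standard way to put a 95% CI on a ratio. A sketch on synthetic data (the per-firm loss figures below are invented for illustration; the report's intervals come from the actual survey data):

```python
import random

random.seed(0)

# Synthetic per-firm loss figures for two consecutive years (illustrative only).
losses_prev = [random.lognormvariate(0, 0.5) for _ in range(300)]
losses_curr = [x * 4.6 * random.lognormvariate(0, 0.3) for x in losses_prev]

def yoy_multiplier(curr, prev):
    """Point estimate of the year-over-year delta."""
    return sum(curr) / sum(prev)

def bootstrap_ci(curr, prev, n_boot=2000, alpha=0.05):
    """Percentile bootstrap: resample firms with replacement, recompute the ratio."""
    n = len(curr)
    stats = []
    for _ in range(n_boot):
        idx = [random.randrange(n) for _ in range(n)]
        stats.append(yoy_multiplier([curr[i] for i in idx],
                                    [prev[i] for i in idx]))
    stats.sort()
    return stats[int(n_boot * alpha / 2)], stats[int(n_boot * (1 - alpha / 2))]

point = yoy_multiplier(losses_curr, losses_prev)
lo, hi = bootstrap_ci(losses_curr, losses_prev)
```

Because we publish the raw data, the same few lines re-run against the real samples reproduce (or challenge) the intervals in the PDF.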
Why publish numbers your competitors will use? Because the category needs them. The longer the only data is vendor anecdote, the longer the buyer's procurement team waits. We benefit when the category is sized; sizing requires shared numbers.
Where to start
The dataset opens at the 100:1 ratio. The control set — which infrastructure changes the curve — is in the vendor map. Re-fit the model with your own assumptions; we publish the source.
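As a small example of re-fitting with your own assumptions, the ERC-8004 figure above (337 → 130,000 agents in ten weeks) implies a constant weekly growth factor you can recompute and then project under a more conservative assumption of your choosing:

```python
# Source figures from the headline numbers: 337 agents → 130,000 in ten weeks.
start, end, weeks = 337, 130_000, 10

# Implied constant weekly growth factor g, from end = start * g**weeks.
g = (end / start) ** (1 / weeks)  # roughly 1.8, i.e. ~80% week-over-week

def project(start_count, weekly_factor, n_weeks):
    """Project forward assuming a constant exponential curve."""
    return start_count * weekly_factor ** n_weeks

# Swap in your own, more conservative assumption for the factor:
conservative = project(end, 1.10, 12)  # 10% weekly growth for 12 more weeks
```

The choice of factor is the whole argument; the published source lets you make that choice explicit instead of inheriting ours.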
What the year ahead looks like, by indicator
Three indicators carry the most signal for the year ahead.
- Regulator artifact specificity: the rate at which AI regulations name specific evidence forms rather than general principles. That rate is rising, and every increment names artifacts the substrate produces.
- Enterprise procurement velocity for delegation-grade products, which has compressed from twelve months to roughly two quarters in the past year as compliance procurement lanes opened.
- The rate at which incident post-mortems cite delegation gaps as root causes, which has tripled in the public record and is rising faster in the private one.
None of these indicators triggers sudden adoption; together they compound into a procurement environment that did not exist three years ago. The companies positioning against the indicators are growing. The companies positioning against the legacy IAM market are flat. The next year will widen the gap between the two postures.
The category went from "interesting research direction" to "compliance line item" in 12 months.