The 100:1 ratio
If your IAM stack was designed before 2023, it is already managing the wrong species. Non-human identities now outnumber humans 100 to 1 in the average enterprise, and the curve is still steepening. This is the most under-discussed number in enterprise security, and the structural fact that makes Human-Agent Trust Infrastructure (HATI) inevitable.
The number
The NHI Reality Report puts the average enterprise at over 250,000 non-human identities across cloud and SaaS environments. Most enterprises have 2,500 employees or fewer. The arithmetic is uncomfortable: there are roughly 100 service accounts, machine credentials, API tokens, webhook bots, and AI agent identities for every actual human in the building.
This isn't a forecast. It is what your security team can already see if they run an honest inventory across AWS, Azure, GCP, Snowflake, GitHub, Salesforce, Datadog, Slack, and the dozen-plus SaaS connectors that make up a typical 2026 stack. Most teams have been quietly afraid to do that inventory. The number, when it appears, is hard to unsee — and harder to explain to a board that thinks "identity" still means people.
Two larger framings make the local number even worse:
- The internet-wide ratio is heading toward 1,000:1 within five years if the MCP server count keeps its current doubling cadence and even a third of US enterprises adopt agentic workflows (Gartner's mid-band forecast).
- The "human" side of the ratio is structurally bounded — the world has eight billion humans and that number is growing roughly 1% per year. The "machine" side is bounded only by compute and capital, both of which are growing much faster.
The ratio is not a temporary spike. It is the new floor.
How we got here — three forces, stacked
The 100:1 ratio did not arrive in a single year; it accumulated as three forces compounded on one another. Each was rational on its own; the combination produced the asymmetry.
Force 1 — cloud sprawl. Microservices and infrastructure-as-code multiplied service accounts at a pace nobody planned for. Each new microservice spawned 5–20 NHIs to talk to its dependencies; most were never decommissioned even after the microservice itself was retired. A typical enterprise's NHI inventory is 30–60% "ghost" credentials — accounts that still exist, still hold permissions, and have not been touched by a human in 18+ months.
Force 2 — SaaS proliferation. The average company now runs 200+ SaaS apps. Each integration creates new tokens, webhooks, OAuth grants, and bot users. Each carries permissions someone, sometime, granted — usually under time pressure, usually with the broadest available scope. The integration graph is denser than the org chart and changes ten times as often.
Force 3 — agent explosion. Then AI agents arrived. Agents on the ERC-8004 standard alone grew from 337 to nearly 130,000 in the first ten weeks of 2025 — a 385× increase in 70 days. MCP server adoption hit 78% of enterprise AI teams in Q1 2026, up from 31% a year earlier. Each MCP-backed agent typically holds 3–7 separate credentials across the tools it integrates with, and modern agent frameworks freely spin up sub-agents within sub-agents — each of which inherits or re-derives credentials.
Stack all three and the 100:1 ratio is the conservative estimate for any company that has invested seriously in AI since 2024. The leading-edge ratio at AI-native firms is already 300:1 to 500:1.
The four practical failures the ratio produces
The ratio matters because every IAM control built between 1995 and 2023 was designed for the inverse ratio. Login flows assume one human per session, with the occasional service account on the side. Audit logs are indexed by user. Anomaly detection trains on user-shaped behaviour. Help desks rotate human passwords. None of these scale, or even make sense, at 100:1, and the failures are no longer hypothetical.
Failure 1 — permission accretion. Industry surveys found 78% of organisations lack formal policies for creating or decommissioning AI agent identities. Permissions accumulate; nothing prunes them. The result is the IAM equivalent of zombie processes: thousands of credentials with active access nobody can justify, ready to become an incident the day one of them is harvested.
Failure 2 — audit illegibility. When 99% of activity is non-human, "the user did it" loses meaning. Forensic investigations stall on the question that should be the easiest to answer: which human was behind which token at which moment? The 100:1 ratio makes the answer combinatorial. Without a delegation chain at issuance time, that chain cannot be reconstructed at audit time.
Failure 3 — insurance opacity. Cyber insurers cannot price what they cannot audit. Premiums are rising, coverage is shrinking, and several major carriers now exclude losses tied to "autonomous agent operation" — language that was unimaginable in policies written in 2022. The exclusion will spread as more loss data accumulates, and the only counter is a cryptographic accountability chain regulators and underwriters can verify.
Failure 4 — regulatory mismatch. Article 14 of the EU AI Act demands human oversight of high-risk systems "during the period in which they are in use." The current 100:1 ratio means the human end of every accountability chain is buried under machine activity. You cannot satisfy a regulator with a screenshot of a Slack message that says "I approved this." You can satisfy one with a signed delegation token whose chain terminates in a verified human identity.
The compounding problem
If MCP server counts continue their 2025–2026 doubling trajectory, and if even half of US enterprises adopt agentic workflows over the next 24 months, the 100:1 ratio compounds to something like 1,000:1 by 2028. At that point, the question stops being "how do we manage the agents?" and becomes "how does anything stay attributable to a human at all?"
That is not a rhetorical question. Insurers, regulators, courts, journalists, and shareholders will all be asking it — and asking it of specific named CISOs and CIOs. The first wave of enforcement actions and class-action complaints will land on companies that cannot reconstruct the human-agent chain on demand. The cost of being wrong about this is no longer "embarrassing"; it is the kind of cost that ends careers.
The architectural response
HATI is the architectural answer to the 100:1 ratio. The five layers — verified human identity, delegation, work attestation, trust score, and settlement — are designed from the start for an environment where 99% of the network's identities are non-human and the 1% that matters is the human root.
The conceptual shift is from "manage humans, tolerate machines" to "anchor every machine to a human." That inversion is the new category, and it is what distinguishes infrastructure from rebadged IAM. Three concrete things change when you adopt the inverted model:
- Issuance moves upstream. Agents and service accounts are minted under a delegation token signed by a Layer 1-verified human. No human signature, no agent. The 250,000 NHIs become a tree, not a swamp.
- Audit becomes O(1) per action. Every action carries a chain back to a named human in a single signed payload. Forensics stop being a multi-week archaeology project.
- Revocation becomes meaningful. When a person leaves the company or a token is compromised, every downstream agent's authority can be invalidated in seconds, with cryptographic evidence the revocation propagated.
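The tree-not-swamp structure behind all three bullets can be sketched as a plain delegation tree. The functions below are illustrative, not any vendor's API; the useful property is that revoking a principal walks the tree and invalidates every downstream agent in one pass:

```python
from collections import defaultdict

# Hypothetical delegation tree: parent -> identities minted under its token.
children: defaultdict[str, list[str]] = defaultdict(list)

def delegate(parent: str, child: str) -> None:
    children[parent].append(child)

revoked: set[str] = set()

def revoke(principal: str) -> set[str]:
    """Revoke a principal and all downstream agents; return every invalidated identity."""
    stack, hit = [principal], set()
    while stack:
        p = stack.pop()
        if p in hit:
            continue
        hit.add(p)
        stack.extend(children[p])  # descend into sub-agents, sub-sub-agents, ...
    revoked.update(hit)
    return hit

delegate("alice@corp", "agent-7")
delegate("agent-7", "sub-agent-7a")
delegate("agent-7", "sub-agent-7b")
print(sorted(revoke("alice@corp")))  # ['agent-7', 'alice@corp', 'sub-agent-7a', 'sub-agent-7b']
```

A real system would attach signed revocation evidence to each invalidated identity rather than mutate an in-memory set, but the propagation shape is the same.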
Vendors selling "NHI management" without a Layer 1 (verified human) and Layer 2 (delegation) are managing the symptom — the credential count — without addressing the cause: the missing human anchor. The repricing is already underway in procurement processes; expect it to accelerate as the first Article 14 enforcement actions hit the news.
What to do this quarter
The 100:1 ratio is not something to manage around. It is the structural fact every CISO and CIO will be asked about in their next board meeting. Five concrete steps for the next ninety days:
- Run the inventory. Count NHIs across cloud, SaaS, and agent platforms. Be ready for the number — and have the framing ready before you walk into the meeting where you present it.
- Find the orphans. Identify NHIs whose human principal cannot be named. Treat each as a vulnerability, not a curiosity. The orphan count is your highest-leverage risk metric for the next 12 months.
- Pilot delegation. Pick one agent or one MCP server and route its actions through a Manav delegation chain. Watch what changes in the audit log. The change is dramatic and it is the demo that wins board buy-in.
- Run the Article 14 walkthrough. For each high-risk AI use case, simulate the audit a regulator would perform. Document what your current stack can and cannot prove. The gaps are your roadmap.
- Pick a HATI architecture. Build, buy single-layer point solutions, or adopt a full-stack vendor. The economics have already broken in favour of buy for all but the largest organisations, and the cost of waiting compounds quarterly.
If your IAM was built before 2023, it is already managing the wrong species.
The 100:1 ratio is not a problem to live with. It is the structural inversion that makes Human-Agent Trust Infrastructure inevitable, and the first generation of CISOs to act on it will be the ones who stop spending their careers explaining audit logs they cannot reconstruct.