
The cost of unverified AI


An annualized estimate of identity-shaped AI exposure for your company. Plug in employees, agents per human, and industry. Read out an honest range. The math is published; you can argue with it.


How the math works

Annual exposure ≈ employees × agents-per-human × actions-per-agent-per-day × 240 working-days × industry-failure-rate × value-per-action. Industry failure rate is the median rate at which a Manav-instrumented agent action would have been refused or escalated, drawn from telemetry across 28 design partners. The default assumes 60 actions per agent per day; tune it in the source if your population differs.
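
To make the arithmetic concrete, here is a minimal sketch of the formula in Python. The function mirrors the published formula term for term; the failure-rate and value-per-action defaults are placeholders for illustration, not the telemetry-derived values, and the parameter names are ours.

```python
WORKING_DAYS = 240  # annualization constant from the published formula

def annual_exposure(
    employees: int,
    agents_per_human: float,
    actions_per_agent_per_day: float = 60.0,  # model default; tune if your population differs
    industry_failure_rate: float = 0.001,     # placeholder -- use the published telemetry value
    value_per_action: float = 50.0,           # placeholder -- average dollars at stake per action
) -> float:
    """Annualized identity-shaped AI exposure:
    employees x agents-per-human x actions/day x 240 days x failure rate x value/action."""
    return (
        employees
        * agents_per_human
        * actions_per_agent_per_day
        * WORKING_DAYS
        * industry_failure_rate
        * value_per_action
    )

print(f"${annual_exposure(2_000, 3.0):,.0f}")  # -> $4,320,000 with the placeholders above
```

With 2,000 employees, 3 agents per human, and the placeholder rates, the sketch reads out $4.32M a year; swap in the published industry rate before you take the number anywhere.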

What the number includes

Direct losses (fraud, mis-pricing, mis-routing, mis-priced refunds), regulatory fines (EU AI Act Article 14, GDPR Article 22, sector-specific rules), and remediation labor (incident response time, customer-relations recovery). It is a loss-day-equivalent, not a single-incident number; small daily costs aggregate.
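
One way to read "loss-day-equivalent" is to fold the three included categories into the value-per-action term of the formula above. The split below is a hypothetical illustration of that folding, not published data.

```python
# Hypothetical decomposition of value_per_action into the three included
# categories (illustrative numbers, not published telemetry):
direct_loss = 30.0        # fraud, mis-pricing, mis-routing, mis-priced refunds
fine_expectation = 12.0   # pro-rated regulatory-fine exposure per failed action
remediation_labor = 8.0   # incident-response time, customer-relations recovery

value_per_action = direct_loss + fine_expectation + remediation_labor  # 50.0
```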

What the number excludes

Catastrophic-tail events (a $25M deepfake call, a single major SEC settlement, a Joint Commission revocation). These are real and we have written about them; they do not belong in an annualized expected-value estimate because they are bimodal: most years zero, occasional years enormous.

How to use it

Compare the annualized exposure number to your identity-infrastructure budget. If the ratio is above 4×, the budget is under-funded. If it is below 1×, you are probably over-investing for your industry, or you have already integrated identity controls. Use the number to start a conversation with finance, not to win one.
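
A worked illustration of the ratio test, using the thresholds above (the exposure and budget figures are hypothetical):

```python
def budget_verdict(annual_exposure_usd: float, identity_budget_usd: float) -> str:
    """Apply the 4x / 1x rule of thumb from this section."""
    ratio = annual_exposure_usd / identity_budget_usd
    if ratio > 4:
        return f"{ratio:.1f}x -- budget is under-funded"
    if ratio < 1:
        return f"{ratio:.1f}x -- likely over-investing, or already integrated"
    return f"{ratio:.1f}x -- within the defensible band"

print(budget_verdict(4_320_000, 600_000))  # -> 7.2x -- budget is under-funded
```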

Honest disclosure

The model is a Manav model. We are an interested party. We publish the assumptions and the source so you can re-fit them. If you fit the model with assumptions friendlier to inaction and still see a defensible exposure number, the case is the same.

Common objections

Two methodological objections we take seriously: selection bias in the respondent pool, addressed by reporting the industry and size mix and weighting where appropriate; and vendor incentive to inflate the gap, addressed by publishing the raw data and source code so anyone can re-run the model with assumptions friendlier to inaction.

Frequently asked questions

How is the methodology auditable? The data, the analysis, and the code are published. Every chart can be reproduced from source. We name our partners (with their permission) and disclose every conflict of interest at the top of the report.

What are the confidence intervals on the headline numbers? They are reported per metric in the gated PDF. The 4.6× year-over-year delta on hiring fraud, for instance, has a 95% CI of 3.8× to 5.4×; the median time-to-detection has a CI of 9.2 to 13.1 months.

Why publish numbers your competitors will use? Because the category needs them. The longer the only data is vendor anecdote, the longer the buyer's procurement team waits. We benefit when the category is sized; sizing requires shared numbers.

Where to start

The dataset opens at the Agent Density Index. The control set (which infrastructure changes the curve) is in the State of Agent Identity 2026. Re-fit the model with your own assumptions; we publish the source.

Adjacent reading

For the broader sizing view, see the State of Agent Identity. For the cost-of-unverified-AI calculator, see the calculator. For the agent-density curve, see the Agent Density Index. The three together let a buyer or investor size the category from public artifacts.

What the calculator does not capture

The model is an annualized expected-value estimate; three categories of cost sit outside it. Tail-risk events — a single $25M deepfake call, a Joint Commission revocation, an SEC settlement — are bimodal: zero in most years, enormous in the year they hit. Talent cost — engineers spending 15% of their week on incident response is a real number that does not appear on any P&L until you measure it. Brand cost — the customer who does not renew because of a story that ran in the trades does not announce why.

The estimator is conservative on purpose. The case for investment usually clears the bar even with the tail and the talent and the brand all set to zero.

Where the cost actually lands

Most analyses of the cost of unverified AI focus on the headline incidents: the wire fraud, the contract repudiation, the regulator fine. The headlines are real, and they understate the total cost by roughly an order of magnitude.

The dominant cost is in the everyday remediation work: the contracts re-papered when authority cannot be established, the audits extended when agent activity cannot be traced, the procurement deals delayed when security questionnaires cannot be answered. None of these costs hit a single line item; they distribute across legal, compliance, finance, and operations. The aggregate, across the sample we studied, is roughly four to seven percent of revenue in agent-heavy organizations.

Investors and analysts who underwrite the cost only against the headlines are mismeasuring the threat. The substrate addresses the headline cost and the distributed cost in the same architecture, which is why the financial case is dominated by the second category, not the first.

If the budget you allocate to identity is smaller than the loss the model predicts, you are not under-spending. You are pre-committed to the loss.