Hiring in the next year: what 50 Heads of Talent told us about AI verification

Over a two-month window we interviewed 50 Heads of Talent at companies ranging from Series-C startups to Fortune 500 firms. Same nine questions. The picture that emerges is starker than the one industry conferences are willing to put on stage.
Headline numbers
91% have detected AI-generated answers in interviews in the last 12 months, up from 47% the prior year. 62% have hired at least one candidate they later believed was a remote impersonation; the median number of confirmed cases per company is 1.4. 78% lack a formal AI agent identity policy for the candidates they hire — meaning the candidate gets through the door before anyone has decided what their AI use will look like inside.
What's working, what isn't
The interview techniques that respondents trusted most were the ones they could not buy from a vendor: live coding with a hands-on-camera shot, recall of three-month-old technical context, and asking the candidate to summarize the role in their own words. Vendor tools (AI-detector services, behavioral interview platforms, "trustworthiness" scores) were uniformly rated low-confidence by the recruiters who had used them.
Where the next 12 months go
Forty-three of fifty respondents expect to require some form of cryptographic identity verification in the near term. The split among those forty-three: 24 expect to integrate via an existing background-check vendor (HireRight, Checkr) that has added crypto-identity features; 11 expect to integrate via a new identity-native vendor; 8 are uncertain about the vendor path. None expect to defer the requirement beyond that window.
The compensation conversation
Two-thirds of respondents reported their companies were beginning to discuss role-declaration-aware compensation: paying differently for "authored" versus "supervised" work. The specific implementations were tentative; most were structured as bonus multipliers rather than base-salary differentials. Some union shops have started drafting language that requires authoring credit to be reflected in the contract.
What surprised us
Two findings we did not expect. First, the smaller firms (Series A and C) were further along on policy than the largest firms, likely because their hires are higher-leverage and the cost of getting one wrong is more visible. Second, the regulated industries (banking, healthcare, defense) were slower to act despite higher exposure, because their procurement cycles for new identity vendors have not caught up to the threat.
The single most-asked vendor question
"Will the verification survive the candidate's offboarding from the previous employer?" Twenty-eight of fifty Heads of Talent asked this question explicitly. The answer they wanted was that the human owns the credential, the issuer cannot revoke it, and the next employer can verify it without re-running a full background check. That answer matches Manav's design — and matches almost no other available vendor.
Methodology
50 interviews, 30–45 minutes each, conducted over a two-month window beginning February 14. Respondents were identified by referral, and employment was verified via LinkedIn at the time of interview. Industry mix: 24% software, 16% financial services, 14% healthcare, 12% retail/consumer, 10% manufacturing, 8% government/defense, 16% other. Median company headcount 4,200; median annual hires 410.
Common objections
Two methodological objections we take seriously. Selection bias in the respondent pool — addressed by reporting industry/size mix and weighting where appropriate. Vendor incentive to inflate the gap — addressed by publishing the raw data and source code so anyone can re-run the model with assumptions friendlier to inaction.
Frequently asked questions
How is the methodology auditable? The data, the analysis, and the code are published. Every chart can be reproduced from source. We name our partners (with their permission) and disclose every conflict of interest at the top of the report.
What are the confidence intervals on the headline numbers? Reported per metric in the gated PDF. The 4.6× year-over-year delta on hiring fraud, for instance, has a 95% CI of 3.8× to 5.4×; the median time-to-detection has a CI of 9.2 to 13.1 months.
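For readers who want to sanity-check an interval like that before opening the PDF, here is a minimal percentile-bootstrap sketch in Python. The 0/1 samples below are stand-ins generated to match the headline detection rates, not the real per-respondent rows, and the resampling scheme is our illustration rather than necessarily the method used in the report.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for per-respondent data (1 = detected AI-generated answers).
this_year = rng.binomial(1, 0.91, size=50)
last_year = rng.binomial(1, 0.47, size=50)

def bootstrap_ratio_ci(a, b, n_boot=10_000, alpha=0.05):
    """Percentile-bootstrap CI for the ratio of two sample proportions."""
    idx_a = rng.integers(0, len(a), size=(n_boot, len(a)))
    idx_b = rng.integers(0, len(b), size=(n_boot, len(b)))
    ratios = a[idx_a].mean(axis=1) / b[idx_b].mean(axis=1)
    return np.percentile(ratios, [100 * alpha / 2, 100 * (1 - alpha / 2)])

lo, hi = bootstrap_ratio_ci(this_year, last_year)
print(f"95% CI for the year-over-year ratio: {lo:.2f}x to {hi:.2f}x")
```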
Why publish numbers your competitors will use? Because the category needs them. The longer the only data is vendor anecdote, the longer the buyer's procurement team waits. We benefit when the category is sized; sizing requires shared numbers.
Where to start
The dataset opens at "resume fraud ai era." The control set, which infrastructure changes the curve, is at "laptop farm playbook." Re-fit the model with your own assumptions; we publish the source.
Where the survey numbers diverge from the headlines
The survey numbers and the trade-press headlines tell different stories about hiring readiness for the agent era. The headlines emphasize executive sentiment: CEOs declaring readiness, CHROs announcing initiatives. The survey numbers measure operational implementation: integrations actually shipped, artifacts actually produced, audit trails actually operational. The gap between the two is wide and stable, roughly a five-to-one ratio between executive declarations and shipped infrastructure. It will close in one of two ways. The optimistic close: the declarations were leading indicators and infrastructure follows. The pessimistic close: the declarations were marketing and infrastructure does not follow. The data we have collected so far suggests the close will be uneven: the leading cohort closes through implementation, the lagging cohort through retraction of its declarations. Both closes happen. Reading the survey accurately requires distinguishing the cohorts, which most published analyses fail to do; surfacing that distinction is the gap the survey was designed to fill.
By the end of that 12-month window it will be cheaper to require a cryptographic identity at hiring than to absorb the loss when one impersonator gets through.
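To make that claim concrete, a back-of-envelope comparison. Only the hire count comes from the survey (median annual hires); every other number is an assumption you should replace with your own.

```python
# Break-even sketch: all inputs except hires_per_year are assumptions.
hires_per_year = 410                 # survey median annual hires
verification_cost_per_hire = 25      # assumed per-hire vendor fee (USD)
impersonation_rate = 0.005           # assumed: 1 impersonator per 200 hires
cost_per_incident = 250_000          # assumed: remediation, rework, exposure

annual_verification_cost = hires_per_year * verification_cost_per_hire
expected_annual_loss = hires_per_year * impersonation_rate * cost_per_incident

print(f"verification: ${annual_verification_cost:,}/yr")    # $10,250/yr
print(f"expected loss: ${expected_annual_loss:,.0f}/yr")    # $512,500/yr
```

Under these assumptions verification costs roughly one-fiftieth of the expected loss; the conclusion flips only if the impersonation rate or incident cost is far lower than most respondents believed.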