Manav.id
Research · 4 min read

The deepfake hiring report

Annual report. 1,200 corporate respondents, six months of incident data, three independent forensic firms cross-checking. Headline: deepfake-aided hiring fraud rose 4.6× year-over-year, and the controls that work do not look like the controls being sold.

The headline numbers

Deepfake-implicated interviews up 4.6× year-over-year. 91% of US hiring managers reported encountering at least one in the last 12 months. Median time-to-detection from hire: 11 months. Median direct salary cost per impostor: $89,000. Aggregate documented loss across reporting respondents: $501M last year, plus an estimated $1.4B in unreported and ongoing cases at year-end.

Where the fraud concentrates

Remote-only engineering roles at companies between 200 and 5,000 employees. Operators target the band where payroll is large enough to fund the salary but the organization is small enough that internal policing is thin. Above 10,000 employees, the volume falls because internal continuous-verification programs reach maturity.

Sectors hit hardest

Software (38% of confirmed cases). Financial services (24%). Healthcare and life sciences (12%). Government contracting (10%). Manufacturing and retail (each 7%). Education (2%). The financial services concentration reflects the leverage of the fraud: a single placement at a fintech can yield credentials worth far more than the salary.

Detection-tool performance

We tested three classes of vendor. Vision-based deepfake detectors: median true-positive rate 51% on real-world deepfakes from current generators; false-positive rate 14% on real candidates. Useful as one signal; insufficient as a gate. Behavioral biometric continuous-verification: true-positive rate 88% on impostor substitutions across the working week; false-positive rate 4%. The strongest single category. Hardware-attestation identity (Manav and similar): true-positive rate 99.4% on impostor substitutions when the original employee was correctly enrolled at hire; false-positive rate 0.2%. The strongest gate, but it requires day-one enrollment.
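The base-rate arithmetic explains why a 51%/14% detector cannot serve as a gate. A minimal sketch, assuming an illustrative fraud prevalence of 1 in 100 candidates (the prevalence figure is an assumption for illustration, not a number from the report):

```python
def ppv(tpr: float, fpr: float, prevalence: float) -> float:
    """Positive predictive value: P(actual impostor | tool flagged the candidate)."""
    true_pos = tpr * prevalence
    false_pos = fpr * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Assumed prevalence: 1% of candidates are impostors (illustrative only).
vision = ppv(tpr=0.51, fpr=0.14, prevalence=0.01)       # ~0.035
behavioral = ppv(tpr=0.88, fpr=0.04, prevalence=0.01)   # ~0.18
hardware = ppv(tpr=0.994, fpr=0.002, prevalence=0.01)   # ~0.83
```

Under that assumption, roughly 96 of every 100 candidates flagged by a vision-only detector are real people, while five of every six hardware-attestation flags are genuine impostors, which is the difference between a signal and a gate.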

Policy levers that work

Three. Continuous-identity-on-laptop as a condition of remote employment, written into the offer letter. Hardware-attestation enrollment on day one of employment, before laptop ships. Quarterly re-attestation via a live human-to-human session that does not rely on video alone. Companies that adopt all three reported 92% fewer impostor incidents than the industry median.

Policy levers that do not

Three to drop. AI-detector-only screening of candidates. A 14% false-positive rate kills good candidates without catching real fraud. Geo-IP gating. Defeated by laptop farms long ago. Voice-only verification. Defeated by voice cloning long ago.

The forecast

Deepfake-aided fraud will continue to rise in absolute terms but level off in incidence rate as continuous-verification programs reach 60% adoption among mid-market companies. The residual fraud will concentrate at companies that explicitly resist continuous verification, typically for ideological-privacy reasons, where impostor incidents remain stubbornly high relative to controls.

Methodology

1,200 respondents from US, UK, EU, India, and Singapore companies above 100 FTE. Survey + voluntary incident-data sharing. Forensic verification by three partner firms. Confidence intervals reported in the gated PDF. The PDF download is at manav.id/research/deepfake-hiring-.

Common objections

Two methodological objections we take seriously. Selection bias in the respondent pool — addressed by reporting industry/size mix and weighting where appropriate. Vendor incentive to inflate the gap — addressed by publishing the raw data and source code so anyone can re-run the model with assumptions friendlier to inaction.

Frequently asked questions

How is the methodology auditable? The data, the analysis, and the code are published. Every chart can be reproduced from source. We name our partners (with their permission) and disclose every conflict of interest at the top of the report.

What are the confidence intervals on the headline numbers? Reported per metric in the gated PDF. The 4.6× year-over-year delta on hiring fraud, for instance, has a 95% CI of 3.8× to 5.4×; the median time-to-detection has a CI of 9.2 to 13.1 months.

Why publish numbers your competitors will use? Because the category needs them. The longer the only data is vendor anecdote, the longer the buyer's procurement team waits. We benefit when the category is sized; sizing requires shared numbers.

Where to start

The dataset opens with the laptop farm playbook. The control set, showing which infrastructure changes the curve, is in the hiring 2027 survey. Re-fit the model with your own assumptions; we publish the source.

What HR teams are doing about it now

HR teams responding to the deepfake-hiring threat are converging on a four-step playbook. Step one: switch interview platforms to ones that support cryptographic identity binding rather than session-based authentication. Step two: require Manav-grade verification at offer-stage rather than at start-date. Step three: introduce a randomized in-person element for senior hires. Step four: train interviewers on behavioral red flags that survive deepfake polish: context inconsistencies, latency anomalies, real-time-reasoning gaps.

The playbook is visible across the responses we collected. It is not yet codified in any HR-tech vendor's product roadmap, but the pieces are being assembled by the teams that face the threat earliest. We expect the playbook to formalize in the next eighteen months and to be embedded in default ATS configurations after that. Companies that ship the playbook now are roughly two years ahead of the median HR organization on this risk surface, which is the durable advantage early movers earn.

The bottleneck on stopping deepfake hiring is not technology. The technology that works is here. The bottleneck is putting it in the offer letter.