The deepfake hiring playbook
The DOJ raided 29 laptop farms. The FBI has documented 300+ Fortune 500 companies that unknowingly hired North Korean operatives. 91% of hiring managers report seeing AI-generated interview answers. Here is the practical playbook for HR teams who can no longer trust the camera.
The threat model has changed
Until recently, hiring fraud meant resume embellishment, occasional reference inflation, and small-scale impersonation by individuals trying to land specific jobs. Over the last few years, three new classes of threat actor have industrialized the practice:
- Nation-state IT-worker schemes. The North Korean program is the documented case; others exist. Real talent, stolen identity, AI-generated face for video. The work product is real. The salary destination is laundered.
- Proxy interview services. Marketplaces where a more-skilled candidate takes the interview for a less-skilled candidate via deepfake mask. The service then helps the hire perform during onboarding.
- AI-only candidates. A real human registers; an LLM answers technical screens, take-homes, and behavioral interviews. The human shows up on day one without the skills the AI demonstrated.
Why current controls fail
Background checks verify what databases say about a name. They do not verify the person on the video is the person in the database.
Selfie-plus-ID systems were trained against pre-deepfake fraud. Generation-5 models now pass them at acceptance rates higher than human reviewers achieve.
"AI-detection" tools chase a moving target. The arms race is structurally lost; detection lags generation by months.
In-person final rounds are the strongest current control — and increasingly impossible at scale or distance.
The structural answer is not better detection. It is verifiable provenance from claim to work to person.
The playbook (six steps)
Step 1 — Stratify roles by sensitivity. Not every role needs the same protections. Tier roles by access scope, dollar authority, and customer-facing risk. Apply the heaviest controls to the smallest tier first.
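A tiering rubric like this can be expressed as a small decision function. The thresholds and tier names below are illustrative assumptions, not values the playbook prescribes; a minimal sketch:

```python
from dataclasses import dataclass

# Hypothetical tiering rubric -- the thresholds and tier labels are
# illustrative assumptions, not prescribed values.
@dataclass
class Role:
    name: str
    access_scope: int      # 0-3: none, team, org-wide, production/secrets
    dollar_authority: int  # approval limit in USD
    customer_facing: bool

def tier(role: Role) -> str:
    """Assign a control tier; the heaviest controls go to tier-1."""
    if role.access_scope >= 3 or role.dollar_authority >= 100_000:
        return "tier-1"  # full verification at every stage
    if role.customer_facing or role.access_scope == 2:
        return "tier-2"  # anchor at funnel entry plus day-one recheck
    return "tier-3"      # anchor at funnel entry only

print(tier(Role("SRE", access_scope=3, dollar_authority=0, customer_facing=False)))
print(tier(Role("Support", access_scope=1, dollar_authority=0, customer_facing=True)))
```

Starting with an explicit rubric makes "heaviest controls to the smallest tier first" auditable rather than ad hoc.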
Step 2 — Add a Layer 1 anchor at first contact. Require candidates to verify identity via passkey, government eID + liveness, or HATI Layer 1 anchor at the start of the funnel. This eliminates 60–80% of casual fraud at zero candidate friction. Real candidates already have Apple/Google passkeys.
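The anchor boils down to challenge-response: the server issues a fresh challenge, the candidate's device signs it, the server verifies. A real deployment would use WebAuthn/passkeys or an eID flow; the HMAC below is a stdlib stand-in for the authenticator's signature, used only to show the shape of the exchange:

```python
import hashlib
import hmac
import secrets

# Challenge-response identity anchor, sketched with HMAC as a stand-in
# for a passkey assertion (real passkeys use public-key signatures
# inside the authenticator).
def issue_challenge() -> bytes:
    return secrets.token_bytes(32)  # fresh per attempt, prevents replay

def device_sign(device_key: bytes, challenge: bytes) -> bytes:
    # On a real passkey this happens inside the secure authenticator.
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def verify_anchor(device_key: bytes, challenge: bytes, assertion: bytes) -> bool:
    expected = hmac.new(device_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, assertion)

key = secrets.token_bytes(32)
ch = issue_challenge()
print(verify_anchor(key, ch, device_sign(key, ch)))
```

The point of the fresh challenge is that a recorded deepfake session cannot replay a past verification.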
Step 3 — Continuity-check between stages. The same Layer 1 must reappear at every interview stage and on day one. Not "the same name on the calendar invite" — the same cryptographic identity. This catches the proxy-interview service and the deepfake hire.
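Continuity is a comparison of key fingerprints, not names. A minimal sketch, with the fingerprinting scheme assumed for illustration:

```python
import hashlib

# Continuity check: record the anchor's key fingerprint at first
# contact, then require the same fingerprint at every later stage.
def fingerprint(public_key: bytes) -> str:
    return hashlib.sha256(public_key).hexdigest()[:16]

class Funnel:
    def __init__(self, anchor_key: bytes):
        self.anchor = fingerprint(anchor_key)  # recorded at first contact

    def check_stage(self, stage: str, presented_key: bytes) -> bool:
        ok = fingerprint(presented_key) == self.anchor
        print(f"{stage}: {'same identity' if ok else 'IDENTITY MISMATCH'}")
        return ok

f = Funnel(b"candidate-key")
f.check_stage("phone screen", b"candidate-key")
f.check_stage("final round", b"proxy-service-key")  # flags the swap
```

A proxy-interview service can forge a face and a calendar invite; it cannot present the original device's key.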
Step 4 — Verify work, not just claims. For technical roles, ask candidates to share verified work history through HATI Layer 3 attestations. Real engineers' commits, designers' files, writers' edits all carry stamps. AI-only candidates have nothing to show.
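Verifying an attestation means checking a signature over the claimed work, not trusting the claim text. The field names and the HMAC signature below are illustrative stand-ins, not the HATI Layer 3 wire format:

```python
import hashlib
import hmac
import json

# Sketch of a work attestation: a claim about past work, signed by the
# attesting party. Field names and HMAC signing are assumptions for
# illustration, not the actual Layer 3 format.
def attest(signer_key: bytes, claim: dict) -> dict:
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim,
            "sig": hmac.new(signer_key, payload, hashlib.sha256).hexdigest()}

def verify_attestation(signer_key: bytes, att: dict) -> bool:
    payload = json.dumps(att["claim"], sort_keys=True).encode()
    expected = hmac.new(signer_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["sig"])

employer_key = b"prior-employer-signing-key"
att = attest(employer_key, {"repo": "payments", "commits": 412, "role": "engineer"})
print(verify_attestation(employer_key, att))
```

An AI-only candidate can generate prose about past work; it cannot generate a third party's signature over that work.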
Step 5 — Tighten reference verification. AI voice clones make reference calls unreliable. References should themselves verify their identity through a Layer 1 mechanism before their statements count. This is friction; it works.
Step 6 — Continuous identity post-hire. The hire's Manav identity stays bound through onboarding, badge issuance, equipment provisioning, and first-90-days. Most laptop-farm fraud is detected post-hire, not pre-hire. Continuous identity catches the swap.
What the new candidate experience looks like
From the candidate's side, almost nothing changes. They click "verify with passkey" once. They optionally connect a Manav identity to share verified work history. They appear on calls; their identity revalidates silently. The friction is below today's "selfie-plus-ID" experience and the protection is materially stronger.
From the recruiter's side, the experience improves. Background checks complete faster (because the identity is already verified). Take-homes are tied to the identity (so AI-only candidates can't farm them). Day-one onboarding is verified to the same identity that interviewed.
The legal posture
Adverse-action rules under FCRA, EEOC guidance on selection procedures, and emerging state laws on AI in hiring all complicate this. Three principles keep you on the right side:
- Identity verification is not assessment. Verifying that the person on the call is the person on the resume is a fraud control, not a hiring criterion. Disclose it as such.
- Provide an alternate path. Candidates without a HATI identity should not be auto-rejected; they should follow a higher-friction verification path. This survives EEOC scrutiny.
- Document the controls. If an audit asks why one candidate's identity verification took longer, your hiring policy should pre-answer the question.
What this costs
Per-hire identity verification costs in the range of $5–15 per candidate at the funnel and $0–5 per stage check. For a 1,000-hire company, that is $30–60K annually. The expected loss from a single laptop-farm hire — operational, legal, IP — averages $400–700K in our design-partner data. The math is not close.
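The arithmetic behind that range can be made explicit. The candidates-per-hire ratio and per-hire stage count below are assumptions, since the article gives only the unit costs:

```python
# Back-of-envelope from the unit costs above. The funnel depth and
# stage count are assumptions, not figures from the design-partner data.
hires = 1_000
verified_candidates_per_hire = 3   # assumed: candidates reaching verification per hire
funnel_cost = (5, 15)              # $ per candidate at the funnel
stage_checks = 3                   # assumed: per-stage checks per hire
stage_cost = (0, 5)                # $ per stage check

low = hires * (verified_candidates_per_hire * funnel_cost[0] + stage_checks * stage_cost[0])
high = hires * (verified_candidates_per_hire * funnel_cost[1] + stage_checks * stage_cost[1])
print(f"${low:,} - ${high:,} per year")

single_incident_low = 400_000      # expected loss from one laptop-farm hire
print(high < single_incident_low)  # annual program cost vs. one incident
```

Even at the high end, a year of program cost sits well below the expected loss from a single bad hire.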
Common objections
Two pushbacks we expect. Won't this slow workers down? The first delegation prompt costs about 90 seconds; after that, allowlisted scopes make the checks invisible. Won't employers weaponize the audit trail? The protocol design — selective disclosure, a user-owned wallet, and explicit non-features around compensation and termination cause — addresses the most-cited abuse paths.
Frequently asked questions
Does this change my employment contract? Yes, slowly. Expect a paragraph in salaried offers above $80k specifying role-declaration on AI-augmented work, audit-log retention, and IP attribution. The clauses look like the GDPR paragraphs every contract has carried for years — boring, ubiquitous, structurally important.
What about people who don't use AI? They keep working without changes. The protocol is opt-in at the action layer; an unsigned action is the default for any human who has not enrolled an agent. Adoption follows incentives, not mandates.
What happens to my work history when I change jobs? It stays with you. The attestations your employer signed are bound to your DID, not their tenant. The next employer can verify them in seconds; you can revoke their visibility at any time.
Where to start
From here, "Resume fraud in the AI era" sets out the broader work-history substrate, and "How to prove you are human in 2026" addresses the hiring-side mechanics. Read them together and the policy questions get a lot more answerable.
The interview was the trust ceremony. It is now the attack surface.
Hiring teams have spent decades building rituals — the panel, the take-home, the reference call — that assumed both sides were human. That assumption no longer holds by default. Restoring it requires cryptography, not better camera angles.