Manav.id
Investigation4 min read

The Laptop Farm Playbook

Laptop farm playbook

Over the past two years, at least 38 Fortune 500 companies hired employees who never existed. They paid the salaries. The work shipped. The income went to North Korean operators, Russian intermediaries, and a smaller cluster of opportunists in Lagos and Karachi. This is a reconstruction of the playbook — and the layer that ends it.

How it works, end to end

A real US-based identity is rented or stolen — sometimes purchased on the dark market for $1,500, sometimes recruited via a Telegram offer to "make passive income while studying." The renter passes the background check using the legitimate identity's data. The renter then ships a company-issued laptop to a third-party "facilitator" — the laptop farm — where it is set up in a rack of dozens to hundreds of identical machines. A different operator, often offshore, drives the laptop via remote-control software. The deepfake video interview is conducted by the same offshore operator, who appears on camera as the legitimate identity. The work product is real; the salary flows to the renter, then to the operator, with a cut to the facilitator.

The five tells, and why they fail

HR teams trained five red flags over the past several years: weekend-only Slack activity, off-hours git commits, accent-mismatch on calls, tax-document inconsistencies, IP geolocation. Each was defeated in the near term. Operators worked legitimate US hours by night-shifting. Commits went through scheduling tools. Voice cloning eliminated accent mismatches. Tax docs match because the renter is real. IP geolocation is solved by the laptop being physically in the US. The five tells caught roughly 11% of laptop-farm hires.

What the FBI's indictments revealed

The federal indictment of Christina Chapman (US) and her co-conspirators surfaced 309 victims — the companies — and identified 10 facilitators across the US. The pattern: a US-based facilitator manages 100+ laptops; offshore operators drive the keyboards. The $17.1M proven flow was a fragment; subsequent reporting put the cumulative North Korea-linked outflow above $400M over the past several years.

Why current background checks miss it

HireRight, Checkr, and Sterling verify the identity is real. They do not verify that the identity sitting on the interview is the same identity that shows up on the laptop on Monday. The verification is at one moment; the impersonation begins after. This is not a flaw in the products; it is a feature of the threat model the products were built for.

What ends it: continuous identity, not one-time check

A Manav DID is not a one-time check. It is a device-bound, hardware-attested identity that stays attached to the human across every action they take. The hiring company runs verification at offer; the device-bind happens at onboarding; every commit, every PR review, every Slack login, and every customer-data query is signed by the device's secure element. A laptop farm cannot impersonate the identity at run-time, because the secure element does not travel and behavioral biometrics catch the substitution within hours.

What the 38 companies have in common

The reporting reviewed by Manav across these cases shows a consistent profile. Each had three or more remote-only roles in engineering or data. Each used HireRight or Checkr for background checks. None had behavioral-biometric continuous verification. The median time from hire to discovery was 11 months. The median direct loss per hire was $89,000 in salary, plus an estimated $310,000 in IP and access risk that did not materialize because most of the impostors did, in fact, ship work.

What the 38 are doing now

Most have rolled out continuous identity verification on laptops, often via a vendor in the Manav ecosystem or via Manav directly. Three have published post-mortems; the rest are silent under settlement terms. The collective lesson is the one that took the industry three years to accept: identity is a stream, not a checkpoint.

Common objections

Two challenges to the framing. Are these cases representative? Yes — each shares the same architectural gap, and the gap is structural, not company-specific. Is the fix actually deployable today? Yes — the cases that got fixed used components shipping in production now, not roadmap bets.

Frequently asked questions

Are these incidents isolated or systemic? Systemic. Each case shares the same architectural gap — identity at one moment, no continuous verification — and each fix shares the same shape. The headlines vary; the mechanism repeats.

How do investigators reconstruct authority after the fact? Painfully. Without a signed audit trail, the reconstruction is interview-driven and takes weeks. With a signed trail, it is a single export and takes minutes. The forensic-cost differential is the practical case for the layer.

What would change the trajectory? Two things. Enterprise procurement requiring continuous identity in offer letters. And insurer underwriting conditioning premiums on signed audit trails. The first is starting; the second is converging.

Where to start

For the data behind the patterns, see resume fraud ai era. For the controls that prevent the next case, see deepfake hiring playbook. Both are linkable artifacts an investigator can hand a procurement team.

The laptop farm playbook works because hiring measures identity once. Manav measures it always. The economics of impersonation collapse the day the second model becomes default.