The first lawsuit naming an AI agent as a co-defendant

Sooner or later, a plaintiff is going to file a complaint that lists "GPT-7-orange-banana, an AI agent" as a co-defendant. It will not be a publicity stunt. It will be a serious attempt to extract money from somebody, and the court will have to decide who.
The probable fact pattern
A consumer-facing AI agent issues a series of confident, false medical recommendations. The user follows them. The user is harmed. The user sues the platform that hosted the agent, the model vendor that trained it, the publisher that integrated it — and, almost as an afterthought, the agent itself. The plaintiff's counsel knows the agent has no money; the inclusion is a procedural hook to draw out evidence about authority, training, and supervision.
What the court will not do
It will not grant the agent legal personhood. American common law has been consistently uncomfortable with non-human personhood for two centuries; corporations earned a narrow, contested exception. There is no live current of legal thought willing to extend that exception to LLMs. The agent will be dismissed as a co-defendant, and the case will proceed against the humans and corporations behind it.
What the court will do
It will demand discovery on three questions: who authorized the agent to operate, what scope that authority covered, and what the supervising human knew about the agent's reliability at the time. The platform will produce internal documents. The model vendor will produce safety evaluation results. The publisher will produce its onboarding flow. None of them will produce the one thing the court actually needs: a verifiable record of which human signed for which agent action under which scope. None of them keep that record by default.
Where Manav fits, before the lawsuit
The defense most plausibly available to a Manav-instrumented platform reads: "the agent acted under a delegation signed by the user, scoped to information:retrieve, with explicit disclaimer-of-medical-advice metadata, and the platform recorded each action in a tamper-evident audit log. Here is the log." The court still wrestles with the ultimate liability question, but the platform has shifted from defending against a black-box accusation to defending against an audited record. The latter is much, much cheaper.
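The tamper-evident log described above can be sketched as a hash chain: each entry records the delegation ID, scope, and metadata, and commits to the previous entry's hash, so any later edit breaks verification. This is a minimal illustration, not Manav's actual format; all field names are assumptions.

```python
import hashlib
import json
import time


class AuditLog:
    """Minimal tamper-evident log: each entry hashes the previous one,
    so editing any past entry breaks the chain. Field names illustrative."""

    def __init__(self):
        self.entries = []

    def record(self, delegation_id, scope, action, metadata):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "delegation_id": delegation_id,  # which signed delegation authorized this
            "scope": scope,                  # e.g. "information:retrieve"
            "action": action,
            "metadata": metadata,            # e.g. the disclaimer shown to the user
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)

    def verify(self):
        """Recompute every hash; False if any entry was altered or reordered."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

A log like this lets the defense say not just "we logged it" but "the log could not have been rewritten after the fact," which is the property a court cares about.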
What this changes for product
Three product decisions that look like compliance theater today become important in the near term. Logging every action against a delegation. Recording the disclaimer that was visible to the user at the time of action. Producing an exportable audit trail that a litigant or regulator can read without an engineer. Each one is small alone; together they are the difference between a $40M settlement and a $4M one.
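The third decision, an audit trail a litigant or regulator can read without an engineer, can be as simple as a CSV export of the log entries. A hedged sketch, assuming log entries shaped like the fields named above; this is not a real schema.

```python
import csv
import io


def export_trail(entries):
    """Render audit-log entries as CSV that opens in any spreadsheet.
    Column and field names are illustrative assumptions."""
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=[
        "timestamp", "delegation_id", "scope", "action", "disclaimer_shown",
    ])
    writer.writeheader()
    for e in entries:
        writer.writerow({
            "timestamp": e["timestamp"],
            "delegation_id": e["delegation_id"],
            "scope": e["scope"],
            "action": e["action"],
            # record the disclaimer that was visible at the time of action
            "disclaimer_shown": e.get("disclaimer", ""),
        })
    return out.getvalue()
```

The design point is that the export is a flat, self-describing artifact: opposing counsel gets rows, not a database dump.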
The plaintiff's lawyers' next move
Once one such case is litigated and the audit trail proves decisive, plaintiffs' counsel will start demanding audit-trail production in pre-suit letters. Companies without trails will settle to avoid the discovery exposure. Companies with trails will move toward summary judgment. The audit log becomes the new "produce all emails between A and B" — except cleaner, faster, and more pivotal.
Common objections
Readers raise two questions. Couldn't this be prevented with better prompts? No: the failures were authority gaps, not prompt failures. Doesn't this just slow agents down? Only at the highest-stakes actions, and by design: velocity for safe work, friction for unsafe work, written into the delegation.
Frequently asked questions
Could the failure described have been prevented? At the delegation layer, yes. A scoped, magnitude-capped, witness-bound delegation would have refused the action at the relying party before the human even saw the request. The model behaved as instructed; the authority was the gap.
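The refusal described, a scoped, magnitude-capped, witness-bound delegation rejecting the action at the relying party, reduces to a few checks run before any human sees the request. A hypothetical sketch; the field names are assumptions, not Manav's API.

```python
def check_delegation(delegation, request):
    """Relying-party check: refuse out-of-scope or over-cap actions
    before a human ever sees the request. Returns (allowed, reason)."""
    # Scope check: the requested capability must be explicitly delegated.
    if request["scope"] not in delegation["scopes"]:
        return False, "scope not delegated"

    # Magnitude cap: bound the size of any single action.
    cap = delegation.get("magnitude_cap")
    if cap is not None and request.get("magnitude", 0) > cap:
        return False, "exceeds magnitude cap"

    # Witness binding: high-stakes delegations require a co-signature.
    if delegation.get("requires_witness") and not request.get("witness_signature"):
        return False, "missing witness signature"

    return True, "authorized"
```

Example: a delegation scoped to `information:retrieve` simply cannot authorize a `payments:send` request, no matter what the model was prompted to do. The authority gap closes at the boundary, not in the prompt.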
How common is this pattern in practice? More common than the press has caught. The cases that surface are the ones that produced headlines or lawsuits; the ones that did not surface are quietly absorbed as 'cost of running agents in production.' We expect the visible ratio to grow as audit trails make the invisible cases discoverable.
What's the immediate lesson? Authority is the bottleneck. Capability is the easy part — the model is good. Ship the delegation layer before the next agent goes into a system that touches dollars, data, or decisions.
Where to start
For the analytic frame behind the story, see the piece on agent identity and the law. For the practical playbook the principals would have wanted in advance, see the piece on audit-trail design.
What the next lawsuit will look like
Once the first agent-driven lawsuit establishes the doctrine that liability flows to the human upstream of the agent, the next will establish what evidence that human can offer to limit it. The fact pattern that decides it will look like this: an agent took an action under what appeared to be authorization; the human disputes the authorization; the audit trail either resolves the dispute or it does not. Cases where the trail names the human and the scope at the moment of action will resolve quickly, with bounded liability. Cases where the trail does not exist, or names only "the system," will produce extended discovery and unbounded liability. The discovery cost alone in the no-trail cases is substantial. Defendants who expect to be in this position should pre-invest in audit infrastructure now. That next lawsuit is already being prepared by counsel; we have read the early filings, and the framing is consistent.
The first lawsuit naming an AI agent will be famous for naming it. It will be remembered for inventing the discovery request that forces every other defendant to produce an audit trail.