24 hours from now

A short fiction. One human, one Manav identity, twenty-four hours of an ordinary day three years from now. The infrastructure is invisible. The trust is total.
06:42 — Wake
The room senses she is up. Her morning agent — a Claude derivative she's used for two years — checks her calendar against her doctor's reminder service. The agent presents two delegation prompts on the bedside screen: refill the prescription her physician's note recommends, and confirm her son's school field trip. Her thumb on the secure element approves both. Two signatures, four seconds. The agent does not ask again.
09:14 — A new contract
Her client's procurement bot wants to onboard her for a three-month engagement. The bot requests her last twelve months of attested work in tax-strategy consulting; her wallet generates a selective-disclosure proof — count of engagements above $50k, average client satisfaction (signed by clients, no names), zero compliance flags. The bot accepts; the contract draft arrives. She signs with the same gesture.
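The onboarding exchange can be sketched as a selective-disclosure response: the wallet computes aggregates locally and reveals only the predicates the bot asked for, never client names or individual fees. A minimal illustration in Python; the `Engagement` type and `selective_disclosure` helper are hypothetical names, and a production wallet would back each aggregate with signed credentials or zero-knowledge proofs rather than plain data.

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    fee_usd: int
    satisfaction: float   # 0.0-5.0; signed by the client in a real system
    compliance_flags: int

def selective_disclosure(engagements, fee_threshold=50_000):
    """Reveal only aggregates: a count, an average, a flag total.
    No client names or individual fees leave the wallet."""
    above = [e for e in engagements if e.fee_usd > fee_threshold]
    return {
        "engagements_above_threshold": len(above),
        "avg_satisfaction": round(
            sum(e.satisfaction for e in engagements) / len(engagements), 2),
        "compliance_flags": sum(e.compliance_flags for e in engagements),
    }

history = [
    Engagement(72_000, 4.8, 0),
    Engagement(55_000, 4.6, 0),
    Engagement(18_000, 4.9, 0),
]
print(selective_disclosure(history))
# {'engagements_above_threshold': 2, 'avg_satisfaction': 4.77, 'compliance_flags': 0}
```

The relying party learns exactly what it needs to price the engagement and nothing it could leak.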
11:30 — The deepfake call
A video call comes in. The face on the screen claims to be a senior partner at a firm she worked with last year, asking her to wire a small retainer to a "new account." Her wallet does not see the partner's Manav DID on the call — the line is unsigned. The interface does not block the call; it just renders the partner's name in grey instead of green. She hangs up, calls back through the partner's Manav handle, and learns there was no such request. A few years ago, that scam took $40k from her best friend. Today it takes nothing.
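The grey-versus-green rendering can be sketched as a signature check over the call session. The sketch below uses a symmetric HMAC purely for illustration; a real wallet would verify an asymmetric signature against the DID's published key, and the names here (`DID_KEYS`, `caller_badge`) are assumptions, not the protocol's API.

```python
import hashlib
import hmac

# The wallet's contact book: known Manav DIDs and their keys (illustrative).
DID_KEYS = {"did:manav:partner.lee": b"partner-secret-key"}

def caller_badge(claimed_did, payload, signature):
    """Return 'green' only if the call session is signed by the claimed
    DID's key; otherwise 'grey'. The call is never blocked, only
    de-emphasized so the human notices."""
    key = DID_KEYS.get(claimed_did)
    if key is None:
        return "grey"
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return "green" if hmac.compare_digest(expected, signature) else "grey"

payload = b"call-session-4242"
good_sig = hmac.new(DID_KEYS["did:manav:partner.lee"], payload,
                    hashlib.sha256).hexdigest()
print(caller_badge("did:manav:partner.lee", payload, good_sig))  # green
print(caller_badge("did:manav:partner.lee", payload, "forged"))  # grey
```

The design choice in the story is the same one in the sketch: an unsigned line degrades the UI rather than blocking the call, so the system fails soft and leaves the decision with the human.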
13:50 — Lunch with another human
She meets a friend at a cafe. They exchange Manav handles by tap; the wallets verify both are humans, both are who they claim to be, both have agreed to share light social-graph data with each other. No "are you on LinkedIn?" Three seconds.
15:30 — Her son needs a doctor
Her son fell at school. The clinic accepts her son's pediatric Manav DID — issued by his pediatrician, controlled by her with a guardian relationship — for instant access to his vaccine record, allergies, and prior visits. No paperwork. No clipboard. The clinic's agent drafts the visit note while the doctor examines; the note will be reviewed and signed by the doctor before it is finalized.
18:10 — A delegation expires
Her household-shopping agent's weekly delegation expires. The wallet asks if she wants to renew on the same terms or adjust. She raises the produce cap by $20, lowers the household cleaning cap by $30 — they have too much surplus — and signs.
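The renewal she signs can be modeled as a time-boxed delegation record with per-category spending caps, checked before any purchase reaches a merchant. A hedged sketch; the `Delegation` class and its fields are illustrative, not a specification.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Delegation:
    caps_usd: dict                      # per-category weekly caps
    expires: datetime
    spent: dict = field(default_factory=dict)

    def authorize(self, category, amount_usd, now=None):
        """Refuse anything past expiry, outside scope, or over the cap."""
        now = now or datetime.now()
        if now >= self.expires:
            return False, "delegation expired - renewal required"
        if category not in self.caps_usd:
            return False, f"category '{category}' not in scope"
        spent = self.spent.get(category, 0)
        if spent + amount_usd > self.caps_usd[category]:
            return False, "weekly cap exceeded"
        self.spent[category] = spent + amount_usd
        return True, "ok"

# Renewed terms: produce raised by $20, cleaning lowered by $30 (toy numbers).
d = Delegation(caps_usd={"produce": 120, "cleaning": 40},
               expires=datetime.now() + timedelta(days=7))
print(d.authorize("produce", 80))   # (True, 'ok')
print(d.authorize("produce", 60))   # (False, 'weekly cap exceeded')
print(d.authorize("wine", 25))      # (False, "category 'wine' not in scope")
```

The expiry is the point: authority decays on a schedule, so renewal is a deliberate human act rather than a default.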
21:45 — A vote
Her municipality's bike-lane referendum is on a Manav-attested poll. One human, one vote. Her wallet asks if she wants to cast her vote. She does. The result will be tallied at midnight and audited against the registered-human roster. No identity beyond "verified resident" leaves her wallet.
23:30 — Sleep
She does not check her email before bed. Her morning agent will summarize anything urgent. Her wallet logs zero alerts. The infrastructure is doing its job; she is doing hers. She falls asleep without thinking about identity at all, which is the surest sign the infrastructure is working.
Common objections
Two questions readers raise. Couldn't this be prevented with better prompts? No — the failures were authority gaps, not prompt failures. Doesn't this just slow agents down? Only at the highest-stakes actions, by design. Velocity for safe work, friction for unsafe work, written into the delegation.
Frequently asked questions
Could the failure described have been prevented? At the delegation layer, yes. A scoped, magnitude-capped, witness-bound delegation would have refused the action at the relying party before the human even saw the request. The model behaved as instructed; the authority was the gap.
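A scoped, magnitude-capped, witness-bound check of that kind can be sketched in a few lines. Everything here (the `grant` fields, the `did:manav:` handles) is a hypothetical shape, not an actual delegation format; the point is that the relying party refuses out-of-scope, over-cap, or un-witnessed actions mechanically, before a human sees the request.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str
    amount_usd: int
    signatures: set          # DIDs that signed the request

def relying_party_check(action, delegation):
    """Enforce scope, magnitude cap, and witness binding at the relying
    party, in that order."""
    if action.kind not in delegation["scope"]:
        return "refused: out of scope"
    if action.amount_usd > delegation["max_usd"]:
        return "refused: over magnitude cap"
    if (action.amount_usd >= delegation["witness_above_usd"]
            and delegation["witness_did"] not in action.signatures):
        return "refused: witness signature required"
    return "accepted"

grant = {"scope": {"wire"}, "max_usd": 10_000,
         "witness_above_usd": 1_000, "witness_did": "did:manav:cfo"}

print(relying_party_check(Action("wire", 5_000, {"did:manav:agent"}), grant))
# refused: witness signature required
print(relying_party_check(
    Action("wire", 5_000, {"did:manav:agent", "did:manav:cfo"}), grant))
# accepted
```

Small wires sail through on the agent's signature alone; large ones stall until a second key co-signs. That is the velocity-for-safe-work, friction-for-unsafe-work trade written into the delegation itself.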
How common is this pattern in practice? More common than the press has caught. The cases that surface are the ones that produced headlines or lawsuits; the ones that did not surface are quietly absorbed as 'cost of running agents in production.' We expect the visible ratio to grow as audit trails make the invisible cases discoverable.
What's the immediate lesson? Authority is the bottleneck. Capability is the easy part — the model is good. Ship the delegation layer before the next agent goes into a system that touches dollars, data, or decisions.
Where to start
For the analytic frame behind the story, see 100 things agents did. For the practical playbook the principals would have wanted in advance, see dns of human trust.
What the day reveals about the substrate
The day feels seamless because every consequential action passes through the same substrate without anyone noticing. Invisibility under load is the substrate's success criterion: when delegation works, the user gets a smooth day; when it fails, she gets a cascade of explanations, escalations, and audit retrievals. The day is a thought experiment about that criterion. If the architecture is right, the substrate disappears into the background of competent agency. Builders who measure substrate maturity by user-facing prominence are measuring the wrong axis; the right axis is invisibility per consequential action. The substrate that gets quieter as it gets more important is the one that has earned its place in the stack. The day described is what that quietness feels like in practice.
The future of identity is not noisier. It is invisibly correct.