The Manav Trust Index (MTI): an open methodology

The Manav Trust Index is a 0–1000 score, privacy-preserving by construction, derived from witnessed work and decaying without contribution. This is the open methodology — the inputs, the math, the audit hooks, and the deliberate non-features.
What the score measures
Three things, in plain English. Have you done attributable work that real people witnessed? Have you done it consistently? Have you done it without dispute? The score expresses these three as a single integer between 0 and 1000.
The components
Six inputs, weighted:
- Witness count (25%). Independent witnesses on your attestations over the last 24 months.
- Magnitude (20%). Sum of magnitude across attested artifacts, log-scaled, capped at the 95th percentile.
- Consistency (20%). Variance of contribution across rolling 90-day windows; lower variance scores higher.
- Disputes (15%, negative). Resolved disputes against your work; only resolved disputes affect the score.
- Endorsement diversity (10%). The Gini coefficient of who has witnessed you; concentration in one network is penalized.
- Recency (10%). Exponential decay; a contribution from 18 months ago counts about half what a contribution from this quarter does.
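The weighted combination can be sketched as follows. Only the six weights come from the text; the normalisation of each component to [0, 1], the rescaling of the positive weights, and the clamping rule are assumptions, not the published reference implementation.

```python
# Hypothetical composition sketch of the MTI. Weights are from the list
# above; everything else (normalisation, clamping) is an assumption.

WEIGHTS = {
    "witness_count": 0.25,
    "magnitude": 0.20,
    "consistency": 0.20,
    "endorsement_diversity": 0.10,
    "recency": 0.10,
}
DISPUTE_WEIGHT = 0.15  # applied as a penalty, per "Disputes (15%, negative)"

def mti(components: dict) -> int:
    """components: each input pre-normalised to [0, 1]. Returns 0-1000."""
    positive = sum(w * components[name] for name, w in WEIGHTS.items())
    positive /= sum(WEIGHTS.values())        # rescale positives to [0, 1]
    penalty = DISPUTE_WEIGHT * components["disputes"]
    return round(1000 * max(0.0, min(1.0, positive - penalty)))
```

Under this sketch a perfect record with no disputes scores 1000, and a maximal dispute input subtracts 150 points from whatever the positive components earn.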
What the score does not encode
Three deliberate exclusions. Income. The protocol has no input for compensation. Demographic data. No age, gender, race, geography, or any other category beyond optional jurisdiction-of-verification (which is required for some regulated relying parties and otherwise omitted). Sentiment. No "rating" of your work as good or bad; only witness signatures, which are present-or-absent rather than scored.
How it is verifiable
The score is a deterministic function of public-key-signed attestations. Anyone can recompute it given the input attestations. Manav publishes the implementation in Rust, Python, and TypeScript; reproducibility is not a marketing claim, it is a property of the math. A relying party can verify the score independently in 12 ms.
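The recompute-and-compare step looks roughly like this. This is a toy sketch, not the Manav implementation: per-attestation signature verification is elided, `toy_score` stands in for the published scoring function, and the canonical-serialisation choices (sort by `id`, sorted keys, compact JSON) are assumptions that illustrate why two verifiers get the same answer from the same inputs.

```python
import hashlib, json

def canonical_digest(attestations: list) -> str:
    # Deterministic serialisation: fixed ordering, no whitespace variance,
    # so every verifier hashes the same bytes for the same attestation set.
    blob = json.dumps(sorted(attestations, key=lambda a: a["id"]),
                      sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(blob).hexdigest()

def independently_verify(claimed_score, attestations, score_fn) -> bool:
    # A relying party recomputes from the same inputs and compares.
    return score_fn(attestations) == claimed_score

# Toy scoring function: 10 points per attestation, capped at 1000.
toy_score = lambda atts: min(1000, 10 * len(atts))

atts = [{"id": f"a{i}", "witness": "w1"} for i in range(3)]
assert independently_verify(30, atts, toy_score)
# Input order does not matter once serialisation is canonical:
assert canonical_digest(atts) == canonical_digest(list(reversed(atts)))
```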
Privacy properties
The score is presentable as a selective-disclosure proof. A holder can prove "MTI above 700 in software-engineering attestations" without revealing the underlying attestations. The verifier learns the predicate; the holder retains the data.
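One way to picture the verifier's side of this exchange, under loud assumptions: the real system would use a selective-disclosure proof (e.g. a zero-knowledge predicate proof over the attestation set), whereas the HMAC token below is only a stand-in. Every name here (`ISSUER_KEY`, `issue_predicate_token`) is illustrative, not the Manav API. What the stand-in preserves is the information flow: the verifier receives a one-bit answer plus a checkable token, never the underlying attestations.

```python
import hashlib, hmac

# Stand-in sketch: a trusted issuer attests to the predicate result.
# A real deployment would replace the shared-key HMAC with a
# zero-knowledge proof, removing the trusted issuer entirely.
ISSUER_KEY = b"demo-issuer-key"  # placeholder secret, illustration only

def issue_predicate_token(predicate: str, holds: bool) -> bytes:
    msg = f"{predicate}|{holds}".encode()
    return hmac.new(ISSUER_KEY, msg, hashlib.sha256).digest()

def verify_predicate_token(predicate: str, holds: bool, token: bytes) -> bool:
    expected = hmac.new(ISSUER_KEY, f"{predicate}|{holds}".encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, token)

token = issue_predicate_token("mti > 700 in software-engineering", True)
assert verify_predicate_token("mti > 700 in software-engineering", True, token)
```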
Decay and recovery
A career break does not destroy the score. A 12-month gap with zero attestations decays the recency input but not the witness count or magnitude inputs; the score might fall from 720 to 580, not to zero. A return to attested work rebuilds the score quickly because the historical signals re-amplify with new recent ones.
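The decay behaviour can be made concrete. The text implies a half-life near 18 months ("a contribution from 18 months ago counts about half"); the exact constant and the exponential form below are assumptions consistent with that statement.

```python
# Recency-decay sketch. HALF_LIFE_MONTHS is inferred from the text,
# not taken from a published specification.
HALF_LIFE_MONTHS = 18.0

def recency_weight(age_months: float) -> float:
    """Weight of a contribution that is age_months old, in [0, 1]."""
    return 0.5 ** (age_months / HALF_LIFE_MONTHS)
```

Under this curve a 12-month-old contribution still carries roughly 63% of its original weight, which is why a one-year gap dents the recency input rather than zeroing the score.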
What MTI is not
It is not credit. It is not employability. It is not "social trust." It is a scoped read on your witnessed work history. A relying party that wants more than this should look at more than this.
Audit hooks
Every MTI calculation produces a deterministic Merkle root of its inputs. A holder may publish the root for transparency; relying parties may demand it. The root does not reveal inputs; it commits the score to the inputs that produced it. Tampering after the fact is detectable.
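A minimal Merkle-root construction shows the commitment property. The leaf encoding and odd-level pairing rule below are assumptions, not the published Manav commitment scheme; the point is only that identical inputs always yield the same root, and any change to an input changes it.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Root over a non-empty list of byte-string leaves."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

inputs = [b"attestation-1", b"attestation-2", b"attestation-3"]
assert merkle_root(inputs) == merkle_root(list(inputs))  # deterministic
assert merkle_root(inputs) != merkle_root(inputs[:2])    # tamper-evident
```

Publishing the root commits the holder to exactly this input set without revealing any leaf; recomputing it later against a different set exposes the tampering.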
Common objections
Two methodological objections we take seriously. Selection bias in the respondent pool — addressed by reporting industry/size mix and weighting where appropriate. Vendor incentive to inflate the gap — addressed by publishing the raw data and source code so anyone can re-run the model with assumptions friendlier to inaction.
Frequently asked questions
How is the methodology auditable? The data, the analysis, and the code are published. Every chart can be reproduced from source. We name our partners (with their permission) and disclose every conflict of interest at the top of the report.
What are the confidence intervals on the headline numbers? Reported per metric in the gated PDF. The 4.6× year-over-year delta on hiring fraud, for instance, has a 95% CI of 3.8× to 5.4×; the median time-to-detection has a CI of 9.2 to 13.1 months.
Why publish numbers your competitors will use? Because the category needs them. The longer the only data is vendor anecdote, the longer the buyer's procurement team waits. We benefit when the category is sized; sizing requires shared numbers.
Where to start
The dataset opens at the trust-score-vs-reputation comparison. The control set (which infrastructure changes the curve) is at the proof-of-human-work spec. Re-fit the model with your own assumptions; we publish the source.
How the index moves, week to week
The Manav Trust Index moves on three categories of input. The first is incident frequency: agent-driven failures with public reporting. The second is regulator activity: new guidance, enforcement actions, draft frameworks. The third is integration completion: substrate-grade deployments that produce verifiable artifacts. The index rises when integrations outpace incidents; it falls when regulator activity outpaces both. Reading the weekly movement is therefore less about predicting the substrate's success and more about reading the lead-lag between the three categories. Weeks where the regulator category dominates tend to precede weeks where integration completion accelerates, which is the lead the substrate is designed to capture. We publish the methodology because the index is more useful as a tool for readers than as a marketing artifact. Other indices will likely emerge with different methodologies; the comparison across indices is itself a useful indicator of where the substrate-adoption signal is converging across observers.
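The rise-and-fall rule above can be caricatured as a toy update function. The section gives no coefficients or functional form, so everything below, including the function name and the unit-weight arithmetic, is an assumed illustration of the direction of each input, not the index's actual model.

```python
# Toy weekly-movement sketch: integrations push the index up, incidents
# push it down, and regulator activity drags it down only when it
# outpaces the other two categories combined. All weights are assumed.

def weekly_delta(incidents: int, regulator_actions: int,
                 integrations: int) -> int:
    delta = integrations - incidents
    overhang = regulator_actions - (incidents + integrations)
    if overhang > 0:                       # regulator category dominates
        delta -= overhang
    return delta
```

In this caricature a week with 5 completed integrations, 2 incidents, and 1 regulator action moves the index up by 3, while a regulator-dominated week drags it negative even when integrations outnumber incidents.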
A trust score whose math you cannot read is a credit score with better marketing.