Manav.id
Research · 4 min read

The MCP adoption survey: 800 AI engineers' real numbers

MCP adoption survey

800 working AI engineers, ten questions, six weeks of fielding, public methodology. The Model Context Protocol moved from cool-but-niche to plumbing-by-default. Here's where engineers really are with it — and the integration gap they cite most often.

Adoption

74% of respondents have shipped at least one MCP-server-backed integration to production; 52% of those shipped it within the last 12 months. Anthropic's donation of the protocol to the Linux Foundation's Agentic AI Foundation visibly accelerated adoption: post-donation MCP integrations grew 2.3× over the prior six months.

What's plugged in

Top MCP servers in production, by mention count:

1. Database (Postgres, MySQL, Snowflake): 68% of respondents
2. Filesystem and code: 61%
3. GitHub / GitLab: 47%
4. Slack: 41%
5. Search and web: 38%
6. Stripe: 22%
7. Internal proprietary tools: 19%
8. Identity (Manav, Auth0 M2M, Microsoft Entra): 12%

The identity layer is the most-cited gap.
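For orientation: the top entries usually reach a host through a JSON config such as Claude Desktop's claude_desktop_config.json. A minimal sketch wiring a database and a filesystem server; the package names follow the published reference servers, and the connection string and path are illustrative:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"]
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/projects"]
    }
  }
}
```

Note what is absent from this shape: nothing in it says which identity the spawned server acts as, which is the gap the numbers above keep pointing at.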

The identity gap, in their words

"We have MCP everywhere. We have no idea who authorized what." (Engineer at a Series-C fintech.) "Audit logs are model-output, not action-output." (Staff engineer at a Fortune 100 retailer.) "The agent ran 12,000 actions overnight; the legal team asked who signed for them; we sent them a screenshot." (Tech lead at a pre-IPO SaaS.) The gap is universally felt and rarely solved.

Hosts

Claude Desktop and Claude Agent SDK led with 49% of respondents using them as their primary MCP host. Cursor at 27%. Microsoft Copilot Studio at 12%. In-house hosts at 9%. Other (Continue, Goose, Zed, Replit) at 3%. Multi-host environments are common; respondents on average run 2.4 distinct hosts.

What engineers want next from MCP

1. Identity: 44%
2. Streaming responses: 28%
3. Per-tool authorization granularity: 24%
4. Better observability: 21%
5. Native rate-limit semantics: 16%

The identity request comes up across every cohort: startup, enterprise, regulated, unregulated.
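The per-tool authorization ask is concrete enough to sketch. What respondents describe is a policy check at the MCP-server boundary, before a tool runs; MCP does not define this today, so the policy shape, principal names, and `is_allowed` helper below are all hypothetical:

```python
# Hypothetical per-tool authorization at the MCP-server boundary.
# Maps each agent principal to the set of tools it may invoke.
POLICY = {
    "agent:release-bot": {"query_schema", "write_release_notes"},
    "agent:support-triage": {"query_schema"},
}

def is_allowed(principal: str, tool: str) -> bool:
    """Return True if the principal may call the named tool.
    Unknown principals get an empty tool set (deny by default)."""
    return tool in POLICY.get(principal, set())
```

Deny-by-default for unknown principals is the design choice that matters: an agent the policy has never heard of can call nothing.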

The deployment patterns we did not expect

Two surprises. First, many enterprises are running MCP servers as long-lived services with stable URLs, treating them more like APIs than per-conversation processes. This breaks the host-controlled-process pattern Anthropic shipped with; it also makes identity at the MCP-server boundary suddenly important. Second, MCP is showing up inside CI pipelines, not just developer tools. Build agents now reach for MCP servers to inspect database schema, run dependency audits, and write release notes. CI-bound agents have all the identity-gap properties of their interactive cousins, plus the burden of running unattended.
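The CI pattern can be made concrete. Under the MCP spec, a tool invocation is a JSON-RPC 2.0 request with method `tools/call`; a sketch of what an unattended build agent would send to a long-lived server (the tool name and arguments are hypothetical):

```python
import json

def make_tools_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request (JSON-RPC 2.0 framing)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# A build agent inspecting database schema before a migration check.
req = make_tools_call(1, "inspect_schema", {"table": "orders"})
```

Nothing in the request frame identifies who the agent is acting for, which is exactly why CI-bound agents inherit the identity gap and run it unattended.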

Methodology

800 working AI engineers, recruited via developer-community sampling (HN, Lobsters, the MCP Discord, the Anthropic Discord, recommendation-chain referrals). Ten-question survey, fielded over six weeks beginning March 6. 12% response rate: n=800 of 6,667 invited. Margins of error are reported in the gated PDF.

Common objections

Two methodological objections we take seriously. Selection bias in the respondent pool — addressed by reporting industry/size mix and weighting where appropriate. Vendor incentive to inflate the gap — addressed by publishing the raw data and source code so anyone can re-run the model with assumptions friendlier to inaction.

Frequently asked questions

How is the methodology auditable? The data, the analysis, and the code are published. Every chart can be reproduced from source. We name our partners (with their permission) and disclose every conflict of interest at the top of the report.

What are the confidence intervals on the headline numbers? Reported per metric in the gated PDF, alongside the margins of error noted in the methodology, so every headline figure can be read with its uncertainty.

Why publish numbers your competitors will use? Because the category needs them. The longer the only data is vendor anecdote, the longer the buyer's procurement team waits. We benefit when the category is sized; sizing requires shared numbers.

Where to start

The dataset opens at mcp identity (12 lines). The control set, which infrastructure changes the curve, is at manav mcp server. Re-fit the model with your own assumptions; we publish the source.

Where MCP adoption is fastest, and why

The MCP adoption rate is highest in three sectors that share an unexpected common feature: each has a regulator that has named external integrations as a specific risk surface. Financial services, healthcare, and government procurement lead adoption not because the technology is more applicable there but because the regulatory framing makes the adoption case easier to fund internally. The lagging sectors — retail, hospitality, education — share the inverse: regulators have not yet named external integrations as a risk surface, so the procurement case for the protocol primitives competes with other priorities and loses.

The pattern predicts the next adoption wave: retail follows when consumer-protection regulators issue agent-specific guidance, hospitality when liability insurers price agent-driven incidents, education when accreditation bodies require declared AI augmentation. The sequencing is visible in regulators' drafting calendars; adoption follows those calendars by roughly six months.

MCP solved tools. The next question is authority. The 800 engineers we asked already know.