Monday Momentum
The USB-C Moment
Why AI rivals just agreed to stop fighting and what it means for the future of agentic AI systems
Happy Monday!
OpenAI and Anthropic just did something that should be impossible.
They donated their core integration infrastructure to the same neutral foundation, along with Block, AWS, Google, Microsoft, Bloomberg, and Cloudflare.
Anthropic's Model Context Protocol (MCP), the system connecting Claude to external tools and data. OpenAI's AGENTS.md, the instruction format agents use across 60,000+ projects. Block's Goose, the framework thousands of engineers use for agentic workflows.
All three just went to Linux Foundation governance under the new Agentic AI Foundation. Not licensing deals, not partnerships. Full donation with neutral oversight and no single company in control.
Competitors who spent the last year in benchmark wars just handed their plumbing to the same nonprofit.
Anthropic, OpenAI, and Block donated their key integration tools to the Linux Foundation's new Agentic AI Foundation (AAIF), joined by AWS, Google, Microsoft, Bloomberg, and Cloudflare. MCP went from Anthropic release to industry standard in 13 months, adopted by ChatGPT, Claude, Gemini, Copilot, VS Code, and Cursor. OpenAI's AGENTS.md hit 60,000+ project adoption in 4 months. The pattern: AI companies realized fragmentation kills faster than competition. The moral of the story: it's better to control governance of an open standard everyone uses than to own proprietary infrastructure nobody adopts.
The Standard That Moved Faster Than Anyone Expected
One year ago, Anthropic released something that looked like internal tooling: the Model Context Protocol.
The pitch was simple. AI models need to connect to external tools, databases, APIs, and data sources. At the time, every company was building custom integrations, often one connector per tool per AI platform. The math was brutal: 10 AI products × 100 enterprise tools = 1,000 different connectors to maintain.
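That per-pair arithmetic can be sketched in a few lines. The 10-platform, 100-tool counts are the article's illustrative figures, not market data:

```python
# Without a shared protocol, every AI platform needs its own
# connector for every tool: M x N bespoke integrations.
platforms = 10   # AI products (illustrative figure from the text)
tools = 100      # enterprise tools (illustrative figure from the text)

without_standard = platforms * tools   # one custom connector per pair
# With a universal protocol, each side implements it exactly once:
# every platform speaks the standard, every tool exposes one server.
with_standard = platforms + tools

print(without_standard)  # 1000 connectors to build and maintain
print(with_standard)     # 110 implementations total
```

The gap only widens as the ecosystem grows: doubling both sides quadruples the bespoke-connector count but merely doubles the standardized one.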
MCP proposed a universal standard. Think USB-C for AI. Build once, work everywhere. Here's what actually happened:
November 2024: Anthropic releases MCP as open source. Early adopters include Block and Apollo. Developer tools like Zed, Replit, Codeium, and Sourcegraph start integrating.
March 2025: OpenAI officially adopts MCP across ChatGPT desktop app, Agents SDK, and Responses API.
By summer: Google DeepMind integrates. Microsoft adds support. Claude, Gemini, Copilot, Cursor, VS Code all running MCP.
December 2025: 10,000+ active public MCP servers. 97M+ monthly SDK downloads across Python and TypeScript. Registry with 2,000+ entries (up 407% since September launch).
MCP didn't just reach widespread adoption; it triggered an industry stampede.
As Anthropic put it in its announcement: "When we open sourced it in November 2024, we hoped other developers would find it as useful as we did. A year later, it's become the industry standard for connecting AI systems to data and tools."
Why Rivals Just Stopped Competing
OpenAI and Anthropic have been at war for two years.
Benchmark competitions. Researcher poaching. Enterprise deal battles. Anthropic positioning Claude as "safer." OpenAI pushing GPT as "more capable." Every model release framed as beating the other.
Yet both just contributed core infrastructure to the same foundation. With governance rules preventing either from controlling it.
Here's what changed: they realized fragmentation was going to kill the entire market.
The enterprise reality check came from pilot data. UiPath's 2025 report found that lack of interoperability is the second most cited reason for pilot failures (right after data quality issues), and 63% of executives called "platform sprawl" a growing concern.
Bain's 2025 Technology Report confirmed it: AI investment is up, but returns lag behind expectations. The reason? "Fragmented workflows, insufficient integration, and misalignment between AI capabilities and business processes."
Companies haven’t been able to effectively deploy agents at scale because every tool requires custom integration with every AI platform.
Boston Consulting Group put numbers to it. Without standards, integration complexity rises quadratically as agent deployments spread through organizations. With standards like MCP, integration effort increases only linearly.
That's the math that forced cooperation. Build proprietary connectors and watch the market stall, or standardize the plumbing and unlock deployment.
The Three Pieces of Infrastructure
AAIF launched with three donations that fit together like infrastructure layers.
Model Context Protocol (MCP) - Anthropic's contribution. The universal connector between AI models and external systems. It handles tool discovery, execution, context injection, and memory. Claude uses it. So do ChatGPT, Gemini, Copilot, VS Code, and Cursor.
Think of MCP as the API layer for agentic AI. Before MCP, connecting Claude to your Slack meant building a custom Anthropic-Slack connector. Then connecting ChatGPT to Slack meant building a second OpenAI-Slack connector. Now? Build one MCP server for Slack and it works with everything.
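Under the hood, MCP frames its messages as JSON-RPC 2.0, and running a server's tool goes through the standard `tools/call` method. Here's a minimal sketch of that message shape; the `post_message` tool name and its arguments are hypothetical, not taken from any real Slack MCP server:

```python
import json

# Hypothetical MCP tool invocation: a client asks an MCP server to
# run one of its advertised tools. MCP uses JSON-RPC 2.0 framing.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",          # standard MCP method for invoking a tool
    "params": {
        "name": "post_message",      # hypothetical tool on a Slack MCP server
        "arguments": {"channel": "#general", "text": "Deploy finished"},
    },
}

wire = json.dumps(request)           # what actually crosses the transport
decoded = json.loads(wire)
print(decoded["method"])             # tools/call
```

The point of the standard is that this same envelope works against any MCP server: only the tool name and arguments change, never the plumbing.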
AGENTS.md - OpenAI's contribution. A markdown-based standard that gives AI coding agents consistent, project-specific guidance. It's like a codebase's README, but written for machines instead of humans.
Before AGENTS.md, every AI coding tool had its own way of understanding project structure, build systems, and coding conventions. Developers were writing the same instructions in different formats for Cursor, Copilot, Codex, Devin, and others.
AGENTS.md solves that with one file. It's already in 60,000+ projects and supported by agent frameworks including Cursor, Devin, GitHub Copilot, Gemini CLI, Jules, VS Code, and more.
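In practice an AGENTS.md file is plain markdown listing the conventions an agent should follow. The file below is a hypothetical example, not from any specific project; the commands and directory layout are illustrative assumptions:

```markdown
# AGENTS.md

## Setup
- Install dependencies with `npm install` before running anything.

## Build and test
- Build: `npm run build`
- Run the full suite with `npm test`; all tests must pass before committing.

## Conventions
- Use TypeScript strict mode; avoid `any`.
- Keep components in `src/components/`, one file per component.
```

Any agent that understands the standard reads this once and follows the same rules, whichever vendor built it.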
Goose - Block's contribution. An open-source, local-first agent framework combining language models with extensible tools and standardized MCP-based integration.
Block built Goose for internal use; thousands of engineers use it weekly for coding, data analysis, and documentation. Now it's the reference implementation showing how MCP-based agents actually work in production.
The Linux Foundation Playbook
This move isn't new. It's a proven strategy the Linux Foundation has used to neutralize technology battles for two decades.
Kubernetes: Google donated container orchestration tech that became the cloud infrastructure standard. Google maintains influence through contributions, but doesn't own it. Kubernetes is now used everywhere, including competitors' clouds.
PyTorch: Meta donated the ML framework to Linux Foundation governance, where Microsoft and other rivals now contribute, but no single company controls the roadmap. PyTorch now dominates research and production.
Node.js: After a community fork threatened to split the project, Node moved to neutral foundation governance (today's OpenJS Foundation under the Linux Foundation). Node now powers modern web infrastructure.
AAIF follows the same model: technical steering committees set roadmaps, no single member gets unilateral control, and funding comes through a directed fund backed by membership dues.
As the launch announcement framed it: "The technology that will define the next decade can either remain closed and proprietary for the benefit of few, or be driven by open standards, open protocols, and open access for the benefit of all."
Anthropic and OpenAI aren't giving up competitive advantage. They're shifting competition from infrastructure to execution. It’s better to compete on models while sharing the plumbing than fragment the market into incompatible silos.
What This Means If You're Building
Stop building custom connectors for every AI tool. MCP is the layer. If you're maintaining separate integrations for Claude, ChatGPT, Gemini, and Copilot, you're solving a problem that's already been solved.
Build MCP servers instead
Expose your data, tools, or services via MCP and they work across the entire ecosystem. AWS, Cloudflare, Google Cloud, and Azure already support MCP deployment. The infrastructure exists.
AGENTS.md is your instruction file
If you're building AI coding tools or working in repositories where agents operate, adopt the standard. 60,000+ projects already use it and the ecosystem is converging.
Watch the governance carefully
The track record of the Linux Foundation is solid, but AAIF is new. Technical steering committees will determine if this stays vendor-neutral or gets captured by the biggest contributors.
Understand the competitive dynamics
Google supports AAIF as a platinum member, but also has its own A2A (Agent-to-Agent) protocol in development. Standards wars create risk, but MCP's 13-month adoption curve vs A2A's delays suggest which way the market is moving.
The real opportunity: agentic AI is moving from experiments to production deployments, and AAIF removes the integration barrier that was blocking enterprise adoption. If you're building agent-based products, your go-to-market just got easier. If you're evaluating agent vendors, prioritize MCP compatibility.
Infrastructure standardization unlocks the next wave of deployment. The companies that built MCP just handed it to everyone.
The Bottom Line
One year ago, Anthropic released a protocol for connecting AI to external tools.
Today, it's the de facto standard across ChatGPT, Claude, Gemini, Copilot, VS Code, and Cursor. 10,000+ servers. 97M+ monthly downloads. 60,000+ projects using complementary standards.
And now it's under neutral governance, with eight of the biggest names in tech funding its development.
Anthropic and OpenAI are still fierce rivals. They'll still compete on model quality, safety, enterprise features, and benchmarks. But they just agreed: the plumbing layer should be neutral, open, and standardized.
MCP is the USB-C moment for AI: one connector that works everywhere, maintained by a neutral foundation.
The companies that built it realized it’s better to control governance of a standard everyone uses than own infrastructure nobody adopts. For anyone building on AI agents, the integration layer is solved.
And for the broader industry, the AAIF signals something bigger. The era of proprietary AI stacks is ending. The future is composable, interoperable systems where agents from different vendors coordinate through shared protocols.
In motion,
Justin Wright
If the biggest AI companies just agreed to stop competing on integration infrastructure and instead standardize on neutral, open protocols, what does that tell you about where the real competitive advantage lies?

Sources

Donating the Model Context Protocol and Establishing the Agentic AI Foundation - Anthropic
Linux Foundation Announces the Formation of the Agentic AI Foundation - Linux Foundation
One Year of MCP: November 2025 Spec Release - Model Context Protocol Blog
OpenAI, Anthropic, and Block Join New Linux Foundation Effort - TechCrunch
Model Context Protocol - Wikipedia
10 AI Agent Statistics for Late 2025 - Multimodal
Model Context Protocol (MCP) FAQs: Everything You Need to Know in 2025 - MarkTechPost
Agentic AI: A Strategic Forecast and Market Analysis (2025-2030) - PRISM MediaWire

I am excited to officially announce the launch of my podcast Mostly Humans: An AI and business podcast for everyone!
Episodes can be found below - please like, subscribe, and comment!