The $8.4 Billion Performance Treadmill

Why companies are spending more than ever as frontier models get less expensive

Happy Monday!

Last week, we explored how personal AI will reshape our relationship with technology. But while we debate the future of AI interfaces, a more immediate economic reality is unfolding in enterprise boardrooms: AI model prices are dropping roughly 10x per year, yet companies are spending more on AI than ever before.

Menlo Ventures' latest report reveals that LLM API spending more than doubled from $3.5 billion to $8.4 billion in just six months. But here's the kicker: enterprises aren't capturing any savings from falling costs. Instead, they're trapped on a "performance treadmill," constantly upgrading to the latest, most expensive frontier models the moment they're released.

Claude 4 captured 45% of Anthropic's user base within one month of launch, while Claude 3.5 Sonnet's share plummeted from 83% to 16%. This instant mass migration reveals how enterprise software economics actually work.

TL;DR

Enterprise AI adoption follows a "performance treadmill" dynamic where companies immediately migrate to newer, more expensive models despite 10x annual price drops in older versions. This creates unsustainable upgrade cycles, concentrates market power among frontier model providers, and suggests AI democratization through cost reduction is largely a myth.

The Economics Paradox

Traditional software follows predictable adoption curves: new versions release, early adopters pay premium prices, costs gradually decline, and mainstream adoption follows as prices become accessible. AI has shattered this model entirely.

When new frontier models launch, 66% of enterprises upgrade within their existing provider, often within weeks. But here's what makes this unprecedented: they're not waiting for prices to drop or for proven ROI. They're immediately paying premium rates for incremental performance improvements that may not justify the cost.

Meanwhile, older models that delivered breakthrough capabilities just months earlier become virtually abandoned overnight. Claude 3.5 Sonnet went from powering 83% of Anthropic workloads to just 16% within a month of Claude 4's release, despite being significantly cheaper and perfectly capable for most enterprise use cases.

This creates an economic paradox: as individual model costs plummet 10x annually, total enterprise AI spending accelerates because companies chase performance improvements rather than capture cost savings.
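
To see the treadmill in numbers, here's a back-of-the-envelope sketch. The figures are purely hypothetical (my assumptions, not Menlo's data), but they show how a 10x price drop on last year's model can coexist with a rising bill once a team upgrades to the new frontier tier and expands usage.

```python
# Back-of-the-envelope illustration of the performance treadmill.
# All figures are hypothetical and chosen only to show the shape of the math.

legacy_monthly_bill = 100_000  # $/month on last year's frontier model
price_drop_factor = 10         # that model gets ~10x cheaper over a year
frontier_premium = 4           # new frontier model costs ~4x the discounted rate (assumption)
usage_growth = 3               # workloads also expand ~3x as AI spreads internally (assumption)

# Path 1: stay on the old model and bank the savings
stay_put_bill = legacy_monthly_bill / price_drop_factor

# Path 2: ride the treadmill - upgrade to the new frontier model and grow usage
treadmill_bill = stay_put_bill * frontier_premium * usage_growth

print(f"Stay on last year's model: ${stay_put_bill:,.0f}/month")
print(f"Upgrade and expand usage:  ${treadmill_bill:,.0f}/month")
# Stay on last year's model: $10,000/month
# Upgrade and expand usage:  $120,000/month
```

Under those assumptions, the company could have cut its bill by 90% by standing still; the upgraded-and-expanded path instead costs more than the original bill ever did.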

The Meta Trend: Performance Addiction in Enterprise Software

The performance treadmill represents a broader shift in enterprise software economics. Unlike traditional software where "good enough" often wins, AI capabilities create addictive upgrade cycles where marginal improvements feel essential rather than optional.

This dynamic emerges because AI model performance directly impacts competitive advantage in ways previous software generations didn't. When Claude 4 generates better code or produces more accurate analysis than its predecessor, enterprises fear falling behind competitors who upgrade immediately.

The result is a new category of enterprise software addiction: companies become psychologically dependent on having the latest AI capabilities, even when older versions would satisfy their actual needs.

Pattern Recognition: The Three Pillars of Performance Addiction

Pattern #1: The Instant Migration Effect

The speed of enterprise model adoption has no precedent in software history: within one month of Claude 4's release, nearly half of Anthropic's enterprise customers had migrated from Claude 3.5 Sonnet.

The migration speed reveals that enterprises aren't conducting careful ROI analysis or gradual rollouts. They're making emotional rather than rational decisions, driven by fear of missing out on competitive advantages rather than measured assessment of actual business value.

Pattern #2: Performance-First Decision Making

Survey data shows enterprises consistently choose frontier models over cheaper, faster alternatives. Cost optimization, which drives most enterprise software decisions, becomes secondary to having access to cutting-edge capabilities.

This represents an inversion of traditional enterprise purchasing behavior. While CIOs typically optimize for cost-effectiveness and proven value, AI purchasing decisions prioritize potential capability over demonstrated ROI.

Pattern #3: The Platform Lock-In Paradox

Only 11% of enterprises switched AI vendors in the past year, while 66% upgraded models within their existing platform. This creates a unique form of vendor lock-in: not through technical integration complexity, but through performance addiction.

Companies become trapped in upgrade cycles where switching platforms feels risky, but not upgrading within their chosen platform feels equally dangerous. The result is predictable revenue streams for AI providers who can sustain rapid model iteration.

Contrarian Take: AI Democratization Is a Myth

The conventional wisdom suggests that falling AI costs will democratize advanced capabilities, enabling smaller companies to compete with tech giants who previously monopolized cutting-edge AI. The performance treadmill reveals this narrative as fundamentally flawed.

Instead of democratization, we're witnessing concentration. Anthropic has captured 32% of enterprise market share while OpenAI's share dropped from 50% to 25%, yet spending continues to consolidate around a handful of frontier providers. Open-source adoption actually declined from 19% to 13%, despite significant improvements in open-source model capabilities.

The performance treadmill benefits only companies that can sustain continuous frontier model development. Rather than enabling broader access, falling costs simply fund more expensive upgrade cycles that favor established AI labs with the resources to maintain rapid release schedules.

Small companies and startups can access powerful AI capabilities, but they become addicted to performance improvements they can't control or predict. This creates a different kind of digital divide: not between those who can and can't access AI, but between those who can afford constant upgrades and those trapped on increasingly obsolete models.

The Bigger Picture: Unsustainable Upgrade Economics

The performance treadmill creates several concerning dynamics for the broader AI ecosystem:

Escalating Development Costs: As upgrade cycles accelerate, AI providers must invest more heavily in R&D to maintain competitive differentiation, driving up the costs passed on to enterprises.

Innovation Theater: The pressure to release frequent model improvements may prioritize marginal gains over breakthrough capabilities, leading to artificial differentiation rather than meaningful progress.

Enterprise Resource Drain: Companies allocate increasing resources to AI model management and upgrades rather than building applications and solving business problems.

Competitive Pressure Intensification: As model capabilities become table stakes rather than differentiators, companies must find new sources of competitive advantage while paying escalating AI infrastructure costs.

The sustainability question looms large: LLM API spending may have more than doubled to $8.4 billion in six months, but that growth rate cannot continue indefinitely. At some point, enterprises will reach a breaking point where performance improvements no longer justify upgrade costs.
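
A rough sanity check (my extrapolation, not a figure from the report) makes the point: simply compounding the last six months' growth rate forward implies spending levels the market is unlikely to bear within two years.

```python
# Naive extrapolation of the observed half-year growth rate ($3.5B -> $8.4B).
# Purely illustrative; real growth will almost certainly decelerate.
spend = 8.4e9                 # LLM API spend reported in the latest survey, USD
half_year_growth = 8.4 / 3.5  # ~2.4x over the last six months

for half_years in range(1, 5):
    spend *= half_year_growth
    print(f"+{half_years * 6} months: ${spend / 1e9:,.1f}B")
# +6 months: $20.2B
# +12 months: $48.4B
# +18 months: $116.1B
# +24 months: $278.7B
```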

If enterprises are addicted to frontier AI performance improvements regardless of cost, does this create the ultimate vendor lock-in through psychological dependence on having the latest capabilities?

In motion,
Justin Wright

Food for Thought
  1. Developers reinvented (GitHub)

  2. Gemini 2.5 Deep Think (Google)

  3. Monitoring and controlling character traits in language models (Anthropic)

  4. Introducing GPT-5 (OpenAI)

  5. Claude Opus 4.1 (Anthropic)