The $9 Million Slop Tax

Why AI's productivity promise is backfiring, and how corporations can derive real value from AI tools

Happy Monday!

Last week, I explored how OpenAI's $100 billion infrastructure bet reveals the shift toward technological sovereignty in AI. But while companies race to deploy AI everywhere, a different story is emerging from actual workplace data: the productivity gains aren't materializing. Instead, organizations are discovering an expensive new problem called "workslop."

MIT's latest research found that 95% of AI pilot projects fail to deliver measurable returns. Now research from Stanford and BetterUp Labs reveals why: employees are using AI to create polished-looking work that lacks substance, forcing colleagues to spend hours fixing or redoing it. The cost: $186 per employee monthly, or $9 million annually for a 10,000-person organization.

The crux of the problem is that organizational incentives often reward output over outcomes. While the AI industry celebrates model improvements, companies are learning a harder truth: implementing AI successfully requires rethinking entire workflows, not just deploying tools.

"Workslop," AI-generated content that appears polished but lacks substance, costs companies $9 million annually per 10,000 employees. 41% of workers encounter workslop, spending nearly two hours per incident fixing it. The problem isn't AI capability but misalignment of organizational incentives. Successful AI adoption requires workflow redesign, not just tool deployment.

TL;DR

The Workslop Epidemic: AI's Hidden Productivity Tax

The numbers paint a troubling picture of AI adoption in practice. Researchers from Stanford and BetterUp Labs surveyed 1,150 U.S. desk workers and found that 41% have encountered AI-generated output that "masquerades as good work, but lacks the substance to meaningfully advance a given task."

The problem manifests across industries. Healthcare providers report receiving lengthy AI-generated reports from patients who have diagnosed themselves using Fitbit data, with no medical grounding. Tech workers encounter code that looks functional but lacks proper architecture. Business teams receive polished presentations missing critical context.

Workers spend an average of one hour and 56 minutes per incident dealing with low-quality AI outputs. Based on respondents' self-reported salaries, this translates to $186 per month in lost productivity per affected employee. Scaled across a 10,000-person organization at the study's 41% incidence rate, the annual cost exceeds $9 million.
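To make the headline number concrete, here is a minimal back-of-the-envelope sketch. It assumes the $9 million figure applies the study's 41% incidence rate to the $186 monthly cost; the variable names are illustrative, not taken from the research.

```python
# Rough reconstruction of the reported cost figures (illustrative assumption,
# not the researchers' own calculation).
monthly_cost_per_affected_worker = 186   # USD per month in lost productivity, per the survey
incidence_rate = 0.41                    # share of workers who report encountering workslop
headcount = 10_000                       # organization size used in the article

annual_cost = monthly_cost_per_affected_worker * 12 * incidence_rate * headcount
print(f"Estimated annual workslop cost: ${annual_cost:,.0f}")  # roughly $9.2 million
```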

But the damage extends beyond direct productivity losses. "The insidious effect of workslop is that it shifts the burden of the work downstream," the researchers note. Recipients must interpret, correct, or redo the work, turning what should be collaboration into quality control.

The emotional and social costs compound the financial impact. When asked how workslop made them feel, 53% said annoyed, 38% confused, and 22% offended. About 50% of respondents viewed colleagues who sent workslop as less creative, capable, and reliable. 37% saw them as less intelligent and 42% as less trustworthy.

The Meta Trend: From Social Media Slop to Workplace Slop

Workslop represents the workplace manifestation of a broader internet phenomenon. "AI slop," defined as low-quality AI-generated content flooding social media, was added to the Cambridge Dictionary in July 2025.

The Guardian's analysis found that 9 of the top 100 fastest-growing YouTube channels feature AI-generated content like zombie football and cat soap operas. A fictional band called The Velvet Sundown released AI-generated songs on Spotify, accumulating over 850,000 listeners before being exposed. AI-generated misinformation even spread during Hurricane Helene, with fake images of displaced children used for political gain.

The pattern is identical: AI tools make content generation faster and cheaper, incentive structures reward volume over quality, and the result pollutes information ecosystems. On social media, slop producers exploit attention economics to gain engagement and ad revenue. In workplaces, employees exploit productivity metrics to appear busy while shifting actual work to colleagues.

A 21-year-old creator in the Philippines told NPR he produces AI kitten videos that take one to two hours each and generate hundreds of dollars monthly. His channel has nearly 600,000 subscribers and nearly 500 million total views. The economics work because platforms prioritize engagement instead of quality.

Similarly, 18% of workers who use AI admitted sending AI-generated content that was "unhelpful, low effort or low quality." The incentive structures rewarding apparent productivity over actual value creation drive the behavior.

Pattern Recognition: The 95% Failure Rate Decoded

MIT's NANDA initiative report "The GenAI Divide: State of AI in Business 2025" provides crucial context for understanding workslop's prevalence. Based on 150 interviews with business leaders, a survey of 350 employees, and analysis of 300 public AI deployments, the research found that only 5% of AI pilot programs achieve rapid revenue acceleration, with the vast majority delivering little to no measurable P&L impact.

Pattern #1: The Learning Gap

The MIT research reveals something counterintuitive: failures were less about AI model quality and more about how organizations attempt to use them. Lead author Aditya Challapally described the issue as a "learning gap" between tools and enterprise workflows.

Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise settings: most enterprise AI tools don't learn from user feedback or adapt to organizational context and workflows.

This explains workslop's prevalence. Without workflow integration, employees use AI as a shortcut to generate output rather than as a tool to enhance quality. The AI produces something that looks right but lacks the context, nuance, and substance that come from deep understanding.

Pattern #2: Misaligned Resource Allocation

More than half of generative AI budgets are devoted to sales and marketing tools, yet MIT found the biggest ROI in back-office automation. Real value came from eliminating business process outsourcing, cutting external agency costs, and streamlining operations.

Companies invest where AI seems most visible and impressive, not where it delivers the most value. This misalignment encourages superficial adoption: deploying AI for appearances rather than for genuine productivity gains.

Pattern #3: Build vs. Buy Failures

Purchasing AI tools from specialized vendors succeeds about 67% of the time, while internal builds succeed only one-third as often. Yet companies, especially in regulated sectors, continue building proprietary systems.

This "fetishizing control," as Fortune described it, leads to building AI systems on open source models that still lag proprietary rivals, creating more opportunities for workslop generation.

Pattern #4: Shadow AI Proliferation

The report highlights widespread use of "shadow AI," consumer tools like ChatGPT that employees adopt without company approval. This creates a vicious cycle: official tools fail to meet needs, employees turn to consumer AI products, management lacks visibility into actual usage, and workslop proliferates unchecked.

Contrarian Take: The Problem Is Organizational Incentives

The workslop crisis reveals a key truth about workplace AI adoption: the technology works exactly as designed. The problem is that organizational incentives are designed for a pre-AI world.

Consider what gets rewarded in most organizations:

  • Volume of output rather than quality of outcomes

  • Speed of delivery rather than thoroughness of work

  • Visible activity rather than meaningful contribution

  • Individual productivity rather than team effectiveness

AI amplifies whatever behaviors organizations incentivize. If promotions go to people who send the most emails, respond fastest to requests, and produce the most documents, then AI becomes a tool for gaming those metrics. As researchers noted, "blanket mandates to use AI all the time just lead to workers mindlessly copying and pasting AI responses" into documents.

The 18% of AI users who admit sending workslop aren't necessarily lazy or incompetent. They're just responding rationally to organizational signals. When companies mandate AI usage ("AI Mondays," as one CEO instituted) without changing how they measure value, employees use AI to appear productive while actual work quality declines.

This mirrors the broader AI slop phenomenon perfectly. Social media platforms don't ban AI content outright because "they think that maybe this stuff is annoying now, but in five years, they imagine a world where most content on the internet is generated by AI," as one tech journalist noted. Platforms stay the course because they're all betting on that future, even as it degrades the user experience today.

Similarly, companies mandate AI adoption because they fear falling behind competitors, regardless of whether it improves actual outcomes. The result: expensive theater masquerading as innovation.

The Broader Economic Implications

The workslop phenomenon has implications far beyond individual companies' productivity losses. It reveals a mismatch between AI industry promises and workplace realities.

Despite $30-40 billion in enterprise AI investment, productivity gains remain elusive. This echoes historical patterns: MIT research on manufacturing found that AI adoption frequently leads to measurable but temporary declines in performance before eventual improvement, essentially following a "J-curve" trajectory.

The key difference is time horizon. Manufacturing AI adoption showed eventual recovery because physical processes forced thoughtful implementation. Digital work offers no such forcing function. Companies can generate infinite quantities of workslop without immediate consequences, masking underlying productivity declines.

The workslop crisis doesn't invalidate AI's potential; it simply reveals that realizing that potential requires organizational transformation, not just technology deployment. History suggests this pattern is normal for general-purpose technologies.

Past productivity surges from technologies like electricity required numerous complementary co-inventions that took years or even decades to materialize. The companies that will eventually succeed with AI aren't necessarily those adopting it fastest or most extensively. They're organizations willing to rethink workflows, realign incentives, and measure what actually matters.

For the 95% currently failing, the lesson is clear: stop mandating AI usage and start designing workflows where AI creates genuine value. Stop measuring adoption rates and start measuring outcomes. Stop rewarding output and start rewarding impact.

The alternative is continuing to pay the $9 million slop tax: an invisible drain on productivity disguised as innovation.

In motion,
Justin Wright

Food for Thought

If 95% of AI pilot projects fail while employees increasingly send "workslop" that forces colleagues to spend hours fixing it, does this suggest that the biggest barrier to AI productivity isn't technological capability but organizational willingness to redesign incentive structures that currently reward quantity over quality?

  1. Apple built its own ChatGPT-like app to test out new Siri AI revamp (Mashable)

  2. Instant Checkout and the Agentic Commerce Protocol (OpenAI)

  3. Introducing Claude Sonnet 4.5 (Anthropic)

  4. OpenAI Is Preparing to Launch a Social App for AI-Generated Videos (Wired)

  5. Sora 2 is here (OpenAI)

I am excited to officially announce the launch of my podcast Mostly Humans: An AI and business podcast for everyone!

Episodes can be found below - please like, subscribe, and comment!