The Pentagon's AI Ultimatum

What the Anthropic blacklisting and OpenAI's Pentagon deal reveal about the future of AI governance and why every practitioner should be paying attention

Happy Monday!

Last week, the U.S. government did something it has never done before: it designated an American technology company a "supply chain risk to national security." The designation wasn't aimed at a Chinese chipmaker or a Russian defense contractor. It was aimed at Anthropic, the company behind Claude, for refusing to let the Pentagon use its AI models without explicit restrictions on mass surveillance and autonomous weapons.

Within hours, OpenAI announced it had signed a deal to deploy its models on the Pentagon's classified networks. Sam Altman said the agreement includes the same two red lines Anthropic had been fighting for. The Pentagon accepted them from OpenAI after publicly destroying Anthropic for demanding them.

If that sounds contradictory, it's because it is. And the contradiction is the most important thing happening in AI right now.

TL;DR

The Pentagon blacklisted Anthropic after it refused to remove restrictions on mass surveillance and autonomous weapons. Hours later, OpenAI signed a deal with the same red lines baked in differently. The real story isn't which company won. It's that the U.S. government used a designation reserved for foreign adversaries against an American AI company for negotiating contract terms, and what that precedent means for every company building in this space.

What Actually Happened

The timeline matters. Anthropic signed a $200 million Pentagon contract last July and became the first AI lab to deploy models on classified military networks. For months, the company pushed for explicit contract language barring its technology from mass domestic surveillance and fully autonomous weapons systems.

On Tuesday, February 24, Defense Secretary Pete Hegseth met with Anthropic CEO Dario Amodei and demanded a signed document granting the military use of Claude for "all lawful purposes." Anthropic's position was that "lawful" isn't enough. Current law hasn't caught up with AI capabilities. Bulk collection of Americans' geolocation data, web browsing history, and financial information purchased from data brokers is technically legal. Anthropic wanted explicit protections against that kind of collection at scale, arguing AI would supercharge surveillance capabilities that existing law doesn't adequately constrain.

Amodei's response was direct: "We cannot in good conscience accede to their request."

By Friday, President Trump directed every federal agency to cease using Anthropic's technology. Hegseth labeled the company a supply chain risk, a designation historically reserved for entities like Huawei. Every military contractor and supplier was ordered to cut ties with Anthropic, with a six-month phase-out period.

Timeline of the Pentagon-Anthropic standoff: from $200M contract to supply chain risk designation in seven months.

The OpenAI Contradiction

Hours after Trump's announcement, Altman posted that OpenAI had struck a deal with the Pentagon. The contract includes three red lines: no mass domestic surveillance, no directing autonomous weapons systems, and no high-stakes automated decisions like social credit systems.

These are essentially the same protections Anthropic demanded. The difference is structural. Anthropic wanted the restrictions written explicitly into the contract as hard limits. OpenAI agreed the Pentagon could use its technology for "any lawful purpose" while embedding the red lines through what Altman called "technical safeguards." OpenAI retains control over which models are deployed, limits deployment to cloud environments rather than edge systems like drones, and keeps safety researchers in the loop on classified networks.

Altman even said publicly he agrees with Anthropic's position and that the Pentagon shouldn't be threatening the Defense Production Act against AI companies.

Anthropic's counter: OpenAI's language covers "unconstrained" collection of private information but doesn't address the bulk collection of public information. That's the gap. Geolocation data, browsing history, financial records purchased from data brokers. All technically public, all technically legal, all capable of enabling mass surveillance when processed by AI at scale.

The 430 Signatures That Complicate Everything

Here's where it gets interesting. More than 430 employees from Google and OpenAI signed an open letter supporting Anthropic's position. Over 60 of those signers work at OpenAI, the company that just took the deal.

The letter reads: "We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight."

The organizers pointed out the Pentagon's negotiation strategy explicitly: "They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand."

Google DeepMind's Chief Scientist Jeff Dean called mass surveillance a violation of the Fourth Amendment. At an OpenAI all-hands, staff were told the most challenging aspect of the deal was concern over AI-driven surveillance threatening democracy. The people building these models understand the stakes, even when their companies' strategies diverge.

The Precedent That Should Worry Everyone

Set aside which company handled this better. The supply chain risk designation is the real story.

This is the first time the United States has ever applied that label to an American company. It was used in apparent retaliation for a company not agreeing to specific contract terms. Anthropic called it "legally unsound" and announced it would challenge the designation in court.

The practical damage is immediate. Every Fortune 500 company with Pentagon exposure now faces a question from general counsel: is using Claude worth the risk? Even if Anthropic wins in court, litigation will take years. Meanwhile, its $14 billion revenue run rate and IPO timeline are both under pressure at the exact moment the company is hitting its growth inflection.

But the precedent extends far beyond Anthropic. If the government can designate a domestic company a supply chain risk for negotiating contract terms, that tool is available for any future dispute with any technology company. The chilling effect on vendor negotiations is the point. As the Center for American Progress put it, the administration is trying to make an example of Anthropic.

The AI Governance Fault Lines

| | Anthropic | OpenAI | Pentagon |
|---|---|---|---|
| Mass Surveillance | Explicit contract ban | Technical safeguards | "All lawful purposes" |
| Autonomous Weapons | Explicit contract ban | Technical safeguards | "All lawful purposes" |
| Public Data Collection | Explicit ban on bulk collection | Not addressed | No restriction |
| Contract Approach | Hard contractual limits | Soft technical controls | Full discretion demanded |
| Outcome | Blacklisted as supply chain risk | Got the deal | Got what it wanted |

The Bottom Line

Both Anthropic and OpenAI took defensible positions. Anthropic drew a hard line on explicit contract protections and paid an enormous price. OpenAI found a pragmatic path that embeds similar protections through technical controls rather than contract language. Whether OpenAI's approach holds up under pressure from a government that just demonstrated its willingness to destroy a company for pushing back is the open question.

For practitioners, the implications are concrete. If you're building on Claude, you need contingency plans. If you're choosing an AI vendor for anything touching government work, the supply chain risk designation just became a procurement factor. And if you're building AI systems that could be used for surveillance or autonomous decision-making, the lines that seemed theoretical a month ago are now the subject of the most consequential AI governance fight in history.

The most revealing detail might be the simplest: more than 430 employees at Google and OpenAI, including over 60 at the company that took the Pentagon's deal, signed a letter saying their leaders should refuse it. The people closest to the technology understand something the contract negotiations missed. The question isn't whether AI should have red lines. Everyone agrees it should. The question is whether those red lines survive contact with a government willing to weaponize procurement policy against companies that insist on them.

In motion,
Justin Wright

If OpenAI's contract includes the same red lines Anthropic demanded, what was this fight actually about: the substance of AI safety, or the power to define its terms?

Food for Thought

If you haven't yet listened to my podcast, Mostly Humans: An AI and Business Podcast for Everyone, new episodes drop every week!

Episodes can be found below - please like, subscribe, and comment!