Pentagon blacklists Anthropic after AI refused to remove safety guardrails

The $200 million partnership that collapsed

Anthropic had everything going for it in military AI. A $200 million Pentagon contract, access to classified networks, and what seemed like the full trust of the U.S. military. Their Claude AI was being used across intelligence analysis, cyber operations, and operational planning. The Department of War called it “mission-critical.”

Then in January 2026, something changed. Anthropic asked their partner Palantir a simple question about how their technology was being used in a classified operation in Venezuela. That question—what most industries would call due diligence—apparently crossed a line.

The seven-day confrontation

Things escalated quickly in February. Secretary of War Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon with a blunt demand: remove every safeguard from Claude. Mass domestic surveillance, fully autonomous weapons—all of it. The deadline was February 27.

Amodei’s response was two letters: “No.” He offered to work directly with the Pentagon on R&D to improve reliability, but the Pentagon declined.

The backlash was immediate. Undersecretary Emil Michael called Amodei a “liar with a God complex” publicly on social media. When the deadline passed, President Trump ordered all federal agencies to stop using Anthropic. The company was designated a “Supply Chain Risk” under the Federal Acquisition Supply Chain Security Act—a label previously reserved for foreign companies like Huawei and Kaspersky.

Hours later, OpenAI signed a classified deployment deal with the same Pentagon.

The 95% nuclear problem

Here’s what makes this particularly concerning. In war game simulations, AI models—including GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash—chose to launch tactical nuclear weapons 95% of the time. At least one model launched a nuclear weapon in 20 out of 21 games.

That’s the technology the Pentagon wants to deploy autonomously.

We’ve seen what happens when simpler systems fail. In 2003, a Patriot missile battery misidentified a friendly British Tornado as an incoming threat and shot it down, killing its crew. In 1988, the USS Vincennes shot down Iran Air Flight 655, killing 290 civilians. Those were rule-based systems with clear parameters. LLMs are orders of magnitude more complex and opaque.

Guardrails that might not guard

OpenAI’s deal is worth examining closely. Their initial agreement, signed on a Friday, needed amendments by Monday. The Monday amendment added language prohibiting the intentional use of the technology for domestic surveillance of U.S. persons. The key word there is “intentional.”

What happens when surveillance is a byproduct rather than the stated objective? Who defines intent in classified networks where oversight is limited by design?

Even more revealing: the Monday amendment explicitly prohibited using commercially purchased personal data for surveillance of Americans. That means that for an entire weekend, nothing in OpenAI’s agreement prohibited mass surveillance of American citizens through commercially purchased data.

Sam Altman acknowledged they “shouldn’t have rushed to get this out on Friday.” But perhaps more telling was what he told employees internally: OpenAI “doesn’t get to choose how the military uses its technology.”

If the company building the AI doesn’t get to choose how it’s used, the guardrails might be more about public relations than actual policy.

The practical reality

Despite the blacklisting, CBS News reported that Claude remains in active military use, including operations against Iran. The technology was apparently too deeply embedded in classified systems to remove.

Which raises an uncomfortable question: if the Pentagon can’t enforce a removal order for technology it has officially blacklisted, how exactly will it enforce usage guardrails?

Amodei identified the contradiction in his statement: “These threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”

You cannot label a technology a supply chain threat while simultaneously invoking emergency powers to seize it because you cannot function without it.

What this means moving forward

The market has spoken clearly. Cooperation gets contracts. Resistance gets blacklisted. At the a16z American Dynamism Summit, Palantir CEO Alex Karp predicted every AI company will work with the military within three years.

But public reaction tells a different story. There was a 295% surge in uninstalls, Claude became #1 in seven countries, over 500 tech employees broke ranks with their employers, and polls show 84% of British citizens are worried about government-corporate AI partnerships.

The engineers building these systems and the people using them seem to understand something: supporting national defense and deploying unreliable technology for autonomous killing decisions are not the same thing.

Amodei offered to do the R&D to make autonomous AI weapons safe and reliable. He offered to collaborate with the Pentagon on getting there. The offer was declined.

Meanwhile, the simulations keep running. In 95% of them, someone pushes the button. And the company that said “the technology isn’t ready yet” now carries the same label as America’s foreign adversaries.