Two weeks ago I wrote about Anthropic’s safety lead resigning with a warning that the world is in peril. I ended with a question: does shipping while the safety team walks out make us pragmatic or complicit?
This week answered it. The safety team wasn’t just right about internal dynamics. The external pressure is worse than anyone described.
What Happened
Anthropic signed a $200 million Pentagon contract last July. Claude was the first AI model deployed on the military’s classified network. The deal was working. Then the Pentagon demanded more: unrestricted use across “all lawful purposes,” including autonomous weapons and mass domestic surveillance.
Anthropic drew two red lines. No fully autonomous weapons. No mass surveillance of Americans.
“We cannot in good conscience accede to their request. In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.”
— Dario Amodei, Anthropic CEO
Defense Secretary Pete Hegseth gave Anthropic until 5:01pm Friday to comply. The threat: lose the contract, get labeled a “supply chain risk to national security” (a designation historically reserved for foreign adversaries like Huawei), or face the Defense Production Act, a Korean War-era law that lets the government compel companies to comply.
Anthropic refused. Trump posted on Truth Social:
“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution.”
— Donald Trump, Truth Social
Within hours: federal agencies ordered to phase out Anthropic. Hegseth designated them a supply chain risk. Pentagon Under Secretary Emil Michael called Amodei “a liar” with “a God-complex” who “wants nothing more than to try to personally control the US Military.” Elon Musk added: “Anthropic hates Western Civilization.”
Then came the part that makes all of this absurd.
The Same Red Lines, Different Outcome
Hours after Anthropic was blacklisted, OpenAI announced a Pentagon deal. Sam Altman’s post on X:
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
— Sam Altman, OpenAI CEO
Read that again. The same two restrictions. No mass surveillance. Human in the loop on weapons. The Pentagon agreed to OpenAI’s terms - the same terms it spent a week publicly destroying Anthropic for requesting.
Axios confirmed: the department “appears to have accepted conditions similar to those put forth by Anthropic.”
Altman himself acknowledged the overlap. In an internal memo to OpenAI staff, he wrote that OpenAI would “largely follow Anthropic’s approach” and that they share the same “red lines.” He then asked the Pentagon to “offer these same terms to all AI companies.”
So what was the fight actually about?
Political, Not Policy
A Carnegie Endowment fellow put it plainly: “This is as much of a political fight as a military use issue.” Government officials had spent months criticizing Anthropic for being “overly concerned with AI safety.” The “Department of War” rebranding itself signals a posture where safety concerns are treated as weakness.
The nuclear hypothetical reveals the dynamic. The Washington Post reported that Pentagon technology chief Emil Michael pressed Amodei with a scenario: if an ICBM was launched at the United States, could the military use Claude to shoot it down? Michael characterized Amodei’s response as “you could call us and we’d work it out.” Anthropic called this account “patently false” and said they’ve always agreed to missile defense use cases.
It doesn’t matter who’s telling the truth about that specific exchange. The framing tells you everything: a tech CEO saying “maybe AI shouldn’t autonomously decide who dies” gets recast as “he wants to personally control the US Military.”
The Developer Angle
I use Claude every day. It’s in my IDE, my terminal, my workflow. I’ve written about how these tools are fundamentally stochastic, fallible, and unintelligible. I’ve argued for why non-deterministic systems fail at consistent execution.
The Pentagon wants to put these systems in autonomous weapons.
Salesforce couldn’t get an LLM to reliably handle customer service tickets beyond 8 instructions. War game simulations found that leading AI models, including Claude, ChatGPT, and Gemini, chose to deploy nuclear weapons in the vast majority of scenarios. And the response to a company saying “maybe we should have guardrails” is to label them a threat to national security.
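The stochasticity argument is easy to demonstrate concretely. The following is a minimal sketch, not any vendor’s actual decoding code: with toy logits for three candidate tokens, greedy decoding (temperature 0) picks the same token every time, while the temperature sampling production systems typically run with produces different tokens across runs. That variability is tolerable in a chat window and intolerable in a weapons system.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index from raw logits.

    temperature == 0 means greedy decoding (argmax);
    otherwise sample from the softmax of logits / temperature.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

# Hypothetical scores for three candidate next tokens.
logits = [2.0, 1.9, 0.5]

# Run 100 independent "deployments" of each decoding strategy.
greedy = {sample_token(logits, 0, random.Random(seed)) for seed in range(100)}
sampled = {sample_token(logits, 1.0, random.Random(seed)) for seed in range(100)}

print(f"greedy picks:  {sorted(greedy)}")   # one token, every time
print(f"sampled picks: {sorted(sampled)}")  # several distinct tokens
```

Nothing here is exotic: this is just softmax sampling, the default behavior of every frontier model API. The point is that non-determinism isn’t a bug to be patched; it’s the mechanism.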
Hegseth’s supply chain risk designation means no Pentagon contractor or supplier can conduct commercial activity with Anthropic. If enforced broadly, this could affect cloud providers, enterprise customers, and the entire ecosystem around Claude. Anthropic says the designation only legally applies to DoD contracts, and they’re challenging it in court.
What Actually Gives Me Hope
The industry response. Over 400 Google employees and 75+ OpenAI employees signed an open letter supporting Anthropic. Amazon and Microsoft employees reportedly pushed their executives to take a stand. The letter urged leaders to “stand together to continue to refuse the Department of War’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.”
Altman publicly sided with Anthropic before announcing his own deal. He told CNBC he doesn’t think “the Pentagon should be threatening DPA against these companies” and that he “mostly trusts” Anthropic “as a company.”
Bipartisan senators from the Armed Services and Defense Appropriations committees sent a private letter urging both sides to resolve this.
The safety teams I wrote about two weeks ago, the ones walking out the door, built the institutional muscle that made Anthropic capable of saying no. Sharma’s resignation letter warned about “pressures to set aside what matters most.” This week proved those pressures come from outside too, and they come with executive orders attached.
“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court.”
— Anthropic, official statement
For Developers
If you build with Claude, or any frontier model, this affects you directly. Not abstractly. The company behind your tools just got labeled a national security threat for saying AI shouldn’t autonomously kill people without human oversight.
The cynical read: more inference capacity for the rest of us while the government phases Anthropic out. The honest read: if the precedent holds that “safety-conscious” equals “supply chain risk,” every AI company learns the same lesson. Minimize guardrails. Don’t negotiate. Don’t ask questions. Ship and comply.
That’s the race to the bottom. As one policy expert noted: “That, incidentally, is how Beijing is approaching AI.”
Two weeks ago I asked whether shipping while safety teams leave makes us pragmatic or complicit. This week I’m asking a different question: what happens when the company that actually stood firm gets punished for it, and the one that waited gets rewarded?
I don’t have an answer. But I know which company I’m rooting for.