OpenAI Employees Are Defending a Rival Company Against the US Government — That's Never Happened Before



  • 30+: rival employees who signed the brief
  • $200M: Pentagon contract Anthropic lost
  • 2: red lines Anthropic refused to drop
  • $380B: Anthropic's current valuation

In the history of Silicon Valley, competing tech companies rarely defend each other in court. They sue each other. They poach each other's engineers. They spend billions trying to destroy each other's market share. So when more than 30 employees from OpenAI and Google DeepMind — Anthropic's two biggest rivals — walked into federal court and filed a legal brief supporting Anthropic against the U.S. government, something genuinely unprecedented had happened.

This isn't a story about one company's contract dispute. It's a story about who gets to set the rules for AI — a private company that built the technology, or a government that wants to use it without restrictions. And the fact that employees at competing companies feel strongly enough to put their own names on a court document tells you everything about the stakes involved.

Here is the full story — what happened, why it matters, and what it means for the future of AI in America.

How Anthropic and the Pentagon Ended Up in Court

In July 2025, Anthropic signed a $200 million contract with the U.S. Department of Defense. Under that contract, Claude became the first major AI model ever deployed on the Pentagon's classified networks — a milestone for the entire AI industry.

The contract came with two explicit restrictions: Claude would not be used for mass domestic surveillance of American citizens, and Claude would not be used to power fully autonomous weapons systems. These were not secret conditions buried in fine print. The Pentagon agreed to them. Operations ran smoothly under those terms for months.

Then, in early 2026, the Defense Department came back to renegotiate. The new demand: Anthropic should remove all restrictions and allow Claude to be used for "any lawful purpose." No exceptions. No red lines.

Anthropic said no.


⚠️ What Triggered the Breakdown: Tensions peaked over reports that Claude had been used through Palantir during a military operation. Anthropic reportedly inquired whether Claude was involved in an operation where "kinetic fire" occurred — where people were shot. The Pentagon took this as a sign that Anthropic might object to its AI being used in combat. Anthropic flatly denied objecting to any specific operation. According to Medianama's detailed breakdown, Pentagon officials admitted on record the designation was "ideological" with "no evidence of supply-chain risk."

The Two Red Lines Anthropic Refused to Cross

It is worth being precise about what Anthropic was actually refusing — because the headlines make it sound more absolute than it is.

Anthropic was not refusing to work with the military. Claude had already been used for intelligence analysis, operational planning, cyber operations, and modeling and simulation. The company had even agreed to let Claude be used for missile defense work. What Anthropic refused were two specific applications:

  • Mass domestic surveillance of Americans: AI can aggregate commercially available data about Americans' movements and associations at a scale existing privacy law was never designed to handle. No current law governs this use.
  • Fully autonomous weapons: today's AI models are not reliable enough to safely remove human judgment from targeting decisions. Deploying them autonomously would endanger American troops and civilians alike.

The Pentagon's response was that it has "always followed the law" and needed the freedom to use technology it licenses without a private company's restrictions. Defense Secretary Pete Hegseth announced the blacklist on social media, and President Trump publicly stated Anthropic had made a "disastrous mistake."

What "Supply-Chain Risk" Actually Means

This is the part most coverage glosses over — and it's the most important part of the story.

A "supply-chain risk" designation is not just a lost contract. Historically, this label has been reserved for foreign adversaries — Chinese companies, Russian firms — that pose a genuine threat to U.S. military systems. As Fortune reported, the designation was designed to prevent adversaries from sabotaging U.S. military systems — not to punish American firms for contract disagreements.

What the Designation Actually Does: Every defense contractor working with the Pentagon must now certify they do not use Anthropic's technology in defense-related work. This means Anthropic's enterprise customers — many of which hold government contracts — may need to drop Claude entirely to protect their own government relationships. Anthropic's CFO stated this could reduce the company's 2026 revenue by multiple billions of dollars. For the full legal breakdown, Medianama's comprehensive FAQ on the lawsuit is the best single resource.

The White House also directed federal agencies, including Treasury, State, and Health and Human Services, to begin phasing out Claude within six months. A White House spokesperson called Anthropic a "radical left, woke company." Anthropic filed two separate lawsuits on March 9, 2026, in the U.S. District Court for the Northern District of California, describing the government's actions as "unprecedented and unlawful."


Why OpenAI and DeepMind Employees Stepped In

This is where the story becomes truly unprecedented.

Within hours of Anthropic filing its lawsuits, more than 30 current employees from OpenAI and Google DeepMind filed an amicus brief in support of Anthropic. They signed in a personal capacity and were clear they were not representing their employers. According to TechCrunch's original report, the brief's central argument was direct: the Pentagon's designation punishes Anthropic for holding principled positions on AI safety — and if the government can blacklist an American AI company for refusing to remove safety restrictions, every AI lab in the country faces the same risk.

"The government's designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry."
— Amicus Brief, signed by 30+ OpenAI and Google DeepMind employees, March 9, 2026

The brief also made a pointed argument about the alternative the Pentagon had available. As Android Headlines noted, the signatories pointed out a simple fact: if the government was unhappy with Anthropic's safety terms, it could have simply canceled the contract and moved to another provider — instead of weaponizing a national security label.

The brief warned of a "chilling effect" on the entire industry. If developers fear that setting safety boundaries will lead to a federal blacklist, they may stop building those guardrails entirely. As the filing notes, in the absence of formal laws governing AI use, the ethical guardrails set by developers are often the only thing standing between these powerful systems and catastrophic misuse.

Jeff Dean and the Names on the Brief

The most prominent signatory is Jeff Dean, Google's Chief Scientist and lead on the Gemini AI program. Dean is one of the most respected names in computer science: co-creator of MapReduce and TensorFlow and an architect of Google Brain. His decision to put his name on this document carries weight that goes far beyond any single company.

  • Jeff Dean (Google DeepMind): Chief Scientist & Gemini Lead
  • Zhengdong Wang (Google DeepMind): Researcher
  • Alexander Matt Turner (Google DeepMind): Researcher
  • Noah Siegel (Google DeepMind): Researcher
  • Pamela Mishkin (OpenAI): Researcher
  • Gabriel Wu (OpenAI): Researcher
  • Roman Novak (OpenAI): Researcher

These are not activists. These are engineers and scientists who compete against Anthropic every single day. The fact that they are willing to sign their names to a court document defending a rival company signals something the headlines are not fully capturing.

The OpenAI Deal — Opportunistic or Strategic?

Within hours of the Pentagon blacklisting Anthropic, OpenAI signed its own contract with the U.S. military. The timing looked, as Sam Altman himself later acknowledged, "opportunistic and sloppy."

Anthropic CEO Dario Amodei publicly called OpenAI's approach "safety theater" and described Sam Altman's public statements as "straight up lies." As Fortune described it, the mood between the two CEOs was anything but conciliatory — even as their employees showed solidarity in court.

But at the employee level, the picture is different. Many OpenAI employees internally protested the Pentagon deal. OpenAI's head of robotics resigned specifically over the contract. And then those same employees signed the brief supporting Anthropic.

⚠️ The OpenAI Contract Fine Print: OpenAI claimed its own Pentagon deal includes the same red lines as Anthropic's — no domestic mass surveillance, no autonomous weapons. But critics note a key difference: OpenAI's restrictions are tied to existing law and Pentagon policy, which the government can change unilaterally. Anthropic's restrictions were contractual and required Anthropic's own consent to modify. That distinction matters enormously.

What Happens Next in Court

Anthropic has asked the court for two things. First, a temporary restraining order allowing it to continue working with military contractors while the case plays out. Second, a full reversal of the supply-chain risk designation.

The amicus brief from the OpenAI and DeepMind employees specifically supports the temporary restraining order request. Their argument is practical: forcing Anthropic out of classified networks now — before the court even rules — would cause immediate and irreversible harm to U.S. AI competitiveness.

Meanwhile, as CNBC reported, Google is simultaneously deepening its own Pentagon AI relationship, introducing tools that let military personnel build custom AI agents for unclassified work. The DOD's workforce of over 3 million people will be able to use Google's Gemini-based Agent Designer tool. The irony: Google's chief scientist is defending Anthropic in court while Google itself fills the gap Anthropic left.

One Detail Nobody Is Asking About: Even as this legal battle unfolds, Anthropic's models are reportedly still being used to support U.S. military operations — after the blacklist. The Pentagon itself hasn't fully cut the cord. The formal designation letter also states the ban applies to all Anthropic products, including any that "become available for procurement" in the future — meaning even models Anthropic hasn't built yet are already blacklisted. This operational contradiction is one of Anthropic's strongest legal arguments.

My Take

Most of the coverage of this story treats it as drama: Anthropic vs. Trump, Silicon Valley vs. Washington. That framing misses the more important pattern. I've covered nearly a dozen AI policy clashes on this blog, and this one is structurally different from all of them. Every previous dispute was about regulation, with governments trying to slow down or constrain AI companies from the outside. This is the first time a government has tried to reach inside a company's product architecture and demand that the safety guardrails be removed. That is a different kind of fight entirely.

The benchmark that matters here isn't a model score. It's the financial signal. Anthropic's CFO stated the designation could cost the company multiple billions in 2026 revenue. But Anthropic is currently valued at roughly $380 billion. The company is not fighting this battle because it can't afford to lose the contract. It's fighting it because losing on the principle — accepting that a government can strip safety restrictions from a private AI product by threatening a national security blacklist — would set a precedent that makes every AI safety investment worthless. That's the strategic calculation most coverage is skipping.

The uncomfortable truth is this: right now, the only thing standing between AI systems and mass deployment for surveillance or autonomous weapons is the voluntary guardrails built by the companies themselves. Congress has passed no laws. There is no regulatory framework. The safety restrictions in Anthropic's contract were not ideological — they were the entire legal architecture protecting American citizens from AI-enabled surveillance. The Pentagon's argument that it "always follows the law" is technically true. There is no law yet. That's the problem.

If you're wondering what to watch: the temporary restraining order hearing is the one that matters in the short term. If the court grants it, Anthropic survives operationally while the case proceeds, and this becomes a landmark AI rights trial. If the court denies it, Anthropic faces an immediate collapse of a large portion of its enterprise customer base — and every other AI lab quietly recalculates how much safety they can afford to build in. No court has ever ruled on anything quite like this before.

🔑 Key Takeaways

  • Anthropic refused to let the Pentagon use Claude for mass domestic surveillance and autonomous weapons — and was blacklisted as a "supply-chain risk" for it.
  • More than 30 OpenAI and Google DeepMind employees — including Google Chief Scientist Jeff Dean — filed a court brief defending Anthropic. This has never happened before between competing AI companies.
  • The supply-chain risk designation could reduce Anthropic's 2026 revenue by multiple billions of dollars and forces every defense contractor to certify they are not using Claude.
  • OpenAI signed its own Pentagon deal within hours of Anthropic's blacklisting — but its own employees protested, and the head of OpenAI robotics resigned over it.
  • The court battle centers on whether a company has the legal right to maintain safety restrictions in its own product when the government demands they be removed.
  • There are currently no federal laws governing AI use for surveillance or autonomous weapons — making Anthropic's contractual restrictions the only formal protection in place.

❓ Frequently Asked Questions

Q: What is a supply-chain risk designation and why is it unusual here?

A supply-chain risk designation is a label historically applied to foreign companies — typically Chinese or Russian firms — that pose a threat to U.S. military systems. Applying it to an American AI company for refusing to remove its own safety restrictions is completely without precedent.

Q: Why did OpenAI employees sign a brief supporting a rival company?

Because the principle at stake affects every AI company. If the government can blacklist an AI firm for maintaining safety guardrails, no AI company's restrictions are safe. The signatories argued this creates a chilling effect on the entire industry's ability to set ethical limits on how its technology is used.

Q: Can I still use Claude if I'm not a government contractor?

Yes. The blacklist applies specifically to companies doing business with the Pentagon. Anthropic's consumer and enterprise products remain available. Claude is still accessible on claude.ai and via the API for non-defense customers.
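For readers who want to confirm this for themselves, below is a minimal sketch of calling Claude through Anthropic's official Python SDK. The specific model ID shown is an assumption; check Anthropic's documentation for the current model list.

```python
# pip install anthropic
import anthropic

# The SDK reads the ANTHROPIC_API_KEY environment variable by default.
client = anthropic.Anthropic()

# Send a single-turn message to Claude and print the text of the reply.
message = client.messages.create(
    model="claude-sonnet-4-5",  # assumed model ID; consult Anthropic's docs for current models
    max_tokens=256,
    messages=[
        {"role": "user", "content": "In two sentences, what is a supply-chain risk designation?"}
    ],
)
print(message.content[0].text)
```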

Q: What are Anthropic's two red lines exactly?

Anthropic refuses to allow Claude to be used for (1) mass domestic surveillance of American citizens, and (2) fully autonomous weapons systems where AI makes lethal targeting decisions without human authorization. The company supports other military uses including intelligence analysis, cyber defense, and missile defense.

Q: What happens if Anthropic loses in court?

If the court denies the temporary restraining order, Anthropic faces immediate loss of a significant portion of its enterprise customer base. A full loss would set a legal precedent allowing the government to strip safety restrictions from private AI systems — a dangerous outcome for the entire industry.

Q: Is Claude still being used by the Pentagon despite the blacklist?

Yes, reportedly. According to Medianama's lawsuit analysis, Anthropic's models continued to support U.S. military operations even after the blacklist — and the Pentagon simultaneously ordered six months of continued service while banning the company. This contradiction is one of Anthropic's strongest legal arguments.
