Friday, March 13, 2026

Big Tech Backs Anthropic in Fight Against Trump Administration | AI Policy 2026

[Illustration: logos of Google, Microsoft, Amazon, and Meta appear over a futuristic cityscape, connected by glowing lines to the Anthropic logo at center; at right, the White House stands under stormy skies, representing the clash between the tech industry and the Trump administration.]
🔴 BREAKING: Court hearing underway in San Francisco, March 12, 2026
Tech Policy Review
Thursday, March 12, 2026  ·  AI & Government  ·  Vol. XII

Big Tech Backs Anthropic in Fight Against Trump Administration

Google, Amazon, Apple, and Microsoft have all publicly rallied behind Anthropic's landmark lawsuit — after the Pentagon took the unprecedented step of labeling the AI firm a "supply chain risk" for refusing to let its tools be used in mass surveillance and autonomous weapons.

In a case that has rapidly become one of the defining legal and political battles of the artificial intelligence era, America's biggest technology companies have thrown their collective weight behind Anthropic — an AI safety startup whose refusal to bend to military demands triggered a government crackdown unlike anything the domestic tech sector has seen before.

Since Monday, Google, Amazon, Apple, and Microsoft have all publicly supported Anthropic's legal action to overturn Defense Secretary Pete Hegseth's decision to label it a "supply chain risk." The designation, previously reserved for foreign companies and designed to prevent adversaries from sabotaging U.S. military systems, had never before been applied to a domestic firm.

Key Context Anthropic's AI assistant Claude had been deployed across classified U.S. government networks, national nuclear laboratories, and military intelligence platforms since 2024. It was, until this dispute, the only AI system of its kind approved for use on classified networks.

How the Dispute Began

The conflict traces back to a contract renegotiation that spilled dramatically into public view in February 2026. Officials claimed they had no intention of using Anthropic's technology for mass surveillance or autonomous weapons, yet Anthropic says Hegseth began insisting that the contract language prohibiting such uses be removed.

The government issued Anthropic an ultimatum: hand over unrestricted access to its AI chatbot within 48 hours, or face sanctions. Anthropic CEO Dario Amodei responded by drawing two explicit "red lines" — that its technology must not be used for domestic mass surveillance, and that it must not be embedded in fully autonomous weapons systems capable of acting without human control.

Amodei publicly refused to remove the guardrails. Trump responded by berating the company and announcing on his Truth Social platform that Anthropic tools such as Claude, in use by government and military agencies since 2024, would be removed from the entire federal government. Hegseth then designated Anthropic a supply chain risk, branding it insufficiently secure for government use: the first time an American company has ever received such a label.

  • February 2026 Pentagon insists Anthropic remove AI safety guardrails from government contracts. Negotiations stall and become public.
  • Late February / Early March Dario Amodei publicly refuses. Trump announces removal of Anthropic tools from all federal agencies on Truth Social.
  • March 7–8, 2026 Hegseth formally designates Anthropic a "supply chain risk" — a first for any American company.
  • Monday, March 9 Anthropic files a complaint against the Trump administration in the U.S. District Court for the Northern District of California, calling the administration's actions "unprecedented and unlawful."
  • Tuesday, March 10 Microsoft, Google, Apple, Amazon, and retired military officials file amicus briefs in support.
  • Wednesday, March 11 Court hearing held in San Francisco. Department of Justice declines to rule out further actions against Anthropic.
  • Thursday, March 12 Proceedings continue. Federal judge Rita Lin presides. Case gains global attention.

Who Is Standing With Anthropic

The coalition of support that has emerged around Anthropic is remarkable both for its breadth and for the awkward position in which it places many of the companies involved. Several of the same tech executives backing Anthropic have donated generously to Trump since his return to office.

  • Microsoft: amicus brief, filed Tuesday
  • Google: amicus brief plus employee filing
  • Amazon: public support
  • Apple: public support
  • OpenAI employees: personal-capacity amicus brief
  • 22 retired military chiefs: separate amicus brief, including former CIA Director Michael Hayden
  • Cato Institute: First Amendment amicus brief
  • EFF: free speech amicus brief

Microsoft's court filing said the supply chain risk designation "may bring severe economic effects that are not in the public interest," and asked the judge to order a temporary lifting of the designation to allow for more "reasoned discussion" between Anthropic and the Trump administration.

"When the government starts to overreach and step on basic levers of capitalism, the alarm bells go off."

— Gary Ellis, CEO of Remesh AI and former U.S. political operative, speaking to the BBC

More than 30 employees from OpenAI and Google DeepMind, including Google chief scientist Jeff Dean, filed an amicus brief warning that a Pentagon blacklist of Anthropic threatens to damage the entire American AI industry. The employees stated that punishing one of the leading U.S. AI companies would undoubtedly have consequences for the United States' industrial and scientific competitiveness.

The Military Angle

Perhaps the most striking intervention came from a group of 22 retired senior military officials. The group includes former CIA director Michael Hayden and former secretaries of the Air Force, Army, and Navy. They alleged in their own court filing that Hegseth's actions are a misuse of government authority for "retribution against a private company that has displeased the leadership."

The retired officials warned that the "sudden uncertainty" of targeting a technology widely embedded in military platforms could disrupt planning and put soldiers at risk during ongoing operations. The filing noted that Hegseth's conduct "threatens the rule-of-law principles that have long strengthened our military."

This military dimension carries weight. The current commander of U.S. Central Command confirmed in a video posted to social media that the military was using "advanced AI tools" to process data rapidly, though he stressed that humans would always make final decisions on targeting. Disrupting that infrastructure mid-conflict, the retired officials argued, is a national security risk in itself.

The Free Speech Dimension

Anthropic's lawsuit goes beyond contract law. The company claims its free speech rights have been violated through government retaliation for its public statements, as Hegseth, Trump, and others accused the company of being "woke" or politically at odds with the administration.

A separate filing from groups including the Electronic Frontier Foundation and the Cato Institute argued that the government's actions violate the First Amendment, warning it is not hard to "imagine a world in which the government effectively controls what all Americans do and say" if the government is able to dictate a private company's policies.

The joint amicus brief called the department's labeling of Anthropic as a risk "a potentially ruinous sanction" for businesses. The brief further argued that such a sanction "imposes a culture of coercion, complicity, and silence, in which the public understands that the government will use any means at its disposal to punish those who dare to disagree."

The Government's Counter-Position

The Trump administration's position is that the supply chain risk designation was a legitimate national security determination. However, legal experts have raised doubts about its staying power in court.

Lawyers Michael Endrias and Alan Z. Rozenshtein called it "political theater: a show of force that will not stick," arguing the government cannot credibly claim an emergency threat while simultaneously planning a six-month phaseout of the technology.

At the court hearing in San Francisco, a lawyer for Anthropic said the DoD had gone so far as to "affirmatively reach out to Anthropic customers, urging them to stop working with Anthropic." A Department of Justice lawyer representing the government did not deny such actions and refused to say the government would take no further action.

Notable Absence Meta is a significant holdout. Facebook's parent company left the industry group Chamber of Progress in 2025 after years of membership and has not joined the coalition backing Anthropic.

What Is Actually at Stake

Beyond the immediate legal contest, analysts say the case carries implications that reach far beyond one AI company's government contracts.

Anthropic's label as a supply chain risk means the government can exclude it from all contract awards, remove its products from consideration, and prevent prime contractors from using the company as a supplier — effectively severing it from the entire federal marketplace. For a company whose technology is embedded in nuclear labs and classified intelligence operations, that amounts to an existential threat.

For the broader industry, the precedent is equally alarming. Microsoft warned that the Pentagon's action forces government contractors to comply with "vague and ill-defined directions that have never before been publicly wielded against a U.S. company." If the designation stands, any AI or technology firm that takes a public ethical stance could face similar treatment.

And for AI governance globally, the outcome may shape how democratic governments negotiate the ethical boundaries of AI deployment for years to come. The question at the heart of this case — who sets the terms for how AI may be used, corporations or governments — has no settled answer.

Conclusion

The coalition arrayed against the Trump administration in this case is extraordinary: Apple, Google, Amazon, and Microsoft; researchers from Google DeepMind and OpenAI; retired CIA directors; former military secretaries; civil liberties organizations; and free-market think tanks, all united by a shared concern that the government has used an instrument of national security law as a tool of political retaliation.

Whether or not Anthropic prevails in court, the case has already clarified a boundary. The AI industry — including companies whose executives donated to and celebrated Trump's return to power — has decided that using supply-chain risk designations to punish a company for its public ethics crosses a line that cannot be ignored. The proceeding in Judge Rita Lin's San Francisco courtroom will determine whether the law agrees.

This article will be updated as the case develops.

Editorial Disclaimer This article is compiled from publicly reported information and legal filings as of March 12, 2026. It does not constitute legal, financial, or investment advice. The views expressed in quoted materials are those of the individuals and organizations cited and do not represent the editorial position of this publication. Anthropic is the company that develops Claude, the AI model. This article was written with the assistance of AI-powered research tools and reviewed for editorial accuracy.

Frequently Asked Questions

Why did the Pentagon label Anthropic a supply chain risk?

The Pentagon designated Anthropic a supply chain risk after the company refused to remove contractual restrictions preventing its AI from being used for domestic mass surveillance or deployed in fully autonomous weapons. It is the first time such a designation has ever been applied to a domestic American company.

Which companies and organizations are supporting Anthropic?

Google, Amazon, Apple, and Microsoft have all publicly backed Anthropic. More than 30 Google DeepMind and OpenAI employees (in a personal capacity), 22 retired senior U.S. military officials including former CIA Director Michael Hayden, the Electronic Frontier Foundation, and the Cato Institute have also filed supporting legal briefs.

What are Anthropic's two "red lines" in AI use?

Anthropic drew two firm ethical limits: its AI must not be used for domestic mass surveillance of American citizens, and it must not be embedded in fully autonomous weapons systems that can engage targets without human decision-making in the loop.

Where is the lawsuit being heard?

The primary case is before U.S. District Judge Rita Lin in the Northern District of California, San Francisco. Anthropic also filed a separate, narrower challenge in the federal appeals court in Washington, D.C.

What does the supply chain risk designation actually mean for Anthropic?

The designation allows the government to exclude Anthropic from all contract awards, remove its products from active government use, and bar prime contractors from working with the company — cutting it off entirely from the federal marketplace.

Is Meta supporting Anthropic?

No. Meta is a significant holdout. The company left the industry group Chamber of Progress in 2025 and has not joined the coalition supporting Anthropic's legal challenge.

Could Trump issue an executive order against Anthropic?

Axios reported in early March 2026 that President Trump was considering an executive order to eradicate Anthropic from the federal government entirely. No such order had been issued as of publication time.


Wednesday, March 11, 2026

OpenAI Workers Support Anthropic in $5B Pentagon Feud | AI Industry Crisis

[Illustration: three figures in circuit-patterned suits hold up a shield bearing the OpenAI logo and an abstract Anthropic symbol before a stylized Pentagon building, with labels such as "AI SAFETY COALITION" and "INDUSTRY UNITY" symbolizing cross-industry solidarity amid the feud.]
Breaking: AI Industry Crisis

OpenAI Workers Defy Corporate Lines to Back Anthropic in $5 Billion Pentagon Showdown

In an unprecedented alliance, researchers from rival labs unite against the Defense Department's "supply-chain risk" designation, warning of catastrophic consequences for AI ethics and national competitiveness

Published: March 10, 2026
Reading Time: 8 min
Category: Defense & AI Policy

In a stunning display of cross-industry solidarity, more than 30 employees from OpenAI and Google DeepMind—including Google's chief scientist Jeff Dean—have filed an amicus brief supporting rival Anthropic's legal battle against the Pentagon. The move comes as Anthropic warns that the Defense Department's "supply-chain risk" designation could cost the company $5 billion in revenue and fundamentally reshape how AI companies engage with national security contracts.

The $5 Billion Stakes

The financial fallout from the Pentagon's decision is already materializing. According to court filings by Anthropic CFO Krishna Rao, hundreds of millions of dollars in expected 2026 revenue tied to Pentagon-related work are immediately at risk. If the government's pressure campaign succeeds in discouraging broader commercial partnerships, Anthropic could lose up to $5 billion in sales—roughly equivalent to its total revenue since commercializing its Claude AI models in 2023 [^4^].

Financial Impact at a Glance

  • $5B total revenue at risk
  • $10B+ infrastructure investment
  • 37+ tech workers supporting
  • $80M in deals already paused

The economic impact extends beyond military contracts. Anthropic's chief commercial officer Paul Smith revealed in court statements that a financial services customer paused negotiations on a $15 million deal, while two leading financial companies refused to close contracts worth $80 million combined unless granted unilateral cancellation rights [^7^]. A grocery store chain canceled sales meetings, and a Fortune 20 company reported its attorneys were "freaked out" about maintaining relationships with the AI startup.

The Ethics Divide: Autonomous Weapons and Mass Surveillance

At the heart of the dispute lies a fundamental disagreement over AI ethics and military applications. Anthropic CEO Dario Amodei had refused Pentagon terms that would have allowed the Trump administration to deploy Claude AI for mass domestic surveillance or to power fully autonomous weapons systems—AI with the capability to kill without human involvement [^2^].

Mass domestic surveillance powered by AI poses profound risks to democratic governance—even in responsible hands.

The amicus brief filed by OpenAI and Google employees argues that these "red lines" represent legitimate safety concerns requiring robust guardrails. The engineers warn that while surveillance data on Americans exists in fragmented silos—location history, financial transactions, facial recognition—AI systems could dissolve these barriers, creating a "unified, real-time surveillance apparatus" capable of correlating behavioral patterns across hundreds of millions of people simultaneously [^3^].

The Autonomous Weapons Debate

Regarding lethal autonomous weapons, the brief emphasizes that current AI systems "cannot be trusted to identify targets with perfect accuracy" and lack the capacity for "subtle contextual tradeoffs between achieving an objective and accounting for collateral effects" that human operators provide [^3^]. The risk of AI hallucinations—false outputs presented as fact—makes human oversight essential before lethal munitions are deployed.

Industry Realignment: OpenAI's Controversial Pivot

While its employees support Anthropic's ethical stance, OpenAI itself has moved in the opposite direction. Within moments of the Pentagon designating Anthropic a supply-chain risk, OpenAI signed its own contract with the Defense Department—reportedly with fewer restrictions on "lawful use" [^1^]. This corporate decision sparked internal protests, with nearly 1,000 OpenAI and Google employees signing public letters urging the DOD to withdraw the label and calling on their leaders to refuse unilateral military use of AI systems.

OpenAI CEO Sam Altman has publicly acknowledged the danger of the Pentagon's approach, stating on social media that enforcing the supply-chain risk designation "would be very bad for our industry and our country" [^4^]. Yet this corporate positioning stands in stark contrast to the actions of his own researchers.

Legal Strategy and Immediate Fallout

Anthropic has launched a two-front legal assault, filing lawsuits in both San Francisco federal court and the DC federal appeals court. The San Francisco suit alleges First Amendment violations, while the DC case accuses the Defense Department of unfair discrimination and retaliation [^7^]. The company is seeking an emergency hearing as early as Friday for a temporary order allowing continued Pentagon contractor relationships during litigation.

Defense Secretary Pete Hegseth has taken an aggressive posture, posting on X that "effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic" [^7^]. This interpretation far exceeds the statutory scope of supply-chain risk designations, which traditionally apply only to foreign adversaries and narrow defense supply chains.

If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States' industrial and scientific competitiveness.

Cloud Providers Navigate the Crossfire

Major cloud infrastructure providers face complex decisions. Amazon and Microsoft have announced they will continue offering Anthropic's Claude models to civilian customers while excluding Pentagon-tied work [^7^]. This bifurcated approach attempts to maintain commercial relationships while avoiding the Defense Department's broad interpretation of the supply-chain prohibition.

President Trump has personally intervened in the dispute, telling Politico: "I fired Anthropic. Anthropic is in trouble because I fired [them] like dogs, because they shouldn't have done that" [^2^]. This characterization of a contractual negotiation as a personal firing underscores the politicized nature of the conflict.

Frequently Asked Questions

What is a "supply-chain risk" designation?
A supply-chain risk designation is typically reserved for foreign companies or entities deemed potential national security threats. It restricts defense contractors from using designated companies' products or services. In Anthropic's case, the Pentagon has interpreted this broadly to pressure all military contractors into severing commercial ties, far exceeding traditional statutory limits.
Why are OpenAI employees supporting a rival company?
The amicus brief represents individual employees acting in their personal capacity, not corporate policy. Signatories cite concerns that the Pentagon's retaliation against Anthropic's ethical stance threatens the entire AI industry's ability to implement safety guardrails. They view this as a precedent-setting case that could chill open deliberation about AI risks across all labs.
What are Anthropic's "red lines" for military use?
Anthropic has drawn two primary ethical boundaries: (1) prohibition on using AI for mass domestic surveillance of American citizens, and (2) prohibition on fully autonomous weapons systems that can kill without meaningful human oversight. The company argues current AI technology is not capable of safely undertaking these tasks.
How does this affect existing Pentagon AI use?
Despite the supply-chain designation, reports indicate the U.S. military continued using Claude AI in operations, including the campaign that killed Iranian leader Ayatollah Ali Khamenei, occurring hours after Secretary Hegseth announced the ban. This highlights the deep integration of Anthropic's technology in existing defense infrastructure and the complexity of immediate disengagement.
What happens next in the legal battle?
Anthropic has requested an emergency hearing in San Francisco federal court scheduled for March 13, 2026, seeking a temporary restraining order to maintain Pentagon contractor relationships during litigation. Simultaneous appeals are proceeding in DC federal court. The outcome could establish precedent for how AI companies negotiate ethical constraints in defense contracting.

Conclusion: A Defining Moment for AI Governance

The Anthropic-Pentagon dispute represents a watershed moment in the relationship between artificial intelligence developers and government power. With $5 billion in revenue at stake and the unified opposition of the industry's top technical talent, the case exposes the dangerous vacuum of legal frameworks governing AI military applications. As OpenAI researchers wrote in their amicus brief, without public law to regulate these systems, contractual restrictions imposed by developers serve as the only safeguard against catastrophic misuse. The outcome will determine whether AI ethics can withstand political pressure—or whether the race to military adoption will override safety considerations that engineers across rival labs agree are essential for democratic governance and human survival.

Legal & Financial Disclaimer

This article is provided for informational and educational purposes only and does not constitute legal, financial, or investment advice. The information regarding Anthropic's financial status, legal proceedings, and Pentagon contracts is based on publicly available court filings and news reports as of March 10, 2026. Financial figures cited are claims made in legal documents and have not been independently verified. Legal proceedings are ongoing and subject to change. Readers should consult qualified legal counsel for advice regarding defense contracting regulations and financial advisors for investment decisions. The views expressed regarding AI ethics and safety represent reported positions of cited individuals and do not necessarily reflect the views of this publication.