Anthropic Defense Department Lawsuit Draws Microsoft and Rival AI Support

Key Takeaway

The Anthropic Defense Department lawsuit centers on Anthropic’s claim that the Pentagon unlawfully labeled the company a supply-chain risk after it refused certain military uses of Claude. The case then broadened as Microsoft, rival AI researchers, retired military leaders, and civil-rights groups all backed Anthropic in court.

Anthropic Defense Department lawsuit widens as Microsoft and rivals back the case (Credit - ChatGPT, The AI Track)

Anthropic Defense Department lawsuit – Key Points

The Story

The Anthropic Defense Department lawsuit began after the Pentagon formally labeled the company a supply-chain risk last Thursday, in what supporters describe as the first reported use of that tool against a U.S. company. Anthropic argues the designation could block contractors from using Claude in Defense Department work and force government-linked partners to cut ties with it more broadly. The dispute escalated on Tuesday, March 11, when Microsoft asked a federal court in San Francisco to temporarily lift the designation while the litigation proceeds. Separate amicus briefs were also filed by 37 researchers and engineers from OpenAI and Google, retired military leaders, and civil-rights groups.

The Facts

  • Anthropic is challenging a Pentagon designation that could restrict use of Claude in government contracting.

    The dispute centers on the Department of Defense labeling Anthropic a “supply chain risk,” a move that Anthropic says could force companies doing business with the government to stop using Claude in Defense Department work and cut ties with the company more broadly.

  • Anthropic filed two lawsuits on Monday in California and Washington, D.C.

    The company sued in the Northern District of California and filed a separate, narrower case in the federal appeals court in Washington, D.C., arguing the designation is unlawful and asking the courts to block enforcement.

  • Anthropic says the conflict began over limits on military use.

    The company says it refused to allow unrestricted use of its AI systems for mass domestic surveillance of Americans and fully autonomous lethal weapons, while still remaining committed to national-security work and willing to revise contract language for legitimate defense needs.

  • Anthropic argues the designation violates its First Amendment rights and exceeds government authority.

    In its California lawsuit, the company says the government is unlawfully using state power to punish protected speech and force changes to Anthropic’s values and model restrictions. According to BBC reporting, the suit names President Trump’s executive office, senior officials including Pete Hegseth, Marco Rubio, and Howard Lutnick, and 16 government agencies.

  • The Pentagon formally issued the designation last Thursday.

    This was reportedly the first use of the blacklisting tool against a U.S. company. Microsoft’s brief similarly argues that the underlying authority under 10 U.S.C. § 3252 has only been publicly invoked once before, against the foreign company Acronis AG.

  • Anthropic has already been deeply involved in U.S. national-security work.

    Claude has been used by the U.S. government and military since 2024 and was, until recently, the only frontier AI system approved for classified military networks. Anthropic also won a $200 million Department of Defense contract in July 2025 to prototype frontier AI capabilities for national security, and in 2024 partnered with Palantir to integrate Claude into U.S. intelligence software.

  • Microsoft filed an amicus brief on Tuesday, March 11.

    Microsoft asked U.S. District Judge Rita Lin in San Francisco to temporarily lift the designation, allowing more reasoned discussion while the litigation continues. The AP reports that Lin has scheduled a hearing for March 24.

  • Microsoft says Anthropic technology is embedded in its military offerings.

    In its brief, Microsoft says Anthropic products serve as a “foundational layer” in some of its own offerings to the U.S. military, meaning immediate enforcement could force contractors to rework existing systems. Microsoft also argues that enforcing vague directives that have never before been publicly used against a U.S. company could cause severe economic effects that are not in the public interest.

  • A group of 37 employees from OpenAI, Google, and Google DeepMind also filed in support.

    The signatories include senior researchers and engineers, including Google Chief Scientist Jeff Dean, and they filed in their personal capacity rather than on behalf of their employers.

  • Those researchers argue Anthropic’s safety red lines reflect real technical limits.

    Their brief says current AI systems hallucinate, remain opaque even to their creators, and can make irreversible mistakes in lethal contexts. They also warn that AI-enabled mass surveillance can chill journalists, academics, and activists and undermine other democratic functions.

  • Retired military leaders say the Pentagon may be stretching a narrow authority beyond its purpose.

    A separate brief from 22 former high-ranking military officials, including Michael Hayden and Thad Allen, argues the move amounts to retribution against a private company and creates sudden uncertainty around technology widely embedded in military platforms, which could disrupt planning and put service members at risk during ongoing operations.

  • Civil-rights groups argue the designation raises First Amendment concerns.

    FIRE, the Electronic Frontier Foundation, the Cato Institute, Chamber of Progress, and the First Amendment Lawyers Association argue Anthropic’s design choices, including its usage policy and “Claude’s Constitution,” are protected speech, and that forcing different values into the system would amount to compelled speech.

Timeline / What Changed

  • Last Thursday: The Pentagon formally designated Anthropic a supply-chain risk.
  • Monday: Anthropic filed two lawsuits challenging the designation.
  • Tuesday, March 11: Microsoft filed an amicus brief seeking temporary court relief.
  • Same day: 37 researchers and engineers from OpenAI and Google filed a separate supporting brief.
  • Separate filings: Retired military leaders and a civil-rights coalition also backed Anthropic.
  • March 24: A hearing is scheduled in San Francisco federal court before Judge Rita Lin.

Key Risk Factors

  • For Anthropic: The company says current and future private contracts worth hundreds of millions of dollars could be jeopardized, on top of reputational and constitutional harms.
  • For contractors: If the designation stands, companies using Anthropic models in government work may have to reconfigure products quickly or cut ties with Anthropic.
  • For military operations: Former military leaders warn that sudden uncertainty around widely embedded AI tools could disrupt planning during ongoing operations.
  • For AI policy: Supporters argue the case could set a precedent for punishing companies over model-use restrictions and safety-based product design.
  • For public trust: Former military officials say using a narrow national-security authority this way could erode confidence in lawful military governance.

Background / Context

The case is a clash between AI safety limits and government procurement power. Anthropic says it supports national-security uses and has worked with the Defense Department on tailored systems, but drew red lines at mass domestic surveillance and fully autonomous lethal weapons. The company was also, until recently, the only one of its peers approved for use in classified military networks, though military officials have reportedly said they are now looking to shift that work to competitors including Google, OpenAI, and xAI.

Industry Reaction

The unusual feature of this second phase of the story is the breadth of support behind Anthropic. Microsoft’s intervention matters because it is both a major Pentagon contractor and a business partner of Anthropic. Support from employees at rival labs, including OpenAI and Google DeepMind, suggests the dispute is being viewed not only as a commercial fight but also as a test case about technical guardrails and government pressure. At the same time, Google has expanded its own Pentagon role through GenAI.mil: according to Google’s March 2026 announcement, Gemini for Government now includes Agent Designer, a no-code and low-code tool for building AI agents for unclassified administrative work across a Defense Department workforce of more than 3 million people.

Why This Matters

The Anthropic Defense Department lawsuit is fundamentally about whether the U.S. government can use supply-chain and procurement powers against an AI company over safety-based product limits. The support coalition matters because it shows the Anthropic Defense Department lawsuit now reaches beyond one company, touching defense operations, commercial contracting, free-speech questions, and the broader boundary of acceptable military AI use.


This article was drafted with the assistance of generative AI. All facts and details were reviewed and confirmed by an editor prior to publication.
