Anthropic, the maker of the widely used Claude AI assistant, argues that Defense Secretary Pete Hegseth exceeded his authority by designating the company a national security supply-chain risk after it refused to remove certain usage guardrails from its products. The designation bars Anthropic from Pentagon contracts and could lead to a broader government-wide blacklisting.
Anthropic executives have said the designation could cost the company billions of dollars in lost revenue and damage its reputation.
A panel of the U.S. Court of Appeals for the District of Columbia Circuit rejected Anthropic’s request to pause the designation while the legal proceedings continue. The decision is not a final ruling on the merits.
An Anthropic spokesperson said after Wednesday’s ruling that the company remains confident the court will ultimately find the supply-chain risk designation unlawful.
In a social media post on Wednesday, Acting Attorney General Todd Blanche characterized the ruling as a win for military readiness.
“Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company,” Blanche wrote, referring to Trump’s new label for the Defense Department.
The lawsuit is one of two Anthropic filed in response to Hegseth’s unprecedented move, which came after the company declined to let the military use the Claude chatbot for U.S. surveillance or autonomous weapons, citing safety and ethical concerns.
Hegseth issued orders designating Anthropic a risk under two separate statutes, and Anthropic is contesting each designation in its own lawsuit.
A federal judge in California blocked one of the orders on March 26, finding that the Pentagon appeared to have unlawfully retaliated against Anthropic for its stance on AI safety.
The designation marked the first time a U.S. company has been publicly labeled a supply-chain risk under obscure government procurement laws intended to protect military systems from enemy sabotage or infiltration.
In its lawsuits, Anthropic argues that the government violated its First Amendment right to free speech by retaliating against it for its stance on AI safety. The company also contends it was given no opportunity to challenge the designation, in violation of its Fifth Amendment right to due process.
The lawsuits assert that the designations were unlawful, factually unsupported, and inconsistent with the military’s previous commendations of Claude.
The Justice Department argued in a court filing that Anthropic’s refusal to lift the restrictions could create uncertainty within the Pentagon about how Claude may be used and could jeopardize military operations.
The government maintains that its decision stemmed from Anthropic’s refusal to accept contractual terms, not from its views on AI safety.
The D.C. case concerns a law that could extend the blacklist to the broader civilian government following an interagency review.
The California lawsuit involves a narrower statute that disqualifies Anthropic from Pentagon contracts linked to military information systems.