A ban imposed by the Trump administration on the use of Anthropic’s AI has been overturned by a recent court ruling, after the company argued that the ban would cost it billions of dollars in lost revenue.
US District Judge Rita F. Lin granted a preliminary injunction on Thursday, blocking the administration’s plan to sever ties with Anthropic while the legal dispute plays out in federal court in San Francisco. The order was stayed for seven days to allow the government to file an appeal.
Earlier this month, Anthropic initiated a lawsuit to prevent the Defense Department from labeling the company as a threat to the US supply chain, escalating a critical conflict regarding military AI technology protections.
The government argued that no such restrictions could be imposed on it, citing national security concerns, while Anthropic sought assurances that its AI would not be employed for autonomous weaponry or extensive surveillance of American citizens.
Judge Lin expressed skepticism about the justification for the ban, indicating that it did not appear to be motivated by national security issues.
In a statement, Anthropic lauded the judicial decision. “Our focus remains on collaborating effectively with the government to ensure that all Americans benefit from safe, reliable AI, although this case was necessary to safeguard Anthropic, our customers, and our partners,” the company remarked.
Anthropic claims that the legal principles involved affect all federal contractors whose viewpoints the government finds disagreeable, asserting that it is being excluded from government contracts due to its opposition to the administration’s stance.
The Trump administration has pledged to persist in its legal efforts to remove Anthropic from all US government repositories.
A government attorney mentioned during a hearing before Judge Lin earlier this week that trust is a vital component in any military relationship with service-providing businesses, arguing that Anthropic had compromised that trust during contract discussions by attempting to influence Pentagon policies on AI technology usage.
The attorney raised concerns about Anthropic’s potential for “future sabotage,” which may involve changes to the AI software acquired by the agency from the company.
Nevertheless, Judge Lin remarked in her ruling that the US Justice Department did not possess a “legitimate basis” to assert that Anthropic could “become a saboteur” because of its firm stance on restrictions regarding its AI technology.
(Edited by: Juviraj Anchil)