A ban imposed by the Trump administration on the use of Anthropic’s AI has been overturned by a court decision, after the Claude chatbot developer argued that it could result in billions in lost revenue.
US District Judge Rita F. Lin issued a preliminary injunction on Thursday, halting the administration’s plans to sever ties with Anthropic while the legal dispute unfolds in federal court in San Francisco. She also paused the order for seven days to give the government time to file an appeal.
Earlier this month, the firm initiated a lawsuit to prevent the Defense Department from asserting that Anthropic posed a risk to the US supply chain, escalating a critical conflict over safeguards for military AI technology.
The government argued it could not accept any restrictions, citing national security concerns, while the company sought assurances that its AI would not be used in autonomous weapons or for extensive surveillance of American citizens.
Lin expressed skepticism about the rationale behind the ban, indicating that it did not appear to address genuine national security issues.
In a statement, Anthropic praised the judge’s ruling. “We remain focused on collaborating productively with the government to ensure all Americans benefit from safe, reliable AI, even though this legal action was essential to protect Anthropic, our clients, and our partners,” the company stated.
Anthropic argues that the legal principles involved affect all federal contractors whose views the government finds objectionable, and claims it is being excluded from government contracts due to its disagreement with the administration.
The Trump administration has pledged to challenge Anthropic’s presence across all US government agencies.
At a hearing before Judge Lin earlier this week, a government lawyer argued that trust is vital to the military’s dealings with businesses providing services, and that Anthropic had compromised that trust during contract talks by trying to influence Pentagon policies on the use of AI technology.
The lawyer expressed concerns about Anthropic’s potential for “future sabotage,” which could entail alterations to the AI software purchased by the agency from the company.
However, Lin stated in her ruling that the US Justice Department did not have a “legitimate basis” to conclude that Anthropic might “become a saboteur” due to its firm stance on restrictions concerning its AI technology.
(Edited by: Juviraj Anchil)