GlobalFlyingNews

Federal Judge Questions Government's Ban on Anthropic

A federal judge expresses concerns over the government's ban on Anthropic, suggesting it may be retaliatory. A ruling is expected soon on the company's injunction request.

Anthropic logo displayed on a computer screen in New York.

A federal judge in San Francisco indicated on Tuesday that the government's ban on the AI company Anthropic appears to be punitive. The remark came during a hearing on Anthropic's request for a preliminary injunction against the Pentagon's designation of the company as a supply chain risk, which effectively blacklists it.

U.S. District Judge Rita F. Lin stated at the hearing's start, "It looks like an attempt to cripple Anthropic," expressing concern that the government might be retaliating against the company for its public criticism of the Pentagon's use of its AI model, Claude.

Judge Lin indicated that a ruling on whether to temporarily lift the government's ban is anticipated within the next few days, pending further examination of the case's merits. This hearing represents the latest development in the ongoing conflict between one of the top AI firms and the Trump administration, which has broader implications for the government's AI usage policies.

In late February, Anthropic CEO Dario Amodei declared that the company's AI model, Claude, would not be utilized for autonomous weaponry or surveillance of American citizens. In response, President Trump ordered all U.S. government agencies to cease using Anthropic products.

Earlier this month, the Pentagon classified Anthropic as a "supply chain risk" due to national security concerns, a designation typically applied to foreign adversaries that may jeopardize U.S. interests.

Anthropic has initiated two federal lawsuits asserting that this classification constitutes illegal retaliation for its commitment to AI safety. The company contends that the designation will adversely affect its customer base and revenue, as it will prevent Pentagon contractors from engaging with Anthropic.

These lawsuits, filed in the U.S. District Court for the Northern District of California and the federal appeals court in Washington, D.C., claim that the Trump administration violated the company's First Amendment rights and overstepped the bounds of the supply chain risk law.

During the Tuesday hearing, Anthropic's attorneys noted that this designation against a U.S. company appears unprecedented. Judge Lin acknowledged the Pentagon's authority to determine which AI products it prefers to use but raised questions about the legality of the ban and the Defense Secretary's directive that anyone seeking Pentagon contracts must sever ties with Anthropic.

She described the government's actions as "troubling," saying they seemed disproportionate to the stated national security concerns, which could be addressed simply by the Pentagon no longer using Claude. Instead, the government appeared to be trying to penalize Anthropic.

A government attorney countered that the actions were not retaliatory but instead a response to Anthropic's disagreement with the government over how its AI model may be used. The government also argued that Anthropic poses a potential risk because future updates to Claude could threaten national security.

Anthropic did not provide an immediate response to an email request for comment. A spokesperson for the Pentagon remarked that the agency refrains from commenting on ongoing litigation.