Federal Judge Blocks Pentagon From Labeling Anthropic a National Security Threat

This past week, a federal judge in San Francisco blocked the Pentagon and the Trump administration from enforcing a national security designation against Anthropic, the artificial intelligence (AI) company that refused to remove safety restrictions from its Claude models.

Court Halts Trump Administration’s Ban on Anthropic’s Claude AI for Federal Agencies

U.S. District Judge Rita F. Lin issued the preliminary injunction on March 26, finding that the government’s actions against Anthropic likely violated the First Amendment, denied the company due process, and exceeded statutory authority under the Administrative Procedure Act. The ruling is stayed for seven days, giving the administration until about April 2 to file an emergency appeal with the Ninth Circuit.

The dispute began when the Department of Defense (DoD) sought unrestricted access to Claude for federal use. Anthropic had long maintained two exceptions in its acceptable use policy: Claude would not be used for mass domestic surveillance of American citizens or for lethal autonomous weapons systems operating without meaningful human oversight. The DoD demanded that those guardrails be removed. Anthropic refused.

Negotiations broke down in late 2025. The conflict became public through CEO Dario Amodei’s written statements and an essay outlining the company’s position on AI safety. DoD officials viewed the restrictions as Anthropic attempting to dictate government policy.

On Feb. 27, 2026, President Trump posted on Truth Social, directing all federal agencies to immediately halt use of Anthropic technology, with a six-month phase-out period. Defense Secretary Pete Hegseth announced a supply chain risk designation under 10 U.S.C. § 3252, a statute previously applied to foreign adversaries, labeling Anthropic a potential risk of “sabotage” and “subversion.”

Several federal contractors paused or terminated deals with the company following the designation. Anthropic responded by filing suit on March 9 in the Northern District of California, alleging retaliation, due process violations, and APA breaches. A related action was filed in the D.C. Circuit.

In a 43-page order, Judge Lin enjoined the DoD, 17 other federal agencies, and Secretary Hegseth from implementing or enforcing any of the challenged actions. She ordered restoration of the status quo, allowing Anthropic to continue existing federal contracts and partnerships.

Lin wrote that the government’s conduct represented “classic illegal First Amendment retaliation.” She noted that the timing of the actions, along with internal government communications referencing Anthropic’s “rhetoric,” “arrogance,” and “strong-arming,” pointed directly to punitive intent tied to the company’s public statements on AI safety.

On due process, the court found the government had stripped Anthropic of liberty interests in its reputation and business operations without providing pre-deprivation notice or a hearing. Lin also found that the statutory designation had never before been applied to an American company under these circumstances, and that prior government vetting of Anthropic, including Top Secret clearances, FedRAMP authorization, and contracts worth up to $200 million, showed no genuine security concern. “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” Lin wrote.

The court found potential financial harm to Anthropic in the hundreds of millions to billions of dollars, along with reputational harm that monetary relief could not fully repair. Amici briefs from military leaders and AI researchers cited risks to defense readiness and the broader AI safety debate.

Anthropic said it was grateful for the court’s swift action and that it planned to keep working with the federal government. The company stated its goal remained to ensure Americans have access to safe and reliable AI.

The injunction does not resolve the underlying contract dispute. No final merits decision has been issued. A separate challenge in the D.C. Circuit remains pending, and the administration retains the option to appeal.

FAQ 🔎

  • What did the federal judge rule regarding Anthropic? U.S. District Judge Rita F. Lin issued a preliminary injunction on March 26, blocking the Pentagon and Trump administration from enforcing a national security designation and federal ban against Anthropic and its Claude AI models.
  • Why did the Pentagon designate Anthropic a supply chain risk? The DoD sought unrestricted use of Claude AI, including for mass surveillance and autonomous weapons, and labeled Anthropic a supply chain risk after the company refused to remove those safety restrictions.
  • Is the injunction currently in effect? The injunction is stayed for seven days from March 26 to allow the government to file an emergency appeal, meaning it does not take effect until about April 2, 2026.
  • What happens next in the Anthropic vs. Pentagon case? The case continues on its merits, a related action remains pending in the D.C. Circuit, and the Trump administration may seek emergency relief from the Ninth Circuit before the stay expires.