A federal appeals court in Washington on April 8 rejected Anthropic’s bid to immediately halt the Pentagon’s blacklisting of its Claude artificial intelligence (AI) models from U.S. defense contracts.
Key Takeaways:
- The D.C. Circuit denied Anthropic’s emergency motion on April 8, 2026, allowing the Pentagon’s blacklist of Claude AI to remain in force.
- The Pentagon’s supply chain risk designation affects major DoD contractors, including Amazon, Microsoft, and Palantir.
- Expedited oral arguments are set for May 19, 2026, in a case whose ruling could reshape U.S. government AI procurement policy.
Appeals Court Rules DoD Can Keep Claude AI Blacklist During Litigation
The U.S. Court of Appeals for the D.C. Circuit, in a four-page order, denied the San Francisco-based AI company’s emergency motion to suspend a “supply chain risk” designation issued by Defense Secretary Pete Hegseth. The ruling allows the Department of Defense to continue barring contractors from using Claude while litigation proceeds. Oral arguments were expedited to May 19, 2026.
The panel acknowledged Anthropic would “likely suffer some degree of irreparable harm,” citing both financial and reputational damage. Judges Gregory Katsas and Neomi Rao, both Trump appointees, concluded the balance of equities favored the government, citing judicial deference to how the Pentagon secures AI technology “during an active military conflict.”
The designation itself traces to a breakdown in negotiations between Anthropic and Pentagon officials in late February 2026. At issue were two restrictions in Anthropic’s terms of service: a prohibition on fully autonomous weapons systems, including armed drone swarms operating without human oversight, and a prohibition on mass surveillance of U.S. citizens.
Emil Michael, Undersecretary for Research and Engineering and the Pentagon’s chief technology officer, called those restrictions “irrational obstacles” to military competitiveness, particularly against China. Officials cited programs such as the Golden Dome missile defense initiative and the need for rapid-response capabilities against hypersonic threats.
Anthropic offered limited, case-by-case exceptions but refused to eliminate the core safety guardrails, citing reliability concerns with current AI for high-stakes autonomous decisions. Talks collapsed. President Trump then directed all federal agencies to stop using Anthropic’s technology, with a six-month phase-out for existing deployments.
Hegseth’s supply chain risk designation followed, an action typically applied to foreign entities such as Huawei. The order required contractors, including Amazon, Microsoft, and Palantir, to cease using Claude in any DoD-tied work. Anthropic called the move an “unlawful campaign of retaliation” for its refusal to let the government override its AI safety policies.
Anthropic filed parallel lawsuits in March 2026. One was filed in the U.S. District Court for the Northern District of California; the other challenged the specific procurement statute governing supply-chain risk in the D.C. Circuit.
On March 26, U.S. District Judge Rita F. Lin granted Anthropic a preliminary injunction in the California case. She ruled that the administration’s actions appeared more punitive than protective, lacked adequate statutory justification, and overstepped its authority. That order temporarily lifted enforcement of the designation, allowing government and contractor use of Claude to continue pending full litigation. The Trump administration appealed to the Ninth Circuit.
The April 8 D.C. Circuit decision runs counter to Lin’s ruling, creating legal tension over whether the designation is currently enforceable. The two courts are reviewing different statutory frameworks, which explains the procedural split.
Anthropic said in a statement that it remains confident in its position. “We’re grateful the court recognized these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply chain designations were unlawful,” the company said.
Industry observers flagged the case as a warning sign for U.S. AI development. Matt Schruers, CEO of the Computer and Communications Industry Association, said the Pentagon’s actions and the D.C. Circuit ruling “create significant business uncertainty at a time when U.S. companies are competing with global counterparts to lead in AI.”
The case now moves toward the expedited May 19 oral argument in the D.C. Circuit, with the Ninth Circuit appeal still pending. The outcome will likely define the limits of federal power to designate domestic AI firms as national security risks and determine how far the government can go in pressuring private companies to alter their AI safety policies.
