Judge rules Pentagon's action toward Anthropic a 'classic First Amendment retaliation'

March 27, 2026 at 18:54  |  1:45  |  CNBC

Summary

  • A U.S. judge ruled the Pentagon's designation of AI company Anthropic as a "supply chain risk" unlawful, calling it "classic First Amendment retaliation."
  • The ruling is a major legal win for Anthropic: it freezes the risk designation and pauses a government-wide ban that spanned 17 agencies.
  • However, the injunction does not take effect for seven days, giving the government a window in which to appeal.
  • Anthropic's CFO stated the company has already suffered "undeniable financial and reputational damage," losing "billions in business."
  • The ruling's key practical effect is to protect Anthropic's contractor ecosystem: companies that embed its Claude AI in their products can continue government work without stripping it out.
  • Under the initial ban, investors Amazon and Microsoft had been asked to certify that they were not using Claude in defense contexts.
  • On the consumer side, the controversy drove the Claude app to the top of the App Store, showing unexpected user stickiness.
  • A second, parallel legal case is ongoing in Washington D.C., presenting a statutory challenge to the government's actions.