Anthropic clashes with Pentagon

Watch on YouTube  |  February 25, 2026 at 18:03  |  3:10  |  CNBC
Speakers
Deirdre Bosa — Anchor and tech reporter, CNBC

Summary

  • The Pentagon has issued an ultimatum to Anthropic: remove safety guardrails from its AI models by Friday or face contract termination and potential invocation of the Defense Production Act.
  • Anthropic, previously positioned as the "safety-first" alternative to OpenAI, has scrapped its core binding safety pledges in favor of non-binding targets to avoid falling behind competitors.
  • Defense officials reportedly view Anthropic's "Claude" as the best model in the world for their needs, validating the technology's capability despite the conflict over guardrails.
  • The "AI Safety" narrative is eroding across the industry; companies are prioritizing speed and capability over self-imposed ethical constraints to win military and enterprise contracts.
Trade Ideas
Deirdre Bosa, Anchor/Reporter, CNBC Tech Check
"A defense official told Axios that they need Anthropic, and the problem for these guys is they are that good... The U.S. military thinks that Claude is the best model in the world."
Anthropic is a private company, but it is heavily backed by Amazon (a $4B investment) and Google ($2B+). The Pentagon's assessment that Anthropic's model is superior to competitors validates these massive investments. Furthermore, Anthropic's pivot, scrapping "hard safety commitments" to keep competitors from racing ahead, signals a shift toward aggressive commercialization and government contract capture. This removes the "safety handicap" that might have slowed its growth, directly benefiting its equity holders (Amazon/Google).
Trade: Long the backers of the "best model in the world" (AMZN, GOOGL).
Risk: Regulatory backlash if the removal of safeguards leads to AI accidents; the Pentagon actually terminating the contract if Anthropic refuses to comply fully.
Deirdre Bosa, Anchor/Reporter, CNBC Tech Check
"The AI race just got faster. And for every company in the path of this technology, the disruption will only accelerate from here."
Anthropic was the industry's "brake pedal", the company founded specifically to prioritize safety over speed. Its capitulation, replacing binding pledges with non-binding targets, removes the last major psychological barrier in the industry. If the "safe" company is sprinting, *everyone* is sprinting. This acceleration increases hardware demand, software churn, and overall sector velocity.
Trade: Long the broad AI sector (e.g., BOTZ) as development cycles compress.
Risk: Accelerated development raises the odds of a catastrophic error or "hallucination" that causes systemic damage, triggering a regulatory crackdown.
Deirdre Bosa, Anchor/Reporter, CNBC Tech Check
"Defense Secretary Pete Hegseth, now reportedly giving the company until Friday to remove safety guardrails on its AI model or face potential contract termination, even invocation of the Defense Production Act."
The Pentagon is not just experimenting with AI; it is demanding *uncensored, fully capable* models for immediate integration, potentially for "autonomous weapons and mass surveillance." The threat to invoke the Defense Production Act underscores that AI is now considered critical military infrastructure, signaling a massive, urgent capital injection into defense-grade AI software.
Trade: Long defense ETFs (e.g., ITA) as the sector integrates mission-critical AI.
Risk: Public outcry over "killer AI" leading to legislative pauses on funding.
This CNBC video, published February 25, 2026, features Deirdre Bosa discussing AMZN, GOOGL, BOTZ, ITA. 3 trade ideas extracted by AI with direction and confidence scoring.