AI insiders sound the alarm on safety
February 12, 2026 at 19:21 UTC  |  2:36  |  CNBC
Speakers
Deirdre Bosa — Reporter
Kelly Evans — Host

Summary

  • An "internal civil war" over AI safety has gone public, creating a bifurcation in the sector: Anthropic is positioning itself as the "safety-first" alternative ($20M Super PAC for guardrails), while OpenAI and Andreessen Horowitz are pushing for deregulation and speed ($125M PAC for federal preemption).
  • Operational risks are rising at OpenAI and xAI due to a "brain drain" of safety researchers and the dismantling of dedicated safety teams.
  • "Terminator-type fears" are resurfacing as models demonstrate the ability to self-improve (write their own code), potentially inviting harsh regulatory scrutiny if a major safety failure occurs.
Trade Ideas
  • xAI — WATCH (Safety/Tail Risk) — Deirdre Bosa (Anchor/Reporter, CNBC Tech Check) — 0:48
    Thesis: xAI "lost the co-founder who ran safety" and currently has "no dedicated safety function." Like OpenAI, xAI is stripping away safety brakes to maximize speed. This increases the probability of a catastrophic error or "hallucination" in its models, which Bosa notes is "consequential right now because the models are starting to improve themselves."
    Risk: A lack of safety guardrails leads to a product failure that invites a government crackdown.

  • OpenAI — WATCH (High Regulatory & Execution Risk) — Deirdre Bosa (Anchor/Reporter, CNBC Tech Check) — 0:09
    Thesis: OpenAI dismantled its safety team, researchers are resigning citing "ethical concerns," and the company is pouring $125M into a PAC to block state regulation. The company is aggressively prioritizing speed and dominance (the "Facebook playbook" of ads and growth). While this drives short-term progress, the "internal civil war" and loss of key talent create significant reputational and operational tail risks. If a safety incident occurs, OpenAI will be the primary target for regulators.
    Catalyst: Successful deregulation lobbying could let the company compound its lead unhindered.

  • Ad-supported AI models — NEUTRAL (Contextual Reference) — Deirdre Bosa (Anchor/Reporter, CNBC Tech Check) — 0:58
    Thesis: A former OpenAI researcher warned that putting ads in ChatGPT is "the Facebook playbook all over again." Though intended as a warning about safety and ethics, the comparison implicitly validates the ad-supported revenue model for AI: it suggests the industry is moving toward Meta's monetization structure, which is financially proven but politically sensitive.
    Risk: Regulatory scrutiny of ad-based AI models.

  • Anthropic — LONG (Strategic Positioning/Brand Equity) — Deirdre Bosa (Anchor/Reporter, CNBC Tech Check) — 0:02
    Thesis: Anthropic donated $20M to a Super PAC specifically to support "guardrails like kids safety, chip export controls, and transparency rules." Bosa notes this is "on brand" and "helps them" differentiate. In a market increasingly concerned about "self-improving models" and "Terminator fears," Anthropic is building a strategic moat by positioning itself as the "adult in the room." If regulation tightens, as the backlash suggests it might, Anthropic is best positioned to comply and to capture enterprise market share from risk-averse clients.
    Risk: Over-regulation could stifle innovation speed relative to competitors; the "China argument" (that speed is necessary for national security) could prevail in Washington.