| Ticker | Direction | Speaker | Thesis | Time |
|---|---|---|---|---|
| — | WATCH | Deirdre Bosa (Anchor/Reporter, CNBC Tech Check) | An "internal safety civil war" is going public. Safety researchers are quitting OpenAI and Anthropic, warning that models are now "improving themselves" and posing existential threats. OpenAI has dismantled its "mission alignment team." The industry is pivoting from "safety first" to "commercialization first" (the "Facebook playbook"). While this accelerates revenue (bullish), the exodus of safety talent and warnings of "chemical weapons" capabilities create significant tail risk: if a model causes real-world harm, the regulatory pendulum could swing violently back toward restriction. WATCH. The "Acceleration" camp (OpenAI/PLTR) currently has the momentum and capital, but the "Safety" camp (Anthropic) is highlighting risks that could trigger a black swan event. Key risks: regulatory crackdown following a safety failure; loss of key talent slowing innovation. | — |
| PLTR | LONG | Deirdre Bosa (Anchor/Reporter, CNBC Tech Check) | Palantir co-founder Joe Lonsdale, alongside OpenAI and Andreessen Horowitz, poured $125 million into a PAC to lobby for a "single federal AI standard" that overrides state laws. Bosa notes they "have the White House behind them." Palantir is actively shaping the regulatory environment to favor federal preemption. A unified federal standard (vs. a patchwork of 50 state laws) significantly lowers compliance friction for enterprise/government AI deployment. The massive funding advantage ($125M vs. Anthropic's $20M) and White House alignment suggest the "accelerationist" camp is winning the political battle. LONG. Palantir is positioning itself as a rule-maker, not just a rule-taker. Key risks: public backlash if a major AI safety incident occurs; potential for strict federal regulations instead of loose ones. | 1:01 |