| Ticker | Direction | Speaker | Thesis | Time |
|---|---|---|---|---|
| NVDA | WATCH | Deirdre Bosa, Anchor/Reporter, CNBC Tech Check | "If the high volume, everyday workloads, if they're moving off of Nvidia hardware, that is important. It changes the investment thesis." Nvidia remains the "gold standard" for training (building models), but the real long-term volume lies in inference (running models). If OpenAI and others successfully shift inference to cheaper competitors such as Cerebras or in-house chips, Nvidia loses the largest segment of future AI compute demand. Watch for eroding share in the inference segment, which could compress margins or slow growth despite training dominance. Counterpoint: Nvidia's CUDA moat remains strong, and it is still "foundational" to OpenAI's business. (A toy sensitivity sketch follows the table.) | 0:17 |
| GOOGL, MSFT, META | LONG | Deirdre Bosa, Anchor/Reporter, CNBC Tech Check | "Google's serving Gemini on its own custom AI chips, TPUs. Microsoft just launched its own, and Meta is rolling out custom chips across its data centers." Shifting inference to custom silicon lets the hyperscalers decouple their cost structure from Nvidia's pricing power; vertical integration improves gross margins and operational control as AI scales to "hundreds of millions of people." Long the hyperscalers as they execute on hardware independence, reducing CAPEX intensity relative to compute output. Risk: custom chip development is capital intensive and may lag Nvidia's performance improvements. (See the cost-crossover sketch after the table.) | 0:59 |