| Ticker | Direction | Speaker | Thesis | Time |
|---|---|---|---|---|
| NVDA | LONG | Mandeep Singh<br>Senior Analyst, Bloomberg Intelligence | Meta is not only deploying Nvidia GPUs but is also "talking about using NVIDIA CPUs," moving away from the traditional x86 architecture. Historically, an AI server rack paired Nvidia GPUs with Intel or AMD CPUs. By switching to Nvidia's own Grace CPUs, Nvidia captures 100% of the silicon value in the rack, increasing revenue per unit and deepening the competitive moat through a tightly integrated, proprietary ecosystem. Long NVDA as it expands from a GPU accelerator company into a full-stack data center provider. Risk: regulatory intervention over market dominance, or supply chain bottlenecks at TSMC. | 0:00 |
| AMD, INTC | SHORT | Mandeep Singh<br>Senior Analyst, Bloomberg Intelligence | Mandeep states, "Previously, AMD or Intel was the supplier of CPUs to the data centers for Meta. Now, with NVIDIA selling CPUs, that could have an impact on AMD." This represents a direct loss of socket share in the highest-growth segment of the market (AI hyperscalers). If the industry standard shifts to Nvidia GPUs paired with Nvidia CPUs (Grace-Hopper/Blackwell), the total addressable market (TAM) for AMD and Intel in AI data centers shrinks significantly. Short/avoid legacy CPU makers as they face displacement in the AI value chain. Risk: Nvidia's CPUs could underperform, or hyperscalers might maintain vendor diversity to avoid lock-in. | 2:28 |
| META | LONG | Mandeep Singh<br>Senior Analyst, Bloomberg Intelligence | Meta is aggressively "locking in that NVIDIA supply" to ensure its frontier models are trained on the latest clusters. In the AI arms race, compute capacity is the primary bottleneck; by securing millions of processors, Meta stays competitive with OpenAI and Google. The "circularity" of the deal suggests a strategic partnership that prioritizes Meta's access to hardware. Long META as it secures the infrastructure needed to maintain a leading position in AI model development. Risk: massive capex could weigh on free cash flow if AI monetization lags. | 0:00 |
| GOOGL, AMZN | WATCH | Mandeep Singh<br>Senior Analyst, Bloomberg Intelligence | Anthropic's models are trained on Google TPUs and Amazon's custom chips, not Nvidia GPUs. While Nvidia is dominant, big tech peers are actively building defensive moats via custom silicon to reduce reliance on Nvidia. However, Mandeep notes Nvidia's new chips offer "30x more token output," suggesting custom silicon still lags in raw performance for frontier training. Watch whether their internal silicon can close the performance gap with Nvidia's Blackwell architecture. Risk: if their custom chips fail to scale, they will be forced to pay a premium to Nvidia, hurting margins. | 0:58 |