| Ticker | Direction | Speaker | Thesis | Time |
|---|---|---|---|---|
| NVDA, META | LONG | — | Meta is one of Nvidia's top two customers and is now purchasing Nvidia's Grace CPUs alongside its GPUs, whereas "Intel and AMD used to own that market of server CPUs." Nvidia is leveraging its GPU dominance to cross-sell CPUs by promising better throughput and performance, and hyperscalers like Meta accept the bundle to stay on the priority list for scarce GPU allocation. Long NVDA as it captures more data center wallet share; long META as it secures the compute needed for AI dominance. Risks: regulatory scrutiny of Nvidia's bundling practices; over-ordering by hyperscalers leading to an eventual inventory digestion phase. | 0:10 |
| INTC, AMD | AVOID | — | The speaker notes that "Intel and AMD used to own that market of server CPUs," but Nvidia is now telling customers they get better performance pairing Nvidia CPUs with its GPU clusters. As hyperscalers such as Meta shift CPU spend to Nvidia to optimize AI workloads and secure supply, the total addressable market (TAM) for traditional x86 server CPUs shrinks in the high-growth AI segment. Avoid the legacy chipmakers as they lose their stronghold in the hyperscale data center market. Risks: Nvidia's CPU performance fails to meet benchmarks; antitrust intervention prevents Nvidia from bundling CPUs with GPUs. | 0:10 |
| UBER | LONG | — | Uber completes 13 billion rides per year compared to Waymo's 20 million, roughly a 650x gap (see the back-of-envelope sketch below the table). Uber is investing $100M in robotaxi charging stations. The hardest part of ride-hailing is managing wait times, and robotaxi hardware companies like Waymo lack the network density to offer low wait times globally, so they must eventually plug into Uber's network to be viable. Uber becomes the aggregator/platform rather than the hardware manufacturer. Long UBER as the "winner-take-most" network utility for the autonomous future. Risk: expansion of Chinese autonomous fleets (e.g., Didi) into Europe and other non-US markets could challenge Uber's global dominance. | 0:58 |
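
A minimal back-of-envelope sketch of the scale gap cited in the UBER row. It uses only the two ride-volume figures quoted in the table; those are the speaker's round numbers rather than reported company data, and the variable names and the 365-day year are illustrative assumptions.

```python
# Back-of-envelope comparison of the ride volumes quoted in the UBER thesis above.
# Both figures are the speaker's round numbers, not company filings.

UBER_RIDES_PER_YEAR = 13_000_000_000   # ~13 billion rides per year (as quoted)
WAYMO_RIDES_PER_YEAR = 20_000_000      # ~20 million rides per year (as quoted)

scale_gap = UBER_RIDES_PER_YEAR / WAYMO_RIDES_PER_YEAR
uber_daily = UBER_RIDES_PER_YEAR / 365    # assumes a simple 365-day year
waymo_daily = WAYMO_RIDES_PER_YEAR / 365

print(f"Scale gap: ~{scale_gap:,.0f}x")           # ~650x
print(f"Uber:  ~{uber_daily:,.0f} rides/day")     # ~35.6 million rides/day
print(f"Waymo: ~{waymo_daily:,.0f} rides/day")    # ~55 thousand rides/day
```

The ~650x gap is the quantitative core of the network-density argument in the thesis: a fleet handling tens of thousands of rides per day cannot match the wait times of a network handling tens of millions.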