AI firms like OpenAI seek Nvidia alternatives
Watch on YouTube ↗  |  February 13, 2026 at 17:37 UTC  |  5:17  |  CNBC
Speakers
Deirdre Bosa — Tech Check Anchor
Carl Quintanilla — Anchor

Summary

  • OpenAI is diversifying its hardware stack, running its first model entirely on chips from startup Cerebras, signaling a shift away from total reliance on Nvidia.
  • A clear bifurcation is emerging in AI compute: Nvidia retains the "Prestige Play" (Training/Flagship models), but is losing ground in the "Revenue Play" (Inference/Volume) to custom silicon and competitors like Broadcom and AMD.
  • The "Inference" market is financially more significant long-term than training because it represents recurring revenue rather than a one-time cost; hyperscalers are prioritizing cost-efficiency here.
  • Walmart (WMT) remains a standout in the consumer sector, up ~20% YTD, while peers like Target, Nike, and Lululemon face leadership turnover and uncertainty.
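
A rough back-of-envelope sketch of the training-vs-inference economics described above. All figures below (training cost, per-query cost, query volume) are hypothetical assumptions for illustration only, not numbers cited in the segment:

    # Illustration only: why a recurring inference bill eventually dwarfs a
    # one-time training cost. Every figure here is a hypothetical assumption.

    TRAINING_COST = 500e6          # one-time cost to train a flagship model, USD (assumed)
    COST_PER_1K_QUERIES = 2.00     # blended inference cost per 1,000 queries, USD (assumed)
    DAILY_QUERIES = 500e6          # queries per day at consumer scale (assumed)

    daily_inference_spend = DAILY_QUERIES / 1_000 * COST_PER_1K_QUERIES  # USD per day

    for month in range(1, 37):
        cumulative_inference = daily_inference_spend * 30 * month  # approx. 30-day months
        if cumulative_inference >= TRAINING_COST:
            print(f"Cumulative inference spend passes the one-time training cost in month {month}")
            break
    else:
        print("Inference spend stays below the training cost over 36 months")

With these made-up inputs, the recurring inference bill overtakes the one-time training outlay in well under two years, which is the economic logic behind treating inference as the "Revenue Play."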
Trade Ideas
Ticker | Direction | Speaker | Thesis | Time
NVDA
WATCH Deirdre Bosa
Anchor/Reporter, CNBC Tech Check
OpenAI is now running inference for specific models on non-Nvidia chips (Cerebras). Deirdre notes that while Nvidia wins the "Flagship" (Training), it is "ceding ground in the volume game" (Inference). Inference is a recurring cost, whereas training is a one-time cost; as AI scales to millions of users, the bulk of CAPEX shifts to inference. If Nvidia loses its near-monopoly on inference to cheaper alternatives or custom silicon, its pricing power and volume dominance deteriorate. WATCH/CAUTIOUS. The narrative is shifting from "Nvidia takes all" to "Nvidia for training, others for inference." Nvidia's Blackwell chips remain the gold standard for high-end compute, and demand still outstrips supply. 0:15
AMD / AVGO
LONG Deirdre Bosa
Anchor/Reporter, CNBC Tech Check
Companies are "smiling at Nvidia with one hand and signing deals with Cerebras, Broadcom, and AMD with the other." Hyperscalers and AI labs are desperate to diversify supply chains and reduce costs. As the market bifurcates into Training (Nvidia) vs. Inference (Efficiency), alternative chipmakers (AMD) and custom silicon partners (Broadcom) capture the high-volume inference market share that Nvidia is "ceding." LONG. These are the direct beneficiaries of the "Anyone but Nvidia" trade for inference workloads. Nvidia could aggressively price-cut older generation chips to defend inference market share. 2:07
GOOGL / META / MSFT
LONG Deirdre Bosa
Anchor/Reporter, CNBC Tech Check
Google shipped Gemini 3 on its own TPUs; Meta and Microsoft are shipping their own silicon. OpenAI is "dating other people." By successfully deploying internal custom silicon for inference, these hyperscalers reduce their blended compute costs and their reliance on external vendors, and that margin expansion is bullish for long-term profitability. LONG. Vertical integration of silicon improves unit economics for AI services. Internal chip development is capital-intensive, and a technical failure could set them back years against competitors using Nvidia. 0:29
WMT
LONG Carl Quintanilla
Anchor, CNBC
Walmart shares are "on fire," up roughly 20% YTD, with earnings due next Thursday. In a consumer environment defined by volatility and leadership turnover elsewhere, Walmart's consistent stock performance suggests it is capturing market share and managing the macro environment better than peers. LONG. Momentum leader in the consumer staples/discretionary blend. High expectations going into earnings; any guidance miss could lead to a sharp pullback. 4:41
TGT / NKE / LULU
WATCH Carl Quintanilla
Anchor, CNBC
There is significant "leadership turnover" at these consumer names (Target, Nike, Lululemon). Executive churn often signals internal strife or a need to pivot strategy due to underperformance. Unlike Walmart, these firms are in a transition phase. WATCH. Avoid until new leadership demonstrates a clear turnaround strategy. New CEOs often "kitchen sink" the bad news, leading to short-term stock drops before a recovery. 5:08
AAPL / CSCO
WATCH Carl Quintanilla
Anchor, CNBC
The memory-chip shortage/demand story has "crept into Apple" and "Cisco." If memory supply is tightening (or demand is increasing) to the point that it impacts hardware giants like Cisco and Apple, it implies strong underlying hardware-cycle demand but potential margin pressure from component costs. WATCH. Monitor for supply-chain constraints or margin compression due to rising memory prices. An inability to source components could delay shipments. 3:42