Meta Will Deploy Four New In-House Chips to Handle AI Workloads

March 11, 2026 at 14:54  |  1:38  |  Bloomberg Markets

Summary

  • Meta is deploying four generations of its in-house Meta Training and Inference Accelerator (MTIA) through 2027.
  • MTIA 300 is currently in production and is being used to run the inference phase for ranking and recommendation algorithms (e.g., targeted Instagram ads).
  • While Meta remains one of the largest buyers of NVIDIA and AMD chips for training AI models, custom silicon offers better economics for the inference phase.
  • The broader trend of hyperscalers (Meta, Google, Amazon) developing proprietary silicon is creating long-term pressure on merchant silicon providers like NVIDIA.
Trade Ideas
Ed Ludlow Reporter 0:34
Meta is releasing four generations of its MTIA accelerator through 2027 because it sees better economics in designing its own chips for running models in the inference phase. AI workloads split into two phases: training (creating the model) and inference (running it). By designing custom silicon specifically for inference tasks such as ad targeting, Meta drastically reduces its reliance on expensive third-party GPUs, lowering operating expenditure and directly expanding profit margins in its core advertising business. LONG META as vertical integration of AI hardware improves unit economics and protects long-term profitability. Designing and fabricating custom silicon requires massive upfront R&D and CapEx; if the chips underperform merchant alternatives, the program becomes a costly sunk investment.
Ed Ludlow Reporter 1:08
Hyperscalers are developing homegrown chips to rely less on "the Nvidias of the world," which puts more pressure on NVIDIA, even though Meta remains one of the biggest buyers of both NVIDIA and AMD chips. Hyperscalers are currently forced to buy NVDA and AMD chips to train massive AI models. However, as these tech giants shift the inference phase (which represents the bulk of long-term compute volume) onto their own custom chips, the total addressable market for merchant silicon will hit a ceiling: the largest customers are slowly becoming competitors. WATCH NVDA and AMD. Short-term revenues are secure thanks to the ongoing AI training arms race, but long-term pricing power and volume growth face headwinds from hyperscaler self-sufficiency. AI model sizes could grow exponentially, however, requiring so much compute that hyperscalers buy every available NVDA/AMD GPU regardless of their in-house chip programs.
Dani Burger Anchor 1:08
We already saw a wave being made by Google chips, and of course, Amazon has its own chips, so it's getting to be a big fight. Google (TPUs) and Amazon (Trainium/Inferentia) are executing the same playbook as Meta. By controlling their own silicon destiny, these cloud providers can offer cheaper AI compute to enterprise customers than clouds that solely rent out NVIDIA GPUs. This vertical integration defends their cloud market share and insulates them from hardware supply-chain bottlenecks. LONG GOOGL and AMZN as their proprietary silicon provides a structural cost advantage in the cloud wars. If NVIDIA's CUDA software ecosystem remains the de facto standard for developers, enterprise customers may refuse to use Google's or Amazon's custom chips, forcing the hyperscalers to keep buying NVIDIA anyway.
This Bloomberg Markets video, published March 11, 2026, features Ed Ludlow and Dani Burger discussing META, NVDA, AMD, GOOGL, and AMZN. Three trade ideas were extracted by AI with direction and confidence scoring.