Trade Ideas
A hacker demonstrated that the M-series chip architecture is "80 times more efficient" than an Nvidia A100 GPU for inference and training on specific transformer tasks, positioning Apple as the "only real threat to Nvidia's hardware moat." While Nvidia dominates data center training, Apple is cornering the market on inference (running the models). If the world shifts to local inference to save cost and energy, demand for data center inference GPUs could soften. WATCH. This is a long-tail risk to Nvidia's absolute dominance, specifically to the inference segment of its revenue. Nvidia's Blackwell/Rubin chips, however, may vastly outperform Apple's silicon in raw power, maintaining the need for cloud compute for advanced tasks.
Apple is spending very little on AI CapEx ($1.4B) compared with peers ($630B combined), instead paying Google roughly $1B/year to license Gemini for its operating system. Apple's strategy is to own the distribution (3 billion devices) while outsourcing model costs to Google. This partnership cements Google as the default "intelligence engine" for the world's largest consumer hardware base, guaranteeing usage volume that Microsoft/OpenAI cannot access natively at the OS level. LONG. Google benefits from Apple's distribution monopoly without having to win the hardware war itself. Risks: Apple eventually replaces Gemini with a proprietary in-house model, and the Apple/Google search and AI deal draws regulatory scrutiny.
Apple released the M5 chip (4x faster than the M4) and $600 entry-level devices (MacBook Neo, iPhone 17e) capable of running LLMs locally; Mac Minis and Mac Studios are currently "sold out everywhere" due to demand for local AI compute. Apple is "accidentally" winning the AI hardware race by controlling the edge: while hyperscalers spend billions on CapEx, Apple is selling the "shovels" for local inference. The "sold out" status indicates a massive hardware supercycle is underway, driven by privacy and local-compute needs rather than routine device upgrades. LONG. Apple is becoming the dominant platform for "Personalized Intelligence," justifying a valuation expansion comparable to Nvidia's rise. Risks: failure to deliver a cohesive software experience (the AI-powered Siri is delayed), and Google or others capturing the software layer despite Apple's hardware lead.
New local models on Apple M5 chips let users run frontier intelligence on-device without paying subscription fees. Ejaaz asks, "Why would you pay $200 a month on a Claude subscription... if you could get frontier intelligence... on your mobile phone?" The rise of capable local inference (edge AI) commoditizes the subscription models of OpenAI (Microsoft) and Anthropic (Amazon): if users can run "OpenClaw" or Llama locally for free, with better privacy, the Total Addressable Market (TAM) for cloud-based AI subscriptions ($20/mo) collapses. AVOID. The "Edge Compute" thesis is a direct deflationary force on cloud AI revenue and on SaaS pricing power for the major hyperscalers backing these labs. Risks to the thesis: local models may hit a performance ceiling compared to massive cloud clusters (GPT-6/7), and consumers may prefer the convenience of cloud despite the cost.
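The economics behind this thesis can be made concrete with a back-of-the-envelope payback calculation. The sketch below is illustrative only: it assumes the $600 entry-level device price and the subscription figures quoted in the episode ($200/mo premium, $20/mo consumer), and it ignores electricity, depreciation, and any quality gap between local and cloud models.

```python
# Illustrative payback arithmetic for the edge-vs-cloud thesis above.
# Assumptions (from the figures quoted in the episode, not verified here):
#   - $600 one-time cost for an entry-level device capable of local inference
#   - $200/month for a premium cloud AI subscription
#   - $20/month for a standard consumer cloud AI subscription

DEVICE_COST = 600    # one-time device cost, USD
PREMIUM_SUB = 200    # premium cloud subscription, USD per month
CONSUMER_SUB = 20    # consumer cloud subscription, USD per month

for label, monthly in [("$200/mo premium plan", PREMIUM_SUB),
                       ("$20/mo consumer plan", CONSUMER_SUB)]:
    payback_months = DEVICE_COST / monthly
    print(f"vs {label}: device pays for itself in {payback_months:.0f} months")

# Output:
# vs $200/mo premium plan: device pays for itself in 3 months
# vs $20/mo consumer plan: device pays for itself in 30 months
```

Under these assumptions the device breaks even in roughly 3 months against a premium subscription and 30 months against a $20/mo plan, which is the deflationary pressure the AVOID call is pointing at.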
This Bankless video, published March 05, 2026, features Ejaaz Ahamadeen discussing NVDA, GOOGL, AAPL, MSFT, and AMZN. Four trade ideas were extracted by AI with direction and confidence scoring.
Speakers: Ejaaz Ahamadeen · Tickers: NVDA, GOOGL, AAPL, MSFT, AMZN