Four CEOs on the Future of AI: CoreWeave, Perplexity, Mistral, and IREN

Watch on YouTube  |  March 23, 2026 at 18:11  |  1:37:39  |  All-In Podcast

Summary

  • CoreWeave's Financing & Scaling Model: The company innovated GPU-backed, non-recourse project finance ("the box"), allowing it to raise $35B in 18 months. Long-term (5-year) contracts with creditworthy counterparties de-risk lenders and have driven its cost of capital down 600 bps.
  • GPU Lifespan & Depreciation Debate: CoreWeave's CEO directly counters the "16-month obsolescence" narrative, citing 5-year customer contracts, a 6-year depreciation schedule, and appreciating secondary market prices for older chips (e.g., A100) as new use cases and companies emerge.
  • Infrastructure Demand & Constraints: Demand for AI compute is described as "relentless" and overwhelms global capacity. The constraint has shifted from GPUs to power availability, data center construction, and skilled labor, creating a multi-year "time to compute" bottleneck.
  • Perplexity's "Orchestration" Edge: Its strategy as a "Switzerland" that routes queries and tasks across multiple AI models (GPT, Claude, Gemini, open-source) is a defensible moat against larger, single-model competitors. This multi-model harness enables positive gross margins.
  • Shift Towards Local/Edge Compute: A trend towards powerful local workstations (Mac Studio, Dell/NVIDIA) running open-source models is emerging, driven by cost savings, privacy, and latency. This will create a hybrid orchestration model between local and server-side compute.
  • Mistral's Open-Source Verticalization Thesis: Open-source models allow for deeper customization and data control for enterprises. Mistral's "forge" product deploys engineers on-site to build bespoke, vertically specialized models while keeping customer data segregated.
  • Enterprise AI Adoption Hurdles: Critical barriers include data governance, context engines for access control, and the need for deterministic, observable agent workflows—areas where open-ended tools like OpenClaw fall short for mission-critical systems.
  • IREN's Real-Asset Arbitrage: The company's 8-year head start in securing land and grid connections (4.5 GW capacity) near excess renewable energy sources (wind/solar in West Texas) is a fundamental scaling advantage in the AI data center race.
  • Labor & Supply Chain Bottlenecks: Scaling physical infrastructure requires thousands of skilled tradespeople, stressing local labor markets and supply chains (e.g., memory), making execution speed and local community integration key competitive factors.
  • Demand Elasticity (Jevons Paradox): Industry participants believe that as compute becomes cheaper and more abundant (e.g., faster image generation), it will induce significantly more usage, creating a self-reinforcing demand cycle rather than a saturation point.
Trade Ideas

Mike Intrator, CEO, CoreWeave (18:30)
The CoreWeave CEO dismisses GPU depreciation fears ("obsolete in 16 months") as "nonsense" pushed by short sellers. He notes that A100 prices have appreciated, customer contracts run 5+ years, and his company uses a 6-year depreciation schedule. He asserts that NVIDIA's latest architectures (H100, H200, GB200) are brought to scale first by CoreWeave and have long useful lives in inference and other workloads. The narrative of rapid obsolescence contradicts the commercial reality of long-term contracts and the emergence of new companies and use cases for older chips. If demand is structural and multi-year, and NVIDIA maintains its architecture leadership, its hardware retains value and drives recurring revenue. Direction: LONG, because the core bear thesis on inventory depreciation is directly challenged by a major infrastructure customer's on-the-ground data, and sustained demand across the hardware stack (bleeding-edge to legacy) supports NVIDIA's financial model and ecosystem dominance. Risks: a genuine, rapid technological breakthrough that makes current GPU architectures obsolete faster than the 5-6 year cycle, or a collapse in AI application demand.
Daniel Roberts, Co-Founder & Co-CEO, IREN (80:30)
The IREN CEO states the company cannot meet current AI compute demand; its $9.7B Microsoft contract represents only 5% of its capacity. He emphasizes the firm's 8-year lead in securing land and power (4.5 GW) as a "huge" scaling advantage, with the binding constraint being "time to compute" (construction speed) rather than power. In a market constrained by power and data-center build-out speed, a company with a multi-gigawatt pipeline of secured, renewable-energy-connected sites holds a formidable moat and can capture a disproportionate share of the exploding demand described by all speakers. Direction: LONG, because the company possesses the critical, scarce real assets (power, land, grid connections) needed to scale AI infrastructure, and its early-mover advantage in site development is difficult to replicate quickly, positioning it as a key bottleneck supplier. Risks: execution risk in building out data centers at the required pace, or a sudden, sharp downturn in demand for AI compute that leads to overcapacity.

This All-In Podcast episode, published March 23, 2026, features Mike Intrator and Daniel Roberts discussing NVDA and IREN. Two trade ideas were extracted by AI with direction and confidence scoring.

Speakers: Mike Intrator, Daniel Roberts  · Tickers: NVDA, IREN