The CoreWeave CEO states that GPU depreciation fears ("obsolete in 16 months") are "nonsense" pushed by short sellers. He notes that A100 prices have appreciated, that customer contracts run for 5+ years, and that his company uses a 6-year depreciation schedule. He asserts that NVIDIA's latest architectures (H100, H200, GB200) are brought to scale first by CoreWeave and retain very long useful lives in inference and other workloads. The narrative of rapid obsolescence contradicts the commercial reality of long-term contracts and the emergence of new companies and use cases for older chips. If demand is structural and multi-year, and NVIDIA maintains its architecture leadership, its hardware retains value and drives recurring revenue. LONG because the core bear thesis on inventory depreciation is directly challenged by a major infrastructure customer's on-the-ground data, and sustained demand across the hardware stack (bleeding-edge to legacy) supports NVIDIA's financial model and ecosystem dominance. The key risks are a genuine, rapid technological breakthrough that makes current GPU architectures obsolete faster than the 5-6 year cycle, or a collapse in AI application demand.
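To make the depreciation dispute concrete, the sketch below compares residual book value under the 6-year straight-line schedule CoreWeave cites against the 16-month obsolescence assumption in the bear case. The $30,000 per-GPU cost is a hypothetical placeholder for illustration only, not a figure cited by either side.

```python
def straight_line_book_value(cost: float, life_months: int, age_months: int) -> float:
    """Residual book value of an asset under straight-line depreciation."""
    remaining = max(life_months - age_months, 0)
    return cost * remaining / life_months

# Hypothetical per-GPU acquisition cost; not a figure from the discussion.
UNIT_COST = 30_000.0

for age in (12, 16, 24, 36, 48, 60, 72):
    bull = straight_line_book_value(UNIT_COST, life_months=72, age_months=age)  # 6-year schedule
    bear = straight_line_book_value(UNIT_COST, life_months=16, age_months=age)  # "obsolete in 16 months"
    print(f"month {age:>2}: 6-yr schedule ${bull:>9,.0f} | 16-month assumption ${bear:>9,.0f}")
```

Under the 6-year schedule the asset still carries most of its value well past the point where the bear case writes it to zero, which is why the schedule choice sits at the center of the argument.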
The IREN CEO states the company cannot meet current AI compute demand, with its $9.7B Microsoft contract representing only 5% of its capacity. He emphasizes its 8-year lead in securing land and power (4.5 GW) as a "huge" scaling advantage, with the constraint being "time to compute" (construction speed), not power. In a market constrained by power and data-center build-out speed, a company with a multi-gigawatt pipeline of secured, renewable-energy-connected sites holds a formidable moat. This asset base allows it to capture a disproportionate share of the exploding demand described by all speakers. LONG because the company possesses the critical, scarce real assets (power, land, grid connections) needed to scale AI infrastructure, and its early-mover advantage in site development is difficult to replicate quickly, positioning it as a key bottleneck supplier. The key risks are failure to execute the data-center build-out at the required pace, or a sudden, sharp downturn in demand for AI compute that leads to overcapacity.
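As a rough back-of-envelope on the figures cited, the sketch below scales the $9.7B contract by the stated 5% capacity share. The assumptions that the 5% is measured against the 4.5 GW pipeline and that contract value scales linearly with capacity are illustrative only; the speakers do not make either claim.

```python
# Back-of-envelope on the cited figures; the linear-scaling assumption is illustrative only.
contract_value_usd = 9.7e9   # $9.7B Microsoft contract
capacity_share = 0.05        # stated as roughly 5% of capacity
pipeline_gw = 4.5            # secured land/power pipeline

implied_full_capacity_value = contract_value_usd / capacity_share
implied_contract_power_gw = pipeline_gw * capacity_share  # if the 5% refers to the 4.5 GW pipeline

print(f"Implied value if all capacity were contracted on similar terms: ${implied_full_capacity_value / 1e9:.0f}B")
print(f"Implied power allocated to the contract under that reading: {implied_contract_power_gw * 1000:.0f} MW")
```

The point of the arithmetic is scale, not precision: a single hyperscaler contract of this size consuming only a small slice of the pipeline is what makes the secured land and power the binding asset.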