"GPUs are zero sum, and if you don't have more GPUs you really have to figure out how to make very, very hard trade-offs... demand keeps going up even as prices go down. When you just look at token consumption per user... you see a lot of very GPU-hungry workflows."

As AI models evolve from simple text generation to reasoning (test-time compute) and autonomous agents, the compute required per user query scales dramatically. This sustains demand for the underlying silicon designed by Nvidia and manufactured by TSMC, regardless of which software layer ultimately wins the consumer war.

Position: LONG. Compute remains the fundamental bottleneck and the most valuable, zero-sum resource in the AI economy.

Key risks: geopolitical tensions affecting Taiwan (TSMC), or a sudden breakthrough in algorithmic efficiency that drastically reduces hardware compute requirements.