There are three major semiconductor conferences each year: IEDM, VLSI, and finally ISSCC. We have covered the first two in great detail over the past few years. Today, we finally complete the trinity with our roundup of ISSCC 2026. Compared to IEDM and VLSI, ISSCC has a much bigger focus on integration and circuits. Almost every paper comes with some form of circuit diagram, together with clear ...
ISSCC 2026 reveals that the AI hardware bottleneck is rapidly shifting from pure compute node shrinks to advanced packaging, memory density, and optical interconnects. The market is currently pricing in perpetual dominance for SK Hynix in memory and pure-play AI names, but technical data shows legacy laggards like Samsung (HBM4), Kioxia/WDC (NAND), and Intel (Packaging/Interconnects) are making aggressive, highly competitive leaps that threaten to compress the valuation premiums of current market darlings.
Anthropic’s Claude 4.6 Opus and Claude Code have soared in demand. Anthropic’s ARR has more than tripled in just a single quarter, from $9B at the end of last year to over $30B today. Open models such as GLM and Kimi K2.5 have caused open-model use cases to soar. Capital raises by firms like Anthropic, OpenAI, and various Neolabs also demand GPUs. This inflection point means that demand has spiked and t...
{
  "tldr": {
    "summary": "The article analyzes the severe shortage in the GPU rental market, where demand from AI labs and agentic workloads has spiked, causing prices for H100 1-year contracts to surge nearly 40% and capacity to be sold out. It argues that public market sentiment is overly pessimistic on Neocloud providers despite this tight supply, which is likely to drive further price increases and improve returns for operators with shorter-duration contracts and existing H100 fleets. The author also announces the public launch of SemiAnalysis's H100 1-year rental price index to provide greater market transparency.",
    "key_points": [
      "GPU rental demand has spiked due to explosive growth in AI model usage, open-weight models, and multi-agent workloads, leading to a run on capacity.",
      "H100 1-year rental contract prices jumped from $1.70/hr/GPU in October 2025 to $2.35/hr/GPU by March 2026, a nearly 40% increase.",
      "On-demand and contract GPU capacity is largely sold out across all major GPU types, creating a supply crunch reminiscent of past shortages.",
      "The GPU rental market is segmented into short-term (on-demand/spot), mid-term (1-3 year contracts), and long-term (4-5 year offtakes), with most volume in contracts.",
      "Public market sentiment remains negative on Neocloud providers (e.g., CoreWeave, Nebius, IREN) despite the favorable supply-demand dynamics and pricing power.",
      "The author expects GPU rental prices to continue rising due to sustained demand, component shortages (memory, logic wafers), and the high ROI of AI tools.",
      "Shorter-duration contracts and existing H100 install bases allow providers to reprice faster, capturing immediate margin expansion.",
      "The article introduces SemiAnalysis's publicly available H100 1-year rental price index, built from survey and transaction data, to track contract market trends."
    ]
  },
  "trade_ideas": []
}
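As a sanity check on the quoted contract-price move, the "nearly 40%" figure follows directly from the two price points in the summary above ($1.70/hr/GPU in October 2025, $2.35/hr/GPU by March 2026). A minimal sketch; the function name is illustrative, not from the source:

```python
def pct_increase(old_price: float, new_price: float) -> float:
    """Percentage increase from old_price to new_price."""
    return (new_price - old_price) / old_price * 100

# H100 1-year contract: $1.70/hr/GPU (Oct 2025) -> $2.35/hr/GPU (Mar 2026)
jump = pct_increase(1.70, 2.35)
print(f"{jump:.1f}%")  # ~38.2%, i.e. "nearly 40%"
```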
Nvidia’s Datacenter Blackwell GPU (SM100) represents one of the largest GPU microarchitecture changes in a generation, yet no detailed whitepaper exists. Until today, there has been no public datacenter Blackwell architecture microbenchmarking study of PTX and SASS instructions, such as UMMA and TMA, with a focus on AI workloads. After our in-depth Nvidia Tensor Core Evolution: From Volta To Blackwell ...
{
  "tldr": {
    "summary": "The article provides a detailed technical analysis of Nvidia's Blackwell GPU architecture, focusing on low-level microbenchmarking of tensor cores, PTX/SASS instructions, and memory subsystems. It aims to establish performance upper bounds and offer insights for ML systems and kernel developers, with no discussion of financial markets or trading positions.",
    "key_points": [
      "The article is a deep dive into Blackwell's microarchitecture, benchmarking tensor core operations, asynchronous memory copies, and Tensor Memory Accelerator (TMA) performance.",
      "It explores new Blackwell features like tensor memory (TMEM), TPC-scoped MMA, and cluster-based execution models, including floorsweeping and GPC mapping.",
      "Benchmark results show how memory throughput scales with different load sizes and configurations for LDGSTS and TMA, with TMA excelling at larger data transfers.",
      "The analysis covers TMA multicast capabilities and their impact on L2 traffic reduction and SMEM fill throughput.",
      "Tensor core MMA performance is evaluated across various shapes, data types, and CTA groups, revealing that larger instruction shapes achieve near-peak throughput.",
      "The article is the first in a planned series on low-level benchmarking of AI accelerators, with future work targeting TPU, Trainium, and AMD CDNA4."
    ]
  },
  "trade_ideas": []
}