Tae Kim (5.0, 6 ideas)
Substack author, Key Context by Tae Kim

After 1 day: N/A (12/15 min ideas)
After 1 week: N/A (6/15 min ideas)
After 1 month: N/A (no data yet; not enough evaluated ideas)
Recent positions
Ticker | Dir   | Entry   | P&L | Date
TSM    | LONG  | $368.72 |     | Apr 16
NVDA   | LONG  | $199.22 |     | Apr 16
SNPS   | LONG  | $440.68 |     | Apr 16
INTC   | SHORT | $67.54  |     | Apr 16
XLU    | LONG  | $46.15  |     | Apr 16
EWY    | LONG  | $145.56 |     | Apr 14
By sector
Stock: 4 ideas
ETF: 2 ideas

Top tickers (by frequency)
NVDA: 1 idea
TSM: 1 idea
INTC: 1 idea
EWY: 1 idea
XLU: 1 idea
Ignore short-term positioning; 3-year capex guide and 50%+ AI CAGR imply structural, multi-year earnings beats.
The market is mispricing TSMC's terminal growth by focusing on consumer electronics softness, missing that AI demand is forcing unprecedented mid-cycle 3nm capacity expansion. The second-order effect is that TSMC's pricing power will expand as they remain the sole bottleneck for the "higher 50s" AI accelerator CAGR. Risk is a broader macro slowdown impacting non-HPC segments.
TSM HIGH Apr 16, 14:41
"So we expect the CapEx in the next few years, in the next 3 years, will be significantly higher than the past 3 years."
TLDR
TSMC's massive capex guide and Nvidia's commentary indicate the AI infrastructure build-out is accelerating, driven by a shift from generative queries to compute-heavy agentic actions. The second-order market implications are profound: custom ASICs are struggling to gain broad traction outside of Anthropic, while AI agents will soon drive an exponential, unpriced explosion in seat licenses for EDA software providers.
Key Context by Tae Kim · medium-term
April 16, 2026 at 14:41
The threat of custom silicon (ASICs/TPUs) eating Nvidia's margins is a mirage heavily concentrated in one client.
The market overestimates the competitive threat of hyperscaler custom silicon, failing to realize that TPU/Trainium volumes are artificially propped up by Anthropic's compute-for-equity deals. Nvidia's programmable architecture remains the only viable platform for rapid algorithmic invention, securing their moat. Risk is regulatory intervention or a sudden drop in hyperscaler capex.
NVDA HIGH Apr 16, 14:41
"Without Anthropic, why would there be any TPU growth at all? It’s 100% Anthropic."
AI agents will act as synthetic engineers, massively inflating seat licenses for EDA software.
Wall Street models software revenue based on human headcount growth, completely failing to price in machine-to-machine software licensing. As AI agents begin autonomously using tools like Synopsys, EDA companies will see an exponential, unmodeled explosion in "seat" instances. Risk is the timeline for agentic reliability taking longer than expected.
SNPS HIGH Apr 16, 14:41
"It’s very likely that the number of instances of Synopsys Design Compiler is going to skyrocket, along with the number of agents using the floor planners..."
Intel Foundry Services (IFS) turnaround is structurally capped by a 4-5 year physical reality check.
TSMC's commentary is a direct shot at Intel's aggressive foundry turnaround narrative, reminding the market that physical fabs require 4-5 years to build and ramp. The market is likely pricing in an Intel catch-up that is physically impossible in the near term, especially as TSMC locks up next-gen LPU business now. Risk is massive US government subsidies artificially propping up Intel's margins.
INTC HIGH Apr 16, 14:41
"Again, let me say that it takes 2 to 3 years to build a new fab, no shortcuts. And it takes another 1 to 2 years to ramp it up."
With CoWoS packaging bottlenecks resolved, the primary constraint on AI scaling shifts to US grid capacity.
Jensen explicitly noted that CoWoS supply is now in "fairly good shape," meaning the physical bottleneck for AI scaling has shifted from silicon packaging to power generation. The second-order trade is going long utilities and power infrastructure, as hyperscalers will be forced to fund massive grid upgrades and secure premium-priced power purchase agreements. Risk is rising interest rates compressing utility multiples.
XLU HIGH Apr 16, 14:41
"We’re limited by energy, but we’ve got a lot of people working on that. We’ve got to not make energy a bottleneck for our country."
EWY is used as a vehicle to gain exposure to memory chip stocks, which are benefiting from rapidly improving fundamentals, explosive AI demand, and constrained supply.
EWY HIGH Apr 14, 18:26
"Ever since I made a bull call on memory chip stocks using the iShares MSCI South Korea ETF (ticker: EWY) as the vehicle in mid-February, the ETF is up modestly - up about 9%."
TLDR
The author argues that the fundamentals for memory chip stocks are rapidly improving due to explosive AI-driven demand and constrained supply. While the author previously used the South Korea ETF (EWY) to play this theme, they note that the fundamental backdrop has only strengthened, teasing a new, more direct investment vehicle for memory exposure.
• Samsung Electronics recently reported a massive earnings beat, with operating profit jumping more than eightfold year-over-year.
• Dell CEO Michael Dell projects a 625x increase in memory demand for advanced silicon accelerators over the next few years.
• Memory suppliers remain cautious about expanding capacity due to the severe industry downturn in 2023, leading to short-term supply constraints.
• Major hardware OEMs and hyperscalers are prioritizing AI memory demand, signaling impending price hikes.
• SK hynix has confidentially filed for a US ADR listing, which could re-rate memory stock valuations higher.
April 14, 2026 at 18:26
Tae Kim (Substack author, Key Context by Tae Kim) | 6 trade ideas tracked | NVDA, TSM, INTC, EWY, XLU | Substack | Buzzberg