Anthropic’s Mythos is a cyber-weapon, so you can’t have it | E2273

April 08, 2026 at 23:57  |  1:16:06  |  This Week in Startups

Summary

  • Anthropic's new model, "Mythos Preview," is described as a potential "cyber weapon of mass destruction" due to its exceptional ability to find and chain together security vulnerabilities in critical software, prompting Anthropic to withhold its public release.
  • Anthropic has launched "Project Glasswing," a consortium with ~40 partners (including Nvidia, Apple, Amazon, Microsoft, Google) to use Mythos defensively to harden critical infrastructure, backed by a $100M compute credit fund.
  • A central debate: Is this a responsible security precaution or a savvy pre-IPO narrative? The model's capabilities create a "two-tier" AI economy, granting early partners a significant defensive (and potentially offensive) advantage.
  • Market implications are severe: The discussion frames the U.S.-China AI race as "existential," with the technology's power compared to the atomic bomb, raising questions about nationalization and covert government collaboration with AI labs.
  • A major uncertainty is how long open-source models will take to match Mythos's capabilities. Estimates in the conversation run from 3–5 months to considerably longer; that timeline sets the window for defensive hardening.
  • Small Language Models (SLMs) are presented as a disruptive, deflationary force. They are defined as models runnable on high-end laptops (~20B parameters or less) and are becoming capable of handling ~90% of common enterprise tasks at a fraction of the cost of frontier LLMs.
  • The business case for SLMs (e.g., via distillation and "harness engineering") is cost reduction. AT&T reportedly cut AI inference costs by 90% by using frontier models for only 10% of tasks and SLMs for the rest.
  • Rob May's company, Neurometric, offers a "Claw Pack" of 39 SLMs for $8/month (unlimited tokens after 100M free), illustrating the aggressive cost compression in inference.
  • The tool "Death by Claude" analyzes a company (via its URL) for susceptibility to being replaced by an AI model like Claude, highlighting defensibility moats: hardware, network effects, and deep science/regulation are the key protective factors.
  • A contrarian view on Meta's AI strategy: despite releasing a competitively benchmarked model (Muse Spark), Meta is criticized for lacking vision, with no clear, transformative consumer or enterprise application for the model.
  • A key investment insight: As AI makes product creation easier, startup defensibility shifts from "who can build" to "who won't stop building and refining," emphasizing founder resilience and product obsession.
Trade Ideas
Jason Calacanis Host / Angel Investor 3:30
The speaker states that Anthropic's new "Mythos" model is so powerful at hacking software that it is essentially a "cyber weapon of mass destruction," leading the company to withhold public release and work only with a consortium of critical partners. This capability creates an existential, two-tier dynamic in the AI race and national security, and the speaker infers it forces a conversation about nationalization and covert government use, comparing it to the Manhattan Project. The situation demands close monitoring (WATCH) because it represents a pivotal, high-stakes moment for the company, the AI industry, and geopolitics, with unpredictable outcomes for valuation and strategy. Risks to the thesis: the claims about the model's danger could be overstated for pre-IPO narrative purposes, and an open-source model could achieve parity faster than expected, undermining the strategic advantage.
Rob May CEO, Neurometric 63:43
The speaker argues that Small Language Models (SLMs) are rapidly improving in "intelligence density" and will be capable of handling 90% of common enterprise work tasks by 2030 at a dramatically lower cost than frontier LLMs. This enables massive cost savings (he cites AT&T cutting inference costs by 90%) and could lead to "hyperdeflation" in AI inference pricing, empowering small teams to serve niche markets profitably and potentially eroding the economic moat of frontier model providers. The entire technology services sector built on AI applications should be watched closely, as the underlying cost and accessibility of intelligence are shifting, enabling new business models and threatening incumbents reliant on expensive API calls. Risk to the thesis: frontier models keep advancing fast enough to maintain a significant capability gap on complex, novel tasks that SLMs cannot handle, preserving their premium pricing power.
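The AT&T-style savings can be reproduced with simple blended-cost arithmetic. A minimal sketch: the per-task prices below are illustrative assumptions (an SLM at roughly 1% of a frontier model's cost); only the 10%/90% routing split comes from the episode.

```python
def blended_cost(frontier_price, slm_price, frontier_share):
    """Average cost per task when frontier_share of tasks hit the frontier
    model and the remainder are routed to an SLM."""
    return frontier_share * frontier_price + (1 - frontier_share) * slm_price

# Hypothetical prices: frontier model = 1.00 per task, SLM = 0.01 per task.
baseline = blended_cost(1.00, 1.00, 1.0)   # everything on the frontier model
routed = blended_cost(1.00, 0.01, 0.1)     # 10% frontier, 90% SLM
savings = 1 - routed / baseline

print(f"blended cost: {routed:.3f}, savings: {savings:.0%}")
# → blended cost: 0.109, savings: 89%
```

Under these assumed prices the routed blend costs about 11% of the all-frontier baseline, i.e. roughly the 90% reduction cited; the exact figure depends on the real price gap between the models.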
Gani Creator, Death by Claude 111:42
The speaker's tool, "Death by Claude," analyzes companies for AI replaceability and assigns each a score. It identifies "AI wrapper" businesses (simple SaaS layers on top of a model) as highly vulnerable, citing, for example, a code-generation startup scored 78/100 ("dead"). As underlying AI models (e.g., Claude) become more capable and directly accessible, middle-layer "wrapper" services that don't add significant unique value, data, or network effects can be replaced by a prompt or a fine-tuned SLM. Investors should AVOID undifferentiated "AI wrapper" companies in the consumer and business services space, as they face existential, margin-crushing competition from the very platforms they depend on. Risk to the thesis: a wrapper company could develop a profound network effect, proprietary data flywheel, or deep regulatory integration that a base model cannot easily replicate, creating a sustainable moat.
Up Next

This episode of This Week in Startups, published April 08, 2026, features Jason Calacanis, Rob May, and Gani discussing ANTHROPIC, XLK, and XLY. Three trade ideas were extracted by AI with direction and confidence scoring.

Speakers: Jason Calacanis, Rob May, Gani  · Tickers: ANTHROPIC, XLK, XLY