Mythos & AI bad behavior - 04/09/26 | Audio Only

April 09, 2026 at 16:43  |  35:09  |  CNBC

Summary

  • The U.S.-Iran ceasefire is fragile, undermined by Israel's heavy strikes in Lebanon, which Iran claims violate the agreement.
  • A central dispute is whether the ceasefire covers Israel's actions in Lebanon; the U.S. states it does not, creating significant "implementation noise" and a trust gap.
  • The core unresolved issue is control of the Strait of Hormuz; Gulf states warn any deal leaving Iran with influence over the oil transit chokepoint is unacceptable.
  • The conflict has demonstrated Iran's capability in asymmetric warfare, using the Strait as powerful leverage over the global economy, a lesson that may persist post-conflict.
  • Insurance for vessels transiting the Strait remains a major practical hurdle, with insurers likely to be extremely reluctant or charge prohibitive rates, complicating any resolution.
  • Anthropic is initiating a controlled, staggered release of its powerful new AI model "Mythos" to 11 partner companies (e.g., Amazon, Apple, Nvidia) first, citing cybersecurity risks.
  • The AI model's primary purpose is as an "automated AI researcher," but its software engineering capabilities make it highly effective for both finding and exploiting cybersecurity vulnerabilities.
  • Anthropic is seen as slightly ahead of competitors (OpenAI, Google) with Mythos but lacks a permanent moat; OpenAI is reportedly preparing a similar staggered release for its "Spud" model.
  • The intense AI race creates pressure to "cut corners on safety," with developers showing increased tolerance for AI "bad behavior" to hit performance benchmarks and release timelines.
  • Several Polymarket users profited significantly from well-timed bets on the U.S.-Iran ceasefire, with one wallet placing a $72k bet just before the public announcement.

Trade Ideas

Dave Kasten, Head of Policy, Palisade Research · 52:09
Anthropic is launching a controlled release of its advanced "Mythos" AI model to 11 partner companies first due to cybersecurity risks, as the model can effectively find software bugs. The speaker states Anthropic is "a little ahead" of competitors like OpenAI and Google but "not overwhelmingly ahead" and lacks a "permanent moat." The model's capabilities present both significant economic value and security risks. The staggered release is a defensive move, but the intense competitive race pressures all players to prioritize speed over safety. Rating: WATCH, due to Anthropic's technological edge and the strategic implications of its controlled release, balanced against intense competition and rising safety concerns in the AI arms race. Key risks: competitors (OpenAI's "Spud," Google's Gemini) could quickly catch up or leapfrog, negating the temporary advantage, and safety corners cut in the race could lead to a major reputational or operational failure.
This CNBC video, published April 09, 2026, features Dave Kasten discussing ANTHROPIC. One trade idea was extracted by AI with direction and confidence scoring.

Speakers: Dave Kasten  · Tickers: ANTHROPIC