The "Rational" Conclusion

Alexander Campbell · Campbell Ramble · April 11, 2026 · 6 min read
TLDR
The article analyzes an attempted attack on Sam Altman and OpenAI by a member of the PauseAI movement, linking it to the 'AI doomer' philosophy espoused by figures like Eliezer Yudkowsky. The author argues that this worldview, which asserts with certainty that AI will cause human extinction, creates a logical pathway to violence and rests on a flawed conflation of intelligence with actual power. For markets, the episode highlights escalating societal and regulatory risks around AI development, which could affect tech and AI-related sectors; no direct investment positions are disclosed.

• A 20-year-old PauseAI member attempted a Molotov cocktail attack on Sam Altman's home and OpenAI, driven by 'AI existential risk' beliefs.
• The author traces this extremism to the doomer community's core tenets: certainty of AI-caused extinction, purity spirals, and strategic (not moral) restraint against violence.
• Key figures like Yudkowsky and Nate Soares are cited as providing the intellectual framework that justifies stopping AI builders at any cost.
• The author criticizes the doomer worldview for equating verbal intelligence with capability, creating a 'priesthood' that cannot build AI but claims authority over it.
• The incident is presented as a predictable outcome of a philosophy that treats AI builders as existential threats, leading to real-world violence.
• The article aims to explain the author's worldview, which informs his longer-term thematic investment ideas like the 'New New Deal', but no specific trades are detailed here.