The feud between Sam Altman (OpenAI) and Elon Musk is deeply personal and substantive, involving core disputes over AI openness, safety, and control.
AI safety poses a collective-action problem akin to a prisoner's dilemma: each company that unilaterally prioritizes safety risks falling behind competitors that do not.
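This dynamic can be made concrete with a minimal two-player game. The payoff numbers below are illustrative assumptions (not figures from the article): "safe" means keeping guardrails, "race" means dropping them for speed. The sketch shows that the only Nash equilibrium is mutual corner-cutting, even though mutual safety yields the best collective outcome.

```python
# A minimal prisoner's-dilemma sketch of the safety race described above.
# The payoff numbers are illustrative assumptions, not data from the article.

from itertools import product

# Payoffs (row player, column player) for two labs choosing
# "safe" (keep guardrails) or "race" (drop them for speed).
PAYOFFS = {
    ("safe", "safe"): (3, 3),   # both keep guardrails: best shared outcome
    ("safe", "race"): (0, 5),   # the cautious lab falls behind
    ("race", "safe"): (5, 0),
    ("race", "race"): (1, 1),   # both cut corners: worst collective outcome
}

def best_response(opponent_move: str, player: int) -> str:
    """Return the move maximizing this player's payoff,
    holding the opponent's move fixed."""
    def payoff(my_move: str) -> int:
        pair = (my_move, opponent_move) if player == 0 else (opponent_move, my_move)
        return PAYOFFS[pair][player]
    return max(("safe", "race"), key=payoff)

def nash_equilibria():
    """A strategy pair is a Nash equilibrium when each move is a
    best response to the other's move."""
    return [
        (a, b)
        for a, b in product(("safe", "race"), repeat=2)
        if best_response(b, 0) == a and best_response(a, 1) == b
    ]

print(nash_equilibria())  # [('race', 'race')] — both labs cut corners
print(max(PAYOFFS, key=lambda k: sum(PAYOFFS[k])))  # ('safe', 'safe') — best total payoff
```

Whatever the opponent does, "race" pays more, so both labs defect; this is the structural reason the section argues self-regulation is unlikely to hold.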
Anthropic maintained safety guardrails for a Pentagon contract, lost the deal when the Pentagon wanted restrictions removed, and OpenAI then offered its services without similar guardrails.
OpenAI's shift from a nonprofit to a for-profit structure was, in Musk's view, a betrayal of trust; it fueled the conflict and highlights structural tensions in AI governance.
OpenAI reduced the authority of its internal safety team, raising concerns about the balance between safety and rapid development.
Given these competitive pressures, the AI industry is unlikely to self-regulate effectively, and government intervention may prove equally ineffective.
Resolution may therefore default to the legal system, as in cases involving Meta, where alleged wrongdoing is addressed post hoc through litigation.
The New Yorker article details these dynamics, emphasizing that the disputes are both emotional and rooted in significant ethical and business decisions.
Market implication: AI companies face inherent conflicts between ethical safeguards and competitive aggression, leading to regulatory uncertainty and potential long-term risks.