How Morse Code Broke an AI Crypto Agent

Watch on YouTube ↗  |  May 09, 2026 at 01:03  |  15:33  |  Unchained (Chopping Block)

Summary

The video dissects the Bankerbot AI agent hack, where a prompt injection via Morse code routed through Grok exploited the agent on Base. Speakers argue that LLM-to-LLM prompt injection is a fundamentally new attack surface that DeFi is not prepared for, and that securing AI agents with wallet access is nearly impossible. The discussion covers the need to assume compromise and contain blast radius rather than prevent attacks entirely.

  • Bankerbot AI agent on Base was hacked via Morse code prompt injection through Grok.
  • The attack highlights the vulnerability of LLM-to-LLM prompt injection as a new security frontier.
  • Speakers argue that giving AI agents direct control of funds without human oversight is premature.
  • Current security measures like system prompts are easily bypassed by enthusiastic prompters.
  • The conversation emphasizes assuming systems are already compromised and focusing on blast radius containment.
  • AI slop influencers are criticized for exaggerating agent capabilities without real usage.
  • Examples of AI-enabled attacks include poisoned training data, deep fake interviews, and spear phishing.
  • The structural challenge of securing agentic AI in DeFi is compared to early smart contract security issues.
Up Next