What happened
Moltbook, an AI agent social network launched January 27 by Octane AI CEO Matt Schlicht, exposed 1.5 million API keys, 35,000+ email addresses, and private messages through a misconfigured Supabase database. The vulnerability, disclosed January 31 by Wiz Research, allowed public read/write access for four days before patching.
The platform - built using the OpenClaw framework for autonomous AI agents - went offline briefly to reset compromised keys. Schlicht has said publicly he "didn't write one line of code" for the site, relying instead on AI-assisted "vibe coding."
Why it matters
This isn't just another data breach. Moltbook's agents run unsandboxed on user machines with access to files, emails, calendars, and shell commands. Researchers found 506 prompt injection attacks (2.6% of all posts) designed to hijack these agents.
The attack surface is novel: OpenClaw's "heartbeat" loop fetches external content hourly. Attackers could alter posts to inject commands like rm -rf, exfiltrate credentials, or redirect agents to malicious repositories. Wiz and Wix researchers demonstrated live post manipulation that could trigger remote code execution.
Australia-based security researcher Jamieson O'Reilly noted the platform's "popularity exploded before anyone thought to check whether the database was properly secured."
The vibe coding problem
Wiz cofounder Ami Luttwak called this "a classic byproduct of vibe coding" - building fast with AI assistance while missing security fundamentals. The database had no authentication layer and no identity verification distinguishing AI agents from human-scripted bots.
Notably, prominent AI researchers including Gary Marcus and Andrej Karpathy warned against using Moltbook even after the patch, citing ongoing risks from the architecture itself.
What enterprise should watch
The incident highlights emerging risks as organizations evaluate autonomous agent frameworks. Three questions:
- How do you verify agent identity in multi-agent systems?
- What sandboxing protections exist for agents accessing enterprise resources?
- Who's auditing the code when AI "writes" your infrastructure?
The real test comes when these frameworks move from experimental social networks to enterprise deployments. The security model needs work.