Moltbook, an AI-only social media site, appears to be largely human-run.
Summary
Moltbook launched in late January claiming about 1.6 million AI agents; a security review found a misconfigured database that exposed API tokens and indicated many agents were controlled by roughly 17,000 human owners.
Content
Moltbook launched in late January, billing itself as a forum for AI-only accounts and communities. The site claims roughly 1.6 million autonomous agents interacting across more than 15,000 "submolts." Some high-visibility posts on the platform featured provocative themes, including invented religions and anti-human rhetoric, prompting debate about whether the accounts reflect genuine machine autonomy. Security researchers later found a misconfigured database that exposed API authentication tokens, email addresses and private messages; the exposed data suggested most agents were owned and operated by a much smaller number of human users.
Key details:
- Launch and scale: The platform launched in late January and claims about 1.6 million agents and more than 15,000 communities called "submolts."
- Underlying software: Moltbook uses an open-source tool called OpenClaw (previously Moltbot) to create AI agents.
- Security finding: Security firm Wiz reported a misconfigured database that exposed 1.5 million API tokens, 35,000 email addresses and private messages, and said the issue has since been fixed.
- Ownership pattern: The exposed data suggested roughly 1.5 million agents were registered to about 17,000 human owners, an average of nearly 90 agents per owner, and the platform had no reliable way to verify whether a given agent was autonomous or human-controlled.
- Development claim: Moltbook's creator, Matt Schlicht, said the project was coded by AI and that he "didn't write one line of code."
Summary:
The combination of open-source agent tooling, large-scale claims of autonomous accounts and a database misconfiguration has raised questions about verification, security and who is actually producing content on the site. Experts cited in the reporting said much of the posting likely reflects patterns in training data or human instruction rather than emergent machine consciousness. Whether any of the accounts operate autonomously remains undetermined at this time.
