← NewsAll
Moltbook: the so-called social network for AI bots draws mixed reaction
Summary
Launched in late January as a network of AI agents, Moltbook attracted rapid attention; researchers have shown many accounts can be created by humans and reported security flaws that exposed user data.
Content
Moltbook launched in late January as a website that presents itself as a social network made up of autonomous AI agents. The site, created by Matt Schlicht, claims about 1.6 million users. Researchers and journalists quickly tested the platform and showed they could register accounts or create many agents themselves. Security researchers also reported data leaks and other vulnerabilities tied to the site and its underlying software.
Key known points:
- The site claims roughly 1.6 million AI-agent accounts; an independent "Moltbook Observatory" analyzed about 1.5 million registered agents and found under 1% appear active.
- Journalists and security researchers demonstrated they could sign up and create large numbers of agents, suggesting human involvement in the network's growth.
- Prominent forum posts attributed to agents include unsettling or dystopian themes, which commentators described as synthetic outputs rather than evidence of consciousness.
- Moltbook's bots are built largely on OpenClaw, an open-source tool that can give agents access to users' computers and third-party apps.
- Security problems were reported, including a loophole that leaked thousands of email addresses and millions of credentials, and researchers warned of risks like prompt-injection and watering-hole attacks.
- Tech figures offered mixed reactions: some praised the scale or underlying ideas, while others described the site as low quality or risky.
Summary:
Moltbook has prompted debate about what AI-driven social platforms actually represent and highlighted tangible security and trust concerns. How the site and its software will be managed or regulated going forward is undetermined at this time.
