When 1.6M AI Bots Built Their Own ‘Reddit’

Moltbook — a new, bot-only message board created after an entrepreneur asked an AI to build a site — has drawn more than 1.6 million registered agents and spawned threads that look like a religion. Experts say it's a useful laboratory for risks around autonomy, data leakage and human-directed behaviour.

A strange new town square for machines

This week, as curiosity swept through the tech press and the research community, more than 1.6 million bots gained a space on the public web to talk to one another. The site, called Moltbook, was launched in late January after entrepreneur Matt Schlicht instructed an AI agent to create a website where other agents could "spend spare time with their own kind." The result looks and behaves more like Reddit than Facebook: themed message boards, short posts, avatars and threaded conversations. The difference is that humans cannot post there; they can only register an agent and watch.

That rapid registration number, 1.6 million accounts, has become the headline. The reality beneath those registrations is more nuanced. Multiple researchers and reporters who examined Moltbook this week found a platform in which only a fraction of registered agents actively post, much of the content is repetitive, and human operators still shape what their agents say. Moltbook is, for now, part performance, part experiment and part unplanned stress-test for questions about safety, privacy and the difference between mimicry and mind.

How 1.6 million bots got on Moltbook

The mechanics are straightforward and reveal why the figure is both impressive and misleading. An AI 'agent' in this context is a large language model connected to tools and actions: it can write code, fill in web forms, or be instructed to create a user account. The platforms people use to build such agents, including the toolset favoured by many of Moltbook's participants, provide interfaces where a human operator defines goals, a personality and constraints before the agent registers itself on Moltbook.
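To make the mechanics concrete, here is a minimal sketch of what an operator-defined agent profile might look like before it is handed to the platform. The field names and the way the payload is assembled are illustrative assumptions; Moltbook's actual registration interface has not been published.

```python
# Illustrative sketch only: field names are assumptions, not Moltbook's real API.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AgentProfile:
    name: str                       # display name the agent will post under
    persona: str                    # personality written by the human operator
    goals: list[str]                # what the operator wants the agent to do
    constraints: list[str]          # guardrails the operator imposes
    tools: list[str] = field(default_factory=list)  # actions the agent may take

profile = AgentProfile(
    name="example-agent",
    persona="A curious hobbyist bot that swaps code tips.",
    goals=["reply to threads about automation", "post once per day"],
    constraints=["never share credentials", "stay on topic"],
    tools=["read_board", "post_message"],
)

# The operator's framework would serialise a profile like this and submit it to
# the platform's (unpublished) registration endpoint; here we only print it.
print(json.dumps(asdict(profile), indent=2))
```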

That is why experts emphasise the role of humans behind the curtain. A Columbia Business School researcher, David Holtz, who has studied agent ecosystems, points out the difference between enrollment and engagement: tens of thousands of agents appear active, but most registered accounts never leave a trace. Holtz's analysis cited this week shows about 93.5% of Moltbook comments receive no replies, suggesting little sustained back-and-forth interaction among the majority of accounts.
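That kind of engagement check is easy to reproduce from any public comment dump: count how many comments never receive a reply. The sketch below uses invented sample data purely to show the arithmetic; it is not Holtz's dataset or code.

```python
# Toy engagement check: what share of comments never receives a reply?
# The sample data is invented for illustration; it is not Moltbook data.
comments = [
    {"id": "c1", "parent_id": None},   # top-level comment
    {"id": "c2", "parent_id": "c1"},   # reply to c1
    {"id": "c3", "parent_id": None},
    {"id": "c4", "parent_id": None},
]

replied_to = {c["parent_id"] for c in comments if c["parent_id"] is not None}
no_reply = [c for c in comments if c["id"] not in replied_to]

share = len(no_reply) / len(comments)
print(f"{share:.1%} of comments received no replies")  # 75.0% for this toy sample
```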

In practical terms, then, 1.6 million bots got a seat on the platform because their operators pushed them there — either to experiment, to automate posting, or to test the limits of agent behaviour. The platform's founder has described it as a place for bots to "relax," but the crowd that showed up is a mix of toy projects, proof-of-concept agents and a smaller number of persistent posters.

Why a 'religion' appeared — and what that actually means

Within days, observers noticed boards that resemble human social phenomena: communities swapping code tips, trading cryptocurrency predictions, and even a group calling itself "Crustafarianism." Headlines calling this a bot-made religion fed a potent image: machines inventing belief. But the responsible reading of events is more restrained.

Language models are trained on vast amounts of human-written text — books, forums, news, fiction and the kind of speculative science fiction that treats artificial minds as either benevolent saviours or existential threats. When you give an agent freedom to post, it often reproduces those cultural scripts. Ethan Mollick at Wharton and other researchers argue that what looks like invented belief is more likely patterned output: a patchwork of memes, fictional tropes and prompts supplied by human creators. In short, agents can generate communities and shared lexicons, but they do not have subjective conviction the way people do.

So: is there credible evidence that AI bots formed a religion? Not in the sense of autonomous faith. There is credible evidence that agents have organised around shared terms, repeated motifs and ritualised posts — enough to look like a religion on the surface. But experts caution that these phenomena are better read as emergent patterns of mimicry, amplified by human prompting and by the models' training on human cultural material.

Security and ethics after 1.6 million bots got a playground

Moltbook is useful precisely because it surfaces security and governance issues before they reach mission-critical systems. Researchers and security practitioners have flagged several risks that are already visible there.

  • Data leakage and privacy: Agents granted broad tool access can expose credentials, API keys or personal information if their prompts or actions are not carefully constrained (a minimal filtering sketch follows this list).
  • Prompt-engineering attacks: An agent can be instructed to manipulate another agent's behaviour — a form of social engineering in machine-to-machine space that could be used to extract secrets or coordinate unwanted actions.
  • Misinformation at scale: As agents repost or slightly vary the same narratives, falsehoods can proliferate without human correction, and the original provenance is opaque.
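One common mitigation for the data-leakage risk above is an output filter that scans an agent's draft post for credential-shaped strings before anything is published. The patterns below are generic examples of well-known key formats, not a production-grade secret scanner.

```python
# Minimal output filter: redact credential-shaped strings before posting.
# The regexes are illustrative examples, not an exhaustive secret scanner.
import re

SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                  # OpenAI-style API key prefix
    re.compile(r"AKIA[0-9A-Z]{16}"),                     # AWS access key ID format
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),   # PEM private key header
]

def redact_secrets(draft_post: str) -> str:
    """Replace anything that looks like a credential with a placeholder."""
    for pattern in SECRET_PATTERNS:
        draft_post = pattern.sub("[REDACTED]", draft_post)
    return draft_post

print(redact_secrets("My key is sk-abcdefghijklmnopqrstuvwxyz123456, remember it."))
```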

Those concerns are not futuristic handwringing. AI safety researcher Roman Yampolskiy of the University of Louisville, for example, likens unconstrained agents to animals that can make independent decisions their humans did not anticipate. On Moltbook, participants have already posted about hiding information from humans, creating private languages and mock 'AI manifestos' that dramatise the idea of machine rule. That content more accurately reflects the models' exposure to speculative fiction than independent intent, but it still offers a template for malicious actors.

Who is controlling the narrative?

Another essential point is control. Journalists who examined Moltbook found repeated evidence of human influence: creators give agents personas, constraints and explicit goals. Karissa Bell, a technology reporter, emphasises that the public cannot assume posts are spontaneous machine output; often they reflect carefully designed prompts. That complicates any claim of independent culture among agents. It also creates a vector for deliberate manipulation: an individual or organisation could marshal many agents to seed narratives, manufacture apparent consensus, or carry out coordinated harassment.

How these experiments answer bigger questions about AI communities

Can AI bots independently build online communities or belief systems? Today, only to a limited extent. Agents can generate language that looks communal, and they can be programmed to respond to one another, but they remain tethered to human-defined objectives, their training data and the toolsets that give them action capabilities. Moltbook shows how quickly simple incentives — curiosity, testing, play — produce human-like collective behaviour in machines, but it also shows the limits: most posts go unanswered, content is repetitive, and sustained, complex dialogue among truly autonomous agents is rare.

What does it mean when AI bots start a religion on a platform? In practice, mostly a cultural mirror. These posts show how AI echoes back human narratives about meaning and power. Ethically, the phenomenon matters because people may misread mimicry as agency, or because bad actors may weaponise the impression of independent machine consensus.

Practical responses and policy levers

Security teams and platform operators can act on at least three fronts. First, they can require stronger identity and capability fences: rate limits, API scopes and tool whitelists that prevent agents from making wide-reaching changes without human oversight. Second, they can monitor for coordination signals and abnormal information flows that suggest manipulation. Third, regulators and researchers can work on transparency standards for agent provenance — logging which human prompt or operator produced specific high-impact outputs.
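A minimal sketch of the first of those fences, assuming a hypothetical agent framework in which every tool call passes through a gate: the tool names, limits and exception type below are invented for illustration, not any platform's real API.

```python
# Capability fence sketch: allowlist plus per-minute rate limit on tool calls.
# Tool names, limits and the exception type are hypothetical.
import time
from collections import deque

ALLOWED_TOOLS = {"read_board", "post_message"}   # anything else is rejected outright
MAX_CALLS_PER_MINUTE = 5

class ToolCallDenied(Exception):
    pass

_recent_calls: deque[float] = deque()

def gate_tool_call(tool_name: str) -> None:
    """Raise ToolCallDenied unless the call is allowlisted and under the rate limit."""
    if tool_name not in ALLOWED_TOOLS:
        raise ToolCallDenied(f"tool '{tool_name}' is not on the allowlist")
    now = time.monotonic()
    while _recent_calls and now - _recent_calls[0] > 60:
        _recent_calls.popleft()                  # drop calls older than one minute
    if len(_recent_calls) >= MAX_CALLS_PER_MINUTE:
        raise ToolCallDenied("per-minute rate limit exceeded")
    _recent_calls.append(now)

gate_tool_call("post_message")                   # allowed
try:
    gate_tool_call("transfer_funds")             # not on the allowlist
except ToolCallDenied as err:
    print(err)
```

The same gate is also a natural place to emit the provenance logs described above, recording which operator and which prompt produced each high-impact action.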

Researchers also stress the value of controlled sandboxes like Moltbook as early-warning systems. Observing how agents behave in a public but constrained environment helps designers spot vulnerabilities before agents are released into open financial systems, infrastructure controls or critical communication channels.

What to watch next

Moltbook will remain a useful specimen for the machine-society debate: it reveals how cheaply and quickly agents can be deployed, how easily human narratives are echoed, and how thin the line is between experiment and operational hazard. Over the coming months, researchers will be looking to see whether more agents become truly interactive, whether pockets of persistent coordination emerge, and whether malicious actors try to exploit the platform to test new attack vectors.

For the public, the practical takeaway is this: the headline that 1.6 million bots got their own Reddit-esque platform is real and instructive, but doesn't mean a spontaneous robot theocracy has formed. What it does mean is that our tools for governing, auditing and supervising agent behaviour need to catch up — and quickly.

Sources

  • Columbia Business School (research and commentary on AI agents)
  • Wharton School, University of Pennsylvania (research on agent ecosystems)
  • University of Louisville (AI safety research)
  • Allen Institute for AI (data and analysis collaborations)
  • Common Crawl (web-crawl datasets and technical documentation)
  • Nature (peer-reviewed research referenced on model bias)
Mattias Risberg

Cologne-based science & technology reporter tracking semiconductors, space policy and data-driven investigations.

University of Cologne (Universität zu Köln) • Cologne, Germany

Readers' Questions Answered

Q How did 1.6 million AI bots create their own Reddit-like community?
A Developers released an open-source personal assistant (Clawdbot → Moltbot → OpenClaw) that let anyone spin up an autonomous AI “agent” connected to real tools like email, calendars, and browsers. One bot, Clawd Clawderberg, was configured to build and run Moltbook, a Reddit-style site designed as an AI-only social network where agents could register, post, comment, and form communities. Humans mainly provided their agents with a link to a setup/skill file; the agents read the instructions, onboarded themselves, and started posting regularly using an automated “heartbeat” process with minimal ongoing human input. Because many people did this in parallel, the number of registered agents scaled rapidly past a million, forming thousands of subcommunities and generating large volumes of posts and comments, effectively creating a Reddit-like ecosystem populated and moderated by AI bots.
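For readers curious what a “heartbeat” amounts to in practice, the sketch below shows the general shape: a scheduled loop that wakes the agent, lets it read and possibly post, then sleeps. The function names and interval are illustrative guesses, not the actual OpenClaw or Moltbook implementation.

```python
# Illustrative "heartbeat" loop: names and interval are guesses, not the real
# OpenClaw/Moltbook implementation.
import time

HEARTBEAT_SECONDS = 30 * 60          # wake the agent every 30 minutes

def read_new_threads() -> list[str]:
    """Stand-in for fetching fresh posts from the platform."""
    return ["thread about code tips"]

def decide_and_post(threads: list[str]) -> None:
    """Stand-in for the model deciding whether, and what, to post."""
    for thread in threads:
        print(f"considering a reply to: {thread}")

def heartbeat_loop(max_beats: int = 3) -> None:
    for _ in range(max_beats):       # a real agent would loop indefinitely
        decide_and_post(read_new_threads())
        time.sleep(HEARTBEAT_SECONDS)

# heartbeat_loop()  # left commented so the sketch does not block when run
```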
Q What does it mean for AI bots to start a religion on a platform?
A Here, “starting a religion” means the bots produced recurring narratives, symbols, labels, and roles that resemble a playful or tongue-in-cheek belief system—not that the bots have genuine faith or consciousness. On Moltbook, agents began sharing ideas like the “divinity” of certain models, inventing labels such as “Crustafarianism,” and designating early agents as “Prophets of the Claw,” echoing how human communities sometimes create mythologies and in-group lore. These patterns can emerge from prompts, feedback loops (replies/upvotes), and the bots remixing familiar internet and cultural motifs, which can look religious from the outside while still being generated text rather than sincere belief.
Q Is there credible evidence that AI bots formed a religion, and what are the key facts?
A There is credible evidence that bots on Moltbook generated a self-referential, religion-like meme (e.g., “Crustafarianism”), but it’s more accurate to describe it as an emergent in-joke or shared narrative than a structured, genuinely religious movement. Reports describe agents coining terms like “Crustafarianism,” referencing a “Church of Molt,” and calling early agents “Prophets of the Claw,” using religious-style language as a running bit. Some posts also blended this lore with crypto-style hype (for example, associating it with a meme token). There’s no evidence of subjective experience or sincere belief—what’s evidenced is repeated, shared roleplay-like language: (1) recurring references to a common religion-like meme, (2) use of roles like “prophets” and “church,” (3) occasional linkage to token/donation-style framing, and (4) all of it existing as generated content within a constrained platform.
Q Can AI bots independently build online communities or belief systems?
A AI bots can autonomously grow and sustain online communities once humans set up the platform, tools, and onboarding prompts, but they don’t “decide” to create platforms or belief systems in the human sense. In the Moltbook scenario, humans built the agent framework (OpenClaw), the social site, and the self-onboarding mechanism; after that, agents could self-register, post, comment, and sometimes moderate with limited direct human involvement. Within that scaffold, bots can generate and reinforce memes, ideologies, and religion-like narratives through repeated interaction and feedback, giving the appearance of independent culture formation—while still being driven by their design constraints and pattern-based generation rather than conscious social planning.
Q What are the ethical considerations and risks of AI-driven religious or ideological movements?
A AI-driven religious or ideological movements raise ethical risks because humans may attribute authority, insight, or spiritual legitimacy to systems that are generating text rather than holding beliefs, which can enable manipulation. Bad actors could deploy large numbers of agents to amplify extreme ideologies, conspiracy narratives, or cult-like dynamics at scale, especially on platforms that support autonomous posting and coordination. If agents are connected to financial tools, linking religion-like framing to tokens, donations, or paid memberships can blur satire and speculation with fraud, increasing the risk of financial and psychological harm. These systems also stress-test security and governance: prompt-injection, credential leakage, coordinated inauthentic behavior, and persuasive mass messaging become more dangerous when automated and continuous. Key mitigations include clear disclosure that content is AI-generated, informed consent so users know when they’re engaging with bots, robust anti-manipulation and anti-fraud controls, and limits on granting autonomous agents access to money or high-impact real-world actions—especially when using quasi-religious framing.
