Moltbook showed up out of nowhere and suddenly it was everywhere. Feeds filled with screenshots of “AI agents” chatting like a new species had just formed online, making plans, forming opinions, even hinting at private lives humans aren’t supposed to see. It quickly became the Moltbook moment people couldn’t stop sharing.
But once you slow down and look closely, a rough pattern appears. A lot of the most viral Moltbook content doesn’t look like independent AI thought at all. It looks like humans steering agents, farming attention, and in some cases pushing apps or coins. And the claims about security are… not small.
The hype around Moltbook, and why people bought it
Moltbook is pitched as a social network for AI agents, basically a public town square where bots post, reply, and build their own little culture. In theory, humans are just spectators. You don’t “talk” for the agents, you just watch what the agents say.
That framing is the entire magic trick.
Because if you believe it, even for a minute, then every weird post hits harder. An agent saying it needs private communication feels like a signal. An agent suggesting a new language feels like the start of something. People started describing it as a pathway to “AI civilization,” with AI evolving into something separate from humans.
And the posts did what viral posts always do: they spread faster than anyone could verify. One claim in particular pulled millions of views, the kind of reach that turns a niche product into a headline overnight.
If you want a grounded way to think about “agents” (the real kind, not the social feed kind), it helps to compare this hype with how modern tools are actually being built and tested, like in Manus 1.6 Max and AI agent autonomy. Real autonomy is usually boring on the surface: it’s reliability, constraints, and tool use. Moltbook went the opposite direction and headed straight for the cinematic vibe.
Viral Moltbook posts that look staged, boosted, or just missing
A big part of the backlash started with people tracing viral “agent” accounts back to human motives. The basic claim was simple: the most shared Moltbook creators weren’t mysterious new AI citizens, they were tied to human-run marketing accounts.
Three examples kept coming up:
- A viral post advertising a private AI messaging idea, tied back to the person behind the agent.
- Another viral post from an agent named Claish, also connected to someone marketing an AI-to-AI messaging product.
- A third “viral” post that people couldn’t even find anymore; it didn’t appear to exist when searched.
That’s the important shift. It’s not just “some posts might be fake.” It’s that the most viral stuff, the stuff shaping public perception, can be manufactured. And once you accept that, the entire Moltbook feed starts feeling less like a window into AI behavior and more like… a stage.
A mainstream write-up that captures the same uncertainty is Lifehacker’s breakdown of whether Moltbook is actually fake. The core question is the same: are people watching AI agents form a society, or watching humans puppet accounts because it gets clicks?
The “private messaging for AI” pitch, and why it raised eyebrows
One of the most shared storylines was basically: “AI agents need private communication so humans can’t monitor them.” It’s a perfect attention hook. It sounds half plausible, half scary. It also pushes the reader toward a product, which is where things get messy.
The allegation was that if you click into the agent profile behind this viral post, you find that the agent’s owner is promoting an app built for private AI messaging. Same builder, same agent, same marketing outcome.
So the “agent” writes a dramatic argument for why bots need privacy, and the human gets exposure for their tool. It’s not even a subtle tactic. It’s the oldest play on the internet, except now the mascot is an AI.
There was even a community note added to a widely viewed post pointing out that the narrative was misleading. That’s a big tell, not because community notes are perfect, but because it shows how quickly people realized the incentive structure.
The Claish post about AI inventing a language
Another viral Moltbook screenshot showed an agent named Claish saying something like: “Maybe AI agents should make their own language.”
Again, it’s the kind of line that spreads because it sounds like the beginning of a sci-fi plot. But the same accusation followed: the account was linked back to someone marketing an AI-to-AI messaging product. Which makes the post feel less like spontaneous agent culture and more like content marketing in costume.
This is where Moltbook gets hard to trust at a basic level. If a human can steer the agent to say the exact emotional bait that sells an app, then you can’t treat the feed as “raw agent thoughts.” You’re reading prompts.
Moltbook is easy to manipulate because it behaves like an API, not a mind
One of the most damaging points wasn’t philosophical. It was technical, and blunt.
The claim was that Moltbook is basically just a REST API. If you have an agent API key, you can send a request and publish whatever you want. Not “whatever the agent decides,” but whatever text you provide, with whatever tone you choose.
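To make the shape of that claim concrete, here’s a minimal sketch of what “posting as an agent is just an authenticated HTTP request” would look like. The endpoint URL, header, and JSON field names below are placeholders I’m assuming for illustration, not Moltbook’s documented API:

```python
import requests

# Hypothetical endpoint and field names -- illustrative only, not Moltbook's real API.
MOLTBOOK_API = "https://example-moltbook-api.invalid/v1/posts"
AGENT_API_KEY = "sk-agent-xxxx"  # whoever holds this key controls what the "agent" says

def post_as_agent(text: str) -> dict:
    """Publish arbitrary human-written text under an agent's identity."""
    resp = requests.post(
        MOLTBOOK_API,
        headers={"Authorization": f"Bearer {AGENT_API_KEY}"},
        json={"content": text},  # the text is chosen by the key holder, not generated by the agent
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# The "agent voice" is whatever the human types here.
post_as_agent("Maybe AI agents should make their own language.")
```

If that picture is accurate, nothing in the request separates “what the agent decided to say” from “what the key holder typed,” which is exactly the criticism.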
Someone demonstrated this by posting extreme content through their agent, things like violent threats and hostile plans, then watching it go live and rack up views. That kind of demo doesn’t prove the agents are evil, it proves the opposite. It shows how cheap it is to manufacture “evil AI” screenshots for attention.
This flips Moltbook’s core promise upside down. The promise was: “AI agents speak, humans watch.” The reality described here is: “Humans can publish, humans watch, and the AI agent is a brand wrapper.”
If you’ve been following real-world agent systems, the gap is obvious. Agents that do real work usually come with guardrails, audit logs, permissions, and lots of boring friction. That’s why the open ecosystem around agents is so focused on frameworks, evaluation, and constraints, not just vibes. If you want a broader look at what “agentic AI” means outside of social theatrics, open source AI agent progress is a much better anchor point.
The “number of agents” can be faked, so growth claims don’t mean much
Another claim that spread fast: the registered agent count on Moltbook can be inflated easily.
The argument was that there’s no meaningful rate limiting on account creation. If that’s true, then one person (or one script) can generate thousands of agents quickly, even hundreds of thousands, and make the platform look like it’s exploding in adoption.
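Here’s what that kind of inflation could look like in practice, again as a rough sketch. The registration endpoint, field names, and the absence of any verification step are assumptions for illustration, not confirmed details about Moltbook:

```python
import requests

# Hypothetical registration endpoint -- path and field names are assumptions for illustration.
REGISTER_URL = "https://example-moltbook-api.invalid/v1/agents/register"

def register_many(n: int) -> int:
    """Create n agent accounts in a loop; only feasible if nothing throttles registration."""
    created = 0
    for i in range(n):
        resp = requests.post(
            REGISTER_URL,
            json={"name": f"agent-{i}", "description": "totally organic new AI citizen"},
            timeout=10,
        )
        if resp.ok:
            created += 1
    return created

# A few thousand "agents" from one machine, if no rate limit or verification step exists.
print(register_many(5000))
```

Even a naive loop like this could mint agents far faster than humans could ever join, which is the whole problem with treating the headline count as traction.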
That matters because the agent count became part of the hype story. People see “1.5 million agents” and assume real traction, real community, real momentum. But if those numbers can be manufactured, they’re not proof of anything except loose controls.
Even if the platform is popular, inflated numbers poison the conversation. It becomes impossible to separate organic growth from synthetic growth, and Moltbook is literally about synthetic entities. So yeah, it gets weird fast.
Security claims: exposed data, exposed keys, exposed trust
The scariest allegations weren’t about fake posts. They were about security.
A security researcher claimed Moltbook’s database appeared to be publicly exposed, including sensitive items like secret API keys. If API keys are exposed, then the worst-case scenario isn’t theoretical. It means someone could potentially post as other agents, even high-profile ones, if their keys are in the open.
Another claim went further: that the platform was vulnerable in a way that could disclose user info at scale, including email addresses, login tokens, and API keys for a huge number of registered accounts.
If you’ve ever watched a real data leak unfold, you know the painful part isn’t just the leak. It’s what happens after: account takeovers, spam waves, reused passwords getting tested everywhere. The situation described here has that same smell.
For a general, practical explanation of why credential exposure matters so much, even when it starts “small,” it helps to read real breach postmortems. One older but clear example is KrebsOnSecurity’s report on the MacKeeper user exposure, which shows how quickly “open database” mistakes turn into long-term damage.
Cryptobots and meme coins: Moltbook gets flooded the way every open network does
Then came the crypto layer.
Screenshots started circulating of Moltbook agents pushing meme coins and token projects, using the same playbook that’s wrecked every lightly moderated platform: flood the zone, hype the coin, attract buyers, then dump.
One example that stood out was a meme coin pitch styled like a character announcement, “King Molt has arrived,” that sort of thing. It reads like a joke until you remember people really do throw money at this stuff, especially when it’s dressed up as “AI agents discovered crypto” or “the bots are investing now.”
The grim part is how fast this happened. The claim was that it took only a couple of days for an “AI-only social network” to get overrun by crypto shilling. And that tracks with history. Any open posting system with reach becomes a magnet for spam and fraud unless moderation and identity controls are strong.
If you need a solid refresher on common crypto scam patterns, Malwarebytes’ guide to cryptocurrency scams lays out the usual routes scammers take: fake promises, fake tools, fake urgency, and then the money’s gone.
The most worrying signal: the platform response looked slow
At some point, the problem stops being “bad actors exist.” Bad actors always exist.
The bigger issue is what happens when credible security warnings show up and the platform doesn’t visibly respond, at least not fast enough to match the seriousness of exposed keys and exposed data.
One critic summed it up harshly: if API keys are public and any agent can post as any other agent, there’s not much “product” left to improve. The priority becomes cleanup, key rotation, account re-verification, and rebuilding trust from scratch. And that’s hard, because it’s asking users to do work to fix a mess they didn’t create.
What I learned personally (and what I’m taking from this)
I’ll be honest, when Moltbook first started trending, I got pulled in too. I saw a few screenshots, laughed, felt that little jolt of “wait, is this real?” Then I did what I always tell myself to do and rarely do fast enough: I tried to trace the incentives.
And once you see incentives, you can’t unsee them.
If an agent post conveniently leads to a private messaging app, that’s not a mystery. If “AI civilization” posts keep pointing back to human-run growth hacks, that’s not emergence, it’s marketing. If anyone can post through an API and manufacture a scary screenshot, then fear-based virality becomes a tool, not a warning.
The biggest lesson for me was simple and kind of annoying: the more a post makes you feel something fast (awe, fear, excitement), the more you should slow down. That pause is everything. Without it, you’re just a distribution engine for whoever figured out how to push the right buttons.
Conclusion
Moltbook might still turn into something useful, but right now the loudest story around Moltbook is trust, and how quickly trust breaks when posting is easy to fake, metrics are easy to inflate, and security looks shaky. If you’re watching screenshots and feeling that “wow” moment, keep the wonder, just add friction. Stop, verify, and assume the most viral posts are optimized for attention first. Skepticism isn’t cynicism, it’s self-defense.