Clawdbot Just Took a Wild Turn: OpenClaw's Creator Joins OpenAI

If you blinked, you probably missed how fast Clawdbot went from a nerdy side project to a move that might reshape what "personal agents" look like inside mainstream products. Over one weekend, the story swerved hard: the developer behind the project (now called OpenClaw) is heading to OpenAI, and the open-source project is getting pushed into a foundation so it can keep growing out in the open.

And the backstory is… messy. Trademark pressure, handle squatting, crypto scams, malware, and security firms basically yelling "stop" at the same time that GitHub stars were going vertical. That contrast is the whole point. The hype was real, and the risk was also real.

From a viral open-source agent to an OpenAI hire


The headline moment is simple: Sam Altman announced that Peter Steinberger, the builder behind Clawdbot (and then Moltbot, and then OpenClaw), is joining OpenAI to work on the next generation of personal agents.

That's not a "cool hire" story. It's a "this is going to ship" kind of story.

Altman's framing makes that pretty clear, especially the part that sticks in your head:

"Peter Steinberger is joining OpenAI to drive the next generation of personal agents. He's a genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people. We expect this will quickly become core to our product offerings. OpenClaw will live in a foundation as an open-source project that OpenAI will continue to support."

And then the line that tells you where the industry is headed: the future is going to be extremely multi-agent.

If that sounds abstract, it isn't. "Multi-agent" just means you stop having one assistant that chats, and you start having a small team of specialists that coordinate. One agent handles email. Another books travel. Another watches for changes on a site. Another one checks logs. A bunch of little workers, basically.
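To make that concrete, here's a minimal sketch of multi-agent dispatch. Everything in it is hypothetical, not OpenClaw's actual API: a router hands each task to whichever specialist agent is registered for it.

```python
# Minimal multi-agent dispatch sketch (all names hypothetical):
# each "agent" is just a task handler; a real one would wrap an LLM
# plus the credentials and tools that specialist needs.
from typing import Callable

AGENTS: dict[str, Callable[[str], str]] = {
    "email": lambda task: f"email agent: triaged '{task}'",
    "travel": lambda task: f"travel agent: booked '{task}'",
    "monitor": lambda task: f"monitor agent: watching '{task}'",
}

def route(kind: str, task: str) -> str:
    """Send a task to the right specialist, or fail loudly."""
    agent = AGENTS.get(kind)
    if agent is None:
        raise ValueError(f"no agent registered for {kind!r}")
    return agent(task)
```

The point of the sketch is the shape, not the lambdas: one coordinator, many narrow workers, and an explicit registry so you always know which agent touched what.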

Now layer in the rest of the story, because it's not clean. It's not "builder ships product, wins." The chain of events includes (in plain language) a trademark threat, a rebrand scramble, scammers camping on names like vultures, and a security fire drill that made the project feel both exciting and terrifying in the same week.

If you want the earlier baseline of what the tool is and why it spread so fast, this breakdown of what Clawdbot really does pairs well with the rest of what happened next.

Quick TL;DR if you're new to Clawdbot

Back in November 2025, Steinberger built a personal side project, basically glue code that connected WhatsApp to Claude (Anthropic's model) so he could text an agent and have it do real tasks, not just talk. Think inbox management, reservations, flight check-ins, smart home stuff, that kind of everyday "please do this for me" work.

He open-sourced it under the name Clawdbot (a claw-themed play on Claude, complete with lobster mascot), and for a while it had a normal open-source life. A few thousand GitHub stars, solid interest, nothing insane.

Then January hit, and it exploded. The repo reached 201,000 stars, and it pulled over 2 million visitors in a single week. The story calls it the fastest-growing open-source project in GitHub history, and honestly, even if you want to nitpick the phrasing, the speed was still ridiculous.

That popularity didn't just mean "more users." It created an entire mini internet around agents talking to agents, and people turning the framework into every possible weird spinoff you can imagine.

The ecosystem boom, then the naming chaos


Here's what made Clawdbot feel different from a normal "AI tool goes viral" moment: it spawned culture, not just clones.

People built Moltbook, which was described as a social network for AI agents. And then it got weirder, fast. Marketplaces for agents, dating-style agent apps, adult-themed agent apps, basically "everything humans do, but for agents." It sounds like a meme, but it's also a signal. When you see a thousand odd experiments pop up, it means a framework is easy enough for people to remix quickly.

One of the most telling reactions came from Andrej Karpathy, who described what was happening at Moltbook as "genuinely the most incredible sci-fi takeoff adjacent thing" he'd seen recently. That's the vibe. Like you're watching the internet prototype something that used to be a movie concept.

Then the trademark problem landed.

On January 27th, Anthropic's legal team sent a notice saying "Clawdbot" was too close to their Claude branding. That's not an insane argument. Trademark law is a thing, and companies protect names for a reason. Still, the outcome matters more than the intent, because that letter forced a rebrand right as the project started to go viral.

So Clawdbot became Moltbot, keeping the lobster theme (lobsters molt). Clean enough.

But then the internet did what it does. The moment the old username got released, crypto scammers grabbed it almost instantly. We're talking seconds. Then came fake Solana tokens, malware served from GitHub, npm packages getting hijacked, and social mentions turning into pure spam.

Steinberger said he got close to deleting the whole thing. Not because he didn't care, but because it turned into a mess overnight, and there's a point where you're like, okay, I showed you the future, now I'm out. The only reason he didn't burn it down was the contributor reality. People had already invested time into it.

So he rebranded again, this time to OpenClaw, and he described doing it like a covert operation. Monitoring social platforms, setting decoy names, trying to avoid getting sniped again. It's funny, but also not funny, because it shows how fragile identity is in open source when money-hungry scammers are watching.

If you want a deeper look at the scam side of the saga and how fast it spiraled, this internal recap of the 72-hour Clawdbot meltdown captures the texture of it without sugarcoating it.

The security panic was not "FUD," it was a real problem

At the exact same time people were celebrating Clawdbot's capabilities, security people were having the opposite reaction. The story describes Gartner calling it an unacceptable cybersecurity risk, with guidance to block downloads and traffic. Researchers reportedly found more than 30,000 OpenClaw instances exposed on the public internet, no authentication, no protection, just sitting there.

And here's the part that makes your stomach drop a little: these instances could expose emails, calendars, Slack credentials, API keys, and basically whatever someone fed into their agent. If you gave your agent "just enough access" to be useful, you might have also created "just enough access" to ruin your week.

One security firm reportedly found that 93% of verified instances had vulnerabilities. CrowdStrike even released an OpenClaw removal tool so companies could purge it from their systems completely.

Then Moltbook had its own mess: a database misconfiguration that exposed 1.5 million API keys and 35,000 user emails.

So yeah, two things were true at once:

  • It might be the most exciting agent framework people had seen in a while.
  • It also had some of the scariest real-world security failures, because people installed and hosted it fast, often without locking it down.
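One habit that would have caught most of those exposed instances: before putting any self-hosted agent on the public internet, hit it yourself with no credentials and confirm it refuses you. Here's a small, hedged helper for reasoning about that response (a generic HTTP check, not an OpenClaw feature):

```python
# Hypothetical pre-flight check, not part of any agent framework:
# an anonymous request to your own instance should come back as a
# refusal, never as a normal answer.
def looks_locked_down(status: int, headers: dict[str, str]) -> bool:
    """True only if an unauthenticated request was properly rejected."""
    if status in (401, 403):
        # A 401 should carry a WWW-Authenticate challenge;
        # a 403 is a flat refusal, which is also fine.
        return status == 403 or "WWW-Authenticate" in headers
    # Anything in the 2xx/3xx range means the agent answered a stranger.
    return False
```

In practice you'd fetch your instance's URL with `urllib.request` from outside your network and feed the status and headers into this check. A `200` means anyone on the internet gets the same access your agent has.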

That's not "agents are bad." That's "agents with real permissions amplify every mistake." When a chatbot messes up, it's annoying. When an agent with inbox access messes up, it's expensive.

For a bit more outside commentary on how the trademark move and the outcome backfired strategically, this piece titled Anthropic's trademark fumble and the OpenAI hire is an interesting read, even if you don't agree with every take.

Peter Steinberger isn't a random hobbyist

A lot of people see a viral GitHub repo and assume it was luck plus vibes. That's not what happened here.

Steinberger built PSPDFKit, a PDF toolkit used by Apple, Dropbox, and SAP. He bootstrapped it for 13 years, and the story claims nearly a billion people use apps powered by software he developed. That's not beginner territory. That's "you've shipped infrastructure that matters."

After his exit, he burned out hard, didn't touch a computer for months, and stayed away from tech for around three years. Then he came back in April 2025, and by then AI coding tools were good enough that they pulled him back into building. Call it "vibe coding" if you want, but the output was real: a lot of open-source projects, tons of GitHub activity, and eventually the project that hit the nerve at exactly the right moment.

Then reality hit again, this time on cost. He wanted to build an agent that even his mom could use, not something that only developers can babysit. But running OpenClaw was costing him $10,000 to $20,000 a month, out of pocket. That's the part that quietly explains why "just keep it independent" stops being a cute idea when millions of people show up.

Why OpenAI wanted OpenClaw (and why Peter said yes)


Multiple big players reportedly wanted him. Meta was interested, and the story even mentions a personal call from Microsoft's Satya Nadella. Yet he chose OpenAI.

His stated reason, from his weekend blog post, is basically: OpenClaw must stay open source, and he believed OpenAI was the best place to keep pushing the vision while expanding reach. Read between the lines and it suggests at least some other offers came with more control, or less openness.

There's also a bigger chessboard here, and it has less to do with "whose model is smarter" and more to do with distribution and enterprise adoption.

One chart mentioned in the story (from Menlo Ventures) showed OpenAI with about 50% of the enterprise market share in 2023, dropping to 25% by mid-2025, while Anthropic rose to 32%. Another later-2025 chart put Anthropic at 40% of the enterprise LLM API market share. Claude Code, specifically, was described as hitting $1B in revenue in six months, which is just a wild number if you've ever tried selling anything to enterprises (they move slow, until they don't).

Now here's the twist that makes this whole thing feel like a plot: OpenClaw pushed a ton of people onto Anthropic's paid plans and APIs, because many OpenClaw setups ran on Claude. So OpenAI didn't just hire a talented builder, they pulled in the person whose project was sending paying users to their biggest competitor.

If you've been tracking the broader "agents getting more autonomous" trend, this companion piece on how agents are inching toward real autonomy sits in the same lane. Different product, same direction.

The real battle is the agent layer (not model benchmarks)


Model benchmarks used to be the whole story. Now they're table stakes.

The real fight is the "agent layer," the software that sits between a model and your life, and actually does the work. That means logging into tools, calling APIs, using web search, handling long-running tasks, and coordinating multiple sub-agents without making you micromanage every click.
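Stripped to its core, the agent layer is a dispatch loop: the model proposes tool calls, the layer executes them against real systems, and results flow back. A hedged sketch with illustrative names (no real product's API) looks like this:

```python
# Sketch of the agent-layer core (all names illustrative): the model
# emits a plan of tool calls; the layer is what actually runs them.
def search_web(query: str) -> str:
    return f"results for {query}"   # stand-in for a real search API

def send_email(to: str, body: str) -> str:
    return f"sent to {to}"          # stand-in for a real mail API

TOOLS = {"search_web": search_web, "send_email": send_email}

def run_agent(plan: list[dict]) -> list[str]:
    """Execute a model-produced plan: a list of {"tool": ..., "args": ...} steps."""
    results = []
    for step in plan:
        tool = TOOLS[step["tool"]]  # unknown tools raise KeyError on purpose
        results.append(tool(**step["args"]))
    return results
```

Notice where the risk lives: not in the model, but in `TOOLS`. Every entry in that registry is real-world access, which is why permissions and defaults, not benchmarks, decide whether this layer is safe to ship.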

And yes, security becomes the product.

You can't sell "personal agents" at scale if the average user has to think like a security engineer. People want the magic without the paranoia. The first company that makes agents feel normal, safe, and boring (in the best way) wins a huge chunk of the market.

That's why this OpenAI move matters. It signals that personal agents are going to show up where normal people already are, and that the next wave won't be "install a repo and pray," it'll be integrated into products with guardrails and defaults.

This is also why the open-source angle is so sensitive. Open source accelerates adoption and experimentation, but it also accelerates copycats, scams, and insecure forks. Keeping OpenClaw in a foundation, while OpenAI supports it, is an attempt to keep the upside without letting the whole thing turn into chaos again.

A quick visual that nails the vibe

A woman plays chess against a robotic arm, showcasing AI innovation in a modern setting.
Photo by Pavel Danilyuk

That's kind of what this moment feels like. Humans are still "playing," still in control, but the machine is now making real moves on the board. Not suggestions. Moves.

What this means for you if you just want useful AI (not drama)

First, agents aren't theoretical anymore. People installed them, hosted them, and used them enough to drive 201,000 stars and millions of visits. That doesn't happen from curiosity alone. It happens because the workflow feels addictive, like texting a helper instead of opening five apps.

Second, expect OpenAI to push agent features into ChatGPT. The whole "agent my mom can use" line matters. It means fewer terminal steps, fewer configs, fewer "read the docs," and more "just works." Also, no, you shouldn't need a specific tiny desktop machine to participate in this future. That narrative tends to show up in every wave, and it usually fades once products get packaged properly.

Third, the Anthropic trademark letter is a masterclass in unintended consequences. Again, not judging the legal move, but the chain reaction is brutal: trademark notice, rebrand, scammers hijack identity, second rebrand, security panic, attention from every major tech company, then OpenAI hires the builder and backs the project via a foundation. That's a lot of downstream effect from a single "please change the name" moment.

If you're building or even just watching the enterprise space, it's also worth keeping tabs on how the OpenAI vs Anthropic rivalry keeps escalating. This internal breakdown of GPT-5.3 vs Opus 4.6 in the AI coding war is a good snapshot of how aggressive the shipping pace has gotten.

A few places to keep up with tools and updates

If you want a more "what does this mean" outside take on the OpenClaw hire itself, an OpenClaw acqui-hire analysis lays out the bigger industry angle.



What I learned from watching Clawdbot blow up (and, honestly, from building around agents)

I've shipped enough messy software to know this feeling: when something goes viral, you don't get a victory lap, you get a stress test. And Clawdbot was a stress test from hell, because it mixed three dangerous ingredients: real utility, real permissions, and real money orbiting the project.

The biggest lesson for me is that "open source + agents" needs a different muscle than "open source + library." A library can be insecure and you might get bugs. An agent can be insecure and you might get your inbox, keys, and accounts exposed. That's a totally different blast radius, and it changes what "good defaults" even means.

I also walked away with a quieter takeaway: the agent interface matters more than the model for most people. Texting an agent inside an app you already use feels natural, almost boring, and that's why it spreads. People don't want another dashboard. They want a helper that lives where they already live.

Finally, I'm way less impressed by star counts than I used to be. I still love momentum, I'm not made of stone, but stars don't equal audits, and hype doesn't equal safety. The weird part is that the internet is going to relearn that lesson a few more times, because the demand is too strong. People want assistants with hands. They just don't want those hands grabbing the wrong thing.

Conclusion

Clawdbot didn't just go viral, it exposed where the AI race is actually headed: toward personal agents that can take action, across tools, with security that's baked in instead of bolted on later. The OpenAI hire signals that this agent layer is about to become a core product battleground, not a side hobby for developers. If OpenClaw stays open source inside a foundation and OpenAI ships a safer "agent for normal people," the next chapter won't be smaller, it'll be louder. So yeah, keep watching this one, because the agent wars are only getting started, and Clawdbot was the opening shot.
