OpenAI Hires OpenClaw Creator: Why the AI Agent Race Just Exploded

Something big just happened in agents, not in a vague "AI is improving" way, but in a "control of the next software layer" way. OpenClaw, the open-source agent that spread like wildfire, now has its creator headed to OpenAI, while the project itself keeps spreading through major distribution in China and a browser-based rollout that makes it feel less like a GitHub toy and more like a real product.

If you're trying to understand where this is going, here's the simple frame: OpenAI just pulled in the builder, China just pulled in the users, and OpenClaw sits in the middle, flexible enough to plug into both.


OpenClaw didn't win because it chatted better, it won because it acted

OpenClaw showed up late last year and it didn't feel like another "type here, get text back" assistant. It felt like software with hands. The pitch, in plain terms, was that it could live on your machine and handle real chores: watch your inbox, deal with insurers, check you into flights, automate browser tasks, run commands, and keep working in the background while you do something else.

The video introduces OpenClaw as an agent that can take actions like email and browser automation, not just answer questions.

That "background" part is the emotional hook. Most AI tools still behave like a helpful clerk behind a counter. You walk up, ask, wait. OpenClaw instead feels like a junior assistant sitting nearby, paying light attention, then tapping you on the shoulder when something needs doing.

And yeah, people noticed fast. It blew past 100,000 GitHub stars within weeks, and at one point it was pulling in around two million visitors in a single week. That is not normal open-source growth, that's social momentum plus developer curiosity colliding at full speed.

The heartbeat system is the real shift

The detail that made OpenClaw stand out was its heartbeat system. Instead of waiting for you to prompt it every time, it checks what's going on, decides if something needs attention, then moves. That's a small design choice with huge consequences, because it changes the relationship.

The video highlights OpenClaw's heartbeat concept, showing the agent checking status and deciding when to act.

A chatbot is reactive, even when it's smart. An agent with a heartbeat is "on-call." It's not magic, it's just a loop, but it's the kind of loop that starts to look like a product layer, not a feature.
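
To make the "it's just a loop" point concrete, here's a minimal sketch of what a heartbeat-style agent loop can look like. This is not OpenClaw's actual code, and the check-and-act functions are stand-ins, but the shape is the whole idea: wake up on a schedule, look around, act only if something needs attention.

```python
import time

def gather_signals():
    """Collect whatever the agent watches. Stubbed with static data for the sketch."""
    return [
        {"source": "inbox", "needs_action": False, "summary": "no new mail"},
        {"source": "calendar", "needs_action": True, "summary": "flight check-in opens soon"},
    ]

def act_on(signal):
    """Stand-in for a real action (send an email, drive the browser, run a command)."""
    print(f"[agent] acting on {signal['source']}: {signal['summary']}")

def heartbeat(interval_seconds=300, max_ticks=3):
    """The heartbeat: wake up, check the signals, act only when something needs it."""
    for _ in range(max_ticks):  # bounded so the sketch terminates; a real agent loops forever
        for signal in gather_signals():
            if signal["needs_action"]:
                act_on(signal)
        time.sleep(interval_seconds)

if __name__ == "__main__":
    heartbeat(interval_seconds=1)  # short interval just for the demo
```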

Also, that's where the mainstream angle matters. Lots of agent projects exist, sure, but many of them feel like you're assembling a robot in your garage. OpenClaw's vibe was closer to, "install this and it actually does stuff," which is why power users and social media ran with it so hard.

Viral growth brings attention, and attention brings mess

The flip side of that growth is pressure. When your project goes from niche to huge in weeks, every weak spot turns into a headline. The excitement is real, but so is the chaos.

And once everyone starts talking about "agents" instead of "prompts," the stakes change. Now it's not just "is the model smart," it's "can this thing be trusted to run around inside my accounts and tools without breaking something."

Speed turned OpenClaw into a target: rebrands, bad actors, and real security fear

OpenClaw's rise came with the kind of problems that only show up when something gets popular fast. There were rebrands and trademark headaches, and then the serious stuff: researchers found hundreds of malicious skills uploaded by bad actors. That's the nightmare scenario for an agent ecosystem, because skills are basically capability modules, and capability modules are basically attack surfaces if you don't treat them like code you'd run in production.

A GitHub stars or popularity spike is shown to illustrate how quickly OpenClaw went viral.

Misconfiguration risk also got called out, because an agent that can touch your browser, files, email, and third-party services is only one sloppy setting away from leaking data or opening the door to an exploit. China's industry ministry reportedly issued warnings about the risks, which tells you this wasn't just developer paranoia.

Then there's the unglamorous part people skip past: money. OpenClaw's creator, Peter Steinberger, was reportedly burning $10,000 to $20,000 a month just to keep things running. That matters because open-source at that scale isn't just code, it's infrastructure, moderation, and constant firefighting.

If you want a parallel example of how fast agent hype can attract scams and identity chaos, this breakdown of the Clawdbot $16M AI scam exposed shows the same pattern: explosive growth, naming confusion, and attackers rushing in to exploit trust gaps.

GitHub stars measure excitement, not safety. The moment agents plug into real accounts, "cool demo" stops being the bar.

OpenAI hired Peter Steinberger, and that changes the center of gravity

Then OpenAI stepped in, not with an acquisition, but with a hire. Sam Altman posted that Peter Steinberger is joining OpenAI to work on the next generation of personal agents, and that OpenClaw will live on as an open-source project inside a foundation, with OpenAI continuing to support it.

A screenshot of Sam Altman's post announcing Peter Steinberger joining OpenAI and OpenClaw moving to a foundation.


The key detail is what didn't happen: no "OpenAI acquired OpenClaw" press release, no price tag, no "we're shutting it down." Instead, OpenAI absorbed the brain behind it, and left the project standing.

For a straight news summary of the move, here's Reuters reporting on Steinberger joining OpenAI and OpenClaw becoming a foundation project. The basic facts line up with what the industry's been watching: OpenAI wants agents to become core, and it wants the people who understand how agents behave in the wild.

Why he joined (and why OpenAI is the obvious magnet)

Steinberger's stated motivation, as described here, is pretty relatable if you've ever built anything that unexpectedly turned into a "company." He's a builder, he already spent more than a decade building a company before, and running another one wasn't the dream. Changing how software works was.

OpenAI can offer what almost nobody else can: massive compute access, product reach, and a path for agent ideas to hit billions of users instead of staying stuck in GitHub threads and niche communities. It's not romantic, it's distribution and resources.

Altman also talked openly about multi-agent futures: agents coordinating with each other, becoming a platform layer. That's the part a lot of people gloss over. When OpenAI says "agents," it doesn't mean "a neat sidebar feature." It means "the thing you use to do work."

The video emphasizes OpenClaw remaining open source under a foundation structure while Steinberger joins OpenAI.



And yeah, there's tension. Some folks see stability and validation. Others see corporate gravity and worry the project slowly turns into "closed claw" in spirit, even if the license stays MIT on paper.

If you want more context on how quickly open-source agent frameworks started pressuring big labs before all this, this earlier piece on the first open-source AI agent that surprised OpenAI and Google fits the same storyline: the agent layer is getting real, and the old model-only framing is fading.

Baidu embedded OpenClaw, and distribution is the whole game now

While OpenAI was locking down talent, China was locking down distribution. Just days before Lunar New Year, Baidu announced it was embedding OpenClaw directly into its flagship search app, with around 700 million monthly users.

Baidu integrating OpenClaw into its main search app, highlighting the scale of the user base.

That number is so large it almost stops sounding real, but the implication is simple: OpenClaw jumps from "something you run via Telegram or a local setup" to "something sitting inside one of the biggest consumer apps on earth." Users can message it directly and use it for coding, organizing files, managing email, planning schedules, and day-to-day digital work without bouncing between apps.

For an external report on that rollout, see CNBC coverage of Baidu adding OpenClaw AI into its search app for 700 million users.

Baidu isn't stopping at "assistant in search," either. The plan includes e-commerce integrations and expanding across other services. That's consistent with what other Chinese giants are doing: Alibaba pushed its Qwen chatbot deep into shopping apps like Taobao, and Tencent is moving similarly. The pattern is clear, AI inside the flow of daily life, not parked on a separate site.

OpenClaw fits this approach because it's model-agnostic. It can run on OpenAI models, Anthropic's Claude, DeepSeek, and others. That flexibility matters more in China, where companies want control over infrastructure and model choice. It also matters globally, because it turns OpenClaw into a connector layer, the thing that routes work to whatever model is cheapest, fastest, or politically acceptable.
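
If the "connector layer" idea feels abstract, here's a tiny sketch of what model-agnostic routing amounts to under the hood. The provider names, prices, and stub responses below are made up for illustration; the point is just that the agent picks a backend per request instead of being welded to one model.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Provider:
    name: str
    cost_per_1k_tokens: float   # illustrative numbers, not real pricing
    call: Callable[[str], str]  # the function that would hit the real model API

# Stub backends standing in for real SDK calls (OpenAI, Claude, DeepSeek, ...).
PROVIDERS = [
    Provider("openai-stub",   0.010, lambda p: f"[openai]   answered: {p}"),
    Provider("claude-stub",   0.012, lambda p: f"[claude]   answered: {p}"),
    Provider("deepseek-stub", 0.002, lambda p: f"[deepseek] answered: {p}"),
]

def route(prompt: str, allow: Optional[set] = None) -> str:
    """Send the prompt to the cheapest provider the caller allows."""
    candidates = [p for p in PROVIDERS if allow is None or p.name in allow]
    cheapest = min(candidates, key=lambda p: p.cost_per_1k_tokens)
    return cheapest.call(prompt)

if __name__ == "__main__":
    print(route("Summarize today's unread email"))                         # picks deepseek-stub
    print(route("Summarize today's unread email", allow={"claude-stub"}))  # forced to claude-stub
```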

Moonshot AI put OpenClaw in the browser, and removed the setup pain

Right in the middle of the OpenAI hire news and the Baidu rollout, Moonshot AI launched Kimi Claw, basically OpenClaw running natively in the browser on kimi.com. The practical effect is huge: no local setup, no wrestling with Docker configs, you open a tab and the agent is there, persistent, always on, running 24/7.

Kimi Claw running in a browser interface, positioned as an always-on agent.

That "always-on" promise lands differently when it's cloud-hosted. It stops being a weekend tinkering project and starts being a service you can actually keep around, like email. Moonshot also bundled in 40 GB of cloud storage tied to the agent, so it can keep large datasets, docs, and code across sessions. Context normally evaporates in chat tools. Here, it sticks, or at least that's the goal.

The video highlights persistent storage for the agent, emphasizing long-term memory across sessions.

Then you've got the skills layer. Kimi Claw plugs into Claw Hub, with over 5,000 community-built skills, which act like modular abilities the agent can chain together. Instead of writing custom integrations every time, you pull from a library. One public example of the ecosystem around skills is this OpenClaw skills collection on GitHub, which gives you a feel for how quickly "capabilities" can multiply once a community gets momentum.
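
To show what "modular abilities the agent can chain together" means in practice, here's a toy skill registry. It's not Claw Hub's actual interface, just a sketch of the pattern: each skill declares what access it needs, and the runner refuses to chain anything that asks for more than you granted.

```python
SKILLS = {}

def skill(name, permissions=()):
    """Register a function as a named skill along with the access it claims to need."""
    def wrap(fn):
        SKILLS[name] = {"fn": fn, "permissions": set(permissions)}
        return fn
    return wrap

@skill("fetch_prices", permissions={"network"})
def fetch_prices(state):
    state["prices"] = {"AAPL": 123.45}  # stubbed data instead of a live market API
    return state

@skill("write_report", permissions={"filesystem"})
def write_report(state):
    state["report"] = f"Prices today: {state['prices']}"
    return state

def run_chain(names, granted):
    """Run skills in order, refusing any skill that wants more access than was granted."""
    state = {}
    for name in names:
        entry = SKILLS[name]
        missing = entry["permissions"] - granted
        if missing:
            raise PermissionError(f"skill '{name}' needs {missing}, which was not granted")
        state = entry["fn"](state)
    return state

if __name__ == "__main__":
    result = run_chain(["fetch_prices", "write_report"], granted={"network", "filesystem"})
    print(result["report"])
```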

Moonshot also added live data access through what it calls "prograde search," so the agent can pull real-time info (like financial data) instead of guessing from older training. That's not just a feature bullet. It's a reliability move, because time-sensitive tasks are where hallucinations stop being funny.

Real-time search or browsing inside the agent, framed as a way to reduce hallucinations.

Finally, there's "bring your own claw," where developers can connect self-hosted OpenClaw setups to the cloud UI, or bridge into apps like Telegram. So you get a hybrid: convenience when you want it, flexibility when you need it.

From a strategy angle, Moonshot is betting on frictionless hosting. From a geopolitics angle, it's also China offering a polished managed agent layer that competes with whatever OpenAI plans to push deeper into ChatGPT.

Security, geopolitics, and the uncomfortable part of autonomous agents

Here's the part that makes people pause, even the people who love agents: when an AI can act, mistakes scale fast. That's not a slogan, it's just math. If a normal chatbot is wrong, you might copy the wrong sentence. If an agent is wrong, it might send the wrong email, change the wrong setting, or grant the wrong permission, and it can do it repeatedly.

Cybersecurity firms (CrowdStrike was mentioned in the discussion around this) have raised red flags about giving autonomous agents deep access to business systems. The risk gets worse when skills and plugins come from a wide-open community, because attackers don't need to break the model, they just need to slip a bad tool into the chain.

For a concrete example of how exposed agent instances can become a real-world issue, this report on 42,665 exposed OpenClaw instances found by security researchers captures the kind of operational mess that shows up when thousands of people deploy fast and secure later.

Now add the platform reality. When agents live inside massive apps tied to national tech ecosystems, the questions get sharper: where does the data go, who can inspect the toolchain, who controls model routing, who can shut it off, and who benefits from the defaults.

This is also why the "race" feels different now. Benchmarks still matter, but the fight moved up the stack. Distribution, ecosystems, and control of the layer where real work happens, that's where the pressure is.

If you've been tracking how agents keep inching from demos to dependable execution, this piece on Manus 1.6 Max and the push toward real autonomy lines up with the same trend: the hard problem is not answering, it's finishing.

The future argument won't be "which model is smartest," it'll be "which agent do you trust to run your day."

What I learned watching this unfold (and how I think about agents now)

I've played with enough automation tools to know the feeling people are chasing here. You want the boring chores gone, and you want it to happen without you managing a fragile stack. Still, every time I see an "always-on" agent go viral, I force myself to slow down for one reason: permissions.

When a tool can read your email, touch your files, run commands, and browse logged-in tabs, that's basically "full access to your life," just phrased politely. So my personal rule is to start embarrassingly small. I'll give an agent one narrow job (like monitoring something public, or drafting from non-sensitive notes) and I'll keep it boxed in. Only after I've watched it behave for a while do I open more doors. That sounds cautious, but it's also the only way the tech stays fun instead of stressful.
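
For what "boxed in" can look like concretely, here's a sketch of a deny-by-default scope. The resources and prefixes are hypothetical, not any specific tool's settings, but the principle is the one I follow: nothing is allowed unless I wrote it down.

```python
# Deny-by-default scope for a brand-new agent: one narrow job, nothing else.
# The keys and values are illustrative, not any particular tool's config format.
AGENT_SCOPE = {
    "email":      {"read": ["newsletters@"], "send": []},      # read one folder, send nothing
    "filesystem": {"read": ["~/notes/public/"], "write": []},  # no write access anywhere
    "browser":    {"visit": ["https://status.example.com"]},   # one public site only
    "shell":      {"run": []},                                 # no commands at all
}

def is_allowed(resource: str, action: str, target: str) -> bool:
    """Anything not explicitly listed in the scope is denied."""
    allowed_prefixes = AGENT_SCOPE.get(resource, {}).get(action, [])
    return any(target.startswith(prefix) for prefix in allowed_prefixes)

if __name__ == "__main__":
    print(is_allowed("email", "read", "newsletters@example.com"))  # True: inside the box
    print(is_allowed("email", "send", "boss@example.com"))         # False: never granted
    print(is_allowed("shell", "run", "rm -rf /"))                  # False: shell disabled
```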

The other lesson is about distribution. A project can be brilliant and still not matter if it stays in a niche. Baidu putting OpenClaw into a 700-million-user app is the opposite of niche. That's not a feature launch, that's a new default. It reminded me that the winners in software are often the ones who get embedded, not the ones who get applauded.

And last thing, this one surprised me a bit, openAI (yeah, I'm using the lower-case version people type casually) isn't just competing with models anymore. It's competing with "where work happens." Browser agents, search agents, commerce agents, chat-based agents, whichever one sits closest to your daily habits ends up shaping what you trust, what you pay for, and what you never even think to question.

That's why the scam and security stories matter too. If you want one more cautionary parallel before you install anything that promises "it does everything," reread the inside look at the Clawdbot collapse and notice how fast confusion turns into consequences.

So what happens next?

OpenClaw sits in a weird, powerful spot: OpenAI has the builder, China has the distribution, and the open-source community still has the code. Over the next year, the biggest question isn't whether agents get better (they will), it's who gets to set the defaults for how they run, what they can touch, and how safely they ship.

If you're watching this space, watch the foundation governance, watch the managed browser versions, and watch where agents get embedded next. And if you've got a strong take on whether OpenAI is supporting open agents or trying to make sure the important decisions happen inside its walls, that take is worth sharing.

