Peter Steinberger built OpenClaw in November 2025 as a side project. He called it Clawdbot — a nod to Claude. He used Claude as the model. He built the community around Claude. He spent months talking about how good Claude was.
On April 4, 2026, Anthropic cut off his users.
135,000 active OpenClaw instances. Gone from flat-rate. Now metered. For heavy users, the math works out to roughly 50 times what they were paying before. The email landed at noon Pacific. No extended warning. One week's delay was the best Peter could negotiate — and he tried. "Both me and Dave Morin tried to talk sense into Anthropic," he wrote. "Best we managed was delaying this for a week."
You can argue this is a straightforward business decision. You can argue the unit economics were always broken. Both are probably true. But the timeline makes this harder to dismiss as just that — and the timeline is what HN and the developer community haven't stopped talking about since.
1. The Math That Always Made This Inevitable
Thesis: The flat-rate subscription was never designed for what OpenClaw users were doing with it. The economics broke the moment serious agent workflows started running 24/7.
Evidence: Anthropic's Claude Max plan costs $200 per month. That gets you significantly more tokens than the $20 Pro plan — but it's still a flat rate. The assumption built into any flat-rate subscription is average usage. A human using Claude to write emails and debug code uses maybe a few million tokens a month. A continuously running OpenClaw agent swarm is something else entirely.
The Wes Roth transcript puts actual numbers to this: in just seven days — March 24 to 30 — running OpenClaw on Claude Sonnet 4.6 (not even the most expensive model), he burned through $200 in API-equivalent costs. That's one week. At that rate, $200 a month buys you roughly seven days of moderate agent usage. Heavy users were doing considerably more. Industry estimates put the cost gap at up to 50 times the subscription price for the heaviest sessions.
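The gap is easy to sanity-check with a back-of-envelope sketch using the metered Sonnet 4.6 rates quoted later in this piece ($3 per million input tokens, $15 per million output). The 55M/2.3M token split is a hypothetical workload chosen only to reproduce the "$200 in seven days" figure, not a measured breakdown:

```python
# Back-of-envelope comparison of flat-rate vs. metered cost for a
# continuously running agent. Dollar rates come from the numbers cited
# in this article; the token split is a hypothetical guess.

SONNET_INPUT_PER_M = 3.00    # $ per million input tokens, metered
SONNET_OUTPUT_PER_M = 15.00  # $ per million output tokens, metered
SUBSCRIPTION = 200.00        # $ per month, the former flat rate

def metered_cost(input_tokens_m: float, output_tokens_m: float) -> float:
    """Metered API cost in dollars, token counts given in millions."""
    return (input_tokens_m * SONNET_INPUT_PER_M
            + output_tokens_m * SONNET_OUTPUT_PER_M)

# A "moderate" week per the transcript: roughly $200 of API-equivalent
# usage. A 55M-input / 2.3M-output split reproduces that figure.
weekly = metered_cost(55, 2.3)   # 55*3 + 2.3*15 = 199.50
monthly = weekly * 4             # four such weeks, about $798

print(f"weekly ~ ${weekly:.2f}, monthly ~ ${monthly:.2f}, "
      f"{monthly / SUBSCRIPTION:.1f}x the subscription")
```

Scale the same arithmetic up to a 24/7 swarm replaying large contexts every turn and the 50x estimates for the heaviest users stop looking surprising.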
The technical reason is real. Anthropic's own tools — Claude Code and Claude Cowork — were engineered specifically to maximize prompt cache hit rates. When Claude sees the same context repeatedly, it reuses previous computation instead of processing every token fresh. Third-party tools like OpenClaw, not being built by Anthropic, didn't optimize for this. So identical workloads cost Anthropic significantly more to serve through OpenClaw than through its own products. Boris Cherny himself went as far as submitting pull requests to OpenClaw to help improve this — before the cutoff, not after.
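For context on what "optimizing for cache hits" means in practice, here is a minimal sketch of a cache-aware request against Anthropic's Messages API. The field names follow the public prompt-caching API; the model string and system-prompt text are placeholders, and no network call is made:

```python
# Sketch of cache-aware request structuring for Anthropic's Messages
# API. We only build the payload here; nothing is sent. The
# `cache_control` marker on the large, stable prefix is what lets the
# server reuse computation across turns. The context string and model
# name are placeholders, not real project values.

LARGE_STABLE_CONTEXT = "project files, tool definitions, agent instructions"

def build_request(user_turn: str) -> dict:
    return {
        "model": "claude-sonnet-4-6",  # placeholder model identifier
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": LARGE_STABLE_CONTEXT,
                # Mark the unchanging prefix as cacheable. Repeat requests
                # with a byte-identical prefix hit the cache, and cached
                # input tokens bill at a fraction of the normal rate.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_turn}],
    }

req = build_request("Summarize the failing test output.")
```

The design point: the large, unchanging prefix (system prompt, tools, project context) goes first and is marked cacheable, while only the small per-turn message varies. A harness that reorders or rewrites that prefix each turn defeats the cache entirely, which is presumably the kind of inefficiency the pull requests mentioned above targeted.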
Verdict: The economics argument is legitimate. A $200 subscription was quietly delivering $5,000+ worth of compute to power users. That math doesn't work. The only real question is whether the economics alone explain the timing.
2. The Timeline Anthropic Wishes You'd Ignore
Thesis: The sequence of events between February and April 2026 is awkward. Not illegal. Not provably malicious. Just awkward in a way that's difficult to explain away with engineering constraints alone.
Evidence: Here it is, in order.
November 2025 — Peter Steinberger releases Clawdbot. He names it as a nod to Claude. He builds it around Claude. The community builds around it. 135,000 instances within months.
January 2026 — Anthropic raises trademark concerns about the name "Clawdbot." Peter renames it. Twice. It lands on OpenClaw. Not exactly a warm signal.
February 14, 2026 — Peter announces he's joining OpenAI. Sam Altman publicly welcomes him to "drive the next generation of personal agents." OpenClaw moves to an open-source foundation with OpenAI's backing.
February 20, 2026 — Anthropic quietly updates its legal terms. New language explicitly prohibits using subscription OAuth tokens in any third-party tool. This update happens six days after Peter joins OpenAI.
April 4, 2026 — Full enforcement. 135,000+ instances cut from flat-rate access at noon Pacific.
Peter's words: "Funny how timings match up." That's a measured way to say what a lot of the community said less politely.
Verdict: Correlation isn't causation. Anthropic may have been planning this since January regardless of Peter's move. But the terms update six days after he joins OpenAI is the kind of timing that makes neutral observers raise an eyebrow.
3. Copy Then Close: Allegation or Pattern?
Thesis: Peter's "copy then close" accusation is the most inflammatory claim in this story. It's also the hardest to fully verify — and the hardest to fully dismiss.
Evidence: When Anthropic's Claude Code source code leaked accidentally in March 2026 — 512,000 lines pushed to the public npm registry via a misconfigured debug file — the open-source community got an unplanned look at what Anthropic was building. What they found, according to multiple developers who reviewed it, was a feature set that bore considerable resemblance to what OpenClaw had been building in public.
"Dreaming" — a memory consolidation feature that OpenClaw added — appeared in Anthropic's internal roadmap. Persistent agent memory, tool integration patterns, local-first architecture concepts. None of these are patentable. All of them had been pioneered publicly by OpenClaw and the broader community around it.
This is where it gets complicated. Open-source software exists to be used, learned from, and improved upon. That's the entire point. Anthropic observing what OpenClaw built and building similar features is not theft — legally or ethically. The question the community is actually asking is different: did Anthropic take the innovations, fold them into a closed ecosystem, and then remove the subsidy that made the original tool viable? Because if so, that's a particular kind of move.
Worth noting: after the cutoff, OpenClaw doubled down on "dreaming," its memory consolidation feature, shipping improvements as a direct response. The open-source community is not standing still.
Verdict: "Copy then close" as an accusation is overstated. As a description of a pattern, it has enough supporting evidence to take seriously. Those are different things. One implies intent. The other just describes sequence.
4. Boris Cherny's Position Is Harder Than It Looks
Thesis: Boris Cherny, who created Claude Code and now leads it at Anthropic, handled this better than the decision deserved. That distinction matters.
Evidence: Boris was the one communicating directly with the community on X. He acknowledged the frustration. He offered refunds. He personally submitted pull requests to OpenClaw's repo to improve its prompt cache efficiency — before the cutoff, not as a PR exercise after it. He said plainly: "I know it sucks. Fundamentally engineering is about tradeoffs."
Nobody in this story has accused Boris of bad faith. The criticism is aimed at Anthropic as an institution — at whoever made the call on the timing, the terms update, and the enforcement approach. Boris appears to be the person who had to execute a decision he didn't fully control and tried to soften the landing while doing it.
That's actually the most uncomfortable part of this story. When the most credible defender of Anthropic's position is someone who clearly has genuine goodwill toward the open-source community — and whose goodwill didn't change the outcome — it tells you something about where the decision was actually made.
Verdict: Boris handled it well. The decision itself is a separate question. Conflating the two — blaming Boris or giving Anthropic credit for his goodwill — misreads what happened.
5. What Anthropic Actually Burned
Thesis: The financial cost of the subsidy was real. The cost of ending it this way is less quantifiable — and probably larger.
Evidence: OpenClaw users weren't just power users. They were the people making YouTube videos about how great Claude was. Writing tutorials. Recommending Claude to their teams. Building courses around it — courses that now have to be updated or pulled entirely because the setup they showed no longer works.
This is the group that does the hardest kind of marketing — the kind you can't buy. Word of mouth from people who genuinely use the product and tell others. Community estimates suggest roughly 60% of active OpenClaw sessions were running on Claude subscription credits. Those users are now either paying 50 times more, switching to OpenAI's Codex (which has explicitly said third-party tools are welcome), or both.
"All those people that fell in love with using Claude for AI agents," as the Wes Roth transcript puts it, "they became the people evangelizing Claude." Past tense is doing a lot of work in that sentence now. The full cost breakdown of Claude Max versus API pricing makes the same point in numbers, and the numbers are not small.
Verdict: Anthropic saved money on compute. It spent that saving in a currency that doesn't show up on a balance sheet until much later.
6. OpenAI's Bet
Thesis: OpenAI's response to all of this is deliberate, and they know exactly what they're doing.
Evidence: OpenAI hired Peter Steinberger in February. Explicitly said third-party tools are welcome on their platform. Codex — their equivalent coding agent — has not restricted usage for OpenClaw or any third-party harness. When Anthropic announced the cutoff, OpenAI's Codex was the immediate alternative users switched to.
This is a calculated move. Hire the person who built the most popular tool in the ecosystem. Welcome the community he brought with him. Let Anthropic absorb the backlash of the cost enforcement. Position Codex as the open alternative.
The unresolved question — which nobody has answered yet — is whether OpenAI can actually sustain this if OpenClaw traffic scales on their platform the way it did on Anthropic's. The same economics apply. A continuous agent swarm is a continuous agent swarm regardless of which model it's calling. If GPT-5.4 becomes the model of choice for 135,000+ OpenClaw instances running 24/7, OpenAI will face the same math Anthropic did. The broader IP and ecosystem dynamics around Claude Code are still playing out.
Verdict: OpenAI is winning the PR battle. Whether they're winning the economics battle depends on a question they haven't had to answer yet.
My Take
Anthropic's decision wasn't wrong. A $200 subscription delivering $5,000 of compute is not a subscription; it's a charity. No company heading toward an IPO can defend that to investors, and the gap was always going to close eventually. That part I don't dispute.
What I can't get past is the timing. Not as proof of bad faith — I don't have that. As a strategic read: enforcing this six days after Peter joins OpenAI, rather than three months earlier when the math was identical, looks like a reactive decision dressed up as a principled one. The economics were always there. Something changed in February. Nobody at Anthropic has addressed that gap directly.
The "copy then close" framing is where I pump the brakes. Open-source software is meant to be learned from. Anthropic building features that resemble OpenClaw's innovations is not inherently predatory — it's how software works. The more uncomfortable version of this story is simpler: Anthropic subsidized a community that made Claude famous among power users, then removed the subsidy when the cost became visible on a pre-IPO balance sheet. That's not villainy. It's capitalism with bad optics.
The thing I keep coming back to: Anthropic's biggest asset in the developer community wasn't Claude's benchmark scores. It was the reputation for being the thoughtful one. The lab that cared about the ecosystem. That reputation is harder to rebuild than compute costs are to recover. And right now, OpenAI has Peter.
- April 4, 2026: Anthropic cut flat-rate subscription access for 135,000+ OpenClaw instances. Users now pay metered API rates — up to 50x more for heavy usage.
- The economics were always broken: $200/month subscriptions were delivering $5,000+ of compute to heavy agent users.
- The timeline is awkward: terms updated six days after Peter Steinberger joined OpenAI in February; enforcement followed six weeks later.
- Boris Cherny handled the communication well — but appears to have executed a decision made above him.
- The "copy then close" accusation is unverified but not without supporting circumstantial evidence from the leaked Claude Code source.
- OpenAI is explicitly welcoming third-party tools — for now. The same economics will apply if usage scales.
- The real cost to Anthropic isn't compute — it's developer goodwill that doesn't show up on a balance sheet until it's already gone.
Frequently Asked Questions
Q: Can I still use OpenClaw with Claude?
Yes — but not on your flat-rate subscription. You need either a separate API key (billed at $3/$15 per million tokens for Sonnet 4.6, $15/$75 for Opus 4.6) or Anthropic's new "Extra Usage" pay-as-you-go bundles, available at up to 30% discount if you prepurchase $1,000+. Anthropic offered a one-time credit equal to one month's subscription value, redeemable by April 17, 2026.
Q: What is the actual cost difference now?
For casual users running occasional agent tasks, the difference is small. For heavy users running continuous agent swarms, industry estimates put the increase at up to 50 times the previous monthly spend. One documented case: $200 in API-equivalent costs in seven days of moderate OpenClaw usage on Sonnet 4.6. At API rates, that's $200 per week rather than $200 per month.
Q: Is this just about OpenClaw or does it affect other tools?
It's broader. Anthropic explicitly stated the policy "applies to all third-party harnesses." OpenClaw was first because it was the largest. NanoClaw, OpenCode, and any other tool routing Claude through subscription OAuth tokens are subject to the same restriction. The policy is categorical, not targeted.
Q: Why did Peter Steinberger join OpenAI?
Peter announced in February 2026 that joining OpenAI was "the fastest way to bring this to everyone." Sam Altman publicly welcomed him to work on next-generation personal agents. OpenClaw continues as an open-source project under a foundation with OpenAI's support. Whether the Anthropic restrictions influenced the timing of Peter's move or vice versa is not publicly confirmed by either side.
Q: Is OpenAI actually safer to build on than Anthropic now?
Right now, OpenAI is explicitly welcoming third-party tools and has not restricted OpenClaw usage. But the economics that broke Anthropic's model don't disappear just because the provider changes. If 135,000+ OpenClaw instances migrate to GPT-5.4 and run continuously, OpenAI will face the same math. Whether they handle it differently — or whether they've actually priced for agentic workloads in a way Anthropic hadn't — is the real question nobody has answered yet.
Q: What happened to the Claude Code source leak mentioned here?
On March 31, 2026, Anthropic accidentally published 512,000 lines of Claude Code source to the public npm registry via a misconfigured debug file. The leak was discovered almost immediately — a researcher tweeted a download link, and 16 million people visited the thread within hours. A GitHub mirror hit 50,000 stars in under two hours. Anthropic confirmed no customer data was exposed and attributed it to human error in packaging. The leaked code is what gave the community visibility into Claude Code's internal feature roadmap — and sparked the "copy then close" comparison with OpenClaw's features. You can read more about the IP and legal implications of the Claude Code leak in our earlier analysis.
The honest caveat: We don't know Anthropic's internal reasoning. The timeline is circumstantial. The "copy then close" accusation is unproven. Boris Cherny appears to have genuinely tried to soften this. And the underlying economics — $200/month subscriptions funding $5,000/month of compute — were always going to break.
What we do know: 135,000 instances got cut from flat-rate access in a single afternoon. The people most affected were Anthropic's most vocal advocates. The timing lines up uncomfortably with Peter's move to OpenAI. And right now, OpenAI has Peter, has the community's goodwill, and has explicitly said third-party tools are welcome.
Whether that's strategy or luck on OpenAI's part — and whether it survives contact with the same economics — is the story still being written.