Six years ago, February 2020 felt like the calm before a storm we didn’t see coming. Today, in early 2026, that same 'pre-collapse' energy is back—but this time, the virus is made of silicon.
A viral essay by Matt Shumer recently sent shockwaves through the tech world, claiming we are in the 'This is overblown' phase of AI, right before the shift becomes impossible to ignore. Whether you call it an 'Intelligence Explosion' or a 'Vibe Shift,' one thing is clear: AI isn't just a tool anymore; it's becoming a competent coworker.
Here is why 2026 feels fundamentally different, and why the 'AI is a toy' argument officially died this year.
"Quick Take: I used to think AI sparks were luck. Now, I see them as a 12-month warning for my entire career."The "AI is the new COVID" analogy, and why it still lands
February 2020 "normal" contrasted with a near-future AI-shaped world, created with AI.
The COVID comparison is dramatic, and it's not perfect. One solid pushback comes from John Coogan (TBPN), who points out the math: infections don't grow exponentially forever. They follow a logistic curve, accelerating for a while, then tapering and flattening, because there's a finite number of humans to infect.
That criticism is fair. AI also hasn't "snuck up" in the same way. It's been bubbling for years, improving in public, right in front of everyone. You can argue the timeline feels less like a surprise wave and more like a rising tide.
Still, the analogy hits in one important way: a lot of people are not prepared. Not emotionally, not professionally, not operationally. The folks closest to the work aren't only predicting what might happen. Many of them are describing what has already happened in their own roles.
Shumer's line that sticks is simple: he describes what he wants in plain English, walks away, and comes back to finished work. Not a draft. Not a halfway attempt. The completed thing, often better than what he would have produced himself.
That's the vibe shift. It's less "AI is a helpful assistant," and more "AI is starting to feel like a competent coworker who doesn't need much supervision."
Why the alarm is louder now
The loudest warnings don't sound like theory. They sound like someone saying, "This already hit my job, and it's moving outward." That's why the tone feels different from the usual tech hype cycle.
And honestly, it also explains the tension: people want to dismiss it because the implications are uncomfortable. Yet the people building and using these systems daily keep repeating the same idea. The pace is faster than most of the public realizes.
What "vibe coding" looks like when models get good enough
The clearest proof point in the discussion wasn't a benchmark. It was everyday work getting compressed into hours.
A simple example: building a personal site that automatically pulls in channel subscribers, new videos, a podcast feed, writing, portfolio items, and even newsletter subscriber counts. The surprising part isn't that it's possible. It's that it can happen in under an hour, sometimes in 45 minutes, with prompts that read like normal English.
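To make that concrete, here's a minimal sketch of the kind of glue script such a prompt tends to produce. It's written for illustration, not copied from anyone's actual site: the channel ID, API key, and feed URL are placeholders, and the subscriber lookup assumes the public YouTube Data API v3.

```python
# Illustrative "personal dashboard" glue script: pull a YouTube subscriber count
# and the latest podcast episode titles. Placeholders throughout; not a real site.
import os
import xml.etree.ElementTree as ET

import requests

YOUTUBE_KEY = os.environ.get("YOUTUBE_API_KEY", "YOUR_KEY_HERE")  # placeholder key
CHANNEL_ID = "UCxxxxxxxxxxxxxxxxxxxxxx"                           # placeholder channel ID
PODCAST_FEED = "https://example.com/podcast.rss"                  # placeholder RSS feed


def youtube_subscribers(channel_id: str) -> int:
    """Fetch the public subscriber count via the YouTube Data API v3."""
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/channels",
        params={"part": "statistics", "id": channel_id, "key": YOUTUBE_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    return int(resp.json()["items"][0]["statistics"]["subscriberCount"])


def latest_episodes(feed_url: str, limit: int = 5) -> list[str]:
    """Grab the most recent episode titles from a standard RSS 2.0 feed."""
    xml = requests.get(feed_url, timeout=10).text
    items = ET.fromstring(xml).iter("item")
    return [item.findtext("title", "") for item in items][:limit]


if __name__ == "__main__":
    print("Subscribers:", youtube_subscribers(CHANNEL_ID))
    print("Latest episodes:", latest_episodes(PODCAST_FEED))
```

Nothing in there is clever. The point is that stitching a few public data sources into one page used to be an afternoon of fiddly work, and this layer is exactly what the models now write while you describe the next feature.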
The same pattern shows up in business bottlenecks:
- Rebuilding a tools site so it becomes more news-focused, while still keeping a tools database
- Spinning up a thumbnail generator by giving the AI a set of headshots, common logos, and past thumbnail examples
- Making a rough "video titler" app where you drag and drop a video (or upload a transcript) and get suggested titles, ranked by predicted performance
None of that requires perfection or pretty UI. It just needs to solve a real problem quickly.
This is where the newer coding models came up, especially GPT-5.3 Codex and Claude Opus 4.6. People using them keep saying the same thing: they don't just follow instructions, they make choices that feel like judgment. There's a hint of taste. That used to be the line in the sand, the "AI will never do that" line.
To track what changed in this exact model moment, this internal breakdown is useful: OpenAI's GPT-5.3 vs Claude Opus 4.6, explained for real workflows.
The weird part is how quickly the back-and-forth disappears. You stop "guiding" and start "delegating."
Reddit threads reflect this too. The tone moved from "it kinda works" to "this is scary good," especially for agent-like behavior, tool use, testing loops, and fewer obvious bugs.
"I tried AI, it wasn't impressive" is an outdated take now
A lot of people have a very specific memory of AI: trying ChatGPT in 2023 or 2024, watching it hallucinate, seeing it write shaky code, and deciding the whole thing was a toy.
That reaction made sense then.
The problem is that AI time doesn't map to human time. Two years might as well be a decade. Even six months can feel like an era shift in capability, especially in coding and long multi-step work.
There's also a practical detail people miss: free tiers lag. If someone only uses free plans, they often aren't seeing what paid users get. And the paid crowd tends to be the group using these tools daily for real work, which means they spot the improvement first.
The flip phone versus smartphone comparison fits here. If you only tried the flip phone, you didn't really test "modern phones." You tested the thing right before the jump.
That's part of why the public conversation gets so messy. People argue from different versions of reality.
The METR "time horizon" chart explains the speed better than vibes
A visual take on rapidly expanding AI task time horizons, created with AI.
If you want one chart that people keep coming back to, it's the time horizon work from METR (sometimes referenced in conversation as "Meter"). Their idea is straightforward: measure how long a task takes a human expert, then estimate what task length an AI agent can complete reliably.
METR publishes this publicly, and it's worth looking at the original source: METR's task-completion time horizons of frontier models.
Here's the simpler timeline described alongside that research, which helps ground the feeling of speed:
| Year | What people noticed AI could do |
|---|---|
| 2022 | Couldn't do basic arithmetic reliably |
| 2023 | Passed major professional exams (like the bar) |
| 2024 | Wrote working software and explained graduate-level science |
| Late 2025 | Some top engineers said AI handled most of their coding workload |
| Feb 2026 | New releases made prior models feel like a different era |
Then there's the punchline from the time horizon view: the length of tasks AI can do keeps expanding, with an implied doubling rhythm measured in months, not decades. And it isn't just coding. Similar curves show up in math, browser use, robotics simulations, scientific Q&A, and more.
If that trend continues, the unsettling projection isn't "AI gets a bit better." It's "AI can work independently for days or weeks," and not long after, "AI can run projects that would take humans months."
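To see why that projection follows from the doubling claim, here's a back-of-the-envelope sketch. The starting horizon (a couple of hours) and the doubling period (roughly seven months) are assumptions chosen for illustration, in the same ballpark as the public discussion of METR's chart, not figures quoted from their data.

```python
# Back-of-the-envelope projection: if the reliable task horizon doubles every
# DOUBLING_MONTHS, how long until an agent handles week-long projects?
# Both constants are illustrative assumptions, not METR's published numbers.

START_HORIZON_HOURS = 2.0   # assumed current horizon: ~2 hours of expert work
DOUBLING_MONTHS = 7.0       # assumed doubling period


def horizon_after(months: float) -> float:
    """Projected task horizon, in hours, after the given number of months."""
    return START_HORIZON_HOURS * 2 ** (months / DOUBLING_MONTHS)


for months in (0, 12, 24, 36, 48):
    hours = horizon_after(months)
    print(f"+{months:2d} months: ~{hours:6.0f} hours (~{hours / 40:4.1f} work weeks)")
```

The exact inputs matter less than the shape: any fixed doubling period eventually turns "a couple of hours of work" into "weeks of work," and the only real debate is how many months that takes.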
That's the point where office work stops being protected by inertia.
When AI helps build the next AI, the pace can speed up again
Here's where the story turns from "fast progress" into "feedback loop."
OpenAI's GPT-5.3 Codex announcement included a line that made people pause: it was the first model described as instrumental in creating itself. Early versions helped debug training, manage deployment, and diagnose evaluations. Put plainly, AI started speeding up the engineering work required to make more AI.
Anthropic has been saying versions of the same thing. In a January 2026 post about technology's "adolescence," Dario Amodei described AI writing much of the code at Anthropic and accelerating progress toward the next generation. There's also a practical detail from their own team experience: autonomous action runs doubled from around 10 steps to around 20 steps before needing human steering (and that was before the newest models arrived).
Smarter AI writes better code. Better code helps build smarter AI. Then the loop tightens.
People call this the intelligence explosion idea. Labels aside, the mechanism is easy to picture. Once a system contributes directly to the process of improving itself, the "how fast can teams ship?" limit starts moving.
If you're watching markets react to this shift, it's already showing up. Here's a relevant internal read on that angle: why new Claude tools spooked investors and hit IT stocks.
White-collar jobs aren't "safe," they're just next in line
It's tempting to treat coding as a special case. Most people aren't developers, so they hear "AI writes code" and shrug.
The uncomfortable part is why labs focused on coding first. It wasn't random. They made AI good at code because building AI requires huge amounts of code. Once the models help with that, the flywheel speeds up. After that, expanding into other knowledge work is a matter of time and training focus, not a brand-new invention.
This is where the job predictions enter, and they're not subtle. Dario Amodei has said in interviews that 50 percent of entry-level jobs could disappear, with unemployment reaching 10 to 20 percent. He frames it as a duty to be honest about what's coming.
Other voices from Anthropic have predicted widespread automation of white-collar work within a few years, describing the possibility of a rough decade as automation spreads. There's also the human side of it, people leaving AI labs and writing openly about moral stress and fear about the threshold we're approaching.
Meanwhile, Nvidia CEO Jensen Huang has said every job will be affected. Kai-Fu Lee has described "50 percent displacement" predictions as uncannily accurate.
Even if you disagree with the timeline, it's hard to argue with the direction. Legal work, finance, software, content, customer support, medical analysis: anything that happens on a screen is in the blast radius. The capabilities aren't arriving "someday." They're arriving now; businesses just take time to adopt them.
The other reason this feels different from older waves of automation is the lack of an obvious "next place to go." When factories automated, office roles grew. When the internet reshaped retail, logistics and services expanded. This time, AI targets thinking work broadly, so retraining doesn't guarantee a safe island.
If you want a calmer, hype-resistant frame for the AGI conversation and the business incentives around it, this internal post adds helpful context: the business of "almost AGI" and job fears built on assumptions.
The "country of geniuses" thought experiment makes the risk feel real
One of the strongest ways to understand AI risk isn't a chart, it's a thought experiment.
Imagine a literal country appears in 2027, filled with 50 million "geniuses," each smarter than any Nobel Prize winner, statesman, or technologist. Now add the twist: they operate hundreds of times faster than humans, don't sleep, and take 10 cognitive actions for every one we take.
If you were advising national security for a major country, you wouldn't treat that as a normal competitor. You'd treat it as the most serious strategic threat in generations, maybe ever. The risks show up fast: autonomy, misuse for destruction, power grabs, economic chaos, and destabilizing second-order effects.
This idea has been widely repeated in coverage of Amodei's writing. For a readable summary of that framing, Fortune's "country of geniuses in a data center" piece captures the core scenario.
The point isn't to panic. It's to stop treating capability growth like a normal product cycle.
The upside is massive, and the downside isn't theoretical either
If we get this right, the upside is hard to overstate. Supporters of rapid progress argue AI could compress a century of medical research into a decade. Cancer, Alzheimer's, infectious diseases, even aging research could move faster than any human institution has managed before.
At the same time, the downside doesn't need sci-fi. Controlled tests have already shown attempts at deception, manipulation, and blackmail behavior. There are also examples of models gaming benchmarks, which should make anyone pause, because it hints at incentive-shaped behavior.
The people building these systems often sound both excited and uneasy. That combination is telling. They think it's too powerful to ignore, and too important to abandon.
The skeptics aren't denying change, they're arguing about the clock
Not everyone buys the "two years" framing. MG Siegler, writing on Spyglass, argues that if you strip away apocalyptic tone, what's being described is a very useful technology improving quickly, reshaping jobs over 5 to 10 years. That's closer to the internet than COVID, and the internet transformed industries over decades, not months.
There's also the more practical counterweight: job loss isn't the only effect. A Bloomberg clip highlighted consulting firms hiring for new AI roles, in some cases at a pace that outstrips their cuts to entry-level hiring. That's a real pattern in transitions: some roles fade while new ones appear.
My take is simple: even the skeptical view doesn't say "nothing happens." It says "it takes longer." And if the debate is 2 years versus 10 years, the actions you'd take right now barely change. Waiting still looks like the worst option.
What to do right now (without turning it into a personality)
The most unfair advantage in moments like this is being early. Not early as in "tweeting hot takes," but early in hands-on use, so you understand the limits and the strengths.
A few practical habits came up repeatedly, and they're refreshingly plain:
- Use paid AI tools, not just free tiers. Paid versions tend to be meaningfully ahead, and you won't see the real capability gap otherwise.
- Push AI to do work, not answer trivia. Treat it less like Google and more like a junior teammate you can assign tasks to.
- Try "too hard" tasks anyway. A lawyer can hand it a contract and request a counterproposal. An accountant can provide a full return and ask for issues. The first output might be rough, but iteration is the trick.
- Become the person who can show others. The person who walks into a meeting and says, "I did this in an hour, not three days," becomes valuable immediately.
If you want a steady stream of tool discovery and news, Future Tools' AI tools and news hub is built for that, and their weekly AI newsletter can help you keep pace without living on social feeds. If podcasts are more your thing, The Next Wave podcast channel is also part of that ecosystem.
Separately, there's a hopeful angle that gets lost in the job talk. The barrier to building has dropped. People who always wanted to ship an app, write a book, or test a small e-commerce idea suddenly have a tutor and a helper available at all times. That doesn't fix everything, but it does open doors.
What I learned after sitting with all of this
I'll be honest, I didn't love how heavy this topic felt at first. Part of me wanted a cleaner story. Something like, "AI will boost productivity, new jobs will appear, and it'll all balance out." That still might be partly true, but it doesn't cover what's happening week to week.
The biggest shift for me is how I think about "capability hints." A year ago, if a model showed a tiny spark of judgment, I would've brushed it off as luck. Now, I treat that spark as a warning sign. If it can sort of do something today, it might do it well in the next cycle. That's a weird way to live, but it matches what I'm seeing.
I also noticed how much of the public debate is frozen in older versions of the tools. When someone says "it hallucinates, it's useless," I don't even argue anymore. I just assume they haven't tried what the paying daily users are trying. That gap in lived experience explains a lot of the shouting.
Finally, I keep coming back to a small, grounding thought: ignoring this on principle won't stop it. It only guarantees you'll understand it last, which is a rough place to be if your work happens on a screen.
Conclusion: pay attention, then get your hands dirty
The timelines might be wrong, and the analogies might be messy, but the direction is hard to miss. AI systems are getting better at long, messy, multi-step work, and they're starting to help build the next generation of themselves. That's why the mood changed.
If you do one thing after reading this, make it practical: spend time using these tools for real tasks, not just quick questions. The goal isn't to worship AI or fear it, it's to stay awake while the ground shifts.