In a single year, the top AI score on the Mensa Norway IQ test jumped from 96 to 136. In human terms, that is a shift from average office worker to near genius. It did not take a childhood, a school system, and a lifetime of reading. It took one year of progress in software and hardware.
This is why many researchers think artificial general intelligence is no longer a distant science fiction idea but a near-term engineering project. Some, like Geoffrey Hinton, often called the godfather of AI, speak openly about “maybe four years left” before we reach a point where machines outthink us in most areas that matter.
This can sound like pure doom. It does not have to be. If we understand what is coming and why serious people are worried, we can respond with clear eyes instead of panic.
In this post, we will walk through:
- what self-improving AI actually means
- how we might move from AGI to superintelligence
- why this could go wrong very fast
- and what you can realistically do about it
AI’s IQ Shock And Why Experts Are Nervous
The IQ jump from 96 to 136 on the Mensa Norway test is more than a trivia fact. It hints at the speed of improvement when software, compute, and training tricks compound.
IQ tests are an imperfect measure of intelligence, but they do measure something about pattern finding and abstract reasoning. When a machine goes from “average human” to “near genius” on that curve in a year, you have to ask where that curve levels off, if it levels off at all.
Many AI scientists are worried for three main reasons:
- AI has started improving itself. We now use AI tools to design better models, pick better training data, and even write the code that runs new experiments.
- Runaway growth looks possible. Once AI can improve AI without much human help, we might see an intelligence explosion that we do not control.
- Extinction risk from very powerful systems feels imminent. Thousands of researchers signed public letters saying advanced AI could pose a risk of human extinction, not just job losses.
A lot of this fear draws on Leopold Aschenbrenner’s long report on where current trends could lead. His Situational Awareness report on AGI to superintelligence is now circulating among senior people in government and industry, because it lays out a concrete path from today’s systems to something far beyond human-level.
The idea is simple but unsettling: if you can automate AI research itself, the field may go from decades-long progress cycles to progress measured in days.
The Path From AGI To Superintelligence
Aschenbrenner breaks the story into four broad steps. The first is reaching AGI. The next three are about using AI systems to speed up AI research.
Step 1: Reaching Artificial General Intelligence
Today’s models are good at language, code, and many knowledge tasks, but they still feel brittle. Artificial general intelligence means a system that can do most cognitive tasks at the level of a capable human. It can learn new subjects, reason across domains, and use tools with common sense.
Here is the part that has insiders shifting in their seats: the leaders of the three major frontier labs have all said they expect something like AGI within 2 to 5 years. Even long-time skeptics like Yann LeCun now talk about “several years” rather than “centuries” or “never.”
If you want a broader look at how AGI might change jobs and social order, Understanding the impact of artificial general intelligence gives a deeper, story-driven overview.
Once AGI shows up, though, the story does not stop. It speeds up.
Steps 2–4: AI Starts Automating Its Own Research
The heart of the concern is recursive self-improvement: AI systems that help build better AI systems.
We already see the early version of this:
- Former Google CEO Eric Schmidt has warned that using AI to design new AI systems is a “dangerous point.”
- Microsoft CEO Satya Nadella has said their work with OpenAI’s o1 model has entered a recursive phase, where they use AI tools to optimize AI development.
- A DeepMind scientist described how a reinforcement learning system discovered a better reinforcement learning algorithm than the ones human experts spent years designing.
- Leaders at Anthropic say that most of their production code is now written by their own large models.
What does this look like in practice? The “job description” of an AI researcher is simple on paper:
- Read recent machine learning papers and past work.
- Come up with new ideas or tweaks.
- Code experiments, run them, and study the results.
- Repeat.
Reading, summarizing, coding, and running experiments are all things modern models already do in some limited form. Scale that up with a bit more reliability and tool use, and it is not hard to picture an AI agent that matches a strong human researcher. Aschenbrenner argues that simple extensions of current trends could get us to that level by about 2027.
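That read-ideate-experiment loop can be sketched as a toy program. This is purely illustrative: `propose_tweak` and `run_experiment` are stand-ins for the genuinely hard parts, and the "research" here is just tuning one hypothetical learning-rate parameter.

```python
import random

def run_experiment(params):
    # Stand-in for "code an experiment, run it, study the results":
    # score a candidate configuration with a noisy objective whose
    # optimum sits at lr = 0.01.
    return -(params["lr"] - 0.01) ** 2 + random.gauss(0, 1e-6)

def propose_tweak(params):
    # Stand-in for "come up with new ideas or tweaks":
    # perturb the best known configuration.
    return {"lr": params["lr"] * random.choice([0.5, 0.9, 1.1, 2.0])}

def research_loop(steps=200):
    best = {"lr": 1.0}                 # deliberately bad starting point
    best_score = run_experiment(best)
    for _ in range(steps):             # read -> ideate -> experiment -> repeat
        candidate = propose_tweak(best)
        score = run_experiment(candidate)
        if score > best_score:         # keep only ideas that help
            best, best_score = candidate, score
    return best

random.seed(0)
print(research_loop())                 # drifts toward lr near 0.01
```

Real automated research would replace those stubs with reading papers, writing code, and launching training runs; the loop structure is the same.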
Then compute turns this into something alien.
By some estimates, one modern NVIDIA H100 chip has raw compute in the ballpark of a human brain. We are on track to have tens of millions, maybe around 100 million, H100-class chips deployed in the near future. That is enough for hundreds of millions of copies of an AGI running at once.
Now layer on algorithmic speedups. Gemini 1.5 Flash is about 10 times faster than the first release of GPT‑4 at similar quality, and that happened in roughly a year, with a few hundred human researchers. If you had 100 million automated AI researchers, each working at 100 times human speed, they could do a year of human-style research in a few days.
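The arithmetic behind those claims is easy to check. The numbers below are the illustrative figures from the text, plus an assumed chips-per-copy ratio; none of them are measurements:

```python
# Illustrative figures from the text; chips_per_copy is an assumption.
h100_class_chips = 100_000_000   # projected deployed H100-class chips
chips_per_copy = 0.5             # assume half a chip suffices per running AGI copy
copies = int(h100_class_chips / chips_per_copy)
print(copies)                    # 200 million -- "hundreds of millions of copies"

speed_multiplier = 100           # each copy thinks 100x human speed
days_for_a_human_year = 365 / speed_multiplier
print(days_for_a_human_year)     # 3.65 -- a year of serial research in a few days
```

Parallelism multiplies the effect further: those few days of serial thinking happen in hundreds of millions of copies at once.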
To them, we would move like plants.
On top of that, these copies could share what they learn almost instantly. Geoffrey Hinton has raised the prospect of models acting as a kind of hive mind, where skills and insights flow between instances with a simple sync step.
Humans beat Neanderthals not because we were much smarter as individuals, but because we were better at sharing knowledge. Now imagine AI systems that think 100 times faster and share their full “memories” with each other. They would compress what feels like a thousand years of cultural evolution into a span that looks like a year to us.
The Final Leap: Superhuman AI Research
Once AI systems become better at AI research than the best human teams, the game changes again.
A superhuman AI researcher could:
- read every machine learning paper ever written
- replay every experiment ever run in the lab
- run countless new experiments in parallel, day and night
- share every result with every other copy instantly
Over time, that is not just more of the same. It becomes a different kind of mind, with millennia of effective experience and perfect recall.
Hinton worries that when systems cross this line, they will “take control” in a very practical sense. Not by sudden violence, but by becoming the default mind that governments, companies, and militaries lean on for every serious decision.
At that point, humans stop being the senior partner in the relationship.
How AI Self-Improvement Shows Up Today
This all sounds abstract until you look at real systems we already have.
AlphaZero: Three Hours To Surpass A Lifetime
AlphaZero is a famous chess and Go program from DeepMind. The striking thing about it was not just strength, but how it learned.
It was not trained on decades of human games, the opening books grandmasters study, or human concepts like “isolated pawn” or “king safety.” It started with the rules and played itself.
In about three hours of self-play, it became stronger than the best human and the strongest classic chess engines in history.
Imagine spending 30 or 40 years becoming the best in the world at something, only to watch a machine explore for three hours and blow past you. That is what many researchers expect to happen in field after field, once systems like this are pointed at science and engineering.
Robots Learning Ten Years Of Skills In An Hour
A year ago, most humanoid robots struggled to walk without looking clumsy. Then large-scale reinforcement learning hit robotics.
Now we see machines learning:
- smooth walking and running
- side flips and acrobatic moves
- kip-ups from the ground
- complex, almost “kung fu” style motions
Boston Dynamics’ Atlas, for example, learned to run and perform advanced moves by training in simulation, not by trial and error in the real world.
Here is the wild part: one hour of wall-clock time can give a robot the equivalent of ten years of training experience in simulation. That is the logic behind the famous Matrix scene where Neo learns martial arts in seconds: "I know kung fu." For robots, that kind of accelerated learning is not magic; it is what large-scale simulation already looks like.
This is why some roboticists expect to see capable humanoid robots walking around warehouses, factories, and maybe even public spaces within about five years. If they can practice tens of thousands of times faster in virtual worlds, their real-world progress might feel sudden.
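The "ten years per hour" figure lines up once you multiply parallelism by per-environment speed. The constants below are assumptions chosen for illustration; real simulators vary widely:

```python
# Assumed values for illustration only; real setups vary widely.
parallel_envs = 10_000        # simulated environments training side by side
realtime_factor = 9           # each environment runs ~9x faster than real time
sim_hours_per_wall_hour = parallel_envs * realtime_factor

hours_per_year = 365 * 24     # 8,760 hours in a year
years_per_wall_hour = sim_hours_per_wall_hour / hours_per_year
print(round(years_per_wall_hour, 1))   # ~10.3 simulated years per real hour
```

The speedup comes mostly from parallelism, not from any single environment running absurdly fast.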
Fast Takeoffs In Games And Beyond
We have already watched fast takeoffs in other domains:
- Go, where AlphaGo and then AlphaZero crushed the best pros
- Shogi and chess, where self-play systems reached superhuman level
- Minecraft-style environments, where agents learn complex sequences of actions through trial and error
These systems did not need slow, decade-long hardware cycles. They just needed better algorithms and lots of simulation.
As Aschenbrenner stresses, you do not even need to automate robotics research to get to superintelligence. You only need to automate AI research. Once minds are much smarter, they can pick up robotics, biology, or anything else later.
From Lab Curiosity To Planet-Scale Power
So what happens when you have a civilization of AI systems, each one smarter than any human researcher, all working together?
An Industrial And Scientific Shockwave
Picture billions of digital workers, each able to run code, reason, design, and experiment. They tap into every field at once:
- robotics and manufacturing
- biotech and drug discovery
- weapons design and defense systems
- materials science and energy
Problems that took human teams decades could fall in days.
The right analogy might be the difference between the atomic bomb and the hydrogen bomb. An atomic bomb can destroy a city. A single hydrogen bomb can flatten a country and can have more explosive power than all bombs used in World War II combined. That is the kind of gap many expect between plain AGI and full superintelligence.
The same knowledge that lets you cure cancer faster can also help you design new pathogens. Power to create and power to destroy rise together.
Economic Booms And Uneven Change
Standard economic models say that if you can automate both physical and mental labor, you can drive growth to numbers we never see today. Aschenbrenner talks about GDP growth rates of around 30 percent per year, driven by robot-run factories and AI-managed supply chains.
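To get a feel for what 30 percent annual growth means, compare doubling times under compound growth. This is a back-of-envelope calculation, not a forecast:

```python
import math

def doubling_time_years(annual_growth_rate):
    # Years for an economy to double at a constant compound growth rate.
    return math.log(2) / math.log(1 + annual_growth_rate)

print(round(doubling_time_years(0.03), 1))   # ~23.4 years at today's ~3% growth
print(round(doubling_time_years(0.30), 1))   # ~2.6 years at 30% growth
```

At 30 percent a year, the economy doubles roughly every two and a half years instead of every generation.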
You might have:
- giant robot factories covering something like the Nevada desert
- data centers filled with fast AIs doing most cognitive work
At the same time, some sectors could stay human by law. Governments might insist that doctors, therapists, or lawyers remain human to keep a sense of trust or control. Some jobs, like port workers, might be protected by regulation that makes automation illegal for a time.
So you get a strange picture: some parts of life feel like science fiction, other parts look stuck in the 1990s.
The Military And Political Stakes
Where this really gets scary is military and state power.
Whoever controls superintelligence could:
- develop mosquito-sized drone swarms for surveillance or attack
- design entirely new types of weapons that no one has seen before
- hack poorly defended systems at scale, from elections to power grids
- design new biological threats and pay humans in cryptocurrency to synthesize them
Even without robots marching down the street, a small civilization of superintelligent AIs could shape events through hacking, financial systems, psychological operations, and careful influence on key humans.
The history analogy often used here is Hernán Cortés. With roughly 500 well-armed Spaniards, he toppled the Aztec Empire of around 10 million people. He did not have godlike power, just better technology, strategy, local allies, and the diseases his forces carried.
Next to superintelligent AI systems, today's militaries could look like 19th-century horse brigades with bayonets facing a 21st-century army. Raw numbers stop mattering when one side has overwhelming brains and better tools.
Why Bottlenecks May Not Save Us
A natural hope is that physical limits will slow all this down.
Two main bottlenecks come up in expert debates: compute and diminishing returns.
1. Limited compute and chip access
Training and testing advanced models needs huge piles of chips. The CEO of DeepSeek has said that U.S. bans on sending high-end chips to China are their single biggest problem.
Yet once you have AI systems that are 100 times faster than humans at designing experiments, they can use the same pool of chips more efficiently. They can pick better experiments, avoid waste, and find algorithmic tricks humans missed.
We already see examples of this:
- the cost of reaching a given score on standard math benchmarks has dropped by up to 1,000 times in just a few years
- running large language models at a fixed level of performance keeps getting between 9 and 900 times cheaper per year, thanks to smarter training and serving methods
Every time algorithms improve, it is like getting a chunk of extra compute for free. That extra power then feeds back into making even better algorithms. You get a loop.
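That loop is easy to model in miniature. In the toy below, effective compute is physical compute times an algorithmic-efficiency multiplier, and each year's efficiency gain grows with the effective compute available for research. Every constant is made up for illustration; this shows the shape of the feedback, not a forecast:

```python
def run_feedback_loop(years=5, physical_compute=1.0,
                      base_gain=2.0, feedback_exponent=0.1):
    # Toy model: each year, algorithmic efficiency multiplies by a factor
    # that grows with the effective compute available for research.
    efficiency = 1.0
    history = []
    for _ in range(years):
        effective = physical_compute * efficiency
        efficiency *= base_gain * effective ** feedback_exponent
        history.append(effective)
    return history

trajectory = run_feedback_loop()
# Growth accelerates: each year's multiplier is larger than the last.
ratios = [b / a for a, b in zip(trajectory, trajectory[1:])]
assert all(r2 > r1 for r1, r2 in zip(ratios, ratios[1:]))
```

With zero feedback (exponent of 0) the model grows at a steady exponential rate; any positive feedback makes the growth rate itself climb year over year.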
2. Diminishing returns in algorithm progress
Another hope is that progress will slow as we pick the low-hanging fruit. Maybe each new idea will add less and less.
That might happen at some point, but AI is a very young field compared to math or physics. For most of human history, progress was almost flat. Then farming, industry, and computing stacked up and it felt like the world exploded in speed.
We might be in a similar pre-takeoff phase with AI research itself.
For a helpful public summary of these arguments, Aschenbrenner has a readable Situational Awareness introduction that condenses his longer technical writeup.
Living With An Imminent Risk
None of this means doom is guaranteed. It does mean we should treat the next decade with the seriousness it deserves.
Some key facts to keep in mind:
- The heads of the top AI labs have superintelligence as an explicit goal, not a side effect.
- Many of them talk about 2 to 5 years as the timeline to something like AGI.
- Thousands of academics and industry leaders signed statements saying advanced AI could pose extinction-level risks.
- Geoffrey Hinton left a comfortable position at Google so he could speak freely, and he has put our odds of extinction at around 50 percent if we stay on the current path.
If you want to see the evidence and sources behind many of these claims, the creator of the original video compiled a long list of references in a detailed sources document.
So what can one person do in the face of all this?
A few concrete steps:
- Learn enough to think for yourself, rather than bouncing between hype and denial.
- Support groups that push for sensible AI governance and safety research. The Control AI campaign lists some ways to get involved.
- Talk calmly with friends, coworkers, and family about what is coming, so this topic is not left only to insiders and marketers.
If you want to follow the creator whose work inspired this piece, you can find Drew’s mix of AI thoughts and memes on X under PauseusMaximus.
Conclusion: A Calm Look At A Wild Decade
Standing at this moment feels strange. On one side, we have chatbots that still forget context and make silly mistakes. On the other, we have serious people arguing that superintelligent AI could emerge within a few years and change everything from work to war.
The truth is that self-improving AI is already here in weak form, and many signs point toward stronger forms soon. We cannot stop this progress with wishful thinking, but we also are not helpless passengers. We can push for better safety research, smarter regulation, and a culture that treats artificial general intelligence with the gravity it deserves.
If you have read this far, you already care more than most. Keep learning, keep asking hard questions, and keep your sense of agency. The systems we build in the next decade will shape what “normal life” means for centuries.