The Business of “Almost” AGI: How Pretend Futures Turn Into Real Money

  [Image: a futuristic city skyline at dusk, half glowing in cool blue code and neural network lines, half showing ordinary office workers at desks]


AGI shows up in headlines every week now. One day it is “imminent,” the next day the term itself is “not useful.” If this leaves you feeling confused, tired, or low-key anxious, you are not alone.

In this post, I want to slow things down and look at AGI as a business story, not as science fiction. Why are so many people so sure we are “almost there”? Why does that confidence translate into billions of dollars, even when we cannot agree what AGI actually is?

By the end, you will have a clear picture of:

  • What AI, AGI, and ASI really mean in plain language
  • How the “almost AGI” story is used to justify eye-watering losses
  • Why AGI job fear is built on assumptions, not present reality
  • How investors use the AGI narrative as permission to burn money

Take a breath. We are going to unpack the hype without panic and without doom.

AI, AGI, ASI: Clearing Up The Basics

Before talking about money and fear, we should agree on words. Right now, people mix up “AI” and “AGI” as if they are the same thing. They are not.

There are three broad buckets people use: AI (or ANI), AGI, and ASI.

Narrow AI: The Only Kind We Actually Have

What we really have today is artificial narrow intelligence (ANI). Narrow is the key word here.

ANI systems are very good at one specific thing, in a very specific way. They do not “understand” the world. They pattern match.

Common examples:

  • Text models like Claude or GPT that generate responses based on prompts
  • Voice assistants like Siri that recognize speech well in limited contexts
  • Face recognition that unlocks your phone
  • Recommendation systems on YouTube, Netflix, or TikTok

All of these are “smart tools,” not digital minds. They shine inside their lane and fail hard outside it. As of 2025, this is the only type of AI that exists in practice.

AGI: The Shape-Shifting, Theoretical Idea

Artificial general intelligence (AGI) is not a product. It is a theoretical idea.

Roughly speaking, AGI would:

  • Match human-level intelligence
  • Learn new tasks without being retrained from scratch
  • Reason, plan, and transfer knowledge across domains
  • Handle novel problems it has never seen before

If you want a picture in your mind, think of a movie robot that talks, plans, and adapts across many tasks the way a person could.

The problem is that even among experts, there is no single, agreed-upon definition of AGI. If you read the Wikipedia overview of artificial general intelligence alongside IBM’s overview of what artificial general intelligence is, you will notice slightly different emphases and scope.

Everybody agrees on one thing though: we are not there yet.

ASI: The Pure Science Fiction End Of The Spectrum

Artificial superintelligence (ASI) sits one step further. It is the idea of a system that is smarter than humans in every domain, can improve itself, and races away from us.

This is often tied to “singularity” scenarios. Right now, it is fully hypothetical.

Quick Comparison: ANI vs AGI vs ASI

Type | Simple definition | Does it exist today? | Everyday example
ANI | Great at one narrow task | Yes | GPT-style chatbots, face ID, recommendation engines
AGI | Human-level general intelligence | No | Robots and AIs from sci-fi movies
ASI | Smarter than humans in all areas | No | Superintelligent AI overlords from fiction

Keeping this table in mind will help you spot when someone quietly shifts from “AI” to “AGI” in the same sentence. That shift often hides the trick.

The Confidence Paradox Around AGI

Here is where things get strange.

The closer someone is to actual AI research, the more cautious they are about AGI. Many PhD-level researchers talk about AGI with:

  • No hard timelines
  • Careful language
  • A lot of “we do not know yet”

They do this because the definition itself is vague and almost impossible to measure. How do you even prove “general intelligence” in a machine when we cannot agree on how to measure it in a human?

Now contrast that with the loudest AGI voices in media or on social platforms:

  • They throw around the word “consciousness” without defining it
  • They speak in confident, absolute tones
  • They often cannot explain what they mean by “conscious AI” beyond vibes

The result is a confidence paradox. The people who know the most are the least sure. The people furthest away are the most certain.

That certainty is contagious. Confidence often gets mistaken for credibility, which is great if you are trying to raise money or grab headlines, but not so great if you are just trying to make sense of what is real.

If you want wider context on how human psychology feeds this kind of tech hype, this breakdown of the AI bubble driven by human psychology is a good companion read.

[Image: illustration of a loudspeaker amplifying a small AI brain into a huge shadow]


A Quick Sponsored Detour: When Automation Costs Start To Bite

In my own work, I spend a lot of time doing competitive research. That means scanning product pages, pricing tables, LinkedIn posts, then summarizing everything for my team.

I use tools like n8n to glue APIs, apps, and models into automated workflows. It is amazing, until you scale up. Every workflow has a cost. When you go from 1 to 50 workflows, bills rise fast. Hosting, API credits, everything.

The most cost-effective setup for that kind of heavy automation is to self-host n8n on a virtual private server. That is where Hostinger comes in with its n8n-ready VPS plans and an easy install flow. I like that you pay for the server, not per workflow, and that you can keep everything private on your own box, which helps a lot if you care about GDPR or SOC 2 requirements.
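To make the “pay for the server, not per workflow” logic concrete, here is a tiny cost sketch in Python. Every number in it is a placeholder I made up for illustration, not a real price from n8n, Hostinger, or anyone else, so swap in actual quotes before deciding anything:

```python
# Toy cost comparison: usage-based cloud pricing vs. a flat self-hosted VPS.
# Every number below is a placeholder assumption, not a real price.
workflows = 50
runs_per_workflow_per_month = 1_000
cost_per_run_cloud = 0.002     # hypothetical per-execution price, in dollars
vps_flat_monthly = 15.0        # hypothetical flat VPS plan, in dollars per month

cloud_monthly = workflows * runs_per_workflow_per_month * cost_per_run_cloud
print(f"Usage-based cloud cost: ${cloud_monthly:,.0f}/month")
print(f"Flat self-hosted VPS:   ${vps_flat_monthly:,.0f}/month")
```

The point of the sketch is only that usage-based costs scale with workflow count while a flat server does not; the crossover point depends entirely on your real prices and volumes.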

If you are serious about scaling automations without wrecking your budget, this kind of setup is worth considering while prices are low, especially during Black Friday style promos.

What Real AI Researchers Actually Predict About AGI

If you strip away the hype and look at what well-known researchers say, the story gets much calmer.

A recent example comes from a podcast where Andrej Karpathy was interviewed about AGI timelines. His view was that we are at least ten years away from AGI, and even that number was framed as speculation based on simple trend lines.

That kind of thinking runs into something called extrapolation bias. We see rapid progress for a few years, then our brains draw a straight line into the future and assume the same pace will continue. Reality almost never follows a perfect straight line.
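Here is a tiny, purely illustrative sketch of that bias. The “capability” scores are made up and stand in for no particular benchmark; the point is only how far a straight line overshoots a trend that is already bending:

```python
# Toy illustration of extrapolation bias: fast early gains that then slow down.
years = [2019, 2020, 2021, 2022, 2023]
observed = [10, 25, 45, 58, 66]   # made-up "capability" scores

# Naive straight-line extrapolation from the first two years of rapid growth
slope = (observed[1] - observed[0]) / (years[1] - years[0])
straight_line_2030 = observed[0] + slope * (2030 - years[0])

# What a plateauing trend might deliver instead (toy curve approaching a ceiling)
ceiling = 80
plateau_2030 = ceiling - (ceiling - observed[-1]) * 0.5 ** (2030 - years[-1])

print(f"Straight-line guess for 2030: {straight_line_2030:.0f}")
print(f"Plateauing trend for 2030:    {plateau_2030:.0f}")
```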

You can see similar caution in many expert surveys and timelines. They vary widely and usually include words like “maybe,” “depends,” or “under certain assumptions.” This is very different from media headlines that treat AGI like an overdue Amazon package.

If you want a short, neutral summary of the concept itself, Google Cloud’s explanation of artificial general intelligence is a decent primer. Again, notice how hypothetical and conditional the language is.

[Image: line chart contrasting the hype curve with the slower expert timeline for AGI]


How The AGI Race Justifies Burning Billions

Now we get to the money.

One of the most revealing quotes in this whole story comes from Sam Altman speaking at Stanford. Paraphrased, his point was simple: he does not care if they burn 500 million, 5 billion, or 50 billion dollars per year. As long as they are “making AGI,” it is worth it.

Read that slowly.

The logic goes like this:

  1. AGI is inevitable.
  2. Whoever gets there first will capture unimaginable value.
  3. Therefore, current losses do not matter.

In this frame, large language models plus multimodal add-ons plus some “magic” equals “we are on the path to AGI.” If you accept that, then setting billions on fire every year turns from reckless to visionary.

Look at one example. OpenAI has reportedly projected burning 115 billion dollars through 2029, after already posting a 13.5 billion loss against 4.3 billion in revenue in the first half of 2025. Under normal circumstances, this pattern would cause serious concern. Wrapped in the AGI story, it becomes a brave long-term bet.

Here is the key point: the same behavior that would be called delusion in other industries gets reframed as bold vision when you attach it to AGI.

“Safety” As A Convenient Story For Market Power

Another part of the AGI narrative is about safety. You often hear some version of:

“We must concentrate control of advanced AI in a few trusted hands for safety.”

On the surface, that sounds reasonable. Nobody wants unregulated powerful systems running wild. But this safety framing also does something else. It justifies market concentration.

Here is the problem structure:

  1. AGI is inevitable.
  2. It is dangerous.
  3. So we must control it in a few places.
  4. Therefore, very large companies should run it and smaller ones should not.

If critics say “this looks like monopoly behavior,” the response is “it has to be this way for safety.”

Work from places like the Yale Law & Policy Review has highlighted how this plays out across the AI stack:

  • In semiconductors, Nvidia holds around 92% of the market for data-center GPUs, the chips that power AI training and inference.
  • In cloud computing, AWS, Azure, and Google Cloud control about 63% of the global market.

The AGI safety story helps frame this kind of concentration as protective, not extractive. You can see how that is helpful if you are one of the giants.

[Image: diagram of the AI stack showing chips, cloud, and models, with concentration highlighted at each level]


Manufactured Urgency: AGI As A Fundraising Story

The more hypothetical the concept, the easier it is to stretch.

A Fortune piece noted that one of the triggers for AGI’s “fall from grace” was the lukewarm reception of GPT-5. The model launched with massive expectations and, in many eyes, landed with a thud. The jump from GPT-4 to GPT-5 did not match the mental line people drew in their heads.

Even so, the AGI label continues to be a golden ticket. A few examples:

  • Safe Superintelligence Inc., founded by Ilya Sutskever, reportedly raised 2 billion dollars at a 12 billion valuation without a working product or revenue.
  • Thinking Machines Labs secured billions based on AGI promises alone.

This is not about attacking any particular founder. In fact, you can respect the technical work and still point out the business pattern. The pattern is this:

“We are building safe superintelligence / AGI” → “This is a once-in-history race” → “So it makes sense to pour in billions early.”

The urgency is not organic. It is manufactured around a concept that does not yet have a clear, testable definition.

AGI And Jobs: Why The Fear Feels Bigger Than The Facts

A huge part of the AGI story lives in our heads as job fear. You see comments like, “You are so cynical now, but wait till AGI takes your job.” That kind of line pushed me to sit down and read through several “future of work” and AGI-related reports.

Here is what I found when I looked closely.

The Famous “300 Million Jobs” Number

One of the most quoted numbers comes from reports by Goldman Sachs and the IMF. They estimate that around 300 million jobs globally could be affected by AI, roughly 9.1% of the global workforce.

300 million sounds huge, and it is. But context matters. Let us stack that against previous crises.

Event | Jobs lost / affected | Peak unemployment | Recovery pattern
Great Depression (US) | 15 million jobs | 25% | 4 years to peak, 10+ years total
2008 financial crisis | 27 million jobs globally | 15 million unemployed in US | 5–7 years with wage stagnation
COVID-19 shock | 33 million more unemployed in 1 year | ~13% in US, 6.5% global | K-shaped: high earners gained, low earners lost

On top of that, in a normal year in the US, you see around 50 million job separations. People leave, get fired, switch jobs. Those roles are usually replaced by new hires. There is constant churn.

The AGI fear story often skips this context. It treats the 300 million as an abrupt, permanent shock where jobs vanish and never come back.
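To put that headline number in proportion, here is a quick back-of-envelope calculation. It only restates the rounded figures above, so treat it as context rather than precision:

```python
# Back-of-envelope context for the "300 million jobs" figure.
# All inputs are the rounded numbers quoted above, not fresh data.
jobs_affected = 300_000_000          # Goldman Sachs / IMF estimate, global, unspecified horizon
share_of_workforce = 0.091           # roughly 9.1% of the global workforce
us_annual_separations = 50_000_000   # typical US job separations in one normal year

implied_global_workforce = jobs_affected / share_of_workforce
print(f"Implied global workforce: ~{implied_global_workforce / 1e9:.1f} billion people")
print(f"US job separations in one normal year: {us_annual_separations / 1e6:.0f} million")
```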

The Hidden Assumption: AGI Already Exists

Here is the real catch. When you read the methods behind those reports, the displacement they describe assumes AGI-level capabilities.

They imagine systems that:

  • Perform full cognitive work across entire occupations
  • Operate without human supervision
  • Replace humans at scale across many tasks

That is not the ANI we have today. That is closer to the AGI definition that research articles such as IBM’s overview of artificial general intelligence describe as hypothetical.

In other words, a lot of job loss scenarios do not describe what current generative AI can do. They describe what a future AGI might do if it existed and if it were deployed in that way.

What The Labor Data Actually Shows So Far

Now let us look at what we can measure instead of what we imagine.

A recent analysis from Brookings and the Yale Budget Lab looked at labor market data in the 33 months since ChatGPT’s release. Their conclusion was calm and clear:

  • There was no detectable disruption at the level of the whole economy.
  • You can see local effects and adjustments, but not a systemic earthquake.

At the same time, headlines focused on tech layoffs and often blamed AI. The reality is more boring: traditional economics like inflation, interest rates, and restructuring decisions still drive layoffs far more than AI tools do.

So we end up in a strange place.

  • There is no solid evidence of large-scale AI job disruption yet.
  • But we are already anxious about hypothetical AGI that does not exist.

The mismatch between data and fear is what I think of as the anxiety–reality gap.

[Image: bar chart comparing media headlines about AI layoffs with a flat labor-disruption line]


How The AGI Story Buys “Permission To Burn”

To really see how unusual the AGI funding pattern is, it helps to compare it to normal AI SaaS companies.

Take AI SaaS startups like Perplexity. These companies often trade at very high revenue multiples. For Perplexity, for instance:

  • Valuation: 18 billion dollars
  • Revenue: 300 million dollars
  • Multiple: 60x revenue

That is already very generous. Investors are paying 60 dollars for each 1 dollar of current yearly sales. Still, at least there is a product and a clear business model. Some of these companies, like Cursor and Midjourney, are already profitable with relatively small teams. Midjourney is even bootstrapped.

Now look again at the AGI-focused funding world.

  • OpenAI revenue: around 15 billion dollars
  • Annual loss: about 8 billion dollars (around 53% of revenue)
  • Monthly burn: about 78 million dollars
  • Funding raised (2025): 40 billion dollars
  • Valuation: around 300 billion dollars

These numbers describe a company that is not just burning money, but burning it with explicit permission from investors because of what they believe about AGI.
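If you want to sanity-check the two ratios above yourself, here is a minimal back-of-envelope check using the same rounded figures, which are reported estimates rather than audited financials:

```python
# Sanity check on the two ratios quoted above (rounded, reported estimates).
perplexity_valuation = 18e9   # dollars
perplexity_revenue = 300e6    # dollars per year
openai_revenue = 15e9         # dollars per year
openai_annual_loss = 8e9      # dollars per year

revenue_multiple = perplexity_valuation / perplexity_revenue
loss_share = openai_annual_loss / openai_revenue

print(f"Perplexity valuation multiple: {revenue_multiple:.0f}x revenue")
print(f"OpenAI annual loss as a share of revenue: {loss_share:.0%}")
```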

AGI is not a product category here. It is a narrative that unlocks tolerance for extreme losses. AI SaaS gets rewarded with high multiples. AGI gets rewarded with time, patience, and forgiveness that almost no other sector would ever see.

If you want a more grounded view on how AI is (and is not) producing financial returns right now, this breakdown of the AI ROI reality check for 2025 helps put those numbers into perspective.

Also Read: Grok 4.1 Just Dropped: How xAI Quietly Took Over the AI Charts

So Where Does That Leave Us With AGI?

Pulling all of this together, here is the picture that emerges.

  • AI (ANI) exists, is useful, and is already woven into many tools.
  • AGI is a moving target, still undefined in a rigorous way, and does not exist yet.
  • The people closest to the research are the most cautious about timelines.
  • The loudest hype comes from outside that circle and often ties AGI to money, monopolies, or fear.
  • The current labor data does not show a large-scale AI disruption so far, yet job anxiety is high.
  • The AGI narrative gives large players permission to burn through extraordinary sums in the name of a hypothetical future.

In my view, AGI as a concept today lives somewhere between magical thinking and a Terminator script. It is not that progress is fake. It is that the story about progress has run far ahead of what we can actually measure.

The irony is hard to miss. The closer someone is to real cutting-edge research, the quieter and more careful they are. The further away someone is, the easier it is to be intimidated by grand claims made in confident tones.

So maybe the most practical move right now is simple:

  • Treat narrow AI as powerful tools, not gods.
  • Question any claim that uses AGI to justify unlimited spending or permanent fear.
  • Keep an eye on real labor and productivity data, not only on headlines.
  • Give yourself permission to worry less about something that, for now, does not exist.

Hype cycles come and go. Bubbles inflate and deflate. Human nature, with all its hopes and fears, is the constant. If you can see the pattern, you can step out of the panic and make calmer, better choices.

Thanks for reading. If this helped you feel a little more grounded about AGI, share it with someone who is feeling overwhelmed by the noise.
