Artificial intelligence is hungry for power. Data center operators are planning new power plants, even nuclear ones, just to keep up. Into that world steps a new company called Extropic, claiming its chips can be up to 10,000 times more energy efficient than top Nvidia GPUs for certain AI tasks.
They say they can pull this off by flipping how we think about computers. Instead of fighting the natural thermal noise in hardware, they use it. Their approach, called thermodynamic computing, treats the tiny random shakes inside matter as a feature, not a bug.
In this post, you’ll see what that means in plain language, how their new “magic coin” computer parts work, where the 10,000x number comes from, what is real today, and what it could mean for AGI and everyday devices if it all scales.
If you want a deeper technical intro from the source, Extropic has a helpful overview in their article on thermodynamic sampling units.
From Noise As Enemy To Noise As Tool
Everything around you has a little shake in it. Atoms jiggle. Electrons wobble. That constant microscopic motion is thermal noise. It is why things have temperature.
Traditional computers treat that noise like a problem. They spend a huge amount of effort and energy keeping it out so that a 1 is a perfect 1 and a 0 is a perfect 0. Cooling systems, clean power, careful circuit design, all work to silence that jitter.
You can picture a GPU as a very stressed accountant in a soundproof office. The whole building is built so that no one distracts them while they crunch numbers.
Extropic flips that mindset. Instead of building a soundproof office, they ask a different question: what if the noise is the answer? What if the random shake itself becomes the core tool? That is the idea behind thermodynamic computing, and it forced them to create a brand new building block for computers.
Why Traditional Computing Burns So Much Energy
To understand why this matters, it helps to look at how computers treat randomness today.
Normal chips:
- Fight thermal noise with cooling and strict design rules.
- Use extra circuitry to create randomness when they need it.
- Spend energy calculating probabilities instead of letting physics handle it.
Extropic’s approach:
- Treats noise as useful raw material.
- Lets particles flicker in a controlled random way.
- Tries to let the hardware relax into the answer, instead of forcing it with heavy math.
Temperature here is not just a reading on a sensor. It is the physical jitter that current chips try to avoid. Extropic is trying to build a computer that speaks that jitter as its native language.
The Core Invention: Probabilistic Bits, Or P-bits
Inside every normal computer is a simple building block: the bit. A bit is like a light switch. It is either on or off, 1 or 0, and that is it. Your phone, your laptop, and massive data center GPUs all boil down to billions of these switches flipping in very structured ways.
Extropic introduced a different kind of building block called a probabilistic bit, or p-bit.
Think of a p-bit as a programmable magic coin instead of a switch.
You can tell this coin:
“I want you to land on heads 70 percent of the time and tails 30 percent of the time.”
Then you flip it millions of times. Over many flips, it behaves like a 70/30 coin.
The wild part is that you can then say, “Now be 50/50.” Or “Now be 99 percent heads and 1 percent tails.” And the coin just does it. No hidden trick, no slow recalibration. The probability is part of the coin’s nature, and you can tune it on demand.
In Extropic’s picture, the universe’s natural wobble is what makes that magic coin flip. The p-bit constantly flips on its own, driven by thermal noise, but it spends more time in 1 or in 0 according to the probability you set.
Here is why that is such a big change compared to today’s chips.
Modern GPUs need fake coins. To act like that 70/30 coin, a GPU:
- Runs a complex algorithm to generate a random number.
- Checks whether that number falls inside the 70 percent range.
- Outputs a 1 or 0 based on that check.
The GPU pretends to be random by doing lots of math. That math costs time and energy.
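To make those three steps concrete, here is a minimal sketch of the software route in plain Python. The function name `fake_coin` is ours, for illustration; a real GPU would run a heavier pseudo-random generator across thousands of threads in parallel:

```python
import random

def fake_coin(p_heads: float) -> int:
    # A software "fake coin": spend compute generating a
    # pseudo-random number, then compare it against the bias.
    r = random.random()              # pseudo-random draw in [0, 1)
    return 1 if r < p_heads else 0   # inside the 70 percent range -> heads

flips = [fake_coin(0.7) for _ in range(1_000_000)]
print(sum(flips) / len(flips))       # converges to roughly 0.7
```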
A p-bit, in contrast, does not calculate randomness. It just is randomness. The tiny physical noise inside the circuit makes it flicker between 0 and 1 all by itself. The only work you do is to nudge how often it prefers one side or the other.
The Extropic team designed tiny circuits, made from normal silicon building blocks, that behave this way. They flip themselves millions of times per second. The energy they use to keep flickering is described as “like a whisper”.
Some key traits of a p-bit:
- Tunable probability: You can set it to prefer 1 or 0 with any bias, like 70/30 or 99/1.
- Instant retuning: You can change that bias very quickly, without reconfiguring heavy circuitry.
- Physical randomness: It relies on thermal noise, not complex pseudo-random algorithms.
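For intuition, here is a toy software model of a single p-bit. It assumes the standard sigmoid bias description used in the p-bit literature, not Extropic's exact circuit equations, and the randomness is simulated; in hardware it would come from thermal noise for free:

```python
import math
import random

def pbit_sample(h: float, temperature: float = 1.0) -> int:
    # Toy stand-in for one p-bit: a bias "knob" h tilts how often
    # the bit lands on 1. In real hardware the flickering comes
    # from thermal noise; here we fake it in software.
    p_one = 1.0 / (1.0 + math.exp(-h / temperature))
    return 1 if random.random() < p_one else 0

# Retune the same "coin" on demand: h = 0 is 50/50,
# h ~ 0.85 is roughly 70/30, h ~ 4.6 is roughly 99/1.
for h in (0.0, 0.85, 4.6):
    flips = [pbit_sample(h) for _ in range(100_000)]
    print(f"h = {h:+.2f} -> fraction of 1s = {sum(flips) / len(flips):.2f}")
```

Notice that retuning is just changing a number. Nothing heavy gets recalibrated, which mirrors the "instant retuning" trait above.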
One p-bit like this is a neat trick. The real question is what happens when you have thousands or millions of them and you let them talk to each other.
P-bits vs Regular Bits At A Glance
Here is a side-by-side view of how a traditional bit compares to a p-bit.
| Feature | Regular Bit | P-bit (Probabilistic Bit) |
|---|---|---|
| State behavior | Fixed 0 or 1 | Constantly flickers between 0 and 1 |
| Control | Forced to exact state | Tuned to spend more time as 0 or 1 by probability |
| Randomness | Needs math to fake randomness | Uses built-in physical thermal noise |
| Energy for randomness | High, due to calculations | Very low, randomness is essentially free |
| Use in AI | Calculates probabilities | Physically represents probabilities |
The important idea is that p-bits do not just output random bits. They physically embody the probability itself.
Building A New Kind Of Processor: Thermodynamic Sampling Units
Once you can build p-bits, the next step is to connect many of them. When you wire thousands or millions of these “magic coins” into a network and let them influence each other, you get a new type of processor.
Extropic calls these processors thermodynamic sampling units, or TSUs.
If the GPU was the stressed accountant from earlier, you can think of a TSU as the "Zen master's brain." The p-bits are like neurons that stochastically flip, each one feeling its neighbors, settling into patterns that represent answers to problems.
For a few years, this was mostly theory backed by small experiments. A handful of researchers proved they could make p-bits behave in silicon. They ran room-scale lab setups and early prototypes that showed the physics works.
Then, in October 2025, Extropic published research that made people sit up. They claimed that, for a particular AI-style task, their thermodynamic computing approach could match the job of a top-tier AI GPU while using around 10,000 times less energy.
To understand how large that factor is, consider:
- If your phone battery lasts 1 day, 10,000 times longer would be about 27 years.
- If your monthly electricity bill is $100, 10,000 times less would be 1 cent.
- If a 5 hour flight scaled the same way, it would take less than 2 seconds.
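Those comparisons are easy to sanity check with three one-liners:

```python
# Quick arithmetic behind the 10,000x comparisons above.
print(10_000 / 365.25)        # battery: about 27.4 years instead of 1 day
print(100 / 10_000)           # bill: $0.01 instead of $100
print(5 * 3600 / 10_000)      # flight: 1.8 seconds instead of 5 hours
```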
That is not a small optimization. It is not a 10 percent speedup or a 2x gain. It is more like the difference between a horse and a spaceship.
Extropic’s team puts this number in the context of the AI energy crunch. The industry is already discussing power buildouts on the scale of entire national grids just to feed future AI models. A 10,000x improvement in energy per thought could change that story.
To move beyond simulations, Extropic also built real silicon.
- X0: Their first prototype TSU chip, described as a simple device with dozens of probabilistic circuits. It runs at room temperature and shows that these p-bit primitives can be built and controlled reliably.
- XTR0: A desktop development kit that holds two X0 chips. It lets researchers test hybrid algorithms, where traditional CPUs or GPUs work alongside thermodynamic sampling units.
Extropic plans to make XTR0 available to selected early access partners, giving labs and companies a way to play with thermodynamic computing on a desktop device, not in a complex lab setup.
From Theory To Z1
Extropic’s progress line looks like this:
- Early years: Conceptual work and room-scale, cryogenic experiments.
- X0: First compact silicon chip with p-bit circuits, running at normal room temperature.
- XTR0: Desktop device with two X0 chips for testing hybrid workloads.
- Thermal: An open source Python library that simulates TSUs on existing GPUs, so developers can start writing algorithms now.
- Z1: The first commercial scale TSU, planned to feature about a quarter million interconnected p-bits in one chip.
The idea is that Z1 chips will be used in systems that chain many TSUs together, reaching millions of p-bits in a dense, power-efficient package. Those systems are intended to run energy-based models, a class of machine learning models that work with probability and energy states in a way that matches how TSUs physically behave.
How Thermodynamic Computing Generates Images
To show what their hardware concept is good at, Extropic focused on something modern AI is already known for: generating images.
They designed a new kind of AI model called a denoising thermodynamic model, or DTM, to run on thermodynamic sampling units. The name hints at what it does: it takes noisy inputs and denoises them into meaningful outputs.
A simple analogy helps here. Think of an old TV that is not tuned to any channel. The screen shows a storm of black and white dots. That is noise.
Now imagine that hidden inside that static is a real image, say a picture of a shoe. Your job is to slowly clean up the static until the shoe appears. You would look at patterns of dots and gradually adjust them until a clear picture emerges.
Modern generative image models, like diffusion models, do something very similar. They:
- Start from pure noise.
- Repeatedly run heavy math to slightly clean up the noise.
- After many steps, end up with a clear image.
On a GPU, every step involves a vast number of calculations. For each pixel, the GPU has to compute the probability of different colors given all the surrounding pixels, then update it. That is why diffusion inference gets called a math nightmare: it is energy intensive, and the work scales badly as images and step counts grow.
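In code, the outer loop of a diffusion-style generator is deceptively simple; the cost hides inside the network call. Here is a schematic sketch, where `denoise_step` is a placeholder of our own, not a real model:

```python
import numpy as np

def denoise_step(x: np.ndarray) -> np.ndarray:
    # Stand-in for one pass of a trained denoising network. In a
    # real diffusion model this single call is billions of
    # multiply-adds on the GPU.
    return 0.9 * x

x = np.random.randn(64, 64)   # start from pure noise
for _ in range(50):           # many sequential cleanup passes
    x = denoise_step(x)       # each pass slightly cleans the noise
# With a real trained network, x would end up as a clear image.
```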
Extropic’s TSU takes another route.
Gibbs Sampling And Letting Patterns Settle
The DTM running on a TSU uses an algorithm called Gibbs sampling. The name sounds technical, but the core idea is simple.
Imagine a huge crowd of people in a stadium. Each person holds a coin that can be heads or tails. They can only see the few people closest to them, not the whole crowd.
You give each person one rule:
“Try to agree with your neighbors.”
If most neighbors show heads, a person makes their own coin more likely to be heads. If most show tails, they lean towards tails. No one orders them exactly what to do; they just nudge their own behavior to be a bit more like the crowd they see.
In Extropic’s hardware:
- Each p-bit is like a person in the crowd.
- The “neighbors” are other p-bits connected to it in the circuit.
- The “coin” state is the 0 or 1 the p-bit is currently in.
- The rule comes from the model being trained.
The beautiful part is that there is no central boss. No master processor is micromanaging every p-bit. Instead, each p-bit flips randomly according to its probability and the influence of its neighbors.
Over time, the whole network settles into a stable pattern. That pattern, in the image example, is the denoised picture.
You can think of it like a ball rolling into a valley. The ball does not need a map or GPS. Physics pulls it toward the lowest energy state. The TSU behaves similarly. It naturally relaxes into the most probable pattern, according to the constraints encoded in the network.
A rough step-by-step of thermodynamic denoising looks like this:
- Start with a grid of p-bits in random states, like static on a TV.
- Each p-bit reads its neighbors.
- It adjusts how likely it is to be 0 or 1, nudged by that local context.
- The grid repeats this many times, with p-bits constantly flipping.
- A stable pattern appears that matches a meaningful image.
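Here is a tiny runnable version of that loop, using a classic Ising-style grid of ±1 sites as a stand-in. This simulates in software what a TSU would do physically; the model and parameters are illustrative, not Extropic's DTM:

```python
import math
import random

def gibbs_sweep(grid, coupling=1.0, temperature=1.0):
    # One Gibbs-sampling sweep: every site looks only at its four
    # neighbours and resamples itself with a probability that
    # favours agreeing with them. No central boss coordinates this.
    n = len(grid)
    for i in range(n):
        for j in range(n):
            field = coupling * (
                grid[(i - 1) % n][j] + grid[(i + 1) % n][j]
                + grid[i][(j - 1) % n] + grid[i][(j + 1) % n]
            )
            # Standard Ising conditional: p(up) rises with the field.
            p_up = 1.0 / (1.0 + math.exp(-2.0 * field / temperature))
            grid[i][j] = 1 if random.random() < p_up else -1

n = 32
# Start from pure "TV static": every site random.
grid = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
for _ in range(200):
    gibbs_sweep(grid)   # the pattern gradually settles into domains
```

On a TSU, every `random.random()` call here would be replaced by free physical noise, and the sites could all update in parallel. That is where the claimed energy savings come from.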
In Extropic’s research, they simulated a small TSU that generated simple, tiny black and white images of clothing items: T-shirts, shoes, and bags. These are standard benchmarks in generative modeling, not glossy art pieces.
They then compared how much energy a future Z1-like TSU would use for the same task against a very efficient GPU running the best known algorithm for that benchmark.
The result they reported: the TSU approach could use around 10,000 times less energy.
Other researchers have looked at their math and confirmed that the calculations in the paper are correct. The claim is not a marketing slide; it is derived from physics-based models and accepted energy estimates.
Why This Fits Generative AI So Well
Generative AI systems, whether text models like ChatGPT or image models like Midjourney, are all working with probability distributions. They model “What is the most likely next word?” or “What is the most likely pixel pattern for this prompt?”
TSUs are built to sample from probability distributions directly. That is their native job.
Extropic highlights energy-based models as a strong match. These models represent patterns as low-energy states in a landscape. The TSU’s physics mirrors that idea, since the device literally relaxes into low-energy patterns when left to run.
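The textbook energy-based model formulation makes this concrete: every state x gets an energy E(x), and its probability is proportional to e^(-E(x)/T), so low-energy states dominate. A tiny enumeration over 4-bit states, with a toy energy function of our own:

```python
import math
from itertools import product

def energy(x):
    # Toy energy function (illustrative only): neighbouring bits
    # that agree lower the energy, disagreement raises it.
    return -sum(1 if a == b else -1 for a, b in zip(x, x[1:]))

states = list(product((0, 1), repeat=4))
weights = [math.exp(-energy(s)) for s in states]   # Boltzmann weights, T = 1
Z = sum(weights)                                   # normalisation constant
# The lowest-energy states, all-0s and all-1s, are the most probable.
for s, w in sorted(zip(states, weights), key=lambda t: -t[1])[:3]:
    print(s, round(w / Z, 3))
```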
So while today’s champions are transformers and diffusion models on GPUs, thermodynamic computing could favor a different class of models that are more natural on this new hardware.
Reality Check: What’s Real, What’s Hype
A claim like 10,000x energy efficiency sounds almost too good. Extropic seems aware of that and spends time on the caveats.
On the positive side:
- They are not just talking. They have published research in the style of a peer-reviewed paper, with math that others can check.
- They have released Thermal, an open source Python library, so people can simulate TSUs on existing GPUs.
- An independent researcher has already checked parts of their work and said the math lines up.
- They have built real hardware, not only models. X0 and the XTR0 kit exist as physical devices.
However, there is a big but.
The 10,000x number comes from:
- A simulation, not a full Z1 chip in a data center rack.
- A simple benchmark, small black and white images of clothing, not photo-realistic scenes or complex multimodal tasks.
The video uses a nice analogy: this is like a rookie baseball player in a batting cage. A pitching machine feeds balls, and the rookie hits one so hard it breaks the sound barrier. You know that the raw power is there, but that is not the same as winning the World Series under pressure.
Extropic’s current chip, X0, is also a test chip. Think of the Wright brothers’ first plane. It proved humans could fly, but it did not carry people across the Atlantic. In the same way, X0 proves that p-bits and TSU primitives work in silicon and at room temperature, but it will not run huge modern AI models.
The planned Z1 is the first chip that might get closer to that, with hundreds of thousands of p-bits and integration into larger systems. That is still in the design and build pipeline.
There is another important catch. You cannot just take a trained transformer or diffusion model that runs on a GPU and drop it onto a TSU. The architectures are too different. The “accountant” and the “Zen master” speak different languages.
That means:
- New algorithms must be designed for TSUs, like Extropic’s denoising thermodynamic models.
- A whole new research field has to grow around this hardware, much like how GPUs led to new deep learning methods.
- Tooling, libraries, and developer skills all need time to mature.
So no, you should not expect a thermodynamic chip in your next phone giving you a 30 year battery life. This tech is at the “first successful flight” stage, not the “affordable global airline” stage.
A simple way to frame the current situation:
Pros:
- Strong theoretical grounding.
- Verified math on benchmarks.
- Real silicon prototypes at room temperature.
- Open source tools (Thermal) for early algorithm work.
Cons:
- Key results are from simulations, not full production chips.
- Benchmarks use tiny, simple datasets.
- Requires new AI models, not a simple port of current ones.
- Scaling up manufacturing and systems will take years.
The upside is huge, but the path is long.
The Future If Thermodynamic Chips Scale
Now comes the bigger question: if Extropic and others succeed at scaling thermodynamic computing, what kind of world does that create?
The first impact is on the AI energy crisis. Today, the plan is mostly to scale the same computing style, just with more chips and more power plants. Some data center operators are planning their own nuclear facilities to meet AI demand.
Thermodynamic computing attacks the other side of the equation. Instead of only increasing energy supply, it increases how many thoughts per watt you can get. It is about packing more intelligence into the same matter.
If you can get 1,000x or 10,000x more effective AI work per unit of power, then:
- Massive data centers would need far less energy to train and run strong models.
- Smaller devices could host robust AI locally instead of tethering to the cloud.
- Power constrained settings, like remote villages or satellites, could still use advanced AI.
Some possible real world effects:
- Everyday devices: Your phone, car, or AR glasses could run a full-strength AI assistant locally, not a trimmed down version. It could know you deeply, help you think, create, and learn all day, without killing your battery.
- Global health: A doctor in a remote clinic could use a handheld device that contains the practical medical knowledge of the world. No need for a thousand mile fiber link to a power hungry data center. The chip inside would sip energy while running heavy probability models for diagnosis.
- Science and engineering: TSUs could help simulate how new drugs behave inside the body, or how new materials perform under extreme pressure. These are inherently probabilistic tasks, with trillions of possible interactions. Thermodynamic computing is built to search those spaces by sampling.
There is also a more philosophical angle. Today’s AI is very math heavy and brittle in some ways. It is like a vast calculator, powerful but still mechanical. A computer that is designed around physical randomness, probability, and energy landscapes might feel different.
Extropic hints at a possible shift from “accountant” AI to more intuitive AI. Less like a rule follower and more like an artist or a human brain that leans on noise in a useful way.
Of course, big GPU makers are not sitting still. They are making their chips more efficient every year. But that is more like polishing a horse-drawn carriage. Extropic’s thermodynamic hardware aims for a spaceship. Both move you, but on very different curves.
We may be at a fork in the road:
- One path: AI stays a luxury, controlled by a few big corporations, powered by huge plants, draining grids.
- The other path: AI becomes like air, cheap and everywhere, a tool that lifts many people, not just a few.
Extropic’s technology does not guarantee the second path, but it opens up a way to get there by changing how we convert energy into intelligence.
What This Could Mean For AGI And The AI Race
The titles of the video and of this post talk about AGI arriving early. That is not only about smarter algorithms. It is also about the physical cost of thinking.
Brains are astonishingly energy efficient. Your brain runs on about the power of a dim light bulb, yet it handles speech, vision, planning, creativity, and more.
Thermodynamic computing tries to draw from similar physical principles. Instead of forcing silicon to act like perfect, deterministic switches, it lets circuits behave more like a noisy physical system that finds likely answers by relaxing.
If that approach works and can scale:
- You can pack far more effective intelligence into each rack of hardware.
- That makes it cheaper to train and run AGI-level systems.
- It also makes it easier to distribute strong AI more widely, not keep it locked in a handful of labs.
Extropic’s open source Thermal library is one way developers can start exploring this space now, even before Z1 ships. It lets you write and test thermodynamic algorithms on today’s GPUs so that, when TSUs arrive, the software is not starting from zero.