In October 2025, a research team at Fudan University in Shanghai quietly published something that could reset the entire chip industry. They built a fully functional flash memory chip from a two-dimensional material only a few atoms thick, reached a 94% manufacturing yield, and showed a path to mass production in as little as five years.
This is not a lab demo that works once under a microscope. It is a programmable, full-featured memory chip that plugs into standard silicon. Paired with a record-breaking 400 picosecond memory device, it points to a future where storage is no longer the bottleneck for AI, gaming, or anything else that touches data.
If you care about AI speed, battery life, or the global chip race, this story matters to you.
The Memory Wall That Has Been Slowing Everything Down
For decades, tech companies kept our devices improving by shrinking transistors and stacking more memory. That path is running into physics.
Most of the storage in your life, from phone flash to SSDs in data centers, comes from NAND flash memory. Engineers like to compare it to a city of skyscrapers. Early chips were like a city filled with short buildings. When they needed more storage, they did not spread out, they built up.
Today, chips from companies like Samsung stack more than 200 layers of memory cells. Each extra layer adds more capacity, but it also creates a bigger traffic problem. Electrons, which carry your data, have to move through tiny vertical channels, like crowded elevators in a super tall tower. As the buildings get higher, those elevator rides get slower and less reliable.
That creates the memory wall:
- Your CPU is a lightning-fast chef.
- Memory is the pantry at the end of a long hallway.
- The chef spends more and more time waiting for ingredients.
High resolution video, open-world games, AR and VR, and especially modern AI models all hammer memory. A model like GPT-4 has hundreds of billions of parameters, and every training step shuffles those numbers through memory. The processors are fast, but they spend much of their time waiting for data to arrive.
The other half of the problem is physics at tiny scales. As engineers shrink memory cells down to just a few nanometers, the rules change. Electrons start slipping through barriers that should block them, a quantum effect called tunneling. You can think of it as ghost electrons walking through walls. For flash memory, which depends on trapping electrons to store a 0 or a 1, that is catastrophic.
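To see why thinner barriers are so dangerous, you can run the textbook estimate yourself. The sketch below uses the standard WKB approximation for tunneling through a rectangular barrier; the barrier height and widths are illustrative values I chose for this example, not measurements of any real flash cell.

```python
import math

# Rough WKB estimate of electron tunneling through a rectangular barrier:
# T ~ exp(-2 * kappa * d), with kappa = sqrt(2 * m * phi) / hbar.
# A 3.1 eV barrier is a common textbook figure for a silicon oxide;
# the widths below are illustrative only.

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electron-volt

def tunneling_probability(width_nm: float, barrier_ev: float = 3.1) -> float:
    """Order-of-magnitude probability of tunneling through the barrier."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay rate, 1/m
    return math.exp(-2 * kappa * width_nm * 1e-9)

thick = tunneling_probability(5.0)  # older, thicker oxide
thin = tunneling_probability(2.0)   # aggressively scaled oxide

print(f"5 nm barrier: T ~ {thick:.1e}")
print(f"2 nm barrier: T ~ {thin:.1e}")
print(f"leakage grows by a factor of ~{thin / thick:.1e}")
```

The exponential in that formula is the whole story: shaving a few nanometers off the barrier multiplies the leakage by many orders of magnitude, which is why trapped charge stops staying trapped.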
The result:
- Higher error rates.
- Shorter memory lifetimes.
- More wasted energy as heat.
- Ever more expensive factories.
Each new advanced fab can cost tens of billions of dollars, for only modest gains. Everyone knew we would need something beyond silicon, but no one had a practical path.
Until the CY1 chip appeared.
What Comes After Silicon? Meet Fudan’s CY1 2D Flash Chip
In a 2025 Nature paper, researchers led by Liu Chunsen and Zhou Peng at Fudan University described what they call the CY1 chip. It is the first fully functional two-dimensional NOR flash memory chip integrated on top of standard silicon.
They built it with a process they named ATOM2CHIP, which connects an atom-thick memory layer to a normal CMOS logic chip underneath.
Independent coverage from sources like Tom’s Hardware and TechXplore confirmed the main result: this is not a single device on a bench. It is a complete memory system with instruction support, parallel operations, and yields close to commercial chips.
The 2D Material That Makes It Possible
You have probably heard of graphene, a single layer of carbon atoms. Fudan’s team used a different 2D material called molybdenum disulfide (MoS₂), part of a family known as transition metal dichalcogenides.
Why MoS₂?
- It can form layers only a few atoms thick.
- It has good electronic properties for switching and data storage.
- It works at practical voltages and temperatures.
In a 2D sheet like this, electrons can zip through with very low resistance. That means less wasted energy, less heat, and the potential for extremely fast switching.
But none of this would matter if you could not connect that delicate sheet to real chips.
Why 2D Chips Were “Impossible” Until ATOM2CHIP
For years, 2D materials were stuck in research labs. The problem was simple to describe and hard to solve.
Even the smoothest silicon wafer looks like rough terrain at the atomic scale. Imagine trying to lay a silk scarf over a gravel road, then driving a truck over it. The scarf tears and bunches up. That is what happened when people tried to place atom-thin materials directly onto finished silicon.
The Fudan group attacked the problem in three key steps:
- Pick flexible 2D materials. They chose materials that can bend and conform, more like silk than glass.
- Use modular integration. Instead of building everything at once, they fabricated the silicon CMOS logic and the 2D memory arrays separately. The 2D layer was grown and patterned on a pristine, controlled surface, then transferred onto the CMOS wafer.
- Create dense vertical connections with conformal adhesion. They formed millions of tiny vertical bridges between the layers, a structure they describe as high-density monolithic interconnection. A special adhesion process lets the 2D film flow over the bumps of the silicon underneath, like a liquid coating rough ground. This keeps the film intact and keeps its electrical properties stable.
The result is not just a stack of chips. It is closer to a fused 3D structure at the atomic level.
What The CY1 Chip Can Actually Do
The Nature paper, which you can read in full through the open-access version titled “A full-featured 2D flash chip enabled by system integration”, gives hard numbers that matter for real products.
Here is a summary of some headline specs:
| Feature | CY1 2D Flash Chip |
|---|---|
| Memory architecture | 2D NOR flash on 0.13 μm CMOS silicon |
| Memory cell yield | 94.34% |
| Operating speed | Up to 5 MHz |
| Program / erase time | About 20 nanoseconds |
| Energy per bit | About 0.644 picojoules |
| Data retention | 10 years |
| Endurance | Over 100,000 write cycles |
| Operations | Full 8-bit instructions, 32-bit parallel access |
Those numbers are not record-breaking in every metric, but they prove something more important: this works as a complete, programmable memory chip with real-world reliability numbers.
If you compare that to experimental devices that only flip once under ideal lab setups, this is a major step closer to shipping hardware. Reports from Interesting Engineering and TechSpot underline that point.
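The table's figures are easy to turn into back-of-envelope numbers. The sketch below is a naive estimate (fully serial, no page-level parallelism, no controller overhead), so treat the outputs as intuition for the energy and timing scales, not as projected product performance.

```python
# Back-of-envelope arithmetic from the CY1 spec table above.
# Naive assumptions: fully serial writes, no parallelism, no overhead.

ENERGY_PER_BIT_J = 0.644e-12   # 0.644 picojoules per bit
PROGRAM_TIME_S = 20e-9         # ~20 ns per program operation
WORD_BITS = 32                 # 32-bit parallel access

def energy_to_write(num_bytes: int) -> float:
    """Energy in joules to program num_bytes at the quoted pJ/bit figure."""
    return num_bytes * 8 * ENERGY_PER_BIT_J

def naive_write_time(num_bytes: int) -> float:
    """Serial time in seconds, one 32-bit word per 20 ns program pulse."""
    words = num_bytes * 8 / WORD_BITS
    return words * PROGRAM_TIME_S

GB = 10**9
print(f"Energy to write 1 GB: ~{energy_to_write(GB) * 1e3:.1f} mJ")
print(f"Naive serial time for 1 GB: ~{naive_write_time(GB):.1f} s")
```

Even this pessimistic serial estimate lands at a few millijoules per gigabyte written, which hints at why the energy-per-bit figure matters as much as the raw speed.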
And the CY1 chip is not even the fastest thing this team has built.
The Insane Speed Record: A 400 Picosecond PoX Device
Six months before CY1, the same group published another Nature paper describing a device they called PoX. This was a single memory cell, not a full system, but it did something that grabbed every engineer’s attention.
It reached a programming speed of 400 picoseconds.
To get a sense of scale:
- 1 microsecond is a millionth of a second.
- 1 nanosecond is a billionth of a second.
- 1 picosecond is a trillionth of a second.
Most current NAND flash writes data in microseconds. Advanced DRAM and SRAM operate in nanoseconds. The PoX device switched in hundreds of picoseconds: thousands of times faster than typical flash, and quicker even than standard cache memory.
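The gap is easier to feel as ratios. In the sketch below, the flash and DRAM latencies are my own order-of-magnitude assumptions for comparison, not figures from the paper; only the 400 ps number comes from the PoX result.

```python
# Speedup ratios for a 400 ps write versus typical memory latencies.
# The reference latencies are rough textbook orders of magnitude (assumptions),
# not measurements of any specific product.

POX_WRITE_S = 400e-12    # 400 picoseconds, from the PoX result
NAND_WRITE_S = 1e-6      # ~1 microsecond: an optimistic flash write (assumed)
DRAM_ACCESS_S = 10e-9    # ~10 nanoseconds: a typical DRAM access (assumed)

print(f"vs flash: {NAND_WRITE_S / POX_WRITE_S:,.0f}x faster")
print(f"vs DRAM:  {DRAM_ACCESS_S / POX_WRITE_S:,.0f}x faster")
```

Under those assumptions the device is roughly 2,500 times faster than an optimistic flash write, which is where the "thousands of times faster" framing comes from.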
The PoX device is based on the same ATOM2CHIP approach and 2D materials that power CY1. You can think of CY1 as the “practical product” path, and PoX as the glimpse of the upper limit.
A good analogy is a Formula 1 car trying to refuel through a garden hose. Today’s AI accelerators, like Nvidia’s H100 and various TPUs, are the F1 cars. NAND flash is the garden hose. The PoX speed record shows what a proper fuel line could look like.
Why This Breakthrough Matters So Much For AI
At this point, AI is less limited by raw compute and more by memory speed and bandwidth. GPUs and custom AI chips can execute trillions of operations per second, but they are sitting idle much of the time, waiting for data.
That is true at every stage of an AI system:
- Training large language models.
- Fine-tuning models on custom data.
- Running inference in production.
For training, every step involves loading huge batches of parameters and sample data from storage, then writing updated parameters back. When your main storage layer still behaves like technology from the 1980s, you waste time and energy shuttling bits around.
If you swap that for 2D memory with nanosecond or even picosecond access, a few things could change very quickly:
- Training runs that used to take weeks might drop to days or hours.
- Data centers could cut electricity use sharply, since much of it goes into memory and I/O overhead.
- The cost per experiment for AI research teams could fall by orders of magnitude.
China is already exploring multiple routes to faster, cooler AI hardware. For example, some groups are developing photonic accelerators that use light instead of electrons, like the quantum photonic chip for AI workloads. Combine that kind of compute with ultra fast, low energy 2D memory and you get a very different picture of what future AI clusters look like.
What It Could Mean For Your Devices
This kind of memory does not only help giant data centers. It also changes what is possible in personal devices.
Some likely effects if 2D flash hits mass production:
- Instant-on laptops. You open the lid and the system is active immediately. Not "five seconds fast," but effectively zero wait.
- Near instant file transfers. Moving an 8K movie or a giant game install feels like copying a small image today.
- No more loading bars in your favorite apps. Games, creative tools, and browsers could pull assets from storage so fast that traditional loading screens disappear.
- Cooler, quieter hardware. The very low energy per bit means far less heat from storage. Your phone stays cool under load, your laptop fans spin less often, and your console does not sound like a jet engine.
- On-device AI that feels like magic. Models with billions of parameters could live right on your phone or laptop SSD, with enough speed to respond instantly. Many AI tasks that now depend on the cloud, such as personal assistants or local copilots, could run offline with no data leaving your device.
That last point is important for privacy. If your personal AI runs locally and never sends your messages, documents, or photos to a remote server, your risk surface shrinks a lot.
From Lab To Factory In Just 3 To 5 Years
A lot of breakthrough materials never leave the lab. Graphene is the classic example. So why is this different?
Fudan’s team did something very smart: instead of trying to replace the whole silicon stack, they integrated with existing CMOS technology. The CY1 chip uses a standard 0.13 micrometer CMOS process for the logic layer. That is an older but very mature node, with stable tools and low costs.
The 2D memory layer sits on top. That means:
- Foundries do not need to build brand-new multi billion dollar fabs.
- They can keep using their existing CMOS lines, then add ATOM2CHIP steps.
- The supply chains for wafers, masks, and most tools stay intact.
In the Nature paper, Liu Chunsen and colleagues point out that early transistors took about 24 years to progress from the first prototype to a full CPU. By contrast, ATOM2CHIP builds on decades of CMOS work, so the timeline compresses sharply.
Several signs make their 3 to 5 year target for pilot production credible:
- High yield already. A 94.34% memory cell yield is comparable to commercial flash, and yield is usually the hardest hurdle for scaling up.
- Multiple peer-reviewed papers. Two separate Nature publications in the same year, one on the PoX device and one on the CY1 system, show both the physics and the integration are sound.
- Real fabrication runs. They report successful tape-out using standard industrial processes, not just custom lab setups.
- Heavy national funding. Programs like China's National Key Research and Development Program and the National Natural Science Foundation are backing this work, which means there is money and political will to move it from lab to fab.
If they hit a pilot line by around 2027 to 2029, early products using 2D memory could appear before the end of the decade.
The Chip War Twist: How 2D Memory Slips Past Export Bans
All of this is happening against the backdrop of a very active tech rivalry.
Since 2022, the United States has rolled out several rounds of export controls aimed at slowing China’s semiconductor progress:
- October 2022, bans on advanced AI chips shipped to China.
- January 2023, bans on EUV lithography tools needed for cutting edge logic nodes.
- October 2023, broader controls that also touch high end DUV tools, materials, and advanced packaging.
The strategy focused on logic chips like CPUs and GPUs. Those need extreme precision lithography at nanometer scales and depend on tools from companies such as ASML, which are easy to choke off.
Memory, especially 2D material memory, is a different story.
ATOM2CHIP grows its 2D layers using chemical vapor deposition (CVD). This process is widely used across many industries and does not require the same advanced EUV tools that are now restricted. The precision comes from the chemistry and the atomic structure of the material, not from drawing ultra fine lines with light.
That means:
- Many of the tools needed for 2D memory are already available inside China.
- A lot of that equipment is not on export control lists because it is seen as fairly standard.
Zhou Peng has described this approach as a way for China to develop source technology where it can lead, not just catch up. Some analysts call this a “lane change” move. If you cannot win while running in the same lane, you build a new lane next to it.
While the United States and its allies are squeezing the path to better 3D NAND and high end logic, China is pouring resources into atomic scale alternatives that do not rely on those same choke points.
Can The Rest Of The World Catch Up?
Companies like Samsung, SK Hynix, Micron, and Intel’s memory partners have spent decades and hundreds of billions of dollars refining silicon NAND. Their fabs are tuned to stack more layers, polish lithography, and squeeze more bits into each cell.
Pivoting to 2D materials will not be easy for them:
- Many fabs cost around 20 billion dollars each and are heavily optimized for current processes.
- Engineers will need new skills for handling fragile 2D films and complex transfer steps.
- Supply chains for gases, substrates, and inspection tools will need updates.
- Customers will need to qualify an entirely new memory technology for reliability and safety.
Some Western labs are already exploring similar ideas, and coverage like the TechSpot analysis of 2D memory chips shows growing interest. But on commercialization, the available evidence suggests Fudan’s group and China’s ecosystem are ahead.
The open question is how quickly global players can respond now that the core concepts and some process details are public.
Conclusion: The Future Of Memory Is One Atom Thick
For forty years, silicon based flash has carried everything from family photos to foundation models. Now we are seeing the first serious blueprint for what comes next, and it is only a few atoms thick.
Fudan University’s ATOM2CHIP work gives us a new class of memory that is fast, energy efficient, and compatible with existing fabs. It tackles the memory wall that slows AI, it offers cooler and more responsive devices, and it introduces a new front in the global chip contest.
Whether the most exciting outcome is instant AI on your phone, laptops that never lag, or something nobody has imagined yet, one thing feels clear: the old assumptions about memory speed and capacity are about to be rewritten.
If you had access to storage that operates at near light speed, what would you build first?