By Subhrajit Mukherjee
As artificial intelligence races ahead, its capacities and limitations are now being measured by those at the forefront of the revolution – people such as OpenAI CEO Sam Altman – in gigawatts of electricity. This is because the data centres that power AI consume vast and growing amounts of electricity. US media recently reported that OpenAI, which runs ChatGPT, aims to build 250 GW of computing capacity by 2033. To put that in perspective, the total installed power-generation capacity of India, the world’s most populous country, was 476 GW in June 2025.
At the most fundamental physical level, this power-guzzling happens because a great deal of electricity is spent simply moving data between memory and processor. If our devices could compute and store data in the same place, power consumption would drop dramatically.
That is what neuromorphic hardware promises to do. The word “neuromorphic” refers to brain-like systems and devices.
The human brain is a masterclass in efficiency. You glance at your phone, hear a familiar tune, or catch the aroma of food from the kitchen, and your brain instantly knows what’s happening. It doesn’t just sense; it recognises, decides, and remembers.
The magic happens because billions of neurons are wired together and fire in concert, while the microscopic junctions between them, known as synapses, change strength with every experience. This tight interweaving of sensing, processing and memory lets us recognise faces or hum a melody using barely the energy of an LED light.
Traditional computers, by contrast, are clunky multitaskers. Sensors gather data, processors compute, and memory chips store results. Every bit of data must travel back and forth, wasting time and power. This so-called von Neumann bottleneck is why even our fastest chips struggle with energy-hungry artificial intelligence.
Neuromorphic electronics turn this model inside out. Here, memory and computation live together, just like in the brain.
Imagine electronics that don’t just store data or follow code but perceive the world, learn from it, and remember what matters. That idea, once science fiction, is becoming a reality in neuromorphic electronics, where devices begin to behave like tiny brains.
Such devices would solve more than the energy crisis alone. They would also address the plateauing growth of computing power, which for decades has multiplied in line with Moore’s Law: the projection that the number of transistors on an integrated circuit doubles approximately every two years.
After decades of steady miniaturisation, Moore’s Law is slowing. Transistors are approaching atomic limits, and squeezing them any smaller is no longer sustainable.
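In rough terms, the law is a simple doubling relation. Here is a back-of-the-envelope form, with illustrative numbers rather than figures from any particular chipmaker:

```latex
% N_0 is the transistor count in a reference year t_0;
% N(t) is the count (t - t_0) years later.
N(t) = N_0 \cdot 2^{(t - t_0)/2}
% Example: 1 billion transistors in 2015 implies roughly
% 2^{(2025 - 2015)/2} = 2^5 = 32 billion by 2025 on this trend.
```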
Neuromorphic hardware promises to herald a new “More than Moore” era. Imagine phones or sensors that learn locally, without relying on distant cloud servers. For AI, this means faster responses, lower energy bills, and greater privacy, all vital in an era of connected everything.
Devices made from exotic materials can learn from the signals they receive, adjusting their internal states with every pulse. Over time, they “remember” patterns in their physical state – training that happens in hardware, without software. A minimal sketch of the idea follows.
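To make the idea concrete, here is a toy model in Python of how such pulse-driven memory can be described. Every name and number in it is an illustrative assumption, not a measurement of any real device:

```python
import math

# Toy model of a synaptic device: its conductance ("weight") rises with
# each voltage pulse and relaxes back toward a resting value between
# pulses, loosely mimicking potentiation and forgetting.

class ToySynapse:
    def __init__(self, g_min=0.1, g_max=1.0, tau=50.0):
        self.g = g_min                    # current conductance (arbitrary units)
        self.g_min, self.g_max = g_min, g_max
        self.tau = tau                    # retention time constant (arbitrary units)

    def pulse(self, amplitude=1.0, width=1.0):
        # Stronger or longer pulses push conductance further toward g_max.
        drive = amplitude * width
        self.g += (self.g_max - self.g) * (1 - math.exp(-drive))

    def rest(self, dt):
        # Without stimulation, the device slowly "forgets".
        self.g = self.g_min + (self.g - self.g_min) * math.exp(-dt / self.tau)

syn = ToySynapse()
for _ in range(5):                        # repeated practice strengthens the pathway
    syn.pulse(amplitude=0.5)
    syn.rest(dt=2.0)
print(f"conductance after training: {syn.g:.3f}")
syn.rest(dt=200.0)                        # a long idle period erodes the memory
print(f"conductance after resting:  {syn.g:.3f}")
```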
The magic of atom-thin materials
The trick lies in the materials themselves. Scientists are building these brain-like devices using two-dimensional (2D) crystals: sheets of material just one atom thick, such as a single atomic layer of carbon known as graphene. Materials such as graphene, molybdenum disulphide (MoS₂) and hexagonal boron nitride (hBN) can be stacked like Lego bricks to create behaviours that silicon alone can’t offer. Because they’re atomically thin, even a tiny gate pulse can dramatically change how electrons flow, allowing a single transistor to act like a tunable synapse.
This revolution requires research that bridges materials science, quantum physics and neuroscience-inspired electronics.
Over the last few years, researchers working in labs in Israel and Singapore, including this author, have shown how atom-thin materials can be coaxed into devices that sense, learn and even make logical decisions.
This work has paved the way for programmable logic and memory in such materials. Developments include a ‘neuristor’, a single device that merges optical sensing with electrical memory and logic, mimicking how the brain fuses perception and recall. Subsequent work has extended these ideas into a materials-to-device toolkit for the next generation of intelligent hardware.
The art of explaining the invisible
Think of a librarian. A traditional computer is like a librarian who must run to a distant archive every time you ask a question. A neuromorphic chip is the librarian who keeps the most-used books right at the desk. Or picture a city skyline. Old-style computing spreads across countless one-storey buildings, each doing a single task. Brain-inspired chips are vertical cities of thought, stacking sensing, logic and memory into one elegant tower.
Even memory follows familiar intuition: the more often you practise something, the longer it lasts. These devices work the same way. Stronger or longer pulses reinforce their pathways, the electronic counterpart of learning.
The new goal is to scale these single-device ideas into interconnected arrays that function like hardware neural networks, capable of autonomous communication and decision-making without human input. It’s an ambitious step – from individual “thinking” devices to collective intelligence in silicon – but one that could redefine how machines interact with the world.
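One way to picture what such an array computes is the well-known crossbar idea, sketched below with made-up numbers: programmable conductances sit at each crosspoint, and by Ohm’s and Kirchhoff’s laws the output currents form a matrix-vector product, computed in memory with no data shuttled to a separate processor.

```python
# Toy "crossbar" of device conductances G (siemens, illustrative values).
# Applying input voltages V along the rows produces column currents
# I[j] = sum_i V[i] * G[i][j] -- a matrix-vector multiply done by physics.

V = [0.2, 0.0, 0.5]          # input voltages on the rows (volts)
G = [[1.0, 0.3],             # conductance at each crosspoint
     [0.4, 0.9],
     [0.2, 0.7]]

I = [sum(V[i] * G[i][j] for i in range(len(V))) for j in range(len(G[0]))]
print(I)                     # output currents, here ~[0.3, 0.41] amperes
```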
You don’t need a physics degree to see what’s coming. Imagine wearable health monitors that learn your unique rhythms, alerting you only to meaningful changes, or environmental sensors that adapt to local noise, or AI assistants that think on your device instead of in the cloud. For industries, this could mean faster computation with less energy. For society, it’s technology that grows smarter without growing hungrier.
We are witnessing the first sparks of a shift, from coding intelligence in software to building it directly into matter. When that transition matures, the next generation of chips won’t just compute. They’ll sense, learn and remember, just as we humans do.
Subhrajit Mukherjee, an Assistant Professor of Physics at Shiv Nadar Institution of Eminence, Delhi-NCR, heads the Optoelectronic Materials and Device (OEMD) Lab [Link]. He works on neuromorphic engineering and ultra-low-power electronics based on 2D materials. More details can be found here: [Google Scholar]