Scaling up Neuromorphic Computing for More Efficient and Effective AI Everywhere and Anytime

Published Date: January 23, 2025


Neuromorphic computing, a field that applies principles of neuroscience to computing systems to mimic the brain's function and structure, needs to scale up if it is to compete effectively with current computing methods. In a review published Jan. 22 in the journal Nature, 23 researchers, including two from the University of California San Diego, present a detailed roadmap of what needs to happen to reach that goal. The article offers a new and practical perspective on how to approach the cognitive capacity of the human brain with comparable form factor and power consumption.

"We do not anticipate that there will be a one-size-fits-all solution for neuromorphic systems at scale but rather a range of neuromorphic hardware solutions with different characteristics based on application needs," the authors write.

Applications for neuromorphic computing include scientific computing, artificial intelligence, augmented and virtual reality, wearables, smart farming, smart cities and more. Neuromorphic chips have the potential to outpace traditional computers in energy and space efficiency, as well as performance. This could present substantial advantages across various domains, including AI, health care and robotics. As the electricity consumption of AI is projected to double by 2026, neuromorphic computing emerges as a promising solution.

"Neuromorphic computing is particularly relevant today, when we are witnessing the untenable scaling of power- and resource-hungry AI systems," said Gert Cauwenberghs, a Distinguished Professor in the UC San Diego Shu Chien-Gene Lay Department of Bioengineering and one of the paper's coauthors.

Neuromorphic computing is at a pivotal moment, said Dhireesha Kudithipudi, the Robert F. McDermott Endowed Chair at the University of Texas San Antonio and the paper's corresponding author. "We are now at a point where there is a tremendous opportunity to build new architectures and open frameworks that can be deployed in commercial applications," she said. "I strongly believe that fostering tight collaboration between industry and academia is the key to shaping the future of this field. This collaboration is reflected in our team of co-authors."

Last year, Cauwenberghs and Kudithipudi secured a $4 million grant from the National Science Foundation to launch THOR: The Neuromorphic Commons, a first-of-its-kind research network providing access to open neuromorphic computing hardware and tools in support of interdisciplinary and collaborative research.

Gert Cauwenberghs, a professor in the Shu Chien-Gene Lay Department of Bioengineering at UC San Diego, is one of the coauthors of a Nature review paper on how to scale up neuromorphic computing.
Photos: David Baillot/UC San Diego Jacobs School of Engineering

In 2022, a neuromorphic chip designed by a team led by Cauwenberghs showed that these chips could be highly dynamic and versatile without compromising accuracy and efficiency. The NeuRRAM chip runs computations directly in memory and can run a wide variety of AI applications, all at a fraction of the energy consumed by general-purpose AI computing platforms. "Our Nature review article offers a perspective on further extensions of neuromorphic AI systems in silicon and emerging chip technologies to approach both the massive scale and the extreme efficiency of self-learning capacity in the mammalian brain," said Cauwenberghs.
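
To make the compute-in-memory idea concrete in general terms (a conceptual sketch only, not the NeuRRAM design; the array size, conductance range and voltage range below are arbitrary illustrative choices), a resistive crossbar can perform a matrix-vector multiply where the weights are stored: weights are held as conductances, inputs are applied as row voltages, and the summed column currents are the outputs, so no data needs to travel to a separate processor.

    import numpy as np

    # Conceptual sketch only (not the NeuRRAM design): an in-memory matrix-vector
    # multiply on a resistive crossbar. Weights live in the array as conductances,
    # inputs arrive as row voltages, and each column current is their dot product
    # (Ohm's and Kirchhoff's laws), computed in place where the weights are stored.
    rng = np.random.default_rng(0)
    conductances = rng.uniform(0.0, 1e-6, size=(128, 64))  # stored weights, in siemens (illustrative values)
    voltages = rng.uniform(0.0, 0.2, size=128)              # input activations encoded as voltages (illustrative)
    output_currents = voltages @ conductances               # 64 column currents = 64 output activations
    print(output_currents.shape)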

To achieve scale in neuromorphic computing, the authors propose several key features that must be optimized, including sparsity, a defining feature of the human brain. The brain develops by forming numerous neural connections (densification) before selectively pruning most of them. This strategy optimizes spatial efficiency while retaining information at high fidelity. If successfully emulated, this feature could enable neuromorphic systems that are significantly more energy-efficient and compact.
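
As a rough illustration of that densify-then-prune strategy (a minimal sketch, not drawn from the paper; the 256x256 weight matrix and the 10% keep ratio are assumptions made for illustration), magnitude-based pruning keeps only the strongest connections of an initially dense weight matrix, so only a small fraction of synapses remains to be stored and updated:

    import numpy as np

    # Minimal sketch: start dense ("densification"), then prune by magnitude,
    # mimicking the brain's strategy of forming many connections and keeping few.
    rng = np.random.default_rng(0)
    weights = rng.normal(0.0, 1.0, size=(256, 256))  # fully connected synaptic weight matrix (assumed size)

    keep_fraction = 0.10                              # keep only the strongest 10% of connections (assumed ratio)
    threshold = np.quantile(np.abs(weights), 1.0 - keep_fraction)
    mask = np.abs(weights) >= threshold               # surviving synapses
    pruned = weights * mask                           # sparse network: most entries are now zero

    n_kept = int(mask.sum())
    print(f"connections kept: {n_kept}/{weights.size} ({n_kept / weights.size:.1%})")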

"The expandable scalability and superior efficiency derive from massive parallelism and hierarchical structure in neural representation, combining dense local synaptic connectivity within neurosynaptic cores modeled after the brain's gray matter with sparse global connectivity in neural communication across cores modeling the brain's white matter, facilitated through high-bandwidth reconfigurable interconnects on-chip and hierarchically structured interconnects across chips," said Cauwenberghs.

"This publication shows tremendous potential toward the use of neuromorphic computing at scale for real-life applications. At the San Diego Supercomputer Center, we bring new computing architectures to the national user community, and this collaborative work paves the path for bringing a neuromorphic resource for the national user community," said Amitava Majumdar, director of the division of Data-Enabled Scientific Computing at SDSC here on the UC San Diego campus, and one of the paper's coauthors.

The authors also call for stronger collaboration within academia and between academia and industry, as well as for the development of a wider array of user-friendly programming languages to lower the barrier to entry into the field. They believe this would foster increased collaboration, particularly across disciplines and industries.

The NeuRRAM chip is not only twice as energy efficient as state-of-the-art chips, but also versatile, delivering results that are just as accurate as those of conventional digital chips.

Neuromorphic Computing at Scale

Dhireesha Kudithipudi and Tej Pandit, University of Texas, San Antonio
Catherine Schuman, University of Tennessee, Knoxville
Craig M. Vineyard, James B. Aimone and Suma George Cardwell, Sandia National Laboratories
Cory Merkel, Rochester Institute of Technology
Rajkumar Kubendran, University of Pittsburgh
Garrick Orchard and Ryad Benosman, Intel Labs
Christian Mayr, Technische Universität Dresden
Joe Hays, U.S. Naval Research Laboratory
Cliff Young, Google DeepMind
Chiara Bartolozzi, Italian Institute of Technology
Amitava Majumdar and Gert Cauwenberghs, University of California San Diego
Melika Payvand, Institute of Neuroinformatics, University of Zürich and ETH Zürich
Sonia Buckley, National Institute of Standards and Technology
Shruti Kulkarni, Oak Ridge National Laboratory
Hector A. Gonzalez, SpiNNcloud Systems GmbH, Dresden, Germany
Chetan Singh Thakur, Indian Institute of Science, Bengaluru
Anand Subramoney, Royal Holloway, University of London, Egham
Steve Furber, The University of Manchester

Related stories:
In Nature article, UTSA researchers chart course for scalable, neuromorphic computing systems


UC San Diego Part of National Hub for Large-scale Neuromorphic Computing

A Neuromorphic Chip for AI on the Edge

Learn more about research and education at UC San Diego in: Artificial Intelligence
