LLNL Computers Provide a Boost for Brain-Inspired Computing
In a February 17 talk at Lawrence Livermore, IBM chief scientist Dharmendra Modha described the challenges and successes of IBM's ambitious Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project, which is inspired by biological networks such as the brain. One limitation of today's computers is the power required for large-scale data processing. Animal and human brains, by contrast, process large amounts of information in complex ways with very low power consumption. SyNAPSE, supported by the Defense Advanced Research Projects Agency, aims to create low-power computers that can scale to the neuron and synapse counts found in nature.
Modha noted that SyNAPSE has benefited from simulations run on LLNL's Dawn and Sequoia supercomputers. "This novel non-von Neumann architecture was midwifed by the HPC facilities here [at LLNL]," he said. Early in the project, Modha's team drew on the results of hundreds of published papers to simulate the connections in a monkey brain on the two IBM Blue Gene systems. "The brain is nothing but a social network, or graph, of neurons," he explained. "The edges are synapses and the nodes are neurons." The 20-petaflop Sequoia enabled the IBM team to perform a 100-trillion-synapse simulation in three dimensions. Even on such a powerful system, the simulation ran roughly 1,500 times slower than real time. The modeling confirmed and expanded on the results of 30 years of imaging studies and 60 years of structural studies of long-distance wiring in mammalian brains.
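Modha's graph framing can be made concrete with a few lines of code. The toy network below is purely illustrative (hypothetical neuron names and synapse weights, not data from the monkey-brain simulation): neurons are nodes, and each synapse is a directed, weighted edge from one neuron to another.

```python
# Toy sketch of "the brain as a graph": nodes are neurons,
# edges are synapses. Weights here are invented for illustration.
brain = {
    "n1": {"n2": 0.8, "n3": 0.3},  # neuron n1 synapses onto n2 and n3
    "n2": {"n3": 0.5},
    "n3": {"n1": 0.2},
}

num_neurons = len(brain)
num_synapses = sum(len(targets) for targets in brain.values())
print(f"{num_neurons} neurons, {num_synapses} synapses")
# → 3 neurons, 4 synapses
```

At the scale of the Sequoia run, the same structure would hold about 100 trillion such edges, which is why a 20-petaflop machine still ran far slower than real time.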
The SyNAPSE chip, introduced by IBM in 2014, features an unprecedented 1 million neurons and 256 million synapses, consumes roughly as much power as a hearing-aid battery supplies, and can be tiled to build more powerful systems.
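Part of how neuromorphic chips achieve such low power is that their neurons compute with sparse spikes rather than continuous values. The sketch below is a minimal leaky integrate-and-fire neuron, a common simplified spiking model; it is an illustration of the general idea, not IBM's actual neuron equations or parameters.

```python
# Minimal leaky integrate-and-fire neuron (illustrative only).
# The membrane potential leaks each step, accumulates input,
# and emits a spike (then resets) when it crosses a threshold.

def lif_step(v, input_current, leak=0.9, threshold=1.0):
    """Advance membrane potential v by one timestep; return (new_v, spiked)."""
    v = v * leak + input_current
    if v >= threshold:
        return 0.0, True  # fire and reset
    return v, False

v, spikes = 0.0, 0
for t in range(10):
    v, fired = lif_step(v, input_current=0.3)
    spikes += fired

print(f"spikes in 10 steps: {spikes}")
# → spikes in 10 steps: 2
```

Because a neuron like this draws meaningful "activity" only on the timesteps when it spikes, large tiled arrays of them can stay mostly quiet, which is the intuition behind the chip's hearing-aid-battery power budget.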