The 150 teraFLOP/s Catalyst, a unique high-performance computing (HPC) cluster, serves as a proving ground for new HPC and big data technologies.
Catalyst helps us determine how to build future generations of supercomputers. We are actively exploring issues such as possible uses of persistent memory (non-volatile random access memory, or NVRAM) and methods to reduce power consumption or increase reliability while maintaining (or even reducing) cost and maintaining (or improving) performance. We also interact closely with industry through local initiatives and programs such as FastForward. Throughout these activities, we combine unique research capabilities with our proven track record of building and deploying reliable and productive large-scale systems.
Vulcan is one of the largest, most capable computational resources available in the United States for industrial collaborators.
The advent of many-core processors with a greatly reduced amount of per-core memory has shifted the bottleneck in computing from FLOPs to memory. A new, complex memory/storage hierarchy is emerging, with persistent memories offering greatly expanded capacity, and augmented by DRAM/SRAM cache and scratchpads to mitigate latency.
On November 14, 2014, Secretary of Energy Ernest Moniz announced that a partnership of IBM, NVIDIA, and Mellanox had been chosen to design and develop systems for Lawrence Livermore (LLNL) and Oak Ridge (ORNL) national laboratories. The LLNL system, Sierra, is the next advanced technology system in the Advanced Simulation and Computing (ASC) Program's line of systems sited at LLNL, which has included Blue Pacific, White, Purple, BlueGene/L, and Sequoia.
Sequoia’s complexity and scale made integration challenging but have not prevented early users from setting new records and achieving strong results.