Application developers are partnering with supercomputing experts and hardware vendors to ready a stable of mission-critical applications for the next generation of computing architectures, including LLNL’s Sierra supercomputer.
We are determining how to build future generations of supercomputers. We are actively exploring issues such as possible uses of persistent memory (non-volatile random access memory, or NVRAM) and methods to reduce power consumption or increase reliability while maintaining (or even reducing) cost and maintaining (or improving) performance. We also interact closely with industry through local initiatives and programs such as FastForward. Throughout these activities, we combine unique research capabilities with our proven track record of building and deploying reliable and productive large-scale systems.
The HPC Cluster Engineer Academy is a paid internship that gives participants direct experience with running and maintaining high performance computing (HPC) systems.
Livermore computer scientists have helped create a flexible framework that aids programmers in creating source code that can be used effectively on multiple hardware architectures.
Cab is a large capacity computing resource shared by the M&IC and ASC programs for running small to moderate parallel jobs.
The 150 teraFLOP/s Catalyst, a unique high performance computing (HPC) cluster, serves as a proving ground for new HPC and big data technologies.
Vulcan is one of the largest, most capable computational resources available in the United States for industrial collaborators.
The advent of many-core processors with greatly reduced per-core memory has shifted the bottleneck in computing from FLOPs to memory. A new, complex memory/storage hierarchy is emerging, with persistent memories offering greatly expanded capacity, augmented by DRAM/SRAM caches and scratchpads to mitigate latency.
On November 14, 2014, Secretary of Energy Ernest Moniz announced that a partnership involving IBM, NVIDIA, and Mellanox had been chosen to design and develop systems for Lawrence Livermore and Oak Ridge (ORNL) national laboratories. The LLNL system, Sierra, is the next advanced technology system sited at LLNL in the Advanced Simulation and Computing (ASC) Program's line of systems, which has included Blue Pacific, White, Purple, BlueGene/L, and Sequoia.