This issue highlights some of CASC’s contributions to the DOE’s Exascale Computing Project.
Collecting variants in low-level hardware features across multiple GPU and CPU architectures.
Two LLNL teams have devised ingenious solutions to some of high performance computing’s more vexing difficulties, and for their efforts they’ve won awards coveted by scientists across technology fields.
Bugs, broken codes, or system failures require added time for troubleshooting and increase the risk of data loss. LLNL has addressed failure recovery by developing the Scalable Checkpoint/Restart (SCR) framework.
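SCR works by writing checkpoints frequently to fast node-local storage and flushing them to the parallel file system less often; after a failure, the job restarts from the most recent checkpoint instead of from scratch. The sketch below illustrates the basic checkpoint/restart pattern that SCR automates and scales, written as plain Python for clarity. It is not the SCR API itself (SCR provides a C interface with calls such as SCR_Init, SCR_Start_checkpoint, SCR_Route_file, and SCR_Complete_checkpoint), and the file name and step counts here are hypothetical.

```python
import json
import os

CKPT = "state.ckpt.json"  # hypothetical checkpoint file name

def save_checkpoint(step, state):
    """Write state to a temp file, then rename atomically so a crash
    mid-write never leaves a corrupt checkpoint behind."""
    tmp = CKPT + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"step": step, "state": state}, f)
    os.replace(tmp, CKPT)

def load_checkpoint():
    """Resume from the last checkpoint if one exists, else start fresh."""
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            data = json.load(f)
        return data["step"], data["state"]
    return 0, {"x": 0.0}

step, state = load_checkpoint()
while step < 100:
    state["x"] += 1.0      # stand-in for one timestep of real work
    step += 1
    if step % 10 == 0:     # checkpoint every 10 steps
        save_checkpoint(step, state)
```

SCR layers redundancy schemes and multilevel storage on top of this pattern so checkpoints can survive node failures without always paying the cost of the parallel file system.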
LLNL’s HPC capabilities play a significant role in international science research and innovation, and Lab researchers have won 10 R&D 100 Awards in the Software/Services category in the past decade.
Discover how the software architecture and storage systems driving El Capitan’s performance will help LLNL and the NNSA Tri-Labs push the boundaries of computational science.
Unveiled at the International Supercomputing Conference in Germany, the June 2024 TOP500 list shows three systems with identical components each registering 19.65 petaflops on the High-Performance Linpack benchmark, ranking them among the world’s 50 fastest.
LLNL researchers have achieved a milestone in accelerating and adding features to complex multiphysics simulations run on GPUs, a development that could advance HPC and engineering.
The Tools Working Group delivers debugging, correctness, and performance analysis solutions at an unprecedented scale.
Compilers translate human-readable source code into machine-executable code. Building a compiler is especially challenging in the exascale era.
The El Capitan Center of Excellence provides a conduit between national labs and commercial vendors, ensuring that the exascale system will meet everyone’s needs.
Backed by Spack’s robust functionality, the Packaging Working Group manages the relationships between user software and system software.
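In Spack, those relationships are expressed declaratively: each package recipe names its dependencies and build options, and Spack’s concretizer resolves them into a complete, consistent build graph against the available compilers and system software. Below is a minimal sketch of a recipe; the package name, URL, checksum, and CMake option are hypothetical placeholders rather than a real Spack package.

```python
# Hypothetical Spack recipe (package.py) sketching how user software
# declares its relationships to other packages and to system software.
from spack.package import *

class Mysolver(CMakePackage):
    """Hypothetical exascale solver library, for illustration only."""

    homepage = "https://example.com/mysolver"
    url = "https://example.com/mysolver-1.0.0.tar.gz"

    # Placeholder checksum; a real recipe pins the tarball's sha256.
    version("1.0.0", sha256="0" * 64)

    variant("cuda", default=False, description="Build with GPU support")

    depends_on("mpi")                 # satisfied by any MPI implementation
    depends_on("cuda", when="+cuda")  # pulled in only if the variant is on

    def cmake_args(self):
        # Translate the Spack variant into the (hypothetical) CMake option.
        return [self.define_from_variant("ENABLE_CUDA", "cuda")]
```

A user would then request a concrete build with a spec such as `spack install mysolver+cuda ^openmpi`, letting Spack choose consistent versions for everything else in the graph.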
The advent of accelerated processing units presents new challenges and opportunities for teams responsible for network interconnects and math libraries.
The system will enable researchers from the National Nuclear Security Administration’s weapons design laboratories to create models and run simulations previously considered challenging, time-intensive, or impossible, supporting the maintenance and modernization of the United States’ nuclear weapons stockpile.
LLNL is participating in the 35th annual Supercomputing Conference (SC23), which will be held both virtually and in Denver on November 12–17, 2023.
The Center for Efficient Exascale Discretizations has developed innovative mathematical algorithms for the DOE’s next generation of supercomputers.
Hosted at LLNL, the Center for Efficient Exascale Discretizations’ annual event featured breakout discussions, more than two dozen speakers, and an evening of bocce ball.
Siting a supercomputer requires close coordination of hardware, software, applications, and Livermore Computing facilities.
Flux, next-generation resource and job management software, steps up to support emerging use cases.
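Flux can be driven programmatically as well as from the command line. The fragment below is a minimal sketch that uses flux-core’s Python bindings to build a jobspec and submit it to a running Flux instance; the command and resource counts are arbitrary examples.

```python
# Minimal sketch: submit work to a running Flux instance via the
# flux-core Python bindings (requires an enclosing Flux broker).
import flux
from flux.job import JobspecV1

handle = flux.Flux()  # connect to the enclosing Flux instance

# Describe the work: 4 tasks of `hostname` spread across 2 nodes.
jobspec = JobspecV1.from_command(
    command=["hostname"], num_tasks=4, num_nodes=2, cores_per_task=1
)
jobspec.cwd = "/tmp"  # working directory for the tasks

jobid = flux.job.submit(handle, jobspec)
print(f"submitted job {jobid}")
```

Because Flux instances can be nested, the same few lines work whether the enclosing instance is a laptop session, a single allocation, or a full system partition.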
The Tri-Lab Operating System Stack (TOSS) ensures that supercomputing needs are met across the NNSA’s three national laboratories.
Livermore Computing is making significant progress toward siting the NNSA’s first exascale supercomputer.
Innovative hardware provides near-node local storage alongside large-capacity storage.
From wind tunnels and cardiovascular electrodes to the futuristic world of exascale computing, Brian Gunney has been finding solutions for seemingly unsolvable problems.
As CTO of Livermore Computing, Bronis R. de Supinski is responsible for formulating, overseeing, and implementing LLNL’s large-scale computing strategy, a role that requires managing multiple collaborations with the HPC industry and academia.
LLNL participates in the ISC High Performance Conference (ISC23) on May 21–25, 2023.
