On the newest episode of the Big Ideas Lab podcast, listeners will go behind the scenes of LLNL's latest groundbreaking achievement: El Capitan, the world’s most powerful supercomputer.
Verified at 1.742 exaflops (1.742 quintillion calculations per second) on the High Performance Linpack—the standard benchmark used by the Top500 organization to evaluate supercomputing performance—El Capitan is the fastest computing system ever benchmarked.
The NNSA’s exascale milestone is possible only through successful industry partnerships. Hewlett Packard Enterprise staff share their experiences working with LLNL.
LLNL is participating in the 36th annual Supercomputing Conference (SC24) in Atlanta on November 17–22, 2024.
Listen to the latest Big Ideas Lab podcast episode on LLNL supercomputing! This article contains links to the podcast on Spotify and Apple Podcasts.
LLNL participates in the ISC High Performance Conference (ISC24) on May 12–16, 2024.
The Tools Working Group delivers debugging, correctness, and performance analysis solutions at an unprecedented scale.
Compilers translate human-programmable source code into machine-readable code. Building a compiler is especially challenging in the exascale era.
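As a loose, minimal sketch of that idea (production HPC compilers lower C, C++, and Fortran to native CPU and GPU code, which is far more involved), Python's own toolchain can show source being lowered into a machine-oriented form; the function and its name here are purely illustrative:

    import dis

    # Human-readable source code for a small illustrative function.
    source = "def axpy(a, x, y):\n    return [a * xi + yi for xi, yi in zip(x, y)]"

    # compile() lowers the source into bytecode; dis shows the lowered instructions.
    code = compile(source, "<example>", "exec")
    namespace = {}
    exec(code, namespace)
    dis.dis(namespace["axpy"])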
The El Capitan Center of Excellence provides a conduit between national labs and commercial vendors, ensuring that the exascale system will meet everyone’s needs.
Backed by Spack’s robust functionality, the Packaging Working Group manages the relationships between user software and system software.
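A minimal sketch of how a Spack spec expresses those relationships, assuming Spack is installed and on the PATH; the package, compiler, and MPI choices below are illustrative, not the working group's actual configuration:

    import subprocess

    # Ask Spack to concretize a user-facing package against system software
    # (compiler and MPI). "spack spec -I" prints the resolved dependency tree
    # along with install status.
    spec = "hdf5@1.14 +mpi %gcc@12.2.0 ^openmpi"
    subprocess.run(["spack", "spec", "-I", spec], check=True)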
The advent of accelerated processing units presents new challenges and opportunities for teams responsible for network interconnects and math libraries.
LLNL is participating in the 35th annual Supercomputing Conference (SC23), which will be held both virtually and in Denver on November 12–17, 2023.
The Tri-Lab Operating System Stack (TOSS) ensures the supercomputing needs of LLNL's sister NNSA labs, Los Alamos and Sandia, are also met.
Livermore Computing is making significant progress toward siting the NNSA’s first exascale supercomputer.
Innovative hardware provides near-node local storage alongside large-capacity storage.
Siting a supercomputer requires close coordination of hardware, software, applications, and Livermore Computing facilities.
Flux, next-generation resource and job management software, steps up to support emerging use cases.
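A minimal sketch of submitting work through Flux's Python bindings, assuming flux-core is installed and the script runs inside a Flux instance; the application name and resource counts are illustrative:

    import os
    import flux
    import flux.job

    handle = flux.Flux()  # connect to the enclosing Flux instance

    # Describe the work: 8 tasks across 2 nodes, 4 cores per task.
    jobspec = flux.job.JobspecV1.from_command(
        command=["./my_app"],  # hypothetical application
        num_tasks=8,
        num_nodes=2,
        cores_per_task=4,
    )
    jobspec.cwd = os.getcwd()
    jobspec.environment = dict(os.environ)

    jobid = flux.job.submit(handle, jobspec)  # returns a Flux job ID
    print(f"submitted {jobid}")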
LLNL CTO Bronis de Supinski talks about how the Lab deploys novel-architecture AI machines and provides an update on El Capitan.
Splitting memory resources in high performance computing between local nodes and a larger shared remote pool can better support diverse applications.
As CTO of Livermore Computing, de Supinski is responsible for formulating, overseeing, and implementing LLNL's large-scale computing strategy, a role that requires managing multiple collaborations with the HPC industry and academia.
The Lab was already using Elastic components to gather data from its HPC clusters and then investigated whether Elasticsearch and Kibana could be applied to all of its scanning and logging activities.
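A minimal sketch of the kind of query involved, using the official elasticsearch-py client; the host, index name, and time window are illustrative assumptions, not the Lab's actual deployment:

    from elasticsearch import Elasticsearch

    es = Elasticsearch("https://localhost:9200")

    # Count log records ingested in the last hour from a hypothetical
    # cluster-monitoring index.
    resp = es.search(
        index="hpc-syslog-*",
        query={"range": {"@timestamp": {"gte": "now-1h"}}},
        size=0,
    )
    print(resp["hits"]["total"]["value"])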
LLNL participates in the ISC High Performance Conference (ISC23) on May 21–25, 2023.
Supercomputers have broken the exascale barrier, marking a new era in processing power, but the energy consumption of such machines cannot be allowed to grow unchecked.
UCLA's Institute for Pure & Applied Mathematics hosted LLNL's Erik Draeger for a talk about the challenges and possibilities of exascale computing.