This 2021 R&D 100 award-winning software solves data center bottlenecks by enabling resource types, schedulers, and framework services to be deployed as data centers evolve.
Science & Technology Review highlights the Exascale Computing Facility Modernization project that delivered the infrastructure required to bring exascale computing online in 2023.
Todd Gamblin, an LLNL Distinguished Member of Technical Staff, leads the Spack project, an open-source package manager whose rapidly growing global community has changed the way people use HPC software.
The Exascale Computing Project has compiled a playlist of videos from multiple national labs to highlight the impacts of exascale computing.
While LLNL awaits the arrival of El Capitan, physicists and computer scientists running scientific applications on testbeds are getting a taste of what to expect.
Employees gathered for the Lab’s first-ever Employee Engagement Day, held Oct. 11. The event featured food, drink, informative displays, historical films and more.
Climate change can bring not only heat, but also increased humidity, reducing the efficiency of the evaporative coolers many HPC centers rely on.
Preparing the Livermore Computing Center for El Capitan and the exascale era of supercomputers required an entirely new way of thinking about the facility’s mechanical and electrical capabilities.
The second article in a series about the Lab's stockpile stewardship mission highlights computational models, parallel architectures, and data science techniques.
The first article in a series about the Lab's stockpile stewardship mission highlights the roles of computer simulations and exascale computing.
The new oneAPI Center of Excellence, involving the Center for Applied Scientific Computing, will accelerate ZFP compression software to advance exascale computing.
LLNL participates in the CMD-IT/ACM Richard Tapia Celebration of Diversity in Computing Conference (Tapia2022) on September 7–10.
The Advanced Technology Development and Mitigation program within the Exascale Computing Project shows that the best way to support the mission is through open collaboration and a sustainable software infrastructure.
LLNL has signed a memorandum of understanding with HPC facilities in Germany, the United Kingdom, and the U.S., jointly forming the International Association of Supercomputing Centers.
LLNL's Greg Becker spoke with HPC Tech Shorts to explain how Spack's binary cache works. The video “Get your HPC codes installed and running in minutes using Spack’s Binary Cache” runs 15:11.
Computer scientist Kathryn Mohror is among LLNL's recipients of the Department of Energy’s Early Career Research Program awards.
Learn how to use LLNL software in the cloud. In August, we will host tutorials in collaboration with AWS on how to install and use LLNL open-source projects on AWS EC2 instances. No previous experience necessary.
An LLNL team will be among the first researchers to perform work on the world’s first exascale supercomputer—Oak Ridge National Laboratory’s Frontier—when they use the system to model cancer-causing protein mutations.
Since 2018, software developer Trevor Smith has been putting his education and computing skills to good use supporting the Lab's HPC environment. He helps develop, deploy, and manage systems software that enables effective and secure use of computing resources.
The Lab's upcoming exascale-capable supercomputer will implement a converged accelerated processing unit, or APU, a hybrid CPU-GPU compute engine.
In a presentation delivered to the 79th HPC User Forum at Oak Ridge National Laboratory, LLNL's Terri Quinn revealed that AMD’s forthcoming MI300 APU would be the computational bedrock of El Capitan, which is slated for installation at LLNL in late 2023.
The utility-grade infrastructure project massively upgraded the power and water-cooling capacity of the adjacent Livermore Computing Center, preparing it to house next-generation exascale-class supercomputers for NNSA.
This year marks the 30th anniversary of the High Performance Storage System (HPSS) collaboration, comprising five DOE HPC national laboratories: LLNL, Lawrence Berkeley, Los Alamos, Oak Ridge, and Sandia, along with industry partner IBM.
Livermore’s archive leverages a hierarchical storage management application running on a cluster architecture that is user-friendly, extremely scalable, and lightning fast.
After 30 years, the High Performance Storage System (HPSS) collaboration continues to lead and adapt to the needs of the time while honoring its primary mission: long-term stewardship of the crown-jewel data of government, academic, and commercial organizations around the world.