Follow along at your own pace with tutorials for several open-source HPC software projects.
Topic: Multimedia
Listen to the latest Big Ideas Lab podcast episode on LLNL supercomputing! This article contains links to the podcast on Spotify and Apple.
LLNL is applying ML to real-world applications on multiple scales. Researchers explain why water filtration, wildfires, and carbon capture are becoming more solvable thanks to groundbreaking data science methodologies on some of the world’s fastest computers.
Two LLNL teams have devised ingenious solutions to some of HPC's more vexing problems. For their efforts, they have won awards coveted by scientists across the technology fields.
Discover how the software architecture and storage systems that will drive El Capitan’s performance will help LLNL and the NNSA Tri-Labs push the boundaries of computational science.
LLNL’s fusion ignition breakthrough, more than 60 years in the making, was enabled by a combination of traditional fusion target design methods, HPC, and AI techniques.
By factoring in weather variables that directly impact the electrical grid, such as wildfire, flooding, wind, and sunlight, researchers can improve electrical grid model projections for a more stable future.
Over several years, teams have prepared the infrastructure for El Capitan, designing and building the computing facility’s upgrades for power and cooling, installing storage and compute components, and connecting everything together. Once all the pieces are in place, the life of El Cap as a world-class supercomputer begins.
LLNL's Ian Lee joins a Dots and Bridges panel to discuss HPC as a critical resource for data assimilation and numerical weather prediction research.
Unique among data compressors, zfp is designed to be a compact number format for storing data arrays in memory in compressed form while still supporting high-speed random access.
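The random-access property comes from zfp's fixed-rate mode: a d-dimensional array is partitioned into blocks of 4^d values, and every block is encoded in the same number of bits, so the location of any block, and hence any element, can be computed directly. The sketch below is illustrative arithmetic only, not the zfp API; the function name and row-major block ordering are assumptions for the example.

```python
# Illustrative sketch (not the zfp API): why fixed-rate block compression
# permits O(1) random access. Every 4x4 block of a 2D array compresses
# to exactly rate_bits_per_value * 16 bits, so a block's bit offset is
# a closed-form function of its index.

def block_bit_offset(i, j, nx, rate_bits_per_value, d=2):
    """Bit offset of the block containing element (i, j) of an
    nx-wide 2D array, assuming row-major block ordering."""
    values_per_block = 4 ** d          # zfp blocks are 4x4 in 2D
    bits_per_block = rate_bits_per_value * values_per_block
    blocks_per_row = (nx + 3) // 4     # ceil(nx / 4)
    block_index = (i // 4) * blocks_per_row + (j // 4)
    return block_index * bits_per_block

# Example: 1024-wide array at 8 bits/value; element (5, 9) lies in
# block row 1, block column 2, i.e., block index 1*256 + 2 = 258.
print(block_bit_offset(5, 9, nx=1024, rate_bits_per_value=8))  # 258 * 128 = 33024
```

Because the offset is computable rather than stored, a decompressor can seek straight to the block holding a requested element, decode just those 4^d values, and skip the rest of the array.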
Livermore CTO Bronis de Supinski joins the Let's Talk Exascale podcast to discuss the details of LLNL's upcoming exascale supercomputer.
Variorum provides robust, portable interfaces that allow us to measure and optimize computation at the physical level: temperature, cycles, energy, and power. With that foundation, we can get the best possible use of our world-class computing resources.
The new model addresses a problem in simulating RAS behavior, where conventional methods come up short of reaching the time- and length-scales needed to observe biological processes of RAS-related cancers.
UCLA's Institute for Pure & Applied Mathematics hosted LLNL's Tzanio Kolev for a talk about high-order finite element methods.
UCLA's Institute for Pure & Applied Mathematics hosted LLNL's Erik Draeger for a talk about the challenges and possibilities of exascale computing.
LLNL security operations team lead Ian Lee recently gave a webinar describing how the Lab uses Elasticsearch for HPC. The 19:27 video is available on demand.
From our fall 2022 hackathon, watch as participants trained an autonomous race car with reinforcement learning algorithms.
The Exascale Computing Project has compiled a playlist of videos from multiple national labs to highlight the impacts of exascale computing.
Preparing the Livermore Computing Center for El Capitan and the exascale era of supercomputers required an entirely new way of thinking about the facility’s mechanical and electrical capabilities.
LLNL's Greg Becker spoke with HPC Tech Shorts to explain how Spack's binary cache works. The video “Get your HPC codes installed and running in minutes using Spack’s Binary Cache” runs 15:11.
This year marks the 30th anniversary of the High Performance Storage System (HPSS) collaboration, which comprises five DOE HPC national laboratories (LLNL, Lawrence Berkeley, Los Alamos, Oak Ridge, and Sandia) along with industry partner IBM.