LLNL participates in the 32nd annual Supercomputing Conference (SC20) held virtually on November 9–19, 2020.
LLNL has installed a new AI accelerator in the Corona supercomputer, allowing researchers to offload a simulation's AI calculations to the accelerator while the simulation runs.
LLNL computer scientist and Spack PI Todd Gamblin explains how the package manager works in this video from CppCon (C++ Conference). The video runs 6:53.
LLNL will celebrate the second annual Exascale Day on October 18 with the DOE's Exascale Computing Project, Hewlett Packard Enterprise, Argonne, and Oak Ridge.
Funding from the CARES Act enabled LLNL and industry partners to more than double the speed of the Corona supercomputing cluster, to a peak of more than 11 petaflops.
LLNL will provide significant computing resources to students and faculty from 9 universities that were newly selected for participation in the Predictive Science Academic Alliance Program (PSAAP).
This summer, the Computing Scholar Program welcomed 160 undergraduate and graduate students into virtual internships. The Lab’s open-source community was already primed for student participation.
When it comes to solving complex technical issues for GPU-accelerated supercomputers, the national labs have found that tackling them is “better together.”
An interview with LLNL's Todd Gamblin about Spack, discussing his current research along with his involvement in the project.
Computing’s summer hackathon was held virtually on August 6–7 and featured presentations from teams who tested software technologies, expanded project features, or explored new ways of analyzing data.
Computing’s fourth annual Developer Day was held as a virtual event on July 30 with 8 speakers and 90 participants.
Members of the leadership team of the DOE’s Exascale Computing Project cover the state of the project on the "Let's Talk Exascale" podcast. Episode 72 runs 1:26:54 and includes a transcript.
LLNL computer scientist Stephen Herbein discusses the open-source Flux Framework HPC software on this video episode of Next Platform TV. His segment begins at 27:34.
In this issue featuring LLNL's R&D 100 Award winners from 2019, software deployment is faster and easier with the Spack package management tool.
In this issue featuring LLNL's R&D 100 Award winners from 2019, the versatile Scalable Checkpoint/Restart framework offers more reliable simulation performance.
Ian Karlin discusses AI hardware integration into HPC systems and workflows, followed by Brian Van Essen's talk on software integration of AI accelerators in HPC.
Scheduled for completion in 2022, the project will expand the Livermore Computing Center's power and cooling capacity in preparation for exascale supercomputing hardware.
LLNL participates in the digital ISC High Performance Conference (ISC20) on June 22–25.
The Center for Efficient Exascale Discretizations recently released MFEM v4.1, which introduces features important for the nation’s first exascale supercomputers. LLNL's Tzanio Kolev explains.
Highlights include the response to the COVID-19 pandemic, high-order matrix-free algorithms, and managing memory spaces.
LLNL's latest HPC system, aptly nicknamed “Magma,” delivers 5.4 petaflops of peak performance crammed into 760 compute nodes.
Elaine Raybourn interviews LLNL's Todd Gamblin about the Spack project's experience working remotely.
The Maestro Workflow Conductor is a lightweight, open-source Python tool that can launch multi-step software simulation workflows in a clear, concise, consistent, and repeatable manner.
The Exascale Computing Project's Let's Talk Exascale podcast has a new episode (5:54) featuring LLNL's Todd Gamblin, who talks about the package manager Spack.
AMD will supply upgraded GPUs for the Corona supercomputing cluster, which will be used by scientists working on discovering potential antibodies and antiviral compounds for SARS-CoV-2.