LLNL will provide significant computing resources to students and faculty from 9 universities that were newly selected for participation in the Predictive Science Academic Alliance Program (PSAAP).
This summer, the Computing Scholar Program welcomed 160 undergraduate and graduate students into virtual internships. The Lab’s open-source community was already primed for student participation.
When it comes to solving complex technical issues for GPU-accelerated supercomputers, the national labs have found that tackling them is “better together.”
An interview with LLNL's Todd Gamblin about his current research and his role in the Spack project.
Computing’s summer hackathon was held virtually on August 6–7 and featured presentations from teams who tested software technologies, expanded project features, or explored new ways of analyzing data.
Computing’s fourth annual Developer Day was held as a virtual event on July 30 with 8 speakers and 90 participants.
Members of the leadership team of the DOE’s Exascale Computing Project cover the state of the project on the "Let's Talk Exascale" podcast. Episode 72 runs 1:26:54 and includes a transcript.
LLNL computer scientist Stephen Herbein discusses the open-source Flux Framework HPC software on this video episode of Next Platform TV. His segment begins at 27:34.
In this issue featuring LLNL's R&D 100 Award winners from 2019, the versatile Scalable Checkpoint/Restart framework offers more reliable simulation performance.
In this issue featuring LLNL's R&D 100 Award winners from 2019, software deployment is faster and easier with the Spack package management tool.
Ian Karlin discusses integrating AI hardware into HPC systems and workflows, followed by Brian Van Essen's talk on software integration of AI accelerators in HPC.
Scheduled for completion in 2022, the project will expand the Livermore Computing Center's power and cooling capacity in preparation for exascale supercomputing hardware.
The Center for Efficient Exascale Discretizations recently released MFEM v4.1, which introduces features important for the nation’s first exascale supercomputers. LLNL's Tzanio Kolev explains.
LLNL participates in the digital ISC High Performance Conference (ISC20), held June 22–25.
Highlights include the Lab's response to the COVID-19 pandemic, high-order matrix-free algorithms, and managing memory spaces.
LLNL's latest HPC system, aptly nicknamed "Magma," delivers 5.4 petaflops of peak performance packed into 752 compute nodes.
Elaine Raybourn interviews LLNL's Todd Gamblin about the Spack project's experience working remotely.
The Maestro Workflow Conductor is a lightweight, open-source Python tool that can launch multi-step software simulation workflows in a clear, concise, consistent, and repeatable manner.
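To give a sense of how Maestro describes such workflows, here is a minimal sketch of a study specification; the step names and commands are illustrative examples, not taken from the source:

```yaml
description:
  name: hello_world
  description: A minimal two-step study.

study:
  - name: say-hello
    description: Write a greeting to a file in this step's workspace.
    run:
      cmd: |
        echo "Hello, World!" > hello.txt

  - name: count-words
    description: Count the words in the greeting produced upstream.
    run:
      cmd: |
        wc -w $(say-hello)/hello.txt
      depends: [say-hello]
```

A spec like this would be launched with `maestro run study.yaml`; Maestro resolves the step dependencies and runs each command in its own workspace.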
The Exascale Computing Project's Let's Talk Exascale podcast has a new episode featuring LLNL's Todd Gamblin, who talks about the package manager Spack. Episode 67 runs 5:54 and includes a transcript.
AMD will supply upgraded GPUs for the Corona supercomputing cluster, which will be used by scientists working on discovering potential antibodies and antiviral compounds for SARS-CoV-2.
The White House announced the COVID-19 HPC Consortium to provide access to the world’s most powerful HPC resources that can advance the pace of scientific discovery in the fight to stop the virus.
This week, LLNL highlighted one of the latest additions to its computing arsenal: Magma. Magma is a Penguin Computing "Relion" system comprising 752 nodes with Intel Xeon Platinum 9242 processors.
With advanced CPUs and GPUs developed by AMD, El Capitan's peak performance is expected to exceed 2 exaFLOPS, which would make it the fastest supercomputer in the world when it is deployed in 2023.
On January 31, 2020, the Sequoia supercomputer and its file system were decommissioned after nearly 8 years of remarkable service and achievements.
New year, new hackathon! The January 30–31 event was Computing’s 23rd hackathon and the 1st scheduled in the winter season.