Ian Karlin discusses the integration of AI hardware into HPC systems and workflows, followed by a talk by Brian Van Essen on the software integration of AI accelerators in HPC.
Scheduled for completion in 2022, the project will expand the Livermore Computing Center's power and cooling capacity in preparation for exascale supercomputing hardware.
The Center for Efficient Exascale Discretizations recently released MFEM v4.1, which introduces features important for the nation’s first exascale supercomputers. LLNL's Tzanio Kolev explains.
LLNL participates in the digital ISC High Performance Conference (ISC20) on June 22–25.
Highlights include response to the COVID-19 pandemic, high-order matrix-free algorithms, and managing memory spaces.
LLNL's latest HPC system, aptly nicknamed “Magma,” delivers 5.4 petaflops of peak performance packed into 752 compute nodes.
Elaine Raybourn interviews LLNL's Todd Gamblin about the Spack project's experience working remotely.
The Maestro Workflow Conductor is a lightweight, open-source Python tool that can launch multi-step software simulation workflows in a clear, concise, consistent, and repeatable manner.
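Maestro drives these workflows from a YAML study specification that names each step and its command. A minimal sketch of such a spec is below; the study name, step, and parameter values are illustrative, not taken from a real Maestro study.

```yaml
description:
    name: hello_world
    description: A minimal example study with one parameterized step.

study:
    - name: say-hello
      description: Write a greeting for each parameter value.
      run:
          cmd: |
            echo "Hello, $(NAME)!" > hello.txt

global.parameters:
    NAME:
        values: [World, Universe]
        label: NAME.%%
```

Assuming the spec is saved as `hello_world.yaml`, a study of this shape would be launched with `maestro run hello_world.yaml`, which expands the step once per parameter value and tracks each run in its own output directory.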
The Exascale Computing Project's Let's Talk Exascale podcast has a new episode featuring LLNL's Todd Gamblin, who talks about the package manager Spack. Episode 67 runs 5:54 and includes a transcript.
AMD will supply upgraded GPUs for the Corona supercomputing cluster, which will be used by scientists searching for potential antibodies and antiviral compounds for SARS-CoV-2.
The White House announced the COVID-19 HPC Consortium to provide access to the world’s most powerful HPC resources that can advance the pace of scientific discovery in the fight to stop the virus.
This week, LLNL highlighted one of the latest additions to its computing arsenal: Magma. Magma is a Penguin Computing “Relion” system composed of 752 nodes with Intel Xeon Platinum 9242 processors.
With advanced CPUs and GPUs developed by AMD, El Capitan’s peak performance is expected to exceed 2 exaFLOPS, which would make it the fastest supercomputer in the world when it is deployed in 2023.
On January 31, 2020, the Sequoia supercomputer and its file system were decommissioned after nearly 8 years of remarkable service and achievements.
New year, new hackathon! The January 30–31 event was Computing’s 23rd hackathon and the 1st scheduled in the winter season.
LLNL is now home to the world’s largest Spectra TFinity system, following a complete replacement of the tape library hardware that supports Livermore’s data archives.
A multi-institutional consortium aims to speed up the drug discovery pipeline by building predictive, data-driven pharmaceutical models.
The Summit Sierra team, consisting of 45 staff at LLNL and ORNL, received a DOE Secretary’s Achievement Award for delivering the Sierra and Summit supercomputers, respectively.
The extreme-scale scientific software development kit (xSDK) is an ecosystem of independently developed math libraries and scientific domain components.
A software product from the ECP called UnifyFS can provide I/O performance portability for applications, enabling them to use distributed in-system storage and the parallel file system.
At SC19, there were Spack events each day of the conference. Spack is an open-source scientific software package manager for HPC, Linux, and macOS environments.
The 2019 International Conference for High Performance Computing, Networking, Storage, and Analysis—SC19—returned to Denver. Once again LLNL made its presence known as a force in supercomputing.
Computing’s 22nd hackathon was held on October 24–25. The event has become so popular that a fourth hackathon will be added to the seasonal rotation in early 2020.
The HPCwire Editors’ and Readers’ Choice awards for Top Supercomputing Achievement recognized Cray, LLNL, and two other labs for developing the first U.S. exascale-class supercomputers.
Penguin Computing announced that Corona, a high performance computing cluster delivered to LLNL in 2018, has been upgraded with the newest AMD Radeon Instinct MI60 accelerators.