The White House announced the COVID-19 HPC Consortium to provide access to HPC resources that can advance scientific discovery in the fight to stop the virus.
This week, LLNL highlighted one of the latest additions to its computing arsenal: Magma, a Penguin Computing “Relion” system comprising 752 nodes with Intel Xeon Platinum 9242 processors.
With advanced CPUs and GPUs developed by AMD, El Capitan’s peak performance is expected to exceed 2 exaflops, which would make it the fastest supercomputer in the world when it is deployed in 2023.
On January 31, 2020, the Sequoia supercomputer and its file system were decommissioned after nearly 8 years of remarkable service and achievements.
New year, new hackathon! The January 30–31 event was Computing’s 23rd hackathon and the 1st scheduled in the winter season.
LLNL is now home to the world’s largest Spectra TFinity system, following a complete replacement of the tape library hardware that supports Livermore’s data archives.
A multi-institutional consortium aims to speed up the drug discovery pipeline by building predictive, data-driven pharmaceutical models.
The Summit Sierra team, consisting of 45 staff at LLNL and ORNL, received a DOE Secretary’s Achievement Award for delivering the Sierra and Summit supercomputers, respectively.
The extreme-scale scientific software development kit (xSDK) is an ecosystem of independently developed math libraries and scientific domain components.
TEIMS manages collaborative tasks, site characterization, risk assessment, decision support, compliance monitoring, and regulatory reporting for the Environmental Restoration Department.
Researchers develop innovative data representations and algorithms to provide faster, more efficient ways to preserve information encoded in data.
Computational Scientist Ramesh Pankajakshan came to LLNL in 2016 directly from the University of Tennessee at Chattanooga. But unlike most recent hires from universities, he switched from research professor to professional researcher.
Highlights include perspectives on machine learning and artificial intelligence in science, data-driven models, autonomous vehicle operations, and the OpenMP 5.0 standard.
FGPU provides examples for porting Fortran codes to IBM OpenPOWER platforms such as LLNL's Sierra supercomputer.
Umpire is a resource management library that allows the discovery, provision, and management of memory on next-generation architectures.
Computer scientist Greg Becker contributes to HPC research and development projects for LLNL’s Livermore Computing division.
Highlights include debris and shrapnel modeling at NIF, scalable algorithms for complex engineering systems, magnetic fusion simulation, and data placement optimization on GPUs.
Users need tools that address bottlenecks, work with programming models, provide automatic analysis, and overcome the complexities and changing demands of exascale architectures.
This open-source file system framework supports hierarchical HPC storage systems by utilizing node-local burst buffers.
Highlights include CASC director Jeff Hittinger's vision for the center as well as recent work with PruneJuice, DataRaceBench, Caliper, and SUNDIALS.
LLNL's interconnection networks projects improve the communication and overall performance of parallel applications using interconnect topology-aware task mapping.
The PRUNERS Toolset offers four novel debugging and testing tools to assist programmers with detecting, remediating, and preventing errors in a coordinated manner.
LLNL's Advanced Simulation and Computing (ASC) program formed the Advanced Architecture and Portability Specialists team to help LLNL code teams identify and implement optimal porting strategies.
BLT software supports HPC software development with built-in CMake macros for external libraries, code health checks, and unit testing.
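BLT's CMake macros can be sketched in a short CMakeLists.txt fragment. This is a minimal sketch assuming BLT is vendored as a submodule; the project, target, and file names are illustrative, not from the source.

```cmake
cmake_minimum_required(VERSION 3.8)
project(demo LANGUAGES CXX)

# Pull in BLT's macros (path assumes BLT lives in a "blt" subdirectory).
include(blt/SetupBLT.cmake)

# Hypothetical executable target built with BLT's wrapper macro.
blt_add_executable(NAME    demo_app
                   SOURCES demo_app.cpp)

# Register a unit test that BLT wires into CTest.
blt_add_test(NAME    demo_app_smoke
             COMMAND demo_app)
```

The macros hide per-compiler and per-platform boilerplate, which is what makes the same build logic portable across the HPC toolchains the blurb alludes to.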
Highlights include the latest work with RAJA, the Exascale Computing Project, algebraic multigrid preconditioners, and OpenMP.