The 2022 International Conference for High Performance Computing, Networking, Storage, and Analysis (SC22) returned to Dallas as a large contingent of LLNL staff participated in sessions, panels, paper presentations, and workshops centered on HPC.
The award recognizes progress in the team's machine learning-based approach to modeling inertial confinement fusion (ICF) experiments, which has led to faster and more accurate models of ICF implosions.
LLNL is participating in the 34th annual Supercomputing Conference (SC22), which will be held both virtually and in Dallas on November 13–18, 2022.
Science & Technology Review highlights the Exascale Computing Facility Modernization project that delivered the infrastructure required to bring exascale computing online in 2023.
Preparing the Livermore Computing Center for El Capitan and the exascale era of supercomputers required an entirely new way of thinking about the facility’s mechanical and electrical capabilities.
Computing’s annual Developer Day took place as a hybrid event on July 21, featuring lightning talks, a town hall discussion, and guest speakers.
The Lab's upcoming exascale-capable supercomputer will implement a converged accelerated processing unit (APU), a hybrid CPU-GPU compute engine.
The utility-grade infrastructure project massively upgraded the power and water-cooling capacity of the Livermore Computing Center, preparing it to house next-generation exascale-class supercomputers for NNSA.
As the U.S. welcomed the world’s first “true” exascale supercomputer, three predecessor machines for LLNL's future exascale system El Capitan ranked highly on the latest TOP500 list of the world’s most powerful supercomputers.
LLNL participates in the ISC High Performance conference (ISC22) from May 29 through June 2.
The Multiphysics on Advanced Platforms Project (MAPP) incorporates multiple software packages into one integrated code so that multiphysics simulation codes can perform at scale on present and future supercomputers.
El Capitan will have a peak performance of more than 2 exaflops—roughly 16 times faster on average than the Sierra system—and is projected to be several times more energy efficient than Sierra.
Livermore Computing (LC) sited two different AI accelerators in 2020: the Cerebras wafer-scale AI engine, attached to Lassen, and an AI accelerator from SambaNova Systems, integrated into the Corona cluster.
LLNL has established the AI Innovation Incubator (AI3), a collaborative hub aimed at uniting experts from LLNL, industry, and academia to advance AI for scientific and commercial applications.
For the first time, SC21 went hybrid, with dozens of in-person and virtual workshops, technical paper presentations, panels, tutorials, and “birds of a feather” sessions.
LLNL is participating in the 33rd annual Supercomputing Conference (SC21), which will be held both virtually and in St. Louis on November 14–19, 2021.
Researchers have found that fluctuations in qubits can be highly correlated. The team also linked tiny error-causing perturbations in the qubits’ charge state to the absorption of cosmic rays.
The latest issue of LLNL's Science & Technology Review magazine showcases Computing in the cover story alongside a commentary by Bruce Hendrickson.
In his opening keynote address at the AI Systems Summit, LLNL CTO Bronis de Supinski described the integration of two AI-specific systems to achieve system-level heterogeneity.
CTO Bronis de Supinski discusses the integrated storage strategy of the future El Capitan exascale supercomputing system, which will have more than 2 exaflops of raw computing power spread across its compute nodes.
A near-node local storage innovation called Rabbit factored heavily into LLNL’s decision to select Cray’s proposal for its CORAL-2 machine, the Lab’s first exascale-class supercomputer, El Capitan.
Proxy apps serve as specific targets for testing and simulation without the time, effort, and expertise that porting or changing most production codes would require.
Highlights include the response to the COVID-19 pandemic, high-order matrix-free algorithms, and memory space management.
LLNL's Advanced Simulation and Computing (ASC) program formed the Advanced Architecture and Portability Specialists team to help LLNL code teams identify and implement optimal porting strategies.
Livermore computer scientists have helped create a flexible framework that aids programmers in writing source code that runs effectively on multiple hardware architectures.
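The framework is not named in this summary, so the C++ sketch below is only an illustration of the general performance-portability pattern such libraries rely on, assuming a policy-based design; the forall, seq_exec, and openmp_exec names are hypothetical. The kernel body is written once as a lambda, and a compile-time execution policy selects the backend.

// Minimal sketch of a performance-portability abstraction (hypothetical names).
// The loop body is written once; the execution-policy tag chooses how it runs.
#include <cstddef>
#include <iostream>
#include <vector>

struct seq_exec {};     // run sequentially on the host
struct openmp_exec {};  // run with OpenMP threads when compiled with OpenMP

template <typename Body>
void forall(seq_exec, std::size_t n, Body&& body) {
    for (std::size_t i = 0; i < n; ++i) body(i);
}

template <typename Body>
void forall(openmp_exec, std::size_t n, Body&& body) {
#ifdef _OPENMP
    #pragma omp parallel for
#endif
    for (std::size_t i = 0; i < n; ++i) body(i);
}

int main() {
    const std::size_t n = 1000;
    std::vector<double> a(n, 1.0), b(n, 2.0), c(n, 0.0);

    // The same kernel can be retargeted (for example to a GPU policy in a full
    // framework) by swapping the policy tag, without touching the loop body.
    forall(seq_exec{}, n, [&](std::size_t i) { c[i] = a[i] + b[i]; });

    std::cout << "c[0] = " << c[0] << "\n";  // prints 3
    return 0;
}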