Two LLNL teams have devised ingenious solutions to some of the field's more vexing difficulties. For their efforts, they have won awards coveted by scientists across the technology sector.
Topic: Power Management
Accounting for variation in low-level hardware features across multiple GPU and CPU architectures.
LLNL’s HPC capabilities play a significant role in international science research and innovation, and Lab researchers have won 10 R&D 100 Awards in the Software/Services category in the past decade.
LLNL participates in the ISC High Performance Conference (ISC23) on May 21–25, 2023.
Highlights include scalable deep learning, high-order finite elements, data race detection, and reduced order models.
Highlights include the latest work with RAJA, the Exascale Computing Project, algebraic multigrid preconditioners, and OpenMP.
libMSR provides a convenient interface for accessing Model Specific Registers (MSRs) and allows tools to utilize their full functionality.
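Much of the value of an MSR library lies in hiding raw bit manipulation. As a minimal illustration (not libMSR's actual API), the Python sketch below decodes the unit fields of Intel's MSR_RAPL_POWER_UNIT register (address 0x606), whose bit layout is documented in the Intel Software Developer's Manual; the raw register value used here is hypothetical.

```python
def bits(value: int, hi: int, lo: int) -> int:
    """Extract bits hi..lo (inclusive) from a 64-bit MSR value."""
    return (value >> lo) & ((1 << (hi - lo + 1)) - 1)

def decode_rapl_power_unit(raw: int) -> dict:
    """Decode MSR_RAPL_POWER_UNIT (0x606) per the Intel SDM layout:
    bits 3:0 power units, bits 12:8 energy status units, bits 19:16
    time units. A field value of N means the unit is 1 / 2**N
    (watts, joules, and seconds, respectively)."""
    return {
        "power_watts": 1 / 2 ** bits(raw, 3, 0),
        "energy_joules": 1 / 2 ** bits(raw, 12, 8),
        "time_seconds": 1 / 2 ** bits(raw, 19, 16),
    }

# Hypothetical raw register value, for illustration only.
units = decode_rapl_power_unit(0x000A0E03)
print(units)  # power 0.125 W, energy ~61 uJ, time ~0.98 ms
```

On real hardware the raw value would come from a privileged read of the register (e.g., via the Linux `/dev/cpu/*/msr` device), which is exactly the kind of detail a library like libMSR abstracts away.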
These techniques emulate the behavior of anticipated future architectures on current machines to improve performance modeling and evaluation.
