Splitting memory resources in high-performance computing between local nodes and a larger shared remote pool can better support diverse applications.
LLNL participates in the ISC High Performance Conference (ISC23) on May 21–25, 2023.
An LLNL Distinguished Member of Technical Staff, Gokhale is considered an expert in her field and continues to enjoy the fast pace of innovation and change in computing.
LLNL is participating in the 34th annual Supercomputing Conference (SC22), which will be held both virtually and in Dallas on November 13–18, 2022.
A research team won the Best Paper Award at PacificVis 2022 for a resolution-precision-adaptive representation technique that reduces mesh sizes, thereby shrinking the memory and storage footprints of large scientific datasets.
The Multiphysics on Advanced Platforms Project (MAPP) incorporates multiple software packages into one integrated code so that multiphysics simulation codes can perform at scale on present and future supercomputers.
A Livermore-developed programming approach helps software run on different platforms without major disruption to the source code.
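The blurb does not name the approach, but it likely refers to LLNL's RAJA portability abstraction layer. The following is a minimal sketch, assuming RAJA's documented forall interface, of how a loop body is written once and retargeted by swapping the execution policy rather than rewriting the kernel.

```cpp
#include "RAJA/RAJA.hpp"
#include <vector>

int main()
{
  const int N = 1000;
  std::vector<double> a(N);
  double* a_ptr = a.data();

  // The kernel body is written once as a lambda. Swapping the execution
  // policy (e.g., RAJA::omp_parallel_for_exec, or RAJA::cuda_exec<256>
  // with device-accessible memory) retargets the same source to other
  // backends without changing the loop body.
  RAJA::forall<RAJA::seq_exec>(RAJA::RangeSegment(0, N),
    [=](RAJA::Index_type i) { a_ptr[i] = 2.0 * static_cast<double>(i); });

  return 0;
}
```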
The latest issue of LLNL's Science & Technology Review magazine showcases Computing in the cover story alongside a commentary by Bruce Hendrickson.
Highlights include the response to the COVID-19 pandemic, high-order matrix-free algorithms, and memory space management.
Researchers develop innovative data representations and algorithms to provide faster, more efficient ways to preserve information encoded in data.
Umpire is a resource management library that allows the discovery, provision, and management of memory on next-generation architectures.
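As a sketch of how Umpire's C++ interface is typically used, an application asks the ResourceManager for a named Allocator and allocates through it regardless of where the memory actually lives; the resource names available (such as "HOST", "DEVICE", or "UM") depend on how Umpire was built and on the hardware.

```cpp
#include "umpire/ResourceManager.hpp"
#include "umpire/Allocator.hpp"

int main()
{
  // Umpire exposes each discovered memory resource through a named Allocator.
  auto& rm = umpire::ResourceManager::getInstance();
  umpire::Allocator alloc = rm.getAllocator("HOST");

  // Allocate and free 1 MiB through the same interface; on GPU systems,
  // requesting "DEVICE" or "UM" instead would place the data there.
  void* data = alloc.allocate(1024 * 1024);
  alloc.deallocate(data);

  return 0;
}
```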
Highlights include debris and shrapnel modeling at NIF, scalable algorithms for complex engineering systems, magnetic fusion simulation, and data placement optimization on GPUs.