Supported by the Advanced Simulation and Computing program, Axom focuses on developing software infrastructure components that can be shared by HPC apps running on diverse platforms.
Topic: Performance, Portability, and Productivity
LLNL participates in the digital ISC High Performance Conference (ISC21), held June 24 through July 2.
Computing relies on engineers like Stephanie Brink to keep the Lab's legacy codes running smoothly. “You’re only as fast as your slowest processor or your slowest function,” says Brink, who works in CASC. By analyzing a legacy code’s performance, Brink and her team can reduce the time it takes to run, allowing more critical science to be accomplished.
Highlights include scalable deep learning, high-order finite elements, data race detection, and reduced order models.
Our researchers will be well represented at the virtual SIAM Conference on Computational Science and Engineering (CSE21), held March 1–5. SIAM, the Society for Industrial and Applied Mathematics, is an international community of more than 14,500 individual members.
Proxy apps serve as specific targets for testing and simulation without the time, effort, and expertise that porting or changing most production codes would require.
Highlights include response to the COVID-19 pandemic, high-order matrix-free algorithms, and managing memory spaces.
Computational Scientist Ramesh Pankajakshan came to LLNL in 2016 directly from the University of Tennessee at Chattanooga. But unlike most recent hires from universities, he switched from research professor to professional researcher.
FGPU provides code examples for porting Fortran codes to run on IBM OpenPOWER platforms like LLNL's Sierra supercomputer.
Computer scientist Greg Becker contributes to HPC research and development projects for LLNL’s Livermore Computing division.
LLNL's Advanced Simulation and Computing program formed the Advanced Architecture and Portability Specialists team to help LLNL code teams identify and implement optimal porting strategies.
Highlights include the latest work with RAJA, the Exascale Computing Project, algebraic multigrid preconditioners, and OpenMP.
A new software model helps move million-line codes to various hardware architectures by automating data movement between memory spaces.
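The blurb does not spell out the model's mechanics, but the underlying idea of automated data movement can be illustrated with a minimal sketch: a wrapper that remembers which memory space holds valid data and copies only when the other side needs it. The ManagedArray class below is a hypothetical, host-only stand-in (both "spaces" are ordinary buffers), not the model's actual API; production libraries apply the same bookkeeping between CPU and GPU memory.

#include <cstddef>
#include <cstring>
#include <iostream>
#include <vector>

// Toy "managed array": tracks which memory space holds valid data and
// copies automatically when the other space touches it. Both spaces are
// ordinary host buffers here, so the sketch runs anywhere.
class ManagedArray {
public:
  explicit ManagedArray(std::size_t n)
      : host_(n, 0.0), device_(n, 0.0), device_dirty_(false) {}

  // Access from the "host" space: copy back if the "device" copy is newer.
  double* host_data() {
    if (device_dirty_) {
      std::memcpy(host_.data(), device_.data(), host_.size() * sizeof(double));
      device_dirty_ = false;
    }
    return host_.data();
  }

  // Access from the "device" space: push the host copy and mark it stale.
  double* device_data() {
    if (!device_dirty_) {
      std::memcpy(device_.data(), host_.data(), host_.size() * sizeof(double));
    }
    device_dirty_ = true;
    return device_.data();
  }

  std::size_t size() const { return host_.size(); }

private:
  std::vector<double> host_;
  std::vector<double> device_;
  bool device_dirty_;
};

int main() {
  ManagedArray a(4);
  double* h = a.host_data();
  for (std::size_t i = 0; i < a.size(); ++i) h[i] = static_cast<double>(i);

  // A "kernel" writes in the device space; the wrapper copied the data over.
  double* d = a.device_data();
  for (std::size_t i = 0; i < a.size(); ++i) d[i] *= 2.0;

  // Reading on the host triggers the copy back automatically.
  std::cout << a.host_data()[3] << "\n";  // prints 6
  return 0;
}

The point of the sketch is that the loop bodies never issue copies themselves; the wrapper decides when data must move, which is what lets large codes target new architectures without hand-managing transfers.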
Apollo, an auto-tuning extension of RAJA, improves performance portability in adaptive mesh refinement, multi-physics, and hydrodynamics codes via machine learning classifiers.
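Apollo's tuning builds on RAJA's separation of a loop body from its execution policy. The sketch below shows that separation using stock RAJA only; the daxpy-style kernel is illustrative, and Apollo's runtime hooks and classifier-driven policy selection are not part of this example.

#include "RAJA/RAJA.hpp"
#include <vector>

int main() {
  const int N = 1000;
  std::vector<double> x(N, 1.0), y(N, 2.0);
  double* xp = x.data();
  double* yp = y.data();

  // The loop body is written once; the execution policy template parameter
  // decides how it runs (sequentially here; OpenMP or GPU variants exist in
  // RAJA builds with those back ends enabled). Apollo's machine learning
  // classifiers choose among such policies at runtime.
  RAJA::forall<RAJA::seq_exec>(RAJA::RangeSegment(0, N),
                               [=](RAJA::Index_type i) {
    yp[i] += 2.0 * xp[i];
  });

  // Switching to a parallel policy changes only the template argument, e.g.
  //   RAJA::forall<RAJA::omp_parallel_for_exec>(...)
  return 0;
}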
LLNL computer scientists use machine learning to model and characterize the performance of adaptive applications and, ultimately, to accelerate their development.
LLNL researchers are finding that some factors are more important in determining HPC application performance than traditionally thought.
Performance analysis of parallel scientific codes is difficult. The HAC model allows performance data to be compared directly across domains and analyzed with the data visualization and analysis tools available in other domains.
This tool automatically diagnoses performance and correctness faults in MPI applications. It identifies abnormal MPI tasks and code regions and finds the least-progressed task.
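As a rough illustration of the "least-progressed task" idea, the sketch below has each MPI rank report a hypothetical progress counter and lets rank 0 name the straggler. The counter and its values are invented for the example; the tool described above identifies the least-progressed task automatically, without this kind of hand-written reporting.

#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int rank = 0, size = 1;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  // Hypothetical workload: higher ranks "progress" more slowly in this toy.
  long progress = 1000 - 10L * rank;

  // Rank 0 gathers every rank's progress counter.
  std::vector<long> all(size);
  MPI_Gather(&progress, 1, MPI_LONG, all.data(), 1, MPI_LONG, 0,
             MPI_COMM_WORLD);

  if (rank == 0) {
    int slowest = 0;
    for (int r = 1; r < size; ++r)
      if (all[r] < all[slowest]) slowest = r;
    std::printf("least-progressed task: rank %d (progress=%ld)\n",
                slowest, all[slowest]);
  }

  MPI_Finalize();
  return 0;
}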
These techniques emulate the behavior of anticipated future architectures on current machines to improve performance modeling and evaluation.
Olga Pearce studies how to detect and correct load imbalance in high performance computing applications.
Kathryn Mohror develops tools that give researchers the information they need to tune their programs and maximize results. After all, she says, “It’s all about getting the answers more quickly.”