Topic: Performance, Portability, and Productivity

Supported by the Advanced Simulation and Computing program, Axom focuses on developing software infrastructure components that can be shared by HPC apps running on diverse platforms.

Project

LLNL participates in the digital ISC High Performance Conference (ISC21), held June 24 through July 2.

News Item

LLNL's Computing directorate relies on engineers like Stephanie Brink to keep the Lab's legacy codes running smoothly. “You’re only as fast as your slowest processor or your slowest function,” says Brink, who works in CASC. By analyzing a legacy code’s performance, Brink and her team can reduce the time it takes to run, freeing up cycles for more critical science.

People Highlight

Our researchers will be well represented at the virtual SIAM Conference on Computational Science and Engineering (CSE21) on March 1–5. SIAM, the Society for Industrial and Applied Mathematics, has an international community of more than 14,500 individual members.

News Item

Computational Scientist Ramesh Pankajakshan came to LLNL in 2016 directly from the University of Tennessee at Chattanooga. But unlike most recent hires from universities, he switched from research professor to professional researcher.

People Highlight

FGPU provides code examples for porting Fortran codes to run on IBM OpenPOWER platforms such as LLNL's Sierra supercomputer.

Project
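FGPU's examples themselves are written in Fortran; as a rough illustration of the kind of directive-based GPU offload such a port involves, here is a minimal C++ sketch using OpenMP target directives. It is not taken from FGPU and assumes a compiler with OpenMP 4.5+ offload support.

```cpp
// Minimal sketch (not from FGPU, whose examples are in Fortran): the same
// directive-based GPU offload pattern expressed in C++ with OpenMP target
// directives.  Assumes a compiler with OpenMP 4.5+ offload support.
#include <cstdio>
#include <vector>

int main()
{
  const int n = 1 << 20;
  std::vector<double> x(n, 1.0), y(n, 2.0);
  const double a = 3.0;
  double* xp = x.data();
  double* yp = y.data();

  // Map the arrays to the device, run the loop on the GPU, copy y back.
  #pragma omp target teams distribute parallel for \
      map(to: xp[0:n]) map(tofrom: yp[0:n])
  for (int i = 0; i < n; ++i) {
    yp[i] = a * xp[i] + yp[i];
  }

  std::printf("y[0] = %f\n", yp[0]);  // expect 5.000000
  return 0;
}
```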

Computer scientist Greg Becker contributes to HPC research and development projects for LLNL’s Livermore Computing division.

People Highlight

LLNL's Advanced Simulation and Computing (ASC) program formed the Advanced Architecture and Portability Specialists team to help LLNL code teams identify and implement optimal porting strategies.

Project

A new software model helps move million-line codes to various hardware architectures by automating data movement in unique ways.

Project

Apollo, an auto-tuning extension of RAJA, improves performance portability in adaptive mesh refinement, multi-physics, and hydrodynamics codes via machine learning classifiers.

Project
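For context on what Apollo tunes: RAJA expresses loops against interchangeable execution policies, and Apollo's classifiers select among such variants. The sketch below is illustrative only; it uses real RAJA policy names but a plain compile-time alias rather than Apollo's run-time selection mechanism.

```cpp
// Minimal sketch (illustrative, not Apollo's actual API): a kernel written
// with RAJA::forall against an interchangeable execution policy.  Apollo's
// machine learning classifiers choose among such policy variants at run time;
// here the choice is just a compile-time alias.
#include "RAJA/RAJA.hpp"
#include <vector>

// Swap this alias (e.g., to RAJA::omp_parallel_for_exec) to retarget the same
// loop body to a different back end.  GPU policies such as RAJA::cuda_exec<256>
// additionally require a device-annotated lambda and device-accessible memory.
using Policy = RAJA::seq_exec;

void daxpy(double a, const std::vector<double>& x, std::vector<double>& y)
{
  const double* xp = x.data();
  double* yp = y.data();
  RAJA::forall<Policy>(
      RAJA::RangeSegment(0, static_cast<RAJA::Index_type>(y.size())),
      [=](RAJA::Index_type i) { yp[i] = a * xp[i] + yp[i]; });
}
```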

LLNL computer scientists use machine learning to model and characterize the performance of adaptive applications and, ultimately, accelerate their development.

Project

LLNL researchers are finding that some factors are more important in determining HPC application performance than traditionally thought.

Project

Performance analysis of parallel scientific codes is difficult. The HAC model allows performance data to be compared directly across domains and analyzed with the data visualization and analysis tools available in other domains.

Project

This tool automatically diagnoses performance and correctness faults in MPI applications. It identifies abnormal MPI tasks and code regions and finds the least-progressed task.

Project

These techniques emulate the behavior of anticipated future architectures on current machines to improve performance modeling and evaluation.

Project

Olga Pearce studies how to detect and correct load imbalance in high performance computing applications.

People Highlight

Kathryn Mohror develops tools that give researchers the information they need to tune their programs and maximize results. After all, she says, “It’s all about getting the answers more quickly.”

People Highlight