LLNL participates in the ISC High Performance Conference (ISC22), held May 29 through June 2.
Topic: Performance, Portability, and Productivity
LLNL’s Python 3–based ATS tool provides scientific code teams with automated regression testing across HPC architectures.
The Exascale Computing Project (ECP) 2022 Community Birds-of-a-Feather Days will take place May 10–12 via Zoom. The event provides an opportunity for the HPC community to engage with ECP teams to discuss our latest development efforts.
MAPP incorporates multiple software packages into one integrated code so that multiphysics simulation codes can perform at scale on present and future supercomputers.
Highlights include power grid challenges, performance analysis, complex boundary conditions, and a novel multiscale modeling approach.
SC21's inaugural Best Reproducibility Advancement Award went to an LLNL team for a benchmark suite aimed at simplifying the evaluation of approximation techniques for scientific applications.
A newly funded project involving LLNL computer scientist Ignacio Laguna will examine numerical aspects of porting scientific applications to different HPC platforms.
A Livermore-developed programming approach helps software run on different platforms without major disruption to the source code.
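As an illustration of this style of portability abstraction, the sketch below uses the open-source RAJA library (mentioned elsewhere on this page); the kernel, policy, and array names are illustrative rather than taken from any specific LLNL application. The loop body is written once, and retargeting it to another backend means changing only the execution policy.

```cpp
#include <RAJA/RAJA.hpp>
#include <vector>

int main()
{
  const int N = 1000;
  std::vector<double> a(N, 1.0), b(N, 2.0), c(N, 0.0);
  double* pa = a.data();
  double* pb = b.data();
  double* pc = c.data();

  // The kernel is expressed once; swapping the execution policy
  // (e.g., RAJA::omp_parallel_for_exec or RAJA::cuda_exec<256>)
  // moves it to another backend without touching the loop body.
  RAJA::forall<RAJA::seq_exec>(RAJA::RangeSegment(0, N), [=](int i) {
    pc[i] = pa[i] + pb[i];
  });

  return 0;
}
```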
Supported by the Advanced Simulation and Computing program, Axom focuses on developing software infrastructure components that can be shared by HPC apps running on diverse platforms.
LLNL participates in the digital ISC High Performance Conference (ISC21), held June 24 through July 2.
Computing relies on engineers like Stephanie Brink to keep the legacy codes running smoothly. “You’re only as fast as your slowest processor or your slowest function,” says Brink, who works in CASC. By analyzing a legacy code’s performance, Brink and her team can reduce the amount of time it takes to run and allow for more critical science to be accomplished.
The latest issue of LLNL's Science & Technology Review magazine showcases Computing in the cover story alongside a commentary by Bruce Hendrickson.
Highlights include scalable deep learning, high-order finite elements, data race detection, and reduced order models.
Our researchers will be well represented at the virtual SIAM Conference on Computational Science and Engineering (CSE21) on March 1–5. SIAM, the Society for Industrial and Applied Mathematics, has an international community of more than 14,500 individual members.
Proxy apps serve as specific targets for testing and simulation without the time, effort, and expertise that porting or changing most production codes would require.
Highlights include response to the COVID-19 pandemic, high-order matrix-free algorithms, and managing memory spaces.
Computational Scientist Ramesh Pankajakshan came to LLNL in 2016 directly from the University of Tennessee at Chattanooga. But unlike most recent hires from universities, he switched from research professor to professional researcher.
FGPU provides code examples for porting Fortran codes to run on IBM OpenPOWER platforms such as LLNL's Sierra supercomputer.
Computer scientist Greg Becker contributes to HPC research and development projects for LLNL’s Livermore Computing division.
LLNL's Advanced Simulation and Computing program formed the Advanced Architecture and Portability Specialists team to help LLNL code teams identify and implement optimal porting strategies.
Highlights include the latest work with RAJA, the Exascale Computing Project, algebraic multigrid preconditioners, and OpenMP.
A new software model helps move million-line codes to various hardware architectures by automating data movement in unique ways.
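For context on what automated data movement can look like, here is a generic, conceptual sketch; it is not the model described in the article, and the class and names are hypothetical. A small wrapper tracks where its data was last updated and copies it only when the other memory space requests it.

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

// Two memory "spaces"; std::vector stands in for real host and device memory.
enum class Space { Host, Device };

// Minimal managed array: tracks where its data was last written and copies it
// to the other space only when that space asks for it.
class ManagedArray {
public:
  explicit ManagedArray(std::size_t n) : host_(n), device_(n) {}

  double* data(Space where) {
    if (where != current_) {                    // data is stale in 'where'
      if (where == Space::Device)
        std::copy(host_.begin(), host_.end(), device_.begin());
      else
        std::copy(device_.begin(), device_.end(), host_.begin());
      current_ = where;
    }
    return where == Space::Device ? device_.data() : host_.data();
  }

private:
  std::vector<double> host_, device_;
  Space current_ = Space::Host;
};

int main() {
  ManagedArray x(4);
  double* h = x.data(Space::Host);              // touch on the "host"
  for (int i = 0; i < 4; ++i) h[i] = i;
  double* d = x.data(Space::Device);            // triggers a host-to-device copy
  std::cout << d[3] << "\n";                    // prints 3
  return 0;
}
```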
Apollo, an auto-tuning extension of RAJA, improves performance portability in adaptive mesh refinement, multi-physics, and hydrodynamics codes via machine learning classifiers.
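The snippet below is a heavily simplified, hypothetical sketch of the auto-tuning idea; it is not Apollo's interface, and all names and thresholds are invented. A stand-in "classifier" maps a simple loop feature to an execution policy, where a real tuner would learn that mapping from measured runtimes.

```cpp
#include <cstddef>
#include <iostream>

// Hypothetical policy labels a tuner might select among.
enum class Policy { Sequential, Threaded, Gpu };

const char* name(Policy p) {
  switch (p) {
    case Policy::Sequential: return "sequential";
    case Policy::Threaded:   return "threaded";
    case Policy::Gpu:        return "gpu";
  }
  return "unknown";
}

// Stand-in for a trained classifier: map a simple loop feature (trip count)
// to an execution policy. An actual auto-tuner would learn this mapping
// from timing measurements instead of using fixed thresholds.
Policy choose_policy(std::size_t trip_count) {
  if (trip_count < 1000)   return Policy::Sequential;
  if (trip_count < 100000) return Policy::Threaded;
  return Policy::Gpu;
}

int main() {
  for (std::size_t n : {std::size_t(500), std::size_t(50000), std::size_t(5000000)}) {
    std::cout << n << " iterations -> " << name(choose_policy(n)) << "\n";
  }
  return 0;
}
```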
LLNL researchers are finding that some factors are more important in determining HPC application performance than traditionally thought.
LLNL computer scientists use machine learning to model and characterize the performance of adaptive applications and, ultimately, to accelerate their development.