Lawrence Livermore heads to the ISC High Performance Conference (ISC19) in Frankfurt, Germany, on June 16–20. The event will draw more than 3,500 participants from the research and commercial communities, and 160 exhibitors will share the latest technology and products of interest to HPC developers and users. Be sure to follow LLNL Computing on Twitter with the #ISC19 hashtag.
Parallel Software Development Tools
We’re working on a new generation of tools to help our users with exascale machine bottlenecks. Our research emphasizes performance analysis and code correctness and aims to address these main challenges: seamless integration with programming models, scalability, automatic analysis, detection of inefficient resource usage, and tool modularity. View content related to Parallel Software Development Tools.
The RADIUSS (Rapid Application Development via an Institutional Universal Software Stack) project aims to lower cost and improve agility by encouraging adoption of our core open-source software products for use in institutional applications.
Like many LLNL computer scientists, Kathryn Mohror juggles multiple responsibilities both at her workplace and in the scientific community. She came to the Lab as a postdoctoral researcher in 2010 and joined Computation’s Center for Applied Scientific Computing (CASC) as a staff scientist in 2012. Today she leads CASC’s Data Analysis Group, mentors students, and conducts peer review for scientific journals.
Bolstered by ergonomic stretching and exercise breaks, Computation’s 20th hackathon saw high turnout, collaboration, experimentation, and fun.
FGPU provides code examples demonstrating how to port Fortran codes to run on IBM OpenPOWER platforms like LLNL’s Sierra supercomputer.
The 2019 Department of Energy (DOE) Performance, Portability and Productivity meeting will take place April 2–4, 2019, in Denver, CO. Attendees will share ideas and updates on performance portability (the ability for applications to run effectively on different systems without extreme customization) across DOE’s current and future supercomputers.
Umpire is a resource management library that allows the discovery, provision, and management of memory on next-generation architectures.
Users need tools that address the bottlenecks of exascale machines, work seamlessly with the programming models on the target machines, scale to the full size of the machine, provide the necessary automatic analysis capabilities, and remain flexible and modular enough to overcome the complexities and changing demands of exascale architectures.