As this three-part news series explains, LLNL is striving to create a computing ecosystem that operates at exascale speeds (more than 10¹⁸ calculations per second) to carry out its national security and science missions an order of magnitude faster than today’s high performance computing (HPC) systems. The Livermore Computing (LC) Division is developing software—including Flux, OpenZFS, and SCR—to support these systems.
Resource and job management software (RJMS) serves as the centerpiece of an HPC center. Within the HPC software stack, the RJMS has constant knowledge of both the workload and the available resources. Its job is to run users’ tasks on the system’s computing resources as efficiently as possible while handling concerns such as manageability, scalability, and network topology awareness. As workloads and systems grow more complex, traditional RJMS systems are confronting limits of expressibility and scalability they were not designed to handle. Enter Flux, Livermore’s next-generation RJMS, designed to meet the Laboratory’s exascale computing needs.
“Managing and scheduling the resources in large HPC centers today is much different than it was five to ten years ago,” says Dong Ahn, project leader of the Advanced Simulation and Computing (ASC) Program’s Next-Generation Computing Enablement effort. Ahn points to two broad categories of challenges that traditional RJMS systems may have difficulty resolving and that Flux is designed to address: the resource challenge and the workflow challenge.
The resource challenge stems from the need to incorporate an increasingly large number of new resource types into systems (e.g., burst buffers, graphics processing units, disk input/output, network bandwidth) and to manage the intricate relationships among them. Flux addresses this challenge with a graph-based resource-matching approach that schedules many resource types and their relationships effectively without increasing the overall software complexity of its schedulers.
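The core idea can be illustrated with a toy sketch. This is not Flux’s actual implementation (its scheduler represents resources in a far richer graph), but it shows why a graph helps: new resource types become new vertices and edges rather than new special-case scheduler logic, and a request is satisfied by walking the graph.

```python
# Toy sketch of graph-based resource matching (illustrative only, not
# Flux's real data structures). Resources are vertices; edges encode
# containment (cluster -> node -> GPU / burst buffer). A request is a
# count of each resource type, matched by a depth-first walk.
from collections import defaultdict

class ResourceGraph:
    def __init__(self):
        self.children = defaultdict(list)   # parent id -> child ids
        self.type_of = {}                   # resource id -> type
        self.allocated = set()              # ids already granted

    def add(self, rid, rtype, parent=None):
        self.type_of[rid] = rtype
        if parent is not None:
            self.children[parent].append(rid)

    def match(self, root, want):
        """Walk the graph from `root`, collecting free resources until
        `want` (a type -> count dict) is satisfied; return the matched
        ids, or None if the request cannot be filled."""
        need = dict(want)
        found = []
        stack = [root]
        while stack and any(n > 0 for n in need.values()):
            rid = stack.pop()
            if rid not in self.allocated and need.get(self.type_of[rid], 0) > 0:
                found.append(rid)
                need[self.type_of[rid]] -= 1
            stack.extend(self.children[rid])
        if any(n > 0 for n in need.values()):
            return None
        self.allocated.update(found)
        return found

# Build a tiny cluster: two nodes, each with a GPU and a burst buffer.
g = ResourceGraph()
g.add("cluster0", "cluster")
for i in range(2):
    g.add(f"node{i}", "node", "cluster0")
    g.add(f"gpu{i}", "gpu", f"node{i}")
    g.add(f"bb{i}", "burst-buffer", f"node{i}")

print(g.match("cluster0", {"node": 1, "gpu": 1}))  # ['node1', 'gpu1']
```

Adding a new resource type here means adding vertices of that type, not rewriting the matcher—the property the graph-based design is meant to preserve at scale.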
The workflow challenge emerges from the evolution of more diverse and dynamic workflows that run in today’s HPC centers. For example, applications are integrating machine learning and in-situ data processing into traditional scientific simulations. Flux addresses this through a novel “hierarchical” scheme, where nested Flux instances divide up the available resources and schedule the components that make up an application. Flux’s application programming interfaces (APIs) make it easy to portably and efficiently support these workflows.
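The hierarchical scheme can also be sketched in miniature. The class and method names below are hypothetical, not Flux’s API; the sketch only shows the shape of the idea: a parent instance hands disjoint subsets of its resources to nested child instances, and each child schedules its own jobs independently rather than funneling everything through one central queue.

```python
# Illustrative sketch of hierarchical scheduling (hypothetical names,
# not the Flux API). A parent instance carves its resources into
# disjoint shares for nested child instances; each child then
# schedules jobs on its own share independently.
class Instance:
    def __init__(self, cores):
        self.free = list(cores)     # cores this instance may grant
        self.children = []
        self.jobs = []              # (job name, granted cores)

    def spawn_child(self, ncores):
        """Carve ncores out of this instance and nest a child in them."""
        grant, self.free = self.free[:ncores], self.free[ncores:]
        child = Instance(grant)
        self.children.append(child)
        return child

    def run(self, name, ncores):
        """Schedule a job on this instance's own share of cores."""
        if len(self.free) < ncores:
            return None
        grant, self.free = self.free[:ncores], self.free[ncores:]
        self.jobs.append((name, grant))
        return grant

# A batch allocation of 8 cores, split between two workflow components:
top = Instance(range(8))
sim = top.spawn_child(6)    # e.g., the traditional simulation component
ml = top.spawn_child(2)     # e.g., an in-situ machine-learning component
sim.run("simulation", 6)
ml.run("training", 2)
print([name for name, _ in sim.jobs + ml.jobs])  # ['simulation', 'training']
```

Because each nested instance owns its share outright, components can be scheduled concurrently, and the hierarchy can be nested as deeply as a workflow requires.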
Mitigating these issues is crucial for the Laboratory, as they can significantly degrade the performance and capabilities of modern workflows on HPC resources and thus slow users’ progress in scientific research and mission-critical decisions. Ahn notes that, given that a single advanced technology system can cost hundreds of millions of dollars, the cost of suboptimal performance can be prohibitively large.
Livermore’s Flux team collaborates with other LLNL teams that increasingly require sophisticated workflow schemes (e.g., the Cancer Moonshot pilot project, a machine learning strategic initiative, and an uncertainty quantification pipeline), as well as with external research groups and teams at IBM Watson, Oak Ridge National Laboratory, and Argonne National Laboratory.
- In the image above, Flux’s fully hierarchical scheduling model nests Flux instances within batch allocations created by other traditional schedulers or by Flux itself.
- Flux: Building a Framework for Resource Management
- Flux on GitHub