Our research projects vary in size, scope, and duration, but they share a focus on developing tools and methods that help LLNL deliver on its missions to the nation and, more broadly, advance the state of the art in scientific HPC. Projects are organized here in three ways: Active projects are those currently funded and regularly updated. Legacy projects are no longer actively developed. The A-Z option sorts all projects alphabetically, both active and legacy.
ADAPD
ADAPD integrates expertise from DOE national labs to analyze growing global data streams and traditional intelligence data, enabling early warning of nuclear proliferation activities.
AIMS
AIMS (Analytics and Informatics Management Systems) develops integrated cyberinfrastructure for big climate data discovery, analytics, simulations, and knowledge innovation.
Alkemi
Simulation workflows for ALE methods often require a manual tuning process. We are developing novel predictive analytics for simulations and an infrastructure for integrating those analytics into simulation workflows.
Apollo
Apollo, an auto-tuning extension of RAJA, improves performance portability in adaptive mesh refinement, multi-physics, and hydrodynamics codes via machine learning classifiers.
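To give a rough sense of what Apollo tunes, the sketch below shows an ordinary RAJA loop whose execution policy is hard-coded; that policy choice is exactly the knob an auto-tuner like Apollo selects at run time. This is a hedged illustration of plain RAJA usage (with a made-up daxpy kernel), not Apollo's API.

```cpp
#include "RAJA/RAJA.hpp"
#include <vector>

// Illustrative kernel only: a simple daxpy-style update. Apollo's role (not
// shown here) is to choose the execution policy dynamically instead of the
// compile-time choice below.
void daxpy(double a, const std::vector<double>& x, std::vector<double>& y)
{
    const int N = static_cast<int>(x.size());
    const double* xp = x.data();
    double* yp = y.data();

    // The template parameter is the RAJA execution policy; alternatives on
    // suitable builds include RAJA::omp_parallel_for_exec or RAJA::cuda_exec<256>.
    RAJA::forall<RAJA::seq_exec>(RAJA::RangeSegment(0, N),
        [=] (RAJA::Index_type i) {
            yp[i] += a * xp[i];
        });
}
```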
Application-Level Resilience
Application-level resilience is emerging as an alternative to traditional fault tolerance approaches because it provides fault tolerance at a lower cost.
Ardra
Researchers are testing and enhancing a neutral particle transport code and its algorithm to ensure that they successfully scale to larger and more complex computing systems.
ASC Proxy Apps
Proxy apps serve as specific targets for testing and simulation without the time, effort, and expertise that porting or changing most production codes would require.
Ascending to Exascale
El Capitan will have a peak performance of more than 2 exaflops—roughly 16 times faster on average than the Sierra system—and is projected to be several times more energy efficient than Sierra.
AutomaDeD
This tool automatically diagnoses performance and correctness faults in MPI applications. It identifies abnormal MPI tasks and code regions and finds the least-progressed task.
Automated Testing System
LLNL’s Python 3–based ATS tool provides scientific code teams with automated regression testing across HPC architectures.
Autonomous MultiScale
AMS is a machine learning solution embedded into scientific applications to automatically replace fine-scale simulations with ancillary models.
Axom
Supported by the Advanced Simulation and Computing program, Axom focuses on developing software infrastructure components that can be shared by HPC apps running on diverse platforms.
Backlighter Tech
The latest generation of a laser beam–delay technique owes its success to collaboration, dedication, and innovation.
BLAST
BLAST is a high-order finite element hydrodynamics research code that improves the accuracy of simulations and provides a path to extreme parallel computing and exascale architectures.
BLT
BLT software supports HPC software development with built-in CMake macros for external libraries, code health checks, and unit testing.
BUILD
BUILD tackles the complexities of HPC software integration with dependency compatibility models, binary analysis tools, efficient logic solvers, and configuration optimization techniques.
Caliper
Caliper enables users to build customized performance measurement and analysis solutions by connecting independent context annotations, measurement services, and data processing services.
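As a small taste of the annotation side, the sketch below marks regions in application code with Caliper's C/C++ macros; which measurement and data-processing services attach to those regions is configured separately at run time. The surrounding solver routine is hypothetical, included only to give the annotations something to wrap.

```cpp
#include <caliper/cali.h>

// Hypothetical solver routine annotated with Caliper regions.
void solve_timestep()
{
    CALI_CXX_MARK_FUNCTION;        // marks this whole function as a Caliper region

    CALI_MARK_BEGIN("assembly");   // begin a named sub-region
    // ... assemble the linear system ...
    CALI_MARK_END("assembly");

    CALI_MARK_BEGIN("solve");
    // ... run the solver ...
    CALI_MARK_END("solve");
}
```

At run time, a built-in configuration such as `CALI_CONFIG=runtime-report` can attach measurement services to these annotations without changing the source.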
CHAI
A new software model helps move million-line codes to various hardware architectures by automating data movement between host and device memory.
Cluster Management Tools
Large Linux data centers require flexible system management. At Livermore Computing, we are committed to supporting our Linux ecosystem at the high end of commodity computing.
COVID-19 Operations
LivIT tackles challenges of workforce safety, telecommuting, cyber security protocols, National Ignition Facility software updates, and more.
COVID-19 R&D
From molecular screening, a software platform, and an online data portal to the computing systems that power these projects.
Cram
Cram lets you easily run many small MPI jobs within a single, large MPI job by splitting MPI_COMM_WORLD up into many small communicators to run each job in the cram file independently.
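The underlying MPI idea can be sketched in a few lines: split MPI_COMM_WORLD into sub-communicators and let each group act as its own small job. This is a minimal, hypothetical illustration of MPI_Comm_split, not Cram's actual implementation, and the choice of four ranks per job is an arbitrary assumption.

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int world_rank = 0, world_size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    // Assumption for illustration: carve the world into groups of 4 ranks,
    // each group standing in for one "small job".
    const int ranks_per_job = 4;
    const int color = world_rank / ranks_per_job;

    MPI_Comm job_comm;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &job_comm);

    int job_rank = 0;
    MPI_Comm_rank(job_comm, &job_rank);
    std::printf("world rank %d -> job %d, local rank %d\n",
                world_rank, color, job_rank);

    // Each group would now run its own job using job_comm instead of MPI_COMM_WORLD.

    MPI_Comm_free(&job_comm);
    MPI_Finalize();
    return 0;
}
```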
CT Image Enhancement
Researchers are developing enhanced computed tomography image processing methods for explosives identification and other national security applications.
Data-Intensive Computing Solutions
New platforms are improving big data computing on Livermore’s high performance computers.