Our research projects vary in size, scope, and duration, but they share a focus on developing tools and methods that help LLNL deliver on its missions to the nation and, more broadly, advance the state of the art in scientific HPC. Projects are organized here in three ways: Active projects are those currently funded and regularly updated. Legacy projects are no longer actively developed. The A-Z option sorts all projects alphabetically, both active and legacy.


ADAPD

Advanced Data Analytics for Proliferation Detection

ADAPD integrates expertise from DOE national labs to analyze growing global data streams and traditional intelligence data, enabling early warning of nuclear proliferation activities.

AIMS

Analytics and Informatics Management Systems

AIMS develops integrated cyberinfrastructure for big climate data discovery, analytics, simulations, and knowledge innovation.

Alkemi

Improving Simulation Workflows with Machine Learning and Big Data

Simulation workflows for arbitrary Lagrangian-Eulerian (ALE) methods often require manual tuning. We are developing novel predictive analytics for simulations and an infrastructure for integrating those analytics into simulation workflows.

Apollo

Fast, Lightweight, Dynamic Tuning for Data-Dependent Code

Apollo, an auto-tuning extension of RAJA, improves performance portability in adaptive mesh refinement, multi-physics, and hydrodynamics codes via machine learning classifiers.

Application-Level Resilience

Efficient Algorithmic Fault Tolerance

Application-level resilience is emerging as an alternative to traditional fault tolerance approaches because it provides protection at a lower cost.

Ardra

Scaling Up Transport Sweep Algorithms

Researchers are testing and enhancing a neutral particle transport code and its algorithm to ensure that they successfully scale to larger and more complex computing systems.

ASC Proxy Apps

Prepare for Testing and Porting Applications

Proxy apps serve as specific targets for testing and simulation without the time, effort, and expertise that porting or changing most production codes would require.

Legacy

Ascending to Exascale

LLNL Prepares for El Capitan

El Capitan will have a peak performance of more than 2 exaflops—roughly 16 times faster on average than the Sierra system—and is projected to be several times more energy efficient than Sierra.

Legacy

AutomaDeD

Diagnosing Performance and Correctness Faults

This tool automatically diagnoses performance and correctness faults in MPI applications. It identifies abnormal MPI tasks and code regions and finds the least-progressed task.

Automated Testing System

Ensuring Reliability of HPC Codes

LLNL’s Python 3–based ATS tool provides scientific code teams with automated regression testing across HPC architectures.

Autonomous MultiScale

Embedded Machine Learning for Smart Simulations

AMS is a machine learning solution embedded into scientific applications to automatically replace fine-scale simulations with ancillary models.

Axom

Providing Shared Computer Science Infrastructure to HPC Applications

Supported by the Advanced Simulation and Computing program, Axom focuses on developing software infrastructure components that can be shared by HPC apps running on diverse platforms.

Backlighter Tech

Automated Software to Optimize NIF Shot Efficiency

The latest generation of a laser beam–delay technique owes its success to collaboration, dedication, and innovation.

BLAST

High-Order Finite Element Hydrodynamics

BLAST is a high-order finite element hydrodynamics research code that improves the accuracy of simulations and provides a path to extreme parallel computing and exascale architectures.

BLT

Build, Link, and Test

BLT software supports HPC software development with built-in CMake macros for external libraries, code health checks, and unit testing.

Legacy

BUILD

Solving the Software Complexity Puzzle

BUILD tackles the complexities of HPC software integration with dependency compatibility models, binary analysis tools, efficient logic solvers, and configuration optimization techniques.

Caliper

Application Introspection System

Caliper enables users to build customized performance measurement and analysis solutions by connecting independent context annotations, measurement services, and data processing services.
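
A minimal sketch of how such annotations appear in application code, assuming Caliper's cali.h macro API (the solve_step function and the "assembly" region name are made up for illustration; measurement and data-processing services are selected separately at run time):

```cpp
// Minimal sketch of Caliper source annotations, assuming the cali.h
// macro API; solve_step and the "assembly" region name are illustrative.
#include <caliper/cali.h>

void solve_step()
{
    CALI_CXX_MARK_FUNCTION;        // mark this whole function as a region

    CALI_MARK_BEGIN("assembly");   // begin a named region
    // ... assemble the linear system ...
    CALI_MARK_END("assembly");     // end the named region
}

int main()
{
    // Measurement and data-processing services are enabled at run time
    // through Caliper's configuration, so the annotations above stay
    // unchanged across experiments.
    solve_step();
    return 0;
}
```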

CHAI

Copy Hiding Application Interface

CHAI's managed array abstractions automate data movement between memory spaces, helping million-line codes move to diverse hardware architectures.

Cluster Management Tools

Flexible Support for Our Linux Ecosystem

Large Linux data centers require flexible system management. At Livermore Computing, we are committed to supporting our Linux ecosystem at the high end of commodity computing.

Legacy

COVID-19 Operations

LivIT and NIF Support

LivIT tackles challenges of workforce safety, telecommuting, cyber security protocols, National Ignition Facility software updates, and more.

Legacy

COVID-19 R&D

Computing Responds to Pandemic

From molecular screening, a software platform, and an online data portal to the computing systems that power these projects, LLNL Computing mobilized in response to the pandemic.

Legacy

Cram

Running Millions of Concurrent MPI Jobs

Cram lets you run many small MPI jobs within a single large MPI job. It splits MPI_COMM_WORLD into many small communicators, and each job in the cram file runs independently on its own communicator.
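
For illustration, the sketch below shows the underlying communicator-splitting mechanism in plain MPI. It is not Cram's actual implementation: Cram performs the split and reads each job's arguments from the cram file for you, and the num_jobs value and round-robin assignment here are arbitrary.

```cpp
// Illustration only: the plain-MPI communicator split that underlies the
// Cram approach. Cram itself performs the split and reads each job's
// arguments from the cram file; num_jobs and the round-robin assignment
// below are arbitrary choices for this sketch.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int world_rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    // Assign each rank to one of num_jobs independent "jobs".
    const int num_jobs = 4;
    const int job_id = world_rank % num_jobs;

    // Ranks with the same color (job_id) land in the same communicator.
    MPI_Comm job_comm;
    MPI_Comm_split(MPI_COMM_WORLD, job_id, world_rank, &job_comm);

    int job_rank = 0, job_size = 0;
    MPI_Comm_rank(job_comm, &job_rank);
    MPI_Comm_size(job_comm, &job_size);
    std::printf("job %d: rank %d of %d\n", job_id, job_rank, job_size);

    // Each small job would now run against job_comm instead of
    // MPI_COMM_WORLD.
    MPI_Comm_free(&job_comm);
    MPI_Finalize();
    return 0;
}
```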

CT Image Enhancement

Novel Processing Pipeline for Threat Detection

Researchers are developing enhanced computed tomography image processing methods for explosives identification and other national security applications.

Legacy

Data-Intensive Computing Solutions

Addressing Growing Demands

New platforms are improving big data computing on Livermore’s high performance computers.