Our research projects vary in size, scope, and duration, but they share a focus on developing tools and methods that help LLNL deliver on its missions to the nation and, more broadly, advance the state of the art in scientific HPC. Projects are organized here in three ways: Active projects are those currently funded and regularly updated. Legacy projects are no longer actively developed. The A-Z option sorts all projects alphabetically, both active and legacy.

COVID-19 Operations

LivIT and NIF Support

LivIT tackles the challenges of workforce safety, telecommuting, cybersecurity protocols, National Ignition Facility software updates, and more.

LivCloud

Migrating Data and Services to the AWS Cloud

The Livermore Information Technology (LivIT) program is the first organization at LLNL to commit to migrating all services and applications to the Amazon Web Services cloud.

Deterrence in Cyberspace

Defending Our Critical Infrastructure

LLNL’s cyber programs work across a broad sponsor space to develop technologies addressing sophisticated cyber threats directed at national security and civilian critical infrastructure.

Ascending to Exascale

LLNL Prepares for El Capitan

El Capitan will have a peak performance of more than 2 exaflops—roughly 16 times faster on average than the Sierra system—and is projected to be several times more energy efficient than Sierra.

Vidya

Creating Machine Learning Tools to Optimize Design Simulations

This project advances research in physics-informed ML, invests in validated and explainable ML, creates an advanced data environment, builds ML expertise across the complex, and more.

Innovative HPC Architectures

Mission-Driven Science

Livermore Computing (LC) sited two different AI accelerators in 2020: the Cerebras wafer-scale AI engine, attached to the Lassen system, and an AI accelerator from SambaNova Systems, integrated into the Corona cluster.

VBL++

Virtual Beamline Code

Upgraded to the C++ programming language, VBL provides high-fidelity models and high-resolution calculations for predicting laser performance.

MAPP

Multiphysics Simulations for the Exascale Era

MAPP incorporates multiple software packages into one integrated code base so that multiphysics simulations can perform at scale on present and future supercomputers.

RAJA Portability Suite

Enabling Performance Portable CPU and GPU HPC Applications

The RAJA Portability Suite is a Livermore-developed programming approach that helps software run on different hardware platforms without major disruption to the source code.
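
To make the idea concrete, here is a minimal sketch, assuming a standard RAJA build; the problem size and the sequential execution policy are arbitrary illustrative choices, and the same loop body could be retargeted to OpenMP or GPU back ends by swapping the policy:

```cpp
// Minimal RAJA sketch: the loop body is written once and the execution
// policy template parameter selects the back end.
#include "RAJA/RAJA.hpp"
#include <vector>
#include <cstdio>

int main()
{
  const int N = 1000;  // arbitrary problem size for illustration
  std::vector<double> a(N, 1.0), b(N, 2.0), c(N, 0.0);
  double* pa = a.data();
  double* pb = b.data();
  double* pc = c.data();

  // Sequential policy here; RAJA::omp_parallel_for_exec or a CUDA/HIP
  // policy could be substituted without touching the loop body.
  RAJA::forall<RAJA::seq_exec>(RAJA::RangeSegment(0, N), [=](int i) {
    pc[i] = pa[i] + pb[i];
  });

  std::printf("c[0] = %f\n", c[0]);
  return 0;
}
```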

Axom

Providing Shared Computer Science Infrastructure to HPC Applications

Supported by the Advanced Simulation and Computing program, Axom focuses on developing software infrastructure components that can be shared by HPC apps running on diverse platforms.

BUILD

Solving the Software Complexity Puzzle

BUILD tackles the complexities of HPC software integration with dependency compatibility models, binary analysis tools, efficient logic solvers, and configuration optimization techniques.

StarSapphire

Data-Driven Modeling and Analysis

StarSapphire is a collection of scientific data mining projects focusing on the analysis of data from scientific simulations, observations, and experiments.

ASC Proxy Apps

Preparing for Testing and Porting Applications

Proxy apps serve as specific targets for testing and simulation without the time, effort, and expertise that porting or changing most production codes would require.

fpzip

Compressing Large Multidimensional Floating-Point Arrays

fpzip is a library for lossless or lossy compression of multidimensional floating-point arrays. It was primarily designed for lossless compression.
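
As a hedged illustration of typical usage, the sketch below compresses a 3D float array losslessly into a memory buffer. The calls and FPZ fields follow fpzip's public C header as commonly documented; the array dimensions and buffer sizing are arbitrary assumptions for this example.

```cpp
// Hedged sketch: lossless compression of a 3D float field with fpzip.
#include "fpzip.h"
#include <cstdio>
#include <vector>

int main()
{
  const int nx = 64, ny = 64, nz = 64;  // arbitrary grid for illustration
  std::vector<float> field(static_cast<size_t>(nx) * ny * nz, 1.0f);

  // Output buffer sized generously; compressed data can, in the worst
  // case, slightly exceed the input size.
  size_t bufsize = sizeof(float) * field.size() + 1024;
  std::vector<unsigned char> buffer(bufsize);

  FPZ* fpz = fpzip_write_to_buffer(buffer.data(), bufsize);
  fpz->type = FPZIP_TYPE_FLOAT;
  fpz->prec = 0;   // 0 = full precision, i.e., lossless
  fpz->nx = nx;
  fpz->ny = ny;
  fpz->nz = nz;
  fpz->nf = 1;     // a single field

  // Write metadata (needed for later decompression), then the array.
  if (!fpzip_write_header(fpz)) {
    std::fprintf(stderr, "fpzip header write failed\n");
    fpzip_write_close(fpz);
    return 1;
  }
  size_t outbytes = fpzip_write(fpz, field.data());  // compressed bytes, 0 on error
  fpzip_write_close(fpz);

  if (outbytes == 0) {
    std::fprintf(stderr, "fpzip compression failed\n");
    return 1;
  }
  std::printf("compressed %zu input bytes; payload is %zu bytes\n",
              sizeof(float) * field.size(), outbytes);
  return 0;
}
```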

SAMRAI

Structured Adaptive Mesh Refinement Application Infrastructure

The SAMRAI library is the code base in LLNL's Center for Applied Scientific Computing (CASC) for exploring application, numerical, parallel computing, and software issues associated with structured adaptive mesh refinement.

Maestro Workflow Conductor

Developing Sustainable Computational Workflows

The Maestro Workflow Conductor is a lightweight, open-source Python tool that can launch multi-step software simulation workflows in a clear, concise, consistent, and repeatable manner.

ADAPD

Advanced Data Analytics for Proliferation Detection

ADAPD integrates expertise from DOE national labs to analyze growing global data streams and traditional intelligence data, enabling early warning of nuclear proliferation activities.

TEIMS

Taurus Environmental Information Management System

TEIMS manages collaborative tasks, site characterization, risk assessment, decision support, compliance monitoring, and regulatory reporting for the Environmental Restoration Department.

Alkemi

Improving Simulation Workflows with Machine Learning and Big Data

Simulation workflows for arbitrary Lagrangian-Eulerian (ALE) methods often require manual tuning. We are developing novel predictive analytics for simulations and an infrastructure for integrating those analytics into simulation workflows.

FGPU

GPU Portability for FORTRAN Codes

FGPU provides code examples that demonstrate how to port FORTRAN codes to run on GPU-accelerated IBM OpenPOWER platforms such as LLNL's Sierra supercomputer.

HYPRE

Scalable Linear Solvers and Multigrid Methods

The hypre library's comprehensive suite of scalable parallel linear solvers and multigrid methods makes large-scale scientific simulations possible by solving the underlying sparse linear systems quickly.
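
The sketch below is a condensed, single-rank illustration modeled loosely on hypre's introductory IJ-interface examples and assuming a recent hypre release: it assembles a small 1D Laplacian and solves it with the BoomerAMG multigrid solver. Problem size and solver settings are arbitrary choices, and exact call signatures should be checked against the installed hypre version.

```cpp
// Condensed hypre sketch: build a tridiagonal 1D Laplacian through the
// IJ interface and solve it with BoomerAMG. Assumes a single MPI rank.
#include <mpi.h>
#include "HYPRE.h"
#include "HYPRE_IJ_mv.h"
#include "HYPRE_parcsr_ls.h"
#include <vector>
#include <cstdio>

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);
  HYPRE_Init();  // required by recent hypre releases

  const HYPRE_Int n = 100;  // global problem size (one rank owns all rows)
  const HYPRE_Int ilower = 0, iupper = n - 1;

  // Matrix: assemble the 1D Laplacian row by row.
  HYPRE_IJMatrix A;
  HYPRE_ParCSRMatrix parcsr_A;
  HYPRE_IJMatrixCreate(MPI_COMM_WORLD, ilower, iupper, ilower, iupper, &A);
  HYPRE_IJMatrixSetObjectType(A, HYPRE_PARCSR);
  HYPRE_IJMatrixInitialize(A);
  for (HYPRE_Int i = ilower; i <= iupper; i++) {
    HYPRE_Int cols[3];
    HYPRE_Real vals[3];
    HYPRE_Int nnz = 0;
    if (i > 0)     { cols[nnz] = i - 1; vals[nnz] = -1.0; nnz++; }
                     cols[nnz] = i;     vals[nnz] =  2.0; nnz++;
    if (i < n - 1) { cols[nnz] = i + 1; vals[nnz] = -1.0; nnz++; }
    HYPRE_IJMatrixSetValues(A, 1, &nnz, &i, cols, vals);
  }
  HYPRE_IJMatrixAssemble(A);
  HYPRE_IJMatrixGetObject(A, (void**)&parcsr_A);

  // Right-hand side b = 1 and initial guess x = 0.
  HYPRE_IJVector b, x;
  HYPRE_ParVector par_b, par_x;
  std::vector<HYPRE_Int> rows(n);
  std::vector<HYPRE_Real> bvals(n, 1.0), xvals(n, 0.0);
  for (HYPRE_Int i = 0; i < n; i++) rows[i] = i;

  HYPRE_IJVectorCreate(MPI_COMM_WORLD, ilower, iupper, &b);
  HYPRE_IJVectorSetObjectType(b, HYPRE_PARCSR);
  HYPRE_IJVectorInitialize(b);
  HYPRE_IJVectorSetValues(b, n, rows.data(), bvals.data());
  HYPRE_IJVectorAssemble(b);
  HYPRE_IJVectorGetObject(b, (void**)&par_b);

  HYPRE_IJVectorCreate(MPI_COMM_WORLD, ilower, iupper, &x);
  HYPRE_IJVectorSetObjectType(x, HYPRE_PARCSR);
  HYPRE_IJVectorInitialize(x);
  HYPRE_IJVectorSetValues(x, n, rows.data(), xvals.data());
  HYPRE_IJVectorAssemble(x);
  HYPRE_IJVectorGetObject(x, (void**)&par_x);

  // BoomerAMG used as a standalone solver.
  HYPRE_Solver solver;
  HYPRE_BoomerAMGCreate(&solver);
  HYPRE_BoomerAMGSetTol(solver, 1e-8);
  HYPRE_BoomerAMGSetMaxIter(solver, 50);
  HYPRE_BoomerAMGSetup(solver, parcsr_A, par_b, par_x);
  HYPRE_BoomerAMGSolve(solver, parcsr_A, par_b, par_x);

  HYPRE_Int iters;
  HYPRE_Real resnorm;
  HYPRE_BoomerAMGGetNumIterations(solver, &iters);
  HYPRE_BoomerAMGGetFinalRelativeResidualNorm(solver, &resnorm);
  std::printf("BoomerAMG: %d iterations, relative residual %e\n",
              (int)iters, (double)resnorm);

  HYPRE_BoomerAMGDestroy(solver);
  HYPRE_IJMatrixDestroy(A);
  HYPRE_IJVectorDestroy(b);
  HYPRE_IJVectorDestroy(x);
  HYPRE_Finalize();
  MPI_Finalize();
  return 0;
}
```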

Umpire

Managing Heterogeneous Memory Resources

Umpire is a resource management library that allows the discovery, provision, and management of memory on next-generation architectures.
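
A minimal sketch of the usage pattern, assuming a standard Umpire installation: allocators are requested by name from a central ResourceManager, so allocation call sites stay the same whether the memory lives on the host or, on GPU systems, on the device.

```cpp
// Minimal Umpire sketch: obtain an allocator by name from the central
// ResourceManager and use it to allocate and free memory.
#include "umpire/ResourceManager.hpp"
#include "umpire/Allocator.hpp"
#include <cstdio>

int main()
{
  auto& rm = umpire::ResourceManager::getInstance();

  // "HOST" is always available; GPU builds also expose resources such
  // as "DEVICE" and "UM" (unified memory) under the same interface.
  umpire::Allocator alloc = rm.getAllocator("HOST");

  double* data = static_cast<double*>(alloc.allocate(1024 * sizeof(double)));
  data[0] = 3.14;

  // Report bytes currently allocated through this allocator.
  std::printf("allocated %zu bytes from '%s'\n",
              alloc.getCurrentSize(), alloc.getName().c_str());

  alloc.deallocate(data);
  return 0;
}
```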

Parallel Software Development Tools

R&D for Exascale Architectures

Users need tools that address bottlenecks, work with programming models, provide automatic analysis, and overcome the complexities and changing demands of exascale architectures.

Unify

Distributed Burst Buffer File System

Unify is an open-source file system framework that supports hierarchical HPC storage systems by utilizing node-local burst buffers.