Research & Development

Promoting innovation through cutting-edge R&D

Our responsibility for enabling science goes beyond helping to solve today’s problems; we must be prepared to solve tomorrow’s known and anticipated challenges through research and advanced development. We conduct collaborative scientific investigations that require the power of high performance computers and the efficiency of modern computational methods. Our research, some of which is described here, focuses on issues that will enable the next generation of computing applications for LLNL and our partners.

Our research explores a number of different scientific simulation fields, most of which have particular significance to LLNL programs (e.g., high-energy-density physics). Other scientific simulation work showcases the Lab’s high performance computing capabilities in collaborative efforts with scientists at other institutions. View content related to Computational Sciences/Simulation.

Computational Biology
Computational Fluid Dynamics
Computational Seismology
Materials Science/Molecular Dynamics
Parallel Discrete Event Simulation
Plasma Physics
Power Grid
Transport Methods
Ultra-short Pulse Laser Propagation

We’re conducting research and developing game-changing tools to meet the nation’s top priority of enhancing security in a highly interconnected world. In particular, our work focuses on developing a distributed approach to real-time situational awareness; advancing predictive and scalable simulations to design and analyze complex systems, such as space system protection and large-scale cyber defense, where full-scale experiments are not possible; and creating the analytic methodologies and data management and fusion tools needed for the next generations of intelligence applications. View content related to Cyber Security.

Analysis of Industrial Control Systems
Cyber Data Analytics
Education and Outreach
Modeling and Simulation
Network Mapping
Network System Design and Engineering
Reverse Engineering
Secure Coding

Data Analytics and Management is the branch of computer science that is concerned with extracting usable information from data. At LLNL, we’re working with data in many forms: text, images, videos, semantic graphs, and more. This data may be "at rest" in files or databases, or "in motion" as it streams in from sensors or other live sources. Our informatics research aims to gain insight from data that is very large, geographically distributed, complex, fast moving, or some combination of these characteristics. Applications for this work span a wide range of LLNL missions, including energy security and efficiency, biosecurity, computer security, and climate change. View content related to Data Analytics and Management.

Analysis of Large Graphs
Data Analytics for Facilities
Machine Learning
Network Analysis and Mapping
Text Analysis

Achieving a 50–100× performance increase on real applications over today's most powerful HPC platforms, such as Sierra, as quickly and as energy efficiently as possible will force fundamental changes in all computer components. To reach this goal—and to prepare for future exascale systems—we are developing new algorithms and tools that help researchers address these complexities and fully exploit the systems’ performance. We are also partnering with industry experts to develop well-coordinated solutions to hardware and software design challenges. View content related to Extreme Computing.

Algorithm Development at Extreme Scale
Fault Tolerance
Hybrid Architecture
Interconnection Networks
Power-Aware Computing

We are determining how to build future generations of supercomputers. We are actively exploring issues such as possible uses of persistent memory (non-volatile random access memory, or NVRAM) and methods to reduce power consumption or increase reliability while maintaining or reducing cost and maintaining or improving performance. We also interact closely with industry through local initiatives and programs such as FastForward. Throughout these activities, we combine unique research capabilities with our proven track record of building and deploying reliable and productive large-scale systems. View content related to Hardware Architecture.

Hardware Testbeds
Memory-Centric Architectures

Disk and tape I/O bandwidth is being rapidly outpaced by capacity growth, which means valuable processor time is wasted waiting for data delivery. For extreme-scale machines to be productive, bandwidth challenges throughout the entire I/O stack must be addressed. We’re working on techniques and technologies that leverage node-local or near-node storage, refactor parallel file systems, and evolve tertiary storage software to enable efficient extreme-scale computing environments. View content related to I/O, Networking, and Storage.

Checkpointing Strategies
Data Management and Movement
Non-volatile Storage
Parallel File Systems
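The interplay between checkpoint cost and failure rate is one reason node-local storage matters so much for checkpointing. The sketch below uses Young's first-order approximation for the checkpoint interval that minimizes expected lost work; the checkpoint costs and mean time between failures are hypothetical illustration values, not measurements from any LLNL system.

```python
import math

def optimal_checkpoint_interval(checkpoint_cost_s, mtbf_s):
    """Young's first-order approximation: the interval between
    checkpoints that minimizes expected lost work is sqrt(2 * C * MTBF),
    where C is the cost of writing one checkpoint."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# Hypothetical numbers: a 60 s checkpoint to node-local SSD versus a
# 600 s checkpoint to the parallel file system, on a machine with a
# 24-hour mean time between failures.
mtbf = 24 * 3600
fast_tier = optimal_checkpoint_interval(60, mtbf)
slow_tier = optimal_checkpoint_interval(600, mtbf)
print(fast_tier, slow_tier)
```

A cheaper checkpoint tier supports a shorter optimal interval, so less work is lost per failure; this is the core argument for multilevel checkpointing schemes that write frequently to fast local storage and only occasionally to the global file system.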

We’re developing fast and scalable algorithms for solving partial differential equations that dynamically adjust the computation mesh in order to improve accuracy and make the best use of computational resources. We research new methods for block-structured adaptive mesh refinement and high-order unstructured curvilinear mesh optimization, targeting applications with moving and deforming meshes. Our algorithms can be used to accurately represent the moving and deforming geometry as well as to resolve internally moving features such as material interfaces, shocks, and reaction fronts. View content related to Mesh Management.

Structured AMR Frameworks
Unstructured Finite Elements
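The core idea of adaptive refinement—concentrate mesh points where an error indicator is large—can be shown in a one-dimensional toy. This is a minimal sketch, not any LLNL AMR framework: it bisects cells whose midpoint deviates too far from linear interpolation, so a steep front collects fine cells while smooth regions keep the coarse spacing.

```python
import numpy as np

def refine_1d(x, f, tol):
    """One refinement pass: for each cell, estimate the local error as
    the deviation of f at the midpoint from linear interpolation of the
    endpoint values, and bisect cells whose indicator exceeds tol."""
    new_x = [x[0]]
    for i in range(len(x) - 1):
        mid = 0.5 * (x[i] + x[i + 1])
        err = abs(f(mid) - 0.5 * (f(x[i]) + f(x[i + 1])))
        if err > tol:
            new_x.append(mid)
        new_x.append(x[i + 1])
    return np.array(new_x)

# A steep tanh front near x = 0 attracts extra points; the smooth tails
# keep their original coarse spacing.
f = lambda x: np.tanh(20.0 * x)
x = np.linspace(-1.0, 1.0, 11)
for _ in range(3):
    x = refine_1d(x, f, 1e-2)
```

Production structured-AMR codes apply the same flag-and-refine loop hierarchically in multiple dimensions, with the added bookkeeping of nested patch levels and inter-level data transfer.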

We’re developing next-generation numerical methods to enable more accurate and efficient simulations of physical phenomena such as wave propagation, turbulent incompressible and high-speed reacting flows, shock hydrodynamics, fluid–structure interactions, and kinetic simulation. Our application-driven research focuses on designing, analyzing, and implementing new high-order finite difference, finite volume, and finite element discretization algorithms, with an emphasis on increased robustness, parallel scalability, and better utilization of modern computer architectures. View content related to Numerical PDEs/High-Order Discretization Modeling.

High-order Finite Difference Methods for Wave Propagation
High-order Finite Elements
High-order Finite Volume Methods
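To make the high-order idea concrete, here is a small, self-contained sketch (not code from any LLNL package) of the 1-D wave equation advanced with a fourth-order central difference in space and leapfrog in time; a sine wave propagated once around a periodic domain matches the exact traveling-wave solution to high accuracy.

```python
import numpy as np

def wave_step(u, u_prev, c, dx, dt):
    """One leapfrog step of u_tt = c^2 u_xx on a periodic grid, using
    the fourth-order central stencil (-1, 16, -30, 16, -1)/(12 dx^2)
    for the spatial second derivative."""
    uxx = (-np.roll(u, 2) + 16 * np.roll(u, 1) - 30 * u
           + 16 * np.roll(u, -1) - np.roll(u, -2)) / (12 * dx ** 2)
    return 2 * u - u_prev + (c * dt) ** 2 * uxx

# Propagate u(x, t) = sin(2 pi (x - t)) once around the unit interval.
n, c = 100, 1.0
dx, dt = 1.0 / n, 0.002
x = np.arange(n) * dx
u_prev = np.sin(2 * np.pi * x)          # solution at t = 0
u = np.sin(2 * np.pi * (x - dt))        # solution at t = dt
for _ in range(int(round(1.0 / dt)) - 1):
    u, u_prev = wave_step(u, u_prev, c, dx, dt), u
error = np.max(np.abs(u - np.sin(2 * np.pi * (x - 1.0))))
```

With only 100 points per wavelength, the fourth-order stencil keeps the phase error far below what a second-order scheme would allow, which is precisely why high-order methods pay off for long-time wave propagation.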

We're working on a new generation of tools to help our users with exascale machine bottlenecks. Our research emphasizes performance analysis and code correctness and aims to address these main challenges: seamless integration with programming models, scalability, automatic analysis, detection of inefficient resource usage, and tool modularity. View content related to Parallel Software Development Tools

Debugging and Correctness Tools
Job Scheduling & Resource Management
Middleware for Parallel Performance Tools
Performance Analysis Tools
Tuning at Runtime

Programming models and languages are essential for expressing our computational problems in ways that take best advantage of the massive capability of current and future computers at LLNL. Our research efforts extend and improve existing programming models, such as OpenMP and MPI. Using tools like the ROSE compiler technology and Babel, we’re researching new ways to transform, analyze, optimize, combine, and interoperate languages. View content related to Programming Models and Languages.

Compiler Technology
Portable Performance

Our SQA experts provide guidance on SQA policy for the entire LLNL community and ensure compliance with DOE and NNSA orders and policies. We emphasize what has to be done, not how to do it. By taking a risk-based graded approach to LLNL’s unique and varied software projects, we’re able to maximize each project’s effectiveness and assist in identification and mitigation of project risks. Our team consults throughout the DOE complex, and we provide scientists and researchers with simple, user-friendly tools, templates, checklists, classes, and other related resources. View content related to Software Quality Assurance.

Improvements of Large Scientific Software

We’re developing algorithms and software to enable the scalable solution of equations central to large-scale science simulations. Our research involves developing new mathematics and computing techniques, with a major focus on methods (e.g., multilevel methods) suitable for the next generation of extreme-scale supercomputers. View content related to Solvers.

Multigrid and Multilevel Solvers
Nonlinear Solvers
Optimization Methods
Time Integration
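The multilevel principle behind these solvers—smooth the high-frequency error on the fine grid, correct the smooth remainder on a coarser grid—fits in a short two-grid sketch for the 1-D Poisson model problem. This is a textbook illustration under simplifying assumptions (uniform mesh, zero Dirichlet boundaries, a direct coarse solve), not the hypre library or any production LLNL solver.

```python
import numpy as np

def weighted_jacobi(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Damped Jacobi smoothing for -u'' = f with zero Dirichlet ends."""
    for _ in range(sweeps):
        v = u.copy()
        v[1:-1] = ((1 - w) * u[1:-1]
                   + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]))
        u = v
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid(u, f, h):
    """One cycle: pre-smooth, restrict the residual by full weighting,
    solve the coarse problem directly, interpolate the correction
    linearly, post-smooth."""
    u = weighted_jacobi(u, f, h)
    r = residual(u, f, h)
    nc = (len(u) + 1) // 2
    rc = np.zeros(nc)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    # Direct tridiagonal solve on the coarse grid (spacing 2h).
    A = ((np.diag(2.0 * np.ones(nc - 2))
          - np.diag(np.ones(nc - 3), 1)
          - np.diag(np.ones(nc - 3), -1)) / (2 * h) ** 2)
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)
    return weighted_jacobi(u + e, f, h)

# Model problem: -u'' = pi^2 sin(pi x) on [0, 1], exact solution sin(pi x).
n = 65
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
```

Replacing the direct coarse solve with a recursive call yields a multigrid V-cycle; the payoff is that the residual shrinks by a mesh-independent factor per cycle, which is what makes multilevel methods scale to extreme problem sizes.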

We’re creating an LLNL commodity cluster system software environment based on Linux and open-source software. We use the Red Hat Enterprise Linux distribution, stripping out the modules we don’t need and adding and modifying components as required. Working in open source allows for important HPC customizations and builds in-house expertise. Having in-house software developers is necessary to quickly resolve problems (especially at scale) on our cutting-edge hardware without having to wait for the vendors. The environment includes Linux kernel modifications, cluster management tools, monitoring and failure detection, resource management, authentication and access control, and parallel file system software (detailed elsewhere). These clusters provide users with a production solution capable of running MPI jobs at scale. View content related to System Software.

Cluster Management Tools
Resource Management
User Productivity Tools

We’re developing techniques to quantify numerical error in multiphysics simulations. Understanding approximation error is an important component of a broader UQ strategy, and we’re investigating both adjoint and forward propagation methods. We’re also developing mathematical and statistical techniques to quantify the different types of uncertainty (aleatory, epistemic, model form) that are present in multiphysics simulation models. These non-intrusive techniques include methods for parameter screening, global sensitivity analysis, response surface analysis, and Bayesian inference. Many of these methods have been incorporated into an open-source software package called PSUADE. In addition, we’re investigating hybrid UQ methodologies that blend the more rigorous and efficient intrusive UQ methods with non-intrusive and semi-intrusive methods at the physics-module level. This flexible methodology facilitates a plug-and-play concept for in-situ UQ and sensitivity analysis that will be useful for high-fidelity stochastic multiphysics simulations. We’re also researching and developing stochastic data assimilation methods to quantify uncertainties associated with high-dimensional stochastic source inversion. These methods are useful in applications such as seismic and power grid analysis. We’re exploring efficient nonlinear and non-Gaussian methods such as kernel principal component analysis and adjoint-based Bayesian inference. View content related to Uncertainty Quantification.

Error Estimation
Hybrid and Semi-intrusive UQ Models
Non-intrusive UQ Methods
Stochastic Data Assimilation
UQ Software
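The defining feature of a non-intrusive method is that the simulation code is treated as a black box: uncertain inputs are sampled, the code is run unchanged, and the outputs are analyzed statistically. The sketch below illustrates that workflow with a made-up closed-form stand-in for a simulation; the model, input distributions, and the crude one-at-a-time sensitivity estimate are all illustrative assumptions, not PSUADE's API or methods.

```python
import numpy as np

rng = np.random.default_rng(12345)

def model(k, q):
    """Stand-in for a black-box simulation code: a made-up closed form
    relating conductivity k and source strength q to a peak value."""
    return q / (8.0 * k)

# Non-intrusive forward propagation: sample the uncertain inputs, run
# the model unchanged, and summarize the output distribution.
n = 20_000
k = rng.lognormal(mean=0.0, sigma=0.10, size=n)   # ~10% aleatory spread
q = rng.normal(loc=1.0, scale=0.05, size=n)       # ~5% aleatory spread
t = model(k, q)

# Crude first-order sensitivity estimate: vary one input at a time with
# the other frozen at its nominal value, and compare output variances.
s_k = np.var(model(k, 1.0)) / np.var(t)
s_q = np.var(model(1.0, q)) / np.var(t)
```

Because nothing inside `model` had to change, the same loop works for any simulation code; intrusive and semi-intrusive methods trade that convenience for efficiency by propagating uncertainty through the equations themselves.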

We provide tools and technology that enable scientists and engineers to understand and exploit their data, whether it comes from large-scale simulations or extreme-scale sensor technologies. We created and continue to develop VisIt, a full-featured, cutting-edge visualization and analysis application that is scalable to tens of thousands of cores and is capable of analyzing and visualizing extreme-scale simulations. In addition, we have a world-class research group that is advancing the state of the art in extreme-scale data streaming, uncertainty visualization and analysis, topological and feature-based analysis, and high performance video processing and analysis for multi-gigapixel sensors. View content related to Visualization and Scientific Data Analysis.

Compression Techniques
Feature Detection/Extraction
Image Processing
Multiresolution Algorithms
Scientific Data Management
Scientific Visualization
Streaming Data Analysis
Video Processing