Our research projects vary in size, scope, and duration, but they share a focus on developing tools and methods that help LLNL deliver on its missions to the nation and, more broadly, advance the state of the art in scientific HPC. Projects can be browsed in three ways: Active projects are those currently funded and regularly updated; Legacy projects are no longer actively developed; and the A-Z view lists all projects, active and legacy, alphabetically.
Livermore builds an open-source community around its award-winning HPC package manager.
High-resolution finite volume methods are being developed for solving problems in complex phase space geometries, motivated by kinetic models of fusion plasmas.
Researchers are developing a standardized and optimized operating system and software for deployment across Linux clusters to enable HPC at a reduced cost.
LLNL’s Stack Trace Analysis Tool helps users quickly identify errors in code running on today’s largest machines.
ROSE, an open-source project maintained by Livermore researchers, provides easy access to complex, automated compiler technology and assistance.
Master Block List is a service and data aggregation tool that aids Department of Energy facilities in creating filters and blocks to prevent cyber attacks.
Researchers are testing and enhancing a neutral particle transport code and its algorithm to ensure that they successfully scale to larger and more complex computing systems.
The Earth System Grid Federation is a web-based tool set that powers much of the world's global climate change research.
New platforms are improving big data computing on Livermore’s high performance computers.
LLNL researchers are finding that some factors are more important in determining HPC application performance than traditionally thought.
Researchers are developing enhanced computed tomography image processing methods for explosives identification and other national security applications.
Livermore computer scientists have helped create a flexible framework that helps programmers write source code that can be used effectively on multiple hardware architectures.
LLNL computer scientists use machine learning to model and characterize the performance of adaptive applications and, ultimately, accelerate their development.
Livermore Computing staff is enhancing the high-speed InfiniBand data network used in many of its high performance computing clusters and file systems.
LLNL and University of Utah researchers have developed an advanced, intuitive method for analyzing and visualizing complex data sets.
The Testbed Environment for Space Situational Awareness software helps track satellites and space debris and prevent collisions.
Computer scientists are incorporating ZFS into their high performance parallel file systems for better performance and scalability.
The flourishing of simulation-based scientific discovery has also resulted in the emergence of the uncertainty quantification (UQ) discipline, which is essential for validating and verifying computer models.
zfp is an open-source C/C++ library for compressed floating-point and integer arrays that support high-throughput random-access reads and writes.
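As a rough illustration of how a block-compressed array can serve random reads without decompressing the whole stream, here is a toy Python sketch built on zlib. It mirrors only the storage idea (independently compressed fixed-size blocks), not zfp's fixed-rate compression algorithm or its C/C++ API; the class name is invented.

```python
import struct
import zlib

class CompressedBlockArray:
    """Toy read-only 1-D compressed array. Values live in independently
    compressed fixed-size blocks, so a random read decompresses only one
    block. Conceptual sketch only; this is NOT zfp's algorithm or API."""

    BLOCK = 64  # values per block

    def __init__(self, values):
        self.n = len(values)
        self.blocks = []
        for i in range(0, self.n, self.BLOCK):
            chunk = values[i:i + self.BLOCK]
            raw = struct.pack(f"{len(chunk)}d", *chunk)
            self.blocks.append(zlib.compress(raw))

    def __getitem__(self, i):
        if not 0 <= i < self.n:
            raise IndexError(i)
        # Decompress just the block containing element i.
        raw = zlib.decompress(self.blocks[i // self.BLOCK])
        return struct.unpack_from("d", raw, (i % self.BLOCK) * 8)[0]

a = CompressedBlockArray([x * 0.5 for x in range(1000)])
print(a[123])  # → 61.5
```

A real compressed-array library also supports writes and caching of decompressed blocks; this sketch keeps only the read path to show the per-block random-access idea.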
Cram lets you easily run many small MPI jobs within a single, large MPI job by splitting MPI_COMM_WORLD into many small communicators, one per job in the cram file, so that each job runs independently.
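The splitting step can be pictured as assigning each global rank a (job, local rank) pair, much as MPI_Comm_split does with a color/key argument. The helper below is a hypothetical Python illustration of that rank partitioning, not Cram's actual interface:

```python
def partition_ranks(world_size, job_sizes):
    """Map each global rank to a (job_id, local_rank) pair, mimicking
    what MPI_Comm_split produces when MPI_COMM_WORLD is carved into one
    sub-communicator per small job. Hypothetical illustration only."""
    if sum(job_sizes) > world_size:
        raise ValueError("jobs require more ranks than the world provides")
    mapping = {}
    rank = 0
    for job_id, size in enumerate(job_sizes):
        for local_rank in range(size):
            mapping[rank] = (job_id, local_rank)
            rank += 1
    return mapping

# Eight global ranks carved into three jobs of sizes 4, 2, and 2.
m = partition_ranks(8, [4, 2, 2])
print(m[5])  # → (1, 1): global rank 5 is local rank 1 of job 1
```

In the real setting, each job then sees only its own communicator in place of MPI_COMM_WORLD, so unmodified MPI programs can run side by side inside one allocation.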
Performance analysis of parallel scientific codes is difficult. The HAC model allows direct comparison of performance data across domains using the data visualization and analysis tools available in other domains.
Fast Global File Status (FGFS) is an open-source package that provides scalable mechanisms and programming interfaces to retrieve global information about a file.
Spindle improves the library-loading performance of dynamically linked HPC applications by plugging into the system’s dynamic linker and intercepting its file operations.
Caliper enables users to build customized performance measurement and analysis solutions by connecting independent context annotations, measurement services, and data processing services.