Highlights include the latest work with RAJA, the Exascale Computing Project, algebraic multigrid preconditioners, and OpenMP.
Highlights include complex simulation codes, uncertainty quantification, discrete event simulation, and the Unify file system.
Highlights include recent LDRD projects, Livermore Tomography Tools, our work with the open-source software community, fault recovery, and CEED.
A new software model helps move million-line codes to various hardware architectures by automating data movement in unique ways.
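The core of that automation can be sketched in a few lines: an array type that remembers where its data is currently valid and copies it only when a computation runs in a different memory space. The following is a hypothetical, host-only illustration of the idea, not the project's actual interface:

```cpp
// Hypothetical sketch of automated data movement: the array tracks
// which memory space holds the valid copy and transfers data lazily,
// so application loops never hand-code host/device copies.
#include <cstddef>
#include <cstring>
#include <vector>

enum class Space { Host, Device };

class ManagedArray {
public:
    explicit ManagedArray(std::size_t n) : host_(n), device_(n) {}

    // Return a pointer usable in the requested space, first copying
    // the data there if the valid copy lives in the other space.
    double* data(Space where) {
        if (where != valid_in_) {
            auto& src = (valid_in_ == Space::Host) ? host_ : device_;
            auto& dst = (where == Space::Host) ? host_ : device_;
            // A real GPU implementation would use cudaMemcpy here; this
            // sketch models device memory with ordinary host memory.
            std::memcpy(dst.data(), src.data(), src.size() * sizeof(double));
            valid_in_ = where;
        }
        return (where == Space::Host) ? host_.data() : device_.data();
    }

private:
    std::vector<double> host_, device_;  // device_ stands in for GPU memory
    Space valid_in_ = Space::Host;
};
```

A kernel launcher would call `data(Space::Device)` before running on the GPU, so the transfer happens behind the scenes and the million-line application code never changes.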
Sphinx, an integrated parallel microbenchmark suite, consists of a harness for running performance tests along with extensive tests of MPI, Pthreads, and OpenMP.
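For a flavor of what such a suite measures, the sketch below is a minimal MPI ping-pong latency test between two ranks; it illustrates the kind of test Sphinx runs but is not Sphinx's actual harness code:

```cpp
// Minimal MPI ping-pong latency microbenchmark (run with >= 2 ranks).
// Rank 0 bounces a small message off rank 1 and reports the average
// one-way latency.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 1000;
    std::vector<char> buf(8);  // 8-byte messages

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; ++i) {
        if (rank == 0) {
            MPI_Send(buf.data(), (int)buf.size(), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf.data(), (int)buf.size(), MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf.data(), (int)buf.size(), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf.data(), (int)buf.size(), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        std::printf("avg one-way latency: %.3f us\n",
                    (t1 - t0) / (2.0 * iters) * 1e6);
    MPI_Finalize();
    return 0;
}
```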
“If applications don’t read and write files in an efficient manner,” system software developer Elsa Gonsiorowski warns, “entire systems can crash.”
Highlights include the HYPRE library, recent data science efforts, the IDEALS project, and the latest on the Exascale Computing Project.
Apollo, an auto-tuning extension of RAJA, improves performance portability in adaptive mesh refinement, multi-physics, and hydrodynamics codes via machine learning classifiers.
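The mechanism can be pictured as a dispatcher that asks a learned model which execution policy to use for each kernel invocation. Everything below is a hypothetical stand-in, not Apollo's actual API; the hard-coded thresholds take the place of a trained classifier:

```cpp
// Hypothetical sketch of ML-driven policy selection in the spirit of
// Apollo. A trained classifier would replace the threshold rule in
// predict_policy(); names and structure here are illustrative only.
#include <cstddef>

enum class Policy { Sequential, OpenMP };

// Stand-in for a learned model: map kernel features (here just the
// iteration count) to the execution policy predicted to run fastest.
Policy predict_policy(std::size_t num_iterations) {
    return (num_iterations < 10'000) ? Policy::Sequential : Policy::OpenMP;
}

template <typename Body>
void tuned_forall(std::size_t n, Body body) {
    switch (predict_policy(n)) {
    case Policy::Sequential: {
        for (std::size_t i = 0; i < n; ++i) body(i);
        break;
    }
    case Policy::OpenMP: {
        #pragma omp parallel for
        for (long long i = 0; i < static_cast<long long>(n); ++i) body(i);
        break;
    }
    }
}
```

In an adaptive mesh refinement code, the profitable policy can change from one time step to the next as patch sizes change, which is why the decision is made at run time rather than fixed at compile time.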
Large Linux data centers require flexible system management. At Livermore Computing, we are committed to supporting our Linux ecosystem at the high end of commodity computing.
Babel is a high-performance language interoperability tool. The project is developed mainly at LLNL's Center for Applied Scientific Computing (CASC). Babel started as an internal Laboratory Directed Research and Development (LDRD) project in 2000 and has been under constant development since then. It is now funded mainly under the U.S. Department of Energy (DOE) Office of Science's SciDAC program.
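The essence of such interoperability is glue code at a C-compatible boundary, which Babel generates automatically from a SIDL interface description. The hand-written fragment below is only an analogy for that generated glue, not actual Babel output:

```cpp
// Illustration of the glue-code idea behind language interoperability:
// expose a routine through a C-compatible boundary so other languages
// (Fortran via bind(c), Python via ctypes, Java via JNI) can call it.
// Babel automates generating such bindings from a SIDL description;
// this hand-written example is only an analogy.
#include <cmath>

extern "C" double vector_norm(const double* x, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i) sum += x[i] * x[i];
    return std::sqrt(sum);
}
```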
The sheer size of data poses significant problems in all stages of the visualization pipeline, from offline pre-processing of simulation data, to interactive queries, to real-time rendering. Moreover, visualization data is often unstructured in nature, which further complicates its management and representation. The goal of this project is to develop techniques for reducing bandwidth requirements for large unstructured data, both explicitly, by making use of data compression, and implicitly, by optimizing the layout of the data for better locality and cache reuse.
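As one concrete instance of the implicit, layout-based approach, points can be reordered along a space-filling curve so that spatial neighbors become memory neighbors. The sketch below, which is illustrative rather than the project's code, sorts 2D points by Morton (Z-order) index:

```cpp
// Reorder unstructured 2D point data along a Morton (Z-order) curve so
// points close in space end up close in memory, improving cache reuse.
#include <algorithm>
#include <cstdint>
#include <vector>

struct Point { float x, y; };

// Interleave the bits of two 16-bit coordinates into a Morton code.
static std::uint32_t morton2d(std::uint16_t x, std::uint16_t y) {
    auto spread = [](std::uint32_t v) {
        v = (v | (v << 8)) & 0x00FF00FFu;
        v = (v | (v << 4)) & 0x0F0F0F0Fu;
        v = (v | (v << 2)) & 0x33333333u;
        v = (v | (v << 1)) & 0x55555555u;
        return v;
    };
    return spread(x) | (spread(y) << 1);
}

// Sort points by the Morton code of their quantized coordinates
// (assumes coordinates are normalized to [0, 1)).
void zorder_sort(std::vector<Point>& pts) {
    std::sort(pts.begin(), pts.end(), [](const Point& a, const Point& b) {
        auto q = [](float f) { return (std::uint16_t)(f * 65535.0f); };
        return morton2d(q(a.x), q(a.y)) < morton2d(q(b.x), q(b.y));
    });
}
```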
Livermore builds an open-source community around its award-winning HPC package manager.
Researchers have been developing a standardized and optimized operating system and software for deployment across a series of Linux clusters to enable high performance computing at a reduced cost.
LLNL’s Stack Trace Analysis Tool helps users quickly identify errors in code running on today’s largest machines.
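The idea at STAT's core is to merge per-process call stacks into a single prefix tree, collapsing thousands of processes into a few equivalence classes a human can inspect. A minimal illustration of that merging step (not STAT's implementation):

```cpp
// Merge per-process call stacks into a prefix tree: processes with
// identical stacks share a path, and each node counts how many
// processes pass through that frame.
#include <iostream>
#include <map>
#include <memory>
#include <string>
#include <vector>

struct Node {
    int count = 0;  // processes whose stacks pass through this frame
    std::map<std::string, std::unique_ptr<Node>> children;
};

void merge_trace(Node& root, const std::vector<std::string>& frames) {
    Node* cur = &root;
    for (const auto& f : frames) {  // frames ordered main() -> leaf
        auto& child = cur->children[f];
        if (!child) child = std::make_unique<Node>();
        child->count++;
        cur = child.get();
    }
}

void print(const Node& n, int depth = 0) {
    for (const auto& [name, child] : n.children) {
        std::cout << std::string(depth * 2, ' ') << name
                  << " [" << child->count << " procs]\n";
        print(*child, depth + 1);
    }
}

int main() {
    Node root;
    merge_trace(root, {"main", "solve", "MPI_Allreduce"});  // ranks stuck here
    merge_trace(root, {"main", "solve", "MPI_Allreduce"});
    merge_trace(root, {"main", "io_write"});                // straggler doing I/O
    print(root);
}
```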
A Livermore-developed programming approach helps software to run on different platforms without major disruption to the source code.
ROSE, an open-source project maintained by Livermore researchers, provides easy access to complex, automated compiler technology and assistance.
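The traditional starting point is ROSE's identity translator, which parses the input source into ROSE's abstract syntax tree (AST) and unparses it back to compilable code; the example below follows the minimal pattern from ROSE's documentation, with build details varying by installation:

```cpp
// ROSE identity translator: build the AST from the input source,
// sanity-check it, then unparse and compile it back out. Real analyses
// and transformations operate on the AST between these steps.
#include "rose.h"

int main(int argc, char* argv[]) {
    SgProject* project = frontend(argc, argv);  // parse into the AST
    AstTests::runAllTests(project);             // verify AST consistency
    return backend(project);                    // unparse and compile
}
```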
Livermore researchers have developed a toolset for solving data center bottlenecks.
New platforms are improving big data computing on Livermore’s high performance computers.
LLNL researchers are finding that some factors are more important in determining HPC application performance than traditionally thought.
Livermore computer scientists have helped create a flexible framework that aids programmers in creating source code that can be used effectively on multiple hardware architectures.
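With RAJA, one such Livermore framework, the loop body is written once and the execution policy is a compile-time parameter that can be swapped to retarget the code. A minimal example based on RAJA's documented forall interface:

```cpp
// DAXPY written once against RAJA's forall interface; changing the
// execution policy retargets the same loop body to other back ends.
#include "RAJA/RAJA.hpp"

void daxpy(double* y, const double* x, double a, int n) {
    // Swap RAJA::seq_exec for RAJA::omp_parallel_for_exec or
    // RAJA::cuda_exec<256> (with GPU-visible memory) to retarget.
    RAJA::forall<RAJA::seq_exec>(RAJA::RangeSegment(0, n),
                                 [=](int i) { y[i] += a * x[i]; });
}
```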
LLNL computer scientists use machine learning to model and characterize the performance of adaptive applications and, ultimately, to accelerate their development.
Livermore Computing staff is enhancing the high-speed InfiniBand data network used in many of its high performance computing systems and parallel file systems.
Computer scientists are incorporating ZFS into their high-performance parallel file systems for better performance and scalability.
Performance analysis of parallel scientific codes is becoming increasingly difficult, and existing tools fall short of revealing the root causes of performance problems. We have developed the HAC model, which allows us to compare performance data directly across domains and to apply the data visualization and analysis tools available in other domains.
Fast Global File Status (FGFS) is an open-source package that provides scalable mechanisms and programming interfaces to retrieve global information about a file.
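One global question FGFS answers, for example, is whether a path is backed by the same underlying file on every compute node or is node-local. The naive cross-node comparison below only conveys the problem; FGFS provides scalable interfaces that answer it without this brute-force reduction, and nothing here is FGFS's actual API:

```cpp
// Naive sketch: fingerprint a file's identity on each node and compare
// the fingerprints across MPI ranks. If they all match, the path is
// likely globally shared; if not, it is node-local or divergent.
// (Device/inode comparison across nodes is a simplification.)
#include <mpi.h>
#include <sys/stat.h>
#include <cstdint>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    const char* path = (argc > 1) ? argv[1] : "/usr/bin/env";

    struct stat st{};
    if (stat(path, &st) != 0) MPI_Abort(MPI_COMM_WORLD, 1);
    std::uint64_t id = ((std::uint64_t)st.st_dev << 32) ^ (std::uint64_t)st.st_ino;

    std::uint64_t min_id, max_id;
    MPI_Allreduce(&id, &min_id, 1, MPI_UINT64_T, MPI_MIN, MPI_COMM_WORLD);
    MPI_Allreduce(&id, &max_id, 1, MPI_UINT64_T, MPI_MAX, MPI_COMM_WORLD);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        std::printf("%s appears %s across nodes\n", path,
                    (min_id == max_id) ? "globally consistent" : "node-local/divergent");
    MPI_Finalize();
    return 0;
}
```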