Computer scientist Greg Becker contributes to HPC research and development projects for LLNL’s Livermore Computing division.
Topic: HPC Systems and Software
Highlights include debris and shrapnel modeling at NIF, scalable algorithms for complex engineering systems, magnetic fusion simulation, and data placement optimization on GPUs.
Users need tools that address bottlenecks, work with programming models, provide automatic analysis, and overcome the complexities and changing demands of exascale architectures.
Highlights include CASC director Jeff Hittinger's vision for the center as well as recent work with PruneJuice, DataRaceBench, Caliper, and SUNDIALS.
LLNL's interconnection networks projects improve the communication and overall performance of parallel applications using interconnect topology-aware task mapping.
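The mapping idea can be sketched in a few lines. The toy C++ example below is not Livermore's actual mapping algorithm, and its communication profile and node topology are invented; it simply shows the general heuristic of placing the most heavily communicating task pairs on nodes that are few network hops apart.

```cpp
// Illustrative sketch only (not LLNL's mapping algorithm): greedily place the
// most heavily communicating task pairs on nodes that are few network hops apart.
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    const int ntasks = 4, nnodes = 4;
    // comm[i][j]: bytes exchanged between tasks i and j (hypothetical profile).
    std::vector<std::vector<double>> comm = {
        {0, 9, 1, 0}, {9, 0, 0, 2}, {1, 0, 0, 8}, {0, 2, 8, 0}};
    // hops[a][b]: network distance between nodes a and b (hypothetical topology).
    std::vector<std::vector<int>> hops = {
        {0, 1, 2, 3}, {1, 0, 1, 2}, {2, 1, 0, 1}, {3, 2, 1, 0}};

    // Sort task pairs by communication volume, heaviest first.
    struct Pair { int i, j; double vol; };
    std::vector<Pair> pairs;
    for (int i = 0; i < ntasks; ++i)
        for (int j = i + 1; j < ntasks; ++j)
            if (comm[i][j] > 0) pairs.push_back({i, j, comm[i][j]});
    std::sort(pairs.begin(), pairs.end(),
              [](const Pair& a, const Pair& b) { return a.vol > b.vol; });

    std::vector<int> place(ntasks, -1);     // task -> node
    std::vector<bool> used(nnodes, false);  // node already occupied?

    // Closest unoccupied node to 'node' (one task per node in this toy setup).
    auto nearest_free = [&](int node) {
        int best = -1;
        for (int n = 0; n < nnodes; ++n)
            if (!used[n] && (best < 0 || hops[node][n] < hops[node][best])) best = n;
        return best;
    };

    for (const Pair& p : pairs) {
        // Anchor an unplaced task next to its partner (or node 0 if neither is placed yet).
        if (place[p.i] < 0) {
            int anchor = (place[p.j] >= 0) ? place[p.j] : 0;
            place[p.i] = nearest_free(anchor);
            used[place[p.i]] = true;
        }
        if (place[p.j] < 0) {
            place[p.j] = nearest_free(place[p.i]);
            used[place[p.j]] = true;
        }
    }
    for (int t = 0; t < ntasks; ++t)
        std::printf("task %d -> node %d\n", t, place[t]);
}
```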
The PRUNERS Toolset offers four novel debugging and testing tools to assist programmers with detecting, remediating, and preventing errors in a coordinated manner.
LLNL's Advanced Simulation and Computing (ASC) program formed the Advanced Architecture and Portability Specialists team to help LLNL code teams identify and implement optimal porting strategies.
The BLT build system supports HPC software development with built-in CMake macros for external libraries, code health checks, and unit testing.
Highlights include the latest work with RAJA, the Exascale Computing Project, algebraic multigrid preconditioners, and OpenMP.
Highlights include complex simulation codes, uncertainty quantification, discrete event simulation, and the Unify file system.
Highlights include recent LDRD projects, Livermore Tomography Tools, our work with the open-source software community, fault recovery, and CEED.
A new software model helps move million-line codes to various hardware architectures by automating data movement in unique ways.
Sphinx, an integrated parallel microbenchmark suite, consists of a harness for running performance tests and extensive tests of MPI, Pthreads, and OpenMP.
“If applications don’t read and write files in an efficient manner,” system software developer Elsa Gonsiorowski warns, “entire systems can crash.”
Highlights include the HYPRE library, recent data science efforts, the IDEALS project, and the latest on the Exascale Computing Project.
Apollo, an auto-tuning extension of RAJA, improves performance portability in adaptive mesh refinement, multi-physics, and hydrodynamics codes via machine learning classifiers.
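Apollo's real interface is not shown here, but a toy example conveys the idea of a learned policy selector. The C++ sketch below uses a nearest-centroid classifier over invented loop features (trip count and work per iteration) to pick an execution policy; all names, centroids, and numbers are hypothetical.

```cpp
// Toy illustration only, not Apollo's API: a nearest-centroid classifier picks an
// execution "policy" from simple loop features. All names and values are hypothetical.
#include <cmath>
#include <cstdio>
#include <string>
#include <vector>

struct Centroid { std::string policy; double log_trip_count, work_per_iter; };

// Centroids that a training phase might have produced (values are made up).
static const std::vector<Centroid> kModel = {
    {"sequential",   2.0,  1.0},   // short loops, little work per iteration
    {"simd",         4.0,  2.0},   // medium loops, vector-friendly work
    {"omp_parallel", 6.0, 10.0},   // long loops with heavy iterations
};

// Classify a loop by its features: pick the nearest centroid in feature space.
std::string choose_policy(double trip_count, double work_per_iter) {
    double best = 1e300;
    std::string policy = "sequential";
    for (const Centroid& c : kModel) {
        double d0 = std::log10(trip_count) - c.log_trip_count;
        double d1 = work_per_iter - c.work_per_iter;
        double dist = d0 * d0 + d1 * d1;
        if (dist < best) { best = dist; policy = c.policy; }
    }
    return policy;
}

int main() {
    std::printf("%s\n", choose_policy(500, 1.5).c_str());    // prints "sequential"
    std::printf("%s\n", choose_policy(2e6, 12.0).c_str());   // prints "omp_parallel"
}
```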
Large Linux data centers require flexible system management. At Livermore Computing, we are committed to supporting our Linux ecosystem at the high end of commodity computing.
This project's techniques reduce bandwidth requirements for large unstructured data by compressing the data and optimizing its layout for better locality and cache reuse.
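As a generic illustration of the layout idea (not this project's specific technique), the sketch below reorders scattered point data along a Morton (Z-order) curve so that spatially nearby elements become contiguous in memory; the data set and values are invented for the example.

```cpp
// Generic illustration: reorder unstructured point data along a Morton (Z-order)
// curve so spatially nearby elements end up contiguous in memory, improving
// cache reuse for neighborhood traversals.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// Interleave the low 16 bits of x and y into a 32-bit Morton code.
static std::uint32_t morton2d(std::uint16_t x, std::uint16_t y) {
    auto spread = [](std::uint32_t v) {
        v = (v | (v << 8)) & 0x00FF00FFu;
        v = (v | (v << 4)) & 0x0F0F0F0Fu;
        v = (v | (v << 2)) & 0x33333333u;
        v = (v | (v << 1)) & 0x55555555u;
        return v;
    };
    return (spread(y) << 1) | spread(x);
}

struct Point { float x, y, value; };

int main() {
    // A small, scattered "unstructured" data set (coordinates on a 1024x1024 grid).
    std::vector<Point> pts = {
        {1000, 3, 0.1f}, {2, 5, 0.2f}, {998, 7, 0.3f}, {4, 1, 0.4f}};

    // Sort points by Morton code; neighbors in space become neighbors in memory.
    std::sort(pts.begin(), pts.end(), [](const Point& a, const Point& b) {
        return morton2d(static_cast<std::uint16_t>(a.x), static_cast<std::uint16_t>(a.y)) <
               morton2d(static_cast<std::uint16_t>(b.x), static_cast<std::uint16_t>(b.y));
    });

    for (const Point& p : pts)
        std::printf("(%4.0f, %4.0f) value=%.1f\n", p.x, p.y, p.value);
}
```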
LLNL’s Stack Trace Analysis Tool helps users quickly identify errors in code running on today’s largest machines.
Researchers are developing a standardized and optimized operating system and software for deployment across Linux clusters to enable HPC at a reduced cost.
ROSE, an open-source project maintained by Livermore researchers, provides easy access to complex, automated compiler technology and assistance.
New platforms are improving big data computing on Livermore’s high performance computers.
LLNL researchers are finding that some factors are more important in determining HPC application performance than traditionally thought.
Livermore computer scientists have helped develop a flexible framework that enables programmers to write source code that runs effectively on multiple hardware architectures.
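LLNL's RAJA, mentioned above, is one such framework. The minimal sketch below is written against RAJA's forall interface and shows the single-source pattern: the loop body stays fixed while the execution policy can be swapped to retarget sequential, OpenMP, or GPU back ends (assuming the corresponding back end is enabled in the RAJA build).

```cpp
// Single-source loop with RAJA: the kernel body stays the same while the
// execution policy is chosen at compile time.
#include "RAJA/RAJA.hpp"
#include <cstdio>
#include <vector>

int main() {
    const int N = 1000;
    std::vector<double> a(N, 1.0), b(N, 2.0), c(N, 0.0);
    double* pa = a.data();
    double* pb = b.data();
    double* pc = c.data();

    // Swap RAJA::seq_exec for an OpenMP or CUDA policy to retarget the same code.
    using policy = RAJA::seq_exec;
    RAJA::forall<policy>(RAJA::RangeSegment(0, N), [=](RAJA::Index_type i) {
        pc[i] = pa[i] + pb[i];
    });

    std::printf("c[0] = %f\n", c[0]);  // prints 3.000000
}
```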
LLNL computer scientists use machine learning to model and characterize the performance of adaptive applications and, ultimately, accelerate their development.