Livermore Computing: Development Environment Group
Providing a stable, usable, leading-edge parallel application development environment

[Photo: Development Environment Group in front of the Sierra supercomputer]

We meet the needs of today's code developers

The Development Environment Group (DEG) endeavors to provide a stable, usable, leading-edge parallel application development environment that significantly increases the productivity of LLNL applications developers. We strive to do this by enabling better scalable performance and enhancing the reliability of LLNL applications.

DEG partners with its application development user community to identify user requirements and evaluate tool effectiveness. Through collaborations with vendors and other third-party software developers, DEG ensures a complete environment in the most cost-effective way possible and meets the needs of today's code developers while steering their code development to exploit emerging technologies.

DEG, part of Livermore Computing, is currently involved in the following projects and activities:

  • Compilers—Fortran 90/95, Fortran 77, ANSI C, and C++ compilers; details are available about the compilers currently installed on Livermore Computing platforms.
  • Debuggers—See Supported Software and Computing Tools for the available debugging tools, their locations, the machines on which they run, and available documentation (if any).
  • Languages—The primary standardized languages used for scientific computing are Fortran, C, and C++. The international organization responsible for standardization in the field of information technology is the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC). The US counterpart is the InterNational Committee for Information Technology Standards (INCITS). ISO/IEC JTC 1/SC 22 manages programming languages, their environments, and system software interfaces.
  • Parallel tools—Tools are provided on most platforms to allow programmers to take advantage of the parallel nature of the machines. MPI is available on all platforms. See Supported Software and Computing Tools for available parallel tools, their locations, the machines on which they run, and available documentation (if any).
  • Performance analysis tools—Various performance analysis tools are available that provide information regarding memory use, hardware counter data, system resource use, and communication. Each tool varies both in ease of use and application perturbation. Several tools provide a GUI for visualization and data reporting. Examination of data is typically done in a postmortem manner; however, some tools have run-time reporting capability. See Supported Software and Computing Tools for available performance analysis tools, their locations, the machines on which they run, and available documentation (if any).
  • Scalable I/O—Providing high-performance parallel file system and I/O library support for all major platforms at LLNL, working closely with end users for all parallel I/O issues, performing tests using locally developed tools, and collaborating with platform partners, academic researchers, and vendors to address ASC high-performance I/O needs.


Name  E-mail  Assignment/Interests
Scott Futral futral DEG group leader; general environment support, Run/Proxy, findentry, flint
Dong Ahn dahn TotalView support and development projects (including scalability project and BGQ port); scalable debugging tools (including STAT), techniques, and infrastructure (including LaunchMON and Fast Global File Status); hardware performance counters (e.g., PAPI); massively parallel loading (SPINDLE); and next-generation resource manager
Blaise Barney blaiseb HPC training/workshops, MPI, OpenMP, Pthreads, TotalView, ASC Alliances
Greg Becker becker33 Spack package manager, SCR, other ProTools development
Chris Chambreau chcham Memory, profiling, and MPI tracing tool support, including mpiP, TAU, Vampir/VampirTrace, and memP
Tamara Dahlgren dahlgren1 ProTools development team, Spack, RADIUSS project
Chris Earl earl2 Clang/LLVM support; compiler-based optimizations
Todd Gamblin (CASC) gamblin2 Performance measurement and analysis; distributed clustering; scalable in-situ analysis techniques; load balance for AMR; run-time systems; collaborative development tools (Confluence, JIRA, Greenhopper, source hosting, code review, build & test, etc.); CMake and other build systems
Elsa Gonsiorowski gonsie I/O user support and parallel file systems performance, MPI-IO, HDF5, and NetCDF
John Gyllenhaal gyllen Enhancing the CORAL development environment (lrun, lalloc, the srun emulator, wrappers around compilers, bsub, jsrun, etc.), CORAL compiler support, Valgrind/memcheck_all support, debugging strange supercomputer issues
Ian Karlin (LC-ATO) karlin1 Performance analysis and optimization, benchmarking, LULESH point of contact, Intel LLNS TR for PathForward, Sandia point of contact for El Capitan Center of Excellence and Sierra support
Greg Lee lee218 Parallel tool development, Stack Trace Analysis Tool (STAT), Python, Intel software (compilers, Inspector, VTune Amplifier, Advisor, Intel MPI, Trace Analyzer and Collector, and Pin), compilers (PGI), debuggers, PRUNERS tools, and math libraries (MKL, Petsc, and FFTW)
Matt LeGendre legendre1 Performance analysis tool support, tool component support
Edgar Leon leon NNSA-CEA TR for Hardware and Software Co-Design, MPI/OpenMP affinity, exascale architectures, and system noise
Marty McFadden mcfadden8 Support for Umpire, umap, spindle, OMPD
Kathryn Mohror (CASC) mohror1 Scalable fault tolerant computing, performance measurement and analysis, scalable I/O systems
Adam Moody moody20 Machine learning framework support
Ramesh Pankajakshan pankajakshan1 Porting and optimization of institutional codes (Hypre, SW4) to next-generation architectures (GPGPU, MIC)
Barry Rountree (CASC) rountree4 Statistical and algorithmic debugging, power-aware supercomputing
Local Vendor Support  
Max Katz katz12 NVIDIA and GPU expert
James Lamb lamb28 IBM and GPFS/ESS expert
Roy Musselman musselman4 IBM application analyst, BGQ compiler and MPI support