Center for Applied Scientific Computing

Parallel Systems Group

The Parallel Systems Group carries out research to facilitate the use of extreme-scale computers for scientific discovery. We are especially focused on tools research to maximize the effectiveness of applications running on today’s largest parallel computers. Our expertise spans performance measurement, analysis, and optimization; debugging; existing and emerging programming models; power-aware supercomputing; fault-tolerant computing; and numerical kernel optimization.

Group Lead

Tom Epperly: optimization, component technology, language interoperability, software architecture

Research Staff

David Beckingsale: performance analysis, code optimization, mini-applications and parallel programming models

Abhinav Bhatele: parallel algorithms, performance analysis and modeling, HPC networks, communication optimization, network simulation, visualization and data analytics

David Boehme: performance analysis of parallel applications, parallel and distributed architectures, tool support for parallel programming, parallel programming paradigms

Todd Gamblin: performance monitoring & analysis, large-scale MPI applications, extreme scale tools

Giorgis Georgakoudis

Maya Gokhale: data intensive computing, reconfigurable computing, co-processor accelerators, high performance computing architectures

Ignacio Laguna: resilience, fault tolerance, large-scale debugging, software reliability and correctness, compiler analysis

Aniruddha Marathe: power-aware and power-constrained supercomputing, performance analysis and optimization, high performance computing in the cloud

Harshitha Menon: fault tolerance, silent data corruption, runtime systems, load balancing, Charm++

Tapasya Patki: power-aware supercomputing, performance analysis and optimization, multi-constraint resource management at exascale

Barry Rountree: performance measurement and analysis, extreme scale fault-tolerance, extreme scale tools