Center for Applied Scientific Computing

Parallel Systems Group

The Parallel Systems Group carries out research to facilitate the use of extreme-scale computers for scientific discovery. We focus especially on tools research that maximizes the effectiveness of applications running on today’s largest parallel computers. Our expertise spans performance measurement, analysis, and optimization; debugging; existing and emerging programming models; power-aware supercomputing; fault-tolerant computing; and numerical kernel optimization.

Group Lead

Tom Epperly: optimization, component technology, language interoperability, software architecture

Research Staff

David Beckingsale: performance analysis, code optimization, mini-applications, parallel programming models

David Boehme: performance analysis of parallel applications, parallel and distributed architectures, tool support for parallel programming, parallel programming paradigms

Stephanie Brink

Giorgis Georgakoudis

Maya Gokhale: data intensive computing, reconfigurable computing, co-processor accelerators, high performance computing architectures

Eric Green

Ignacio Laguna: resilience, fault tolerance, large-scale debugging, software reliability and correctness, compiler analysis

Aniruddha Marathe: power-aware and power-constrained supercomputing, performance analysis and optimization, high performance computing in the cloud

Harshitha Menon: fault tolerance, silent data corruption, runtime systems, load balancing, Charm++

Dan Milroy

Konstantinos Parasyris

Tapasya Patki: power-aware supercomputing, performance analysis and optimization, multi-constraint resource management at exascale

Bo “Ivy” Peng

Barry Rountree: performance measurement and analysis, extreme-scale fault tolerance, extreme-scale tools

Sergei Shudler