CASC-AEP hosts many visiting researchers who enrich the Laboratory’s research environment through research seminars. The Computing Masterworks Lecture Series is a special, high-profile series of seminars featuring luminaries in scientific computing. Offered four times per year, these lectures provide opportunities to exchange ideas at the cutting edge of HPC with these thought leaders.

Information about previous Computing Masterworks lectures is available below.


Yousef Saad headshot

Yousef Saad

University of Minnesota | October 2023

"Nonlinear Acceleration Techniques Based On Krylov Subspace Methods" | Full abstract and bio

There has been a surge of interest in recent years in general-purpose ‘acceleration’ methods that take a sequence of vectors converging to the limit of a fixed-point iteration and produce from it a faster converging sequence. A prototype of these methods that has attracted much attention recently is the Anderson Acceleration (AA) procedure. We introduce the nonlinear Truncated Generalized Conjugate Residual (nlTGCR) algorithm, an alternative to AA that is designed from a careful adaptation of the Conjugate Residual method for solving linear systems of equations to the nonlinear context. The various links between nlTGCR and inexact Newton, quasi-Newton, and multisecant methods are exploited to build a method that has strong global convergence properties and that can also exploit symmetry when applicable. Taking this algorithm as a starting point, we explore a number of other acceleration procedures, including a short-term (‘symmetric’) version of Anderson Acceleration.
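The fixed-point acceleration idea the abstract describes can be illustrated with a minimal sketch of the windowed, least-squares form of Anderson Acceleration (this is an illustrative implementation of standard AA, not the nlTGCR algorithm and not code from the talk; the function name and defaults are arbitrary choices):

```python
import numpy as np

def anderson_accelerate(g, x0, m=5, tol=1e-10, maxiter=100):
    """Anderson Acceleration of the fixed-point iteration x <- g(x).

    Keeps a window of the last m residuals f = g(x) - x and combines
    previous iterates via a small least-squares problem.
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    G_hist, F_hist = [], []              # histories of g(x) and residuals
    for _ in range(maxiter):
        gx = g(x)
        f = gx - x
        G_hist.append(gx)
        F_hist.append(f)
        if np.linalg.norm(f) < tol:
            break
        mk = min(m, len(F_hist) - 1)
        if mk == 0:
            x = gx                       # first step: plain fixed-point update
        else:
            F = np.column_stack(F_hist[-(mk + 1):])
            G = np.column_stack(G_hist[-(mk + 1):])
            dF = F[:, 1:] - F[:, :-1]    # residual differences
            dG = G[:, 1:] - G[:, :-1]
            # least-squares coefficients minimizing || f - dF @ gamma ||
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma          # accelerated update
    return x

# Example: accelerate the classic iteration x <- cos(x)
root = anderson_accelerate(lambda v: np.cos(v), np.array([1.0]))
```

The design point of such methods is that `g` is treated as a black box: only the sequence of iterates and residuals is used, which is what makes them "general-purpose" accelerators.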

Jack Dongarra headshot

Jack Dongarra

University of Tennessee | May 2022

"A Not So Simple Matter of Software" | Full abstract and bio

This talk considers some of the changes that have occurred in high performance computing and the impact the changes are having on how our algorithms and software libraries are designed for our high-end computers. For nearly 40 years, Moore’s Law produced exponential growth in hardware performance, and during that same time, most software failed to keep pace with these hardware advances. We will look at some of the algorithmic and software changes that have tried to keep up with the advances in the hardware.

David Keyes headshot

David Keyes

King Abdullah University of Science and Technology | December 2021

"Nonlinear Preconditioning for Nonlinearly Stiff Multiscale Systems" | Full abstract and bio

Nonlinear preconditioning transforms a nonlinear algebraic system to a form for which Newton-type algorithms have improved success through quicker advance to the domain of quadratic convergence. We place these methods, which go back at least as far as the Additive Schwarz Preconditioned Inexact Newton (ASPIN, 2002), in the context of a proliferation of approaches distinguished by being left- or right-sided, multiplicative or additive, and partitioned by field, subdomain, or other criteria. We present the Nonlinear Elimination Preconditioned Inexact Newton (NEPIN, 2021), which is based on a heuristic “bad/good” splitting of equations and corresponding degrees of freedom. We augment basic forms of nonlinear preconditioning with three features of practical interest: a cascadic identification of the “bad” discrete equation set, an adaptive switchover to ordinary Newton as the domain of convergence is approached, and error bounds on output functionals of the solution. Various nonlinearly stiff algebraic and model PDE problems are considered for insight, and we illustrate performance advantage and scaling potential on challenging two-phase flows in porous media.
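The "bad/good" elimination idea can be illustrated on a toy two-equation system (an illustrative sketch of nonlinear elimination, not the NEPIN algorithm from the talk; the example equations and tolerances are made up): the strongly nonlinear "bad" unknown is eliminated by an inner Newton solve, and an outer Newton iteration then runs on the mildly nonlinear reduced residual, with the reduced derivative obtained by implicit differentiation.

```python
import math

def eliminate_and_solve():
    # "bad" (strongly nonlinear) equation:  exp(5*y) - x = 0
    # "good" (mild) equation:               x + y - 2   = 0
    def inner_solve_y(x, y0):
        """Inner Newton on the bad equation for y, with x frozen."""
        y = y0
        for _ in range(50):
            f = math.exp(5.0 * y) - x
            if abs(f) < 1e-14:
                break
            y -= f / (5.0 * math.exp(5.0 * y))   # Newton step in y only
        return y

    x, y = 1.0, 0.0
    for _ in range(20):
        y = inner_solve_y(x, y)                  # eliminate the bad unknown
        r = x + y - 2.0                          # reduced ("good") residual
        if abs(r) < 1e-12:
            break
        # implicit differentiation of the bad equation gives dy/dx
        dydx = 1.0 / (5.0 * math.exp(5.0 * y))
        x -= r / (1.0 + dydx)                    # outer Newton on reduced system
    return x, y

x_star, y_star = eliminate_and_solve()
```

Plain Newton on the coupled system can be dragged far off course by the stiff exponential equation; eliminating it first leaves the outer iteration with a much better-behaved residual, which is the intuition behind the "bad/good" splitting.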

Margot Gerritsen headshot

Margot Gerritsen

Stanford University | July 2021

"40 Years of Adventures in STEM" | Full abstract and bio

In the last 40 years, much has changed in STEM: we moved from typewriters to PCs, from low performance to high performance computing, from data-supported research to data-driven research, from traditional languages such as Fortran to a plethora of programming environments. And the rate of change seems to increase constantly. Some things have stayed more or less the same, such as the gender make-up of the STEM community, the level of stress, and the struggles we all experience (and the joys!). This talk reflects on those years, on lessons learned and not learned or unlearned, on things I wish I understood 40 years ago, and on things I still do not understand.

Keren Bergman headshot

Keren Bergman

Columbia University | May 2021

"Deeply Disaggregated High Performance Architectures with Embedded Photonics" | Full abstract and bio

High-performance systems are increasingly bottlenecked by the energy and communications costs of interconnecting numerous compute and memory resources. Integrated silicon photonics offer the opportunity of embedding optical connectivity that directly delivers high off-chip communication bandwidth densities with low power consumption. We review these advances and introduce the concept of embedded photonics for addressing data-movement challenges in high-performance systems. Beyond alleviating the bandwidth/energy bottlenecks, embedded photonics can enable new disaggregated architectures that leverage the distance independence of optical transmission. We discuss how the envisioned modular system interconnected by a unified photonic fabric can be flexibly composed to create custom architectures tailored for specific applications.

Hillery Hunter headshot

Hillery Hunter

IBM | January 2020

"Influences of the Pandemic on Cloud Computing" | Full abstract and bio

Dr. Hunter discusses factors in the pandemic that have raised interest in cloud technologies, from a need to accelerate drug discovery and modeling to tremendous pressures on various types of enterprises. Join the conversation to hear how cloud methodology can accelerate both “on-prem” and “off-prem” environments and about key technology capabilities that can break down barriers to adoption.

Bill Gropp headshot

Bill Gropp

University of Illinois at Urbana-Champaign | December 2019

"Challenges in Intranode and Internode Programming for HPC Systems" | Full abstract and bio

After more than two decades of relative architectural stability for distributed memory parallel computers, the end of Dennard scaling and the looming end of Moore’s “law” are forcing major changes in computing systems. To continue to provide increased performance, computer architects are producing innovative new systems. This innovation is creating challenges for exascale systems that differ from the challenges of the extreme-scale systems of the past. This talk discusses some of the issues in building a software ecosystem for extreme-scale systems, with an emphasis on leveraging software for commodity elements while also providing the support needed by high performance applications.