A key difference between how we in CASC use ML and how the commercial sector uses it is that a working model is rarely the ultimate goal. High-sensitivity predictive models lead to new insights and enable us to form new hypotheses about physical phenomena.

We are developing techniques that reveal the interpretable components of these often opaque models, as well as approaches for effective communication between models and the domain experts who use them. This strategy calls for novel techniques that combine human understanding with machine intelligence.

Explainable artificial intelligence (AI) lies at the intersection of ML, statistics, visualization, human–computer interaction, and more. This emerging research area is rapidly becoming not only a crucial capability for LLNL but also a core strength. CASC’s integrated research teams jointly tackle these challenges, earning widespread recognition for their contributions.

Figure: part of the CIFAR-10-C dataset shown as candlesticks

Feature Importance Estimation

In a paper accepted at AAAI 2021, a research team describes PRoFILE, a novel feature importance estimation method.
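PRoFILE's specific approach is detailed in the paper; for context on what feature importance estimation means in general, the sketch below shows permutation importance, a standard baseline technique (not the PRoFILE method itself), applied to hypothetical toy data:

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Estimate each feature's importance by measuring how much the
    model's score degrades when that feature's column is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the feature-target relationship
            drops.append(baseline - metric(y, model(Xp)))
        importances[j] = np.mean(drops)
    return importances

# Toy demo: the target depends only on feature 0, so shuffling
# feature 0 should cause the largest score drop.
X = np.random.default_rng(1).normal(size=(200, 3))
y = 3.0 * X[:, 0]
model = lambda X: 3.0 * X[:, 0]  # a hypothetical "fitted" model
r2 = lambda y, p: 1 - np.sum((y - p) ** 2) / np.sum((y - y.mean()) ** 2)
imp = permutation_importance(model, X, y, r2)
print(imp.argmax())  # feature 0 dominates
```

Shuffling a column severs its link to the target while preserving its marginal distribution, so the resulting score drop is a model-agnostic proxy for that feature's importance.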

Figure: topological analysis of X-ray CT data

AI and Aging Materials

AI-driven data analytics provide opportunities to accelerate materials design and optimization.

Figure: Brian Van Essen in a video chat, picture-in-picture, with the moderator

AI Accelerators in HPC

CASC group leader Brian Van Essen talks with The Next Platform about the convergence of HPC and AI technologies.