Explainable Artificial Intelligence
A key difference between how we in CASC use ML and how the commercial sector uses it is that a working model is rarely the ultimate goal. Instead, high-sensitivity predictive models lead to new insights and enable us to form new hypotheses about physical phenomena.
We are developing techniques that reveal the interpretable components of these often opaque models, as well as approaches for effective communication between the model and the domain user. This strategy calls for novel techniques that combine human understanding and machine intelligence.
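As one illustration of revealing interpretable components in an opaque model, the sketch below uses permutation feature importance, a standard model-agnostic technique (this is a generic example, not CASC's actual tooling; the data and model are hypothetical stand-ins):

```python
# Minimal sketch of permutation feature importance on a synthetic problem.
# Assumptions: scikit-learn is available; the data and model are placeholders.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Synthetic regression data: 5 features, only 2 of which carry real signal.
X, y = make_regression(n_samples=300, n_features=5, n_informative=2,
                       noise=0.1, random_state=0)

# An "opaque" model: a random forest with hundreds of decision paths.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in model score;
# large drops flag the features the model actually relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
print("features ranked by importance:", ranking)
```

Rankings like this give a domain scientist a starting point for hypothesis formation: features the model leans on heavily are candidates for physically meaningful relationships.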
Explainable artificial intelligence (AI) lies at the intersection of ML, statistics, visualization, human–computer interaction, and more. This emerging research area is rapidly becoming not only a crucial capability for LLNL but also a core strength. CASC’s integrated research teams jointly tackle these challenges, earning widespread recognition for their contributions.