A key difference between how CASC uses ML and how the commercial sector uses it is that a working model is rarely the ultimate goal. High-sensitivity predictive models lead to new insights and enable us to form new hypotheses about physical phenomena.

We are developing techniques that reveal the interpretable components of these often opaque models, as well as approaches for effective communication between models and domain experts. This strategy calls for novel techniques that combine human understanding with machine intelligence.

Explainable artificial intelligence (AI) lies at the intersection of ML, statistics, visualization, human–computer interaction, and more. This emerging research area is rapidly becoming not only a crucial capability for LLNL but also a core strength. CASC’s integrated research teams jointly tackle these challenges, earning widespread recognition for their contributions.

[Image: blue lines radiating along crossed roads, with icons depicting airbags, check engine, and lane departure]

The Perfect Metaphor for AI Security

How do we keep artificial intelligence safe and secure as it advances at breakneck speed? Explore the risks of AI systems powering today’s chatbots, virtual assistants, and more.

[Image: three panels showing scale-up from nano to macro]

Surprising Places You'll Find ML

Researchers explain how groundbreaking data science methodologies are making problems in water filtration, wildfires, and carbon capture more tractable.