Machine learning (ML) is revolutionizing scientific applications—developing new drugs, understanding cancer, creating fusion energy, inventing smart materials, and more.

At LLNL, ML has permeated virtually all aspects of our research. Our teams develop, adapt, and apply the latest advances to some of the most complex problems while using some of the world's most powerful supercomputers and advanced experiments.

Whether the need is representation learning to bridge the gap between computational models and large-scale experiments, computer vision and inverse problems to understand everything from satellite imagery to airport security scans, or fundamental research on ML safety and interpretability to promote trust and understanding, our unique research environment couples fundamental ML research with high-impact scientific endeavors.

Multidisciplinary teams working closely together are pushing the limits of what is considered possible. Driven by some of society’s most important challenges and enabled by ML, the future of large-scale science is happening first at LLNL.

[Image: drawing of a brain made up of network lines and a computer chip]

A Winning Strategy for DNNs

New research debuting at ICLR 2021 demonstrates a learning-by-compressing approach to DL that outperforms traditional methods without sacrificing accuracy.

[Image: Bhavya's portrait next to a presentation slide diagramming data poisoning]

Two CVPR Papers

The 2021 Conference on Computer Vision and Pattern Recognition features two papers co-authored by an LLNL researcher that aim to improve the understanding of robust ML models.

[Image: illustration showing the LbC method of simulator inputs into a prediction estimator]

Learn by Calibrating

A new DL approach to designing emulators for scientific processes is more accurate and efficient than existing methods.

[Image: plots showing the airfoil self-noise dataset and the reservoir model dataset]

Calibration-Driven Deep Models

The January 26 Nature Communications Focus features a CASC paper on the Learn-by-Calibrating model.

[Image: NeurIPS logo]

Two NeurIPS Acceptances

The 34th Conference on Neural Information Processing Systems features research on the reliability of DL for mission-critical applications.

[Image: illustration of the behavior of UM-GNN for two datasets under varying types and levels of poisoning attacks]

Uncertainty-Matching GNNs

This paper, accepted at AAAI 2021, introduces the Uncertainty-Matching Graph Neural Network (UM-GNN), aimed at improving the robustness of GNN models.