Three papers address feature importance estimation under distribution shifts, attribute-guided adversarial training, and uncertainty matching in graph neural networks.
An LLNL team has developed a “Learn-by-Calibrating” method for creating powerful scientific emulators that could be used as proxies for far more computationally intensive simulators.
The 34th Conference on Neural Information Processing Systems (NeurIPS) features two LLNL papers advancing the reliability of deep learning for mission-critical applications.
Two papers featuring LLNL scientists were accepted at the 2020 International Conference on Machine Learning (ICML), one of the world's premier conferences of its kind.
LLNL's Jay Thiagarajan joins the Data Skeptic podcast to discuss his recent paper "Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models." The episode runs 35:50.
LLNL’s Data Science Institute hosted its second annual workshop with the University of California, emphasizing key challenges with machine learning and artificial intelligence in scientific research.
Highlights include perspectives on machine learning and artificial intelligence in science, data-driven models, autonomous vehicle operations, and the OpenMP 5.0 standard.
CASC computer scientists demonstrate how LLNL's innovative data-driven machine learning techniques teach computers to solve real-world problems.
With nearly 100 publications, CASC researcher Jayaraman “Jay” Thiagarajan explores the possibilities of artificial intelligence and machine learning technologies.