A CASC researcher and collaborators study model failure and resilience in a paper accepted to the 2024 International Conference on Machine Learning.
Topic: Deep Learning
This issue highlights some of CASC’s contributions to the DOE's Exascale Computing Project.
LLNL researchers collaborated with Washington University in St. Louis to devise a state-of-the-art ML-based reconstruction tool for scenarios where high-quality computed tomography data is in short supply.
Cindy Gonzales earned bachelor’s and master’s degrees and changed careers—all while working at the Lab. Meet the deputy director of LLNL’s Data Science Institute.
A novel ML method discovers and predicts key data about networked devices.
From our fall 2022 hackathon, watch as participants train an autonomous race car with reinforcement learning algorithms.
Highlights include MFEM community workshops, compiler co-design, HPC standards committees, and AI/ML for national security.
In a time-trial competition, participants trained an autonomous race car with reinforcement learning algorithms.
LLNL is participating in the 33rd annual Supercomputing Conference (SC21), which will be held both virtually and in St. Louis on November 14–19, 2021.
New research debuting at ICLR 2021 demonstrates a learning-by-compressing approach to deep learning that outperforms traditional methods without sacrificing accuracy.
Highlights include scalable deep learning, high-order finite elements, data race detection, and reduced order models.
Our researchers will be well represented at the virtual SIAM Conference on Computational Science and Engineering (CSE21) on March 1–5. SIAM, the Society for Industrial and Applied Mathematics, has an international community of more than 14,500 individual members.
Three papers address feature importance estimation under distribution shifts, attribute-guided adversarial training, and uncertainty matching in graph neural networks.
Lawrence Livermore National Laboratory has named Stefanie Guenther as Computing’s fourth Sidney Fernbach Postdoctoral Fellow in the Computing Sciences. This highly competitive fellowship is named after LLNL’s former Director of Computation and is awarded to exceptional candidates who demonstrate the potential for significant achievements in computational mathematics, computer science, data science, or scientific computing.
Highlights include response to the COVID-19 pandemic, high-order matrix-free algorithms, and managing memory spaces.
Rafael Rivera-Soto is passionate about artificial intelligence, deep learning, and machine learning technologies. He works in LLNL’s Global Security Computing Applications Division (GSCAD).
Highlights include perspectives on machine learning and artificial intelligence in science, data-driven models, autonomous vehicle operations, and the OpenMP 5.0 standard.
Highlights include recent LDRD projects, Livermore Tomography Tools, our work with the open-source software community, fault recovery, and CEED.