Congratulations to the Livermore researchers whose work has been accepted to the 14th International Conference on Learning Representations (ICLR), held April 23–27 in Rio de Janeiro. The event is known worldwide for showcasing deep learning across artificial intelligence, statistics, and data science, as well as a range of application areas. Follow LLNL Computing on X with the #ICLR26 hashtag. Workshop websites and preprints are linked below.
Poster: Better Learning-Augmented Spanning Tree Algorithms via Metric Forest Completion (preprint PDF) | Min Priest, Trevor Steil, Keita Iwabuchi, T.S. Jayram, Grace Li, Geoff Sanders

Poster: FACET: A Fragment-Aware Conformer Ensemble Transformer (preprint PDF) | Hyojin Kim, Jonathan Allen

Poster: Get RICH or Die Scaling: Profitably Trading Inference Compute for Robustness (preprint PDF) | Tavish McDonald, Bhavya Kailkhura, Brian Bartoldson

Workshop: AI&PDE: ICLR 2026 Workshop on AI and Partial Differential Equations

  • COARSERL: A Graph Reinforcement Learning Method for Algebraic Multigrid Coarsening | Kowshik Thopalli, Ruipeng Li

Workshop: I Can't Believe It's Not Better: Where Large Language Models Need to Improve

  • The Anatomy of Uncertainty in LLMs (preprint PDF) | Kowshik Thopalli, Vivek Narayanaswamy

Workshop: Learning Meaningful Representations of Life (LMRL)

  • Orthogonal Evaluations Enable More Robust Predictions of Protein-Ligand Interactions | Joseph Wakim, Irene Kim, Adam Zemla