Congratulations to the Livermore researchers whose work has been accepted to the 13th International Conference on Learning Representations (ICLR), taking place April 24–28 in Singapore. The event is known worldwide for showcasing advances in deep learning across artificial intelligence, statistics, and data science, as well as a range of application areas. Follow LLNL Computing on X using the #ICLR25 hashtag. Workshop websites and preprints are linked below.
Poster: ELFS: Label-Free Coreset Selection with Proxy Training Dynamics | Brian Bartoldson, Bhavya Kailkhura
Poster: Spectral-Refiner: Accurate Fine-Tuning of Spatiotemporal Fourier Neural Operator for Turbulent Flows | Ruipeng Li
Workshop: Building Trust in Language Models and Applications
- AegisLLM: Scaling Agentic Systems for Self-Reflective Defense in LLM Security | Brian Bartoldson, Bhavya Kailkhura
Workshop: Generative and Experimental Platforms for Biomolecular Design
- Generative Protein Design for Overlapping Genes | Chenling Xu, Jennifer Chlebek, Jonathan Allen, Dan Park
Workshop: Open Science for Foundation Models
- ASYNC-TB: Scaling Off-Policy Exploration for LLM Reinforcement Learning | Brian Bartoldson, James Diffenderfer, Tal Ben-Nun, Bhavya Kailkhura
- Scalable Pretraining of Retrieval Models | Neel Jain, Brian Bartoldson, Bhavya Kailkhura
- Scaling Up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach | Neel Jain, Brian Bartoldson, Bhavya Kailkhura