LLNL researchers have posters and workshop papers accepted to the 13th International Conference on Learning Representations, held April 24–28.
The February 28 event brought together over 1,400 Department of Energy scientists across multiple sites to explore how cutting-edge AI models could transform scientific research.
Increased resource utilization is one goal of new architectures. At Livermore Computing, these include AI accelerators such as the SambaNova and Cerebras systems and El Capitan's Rabbit modules.
Over the next three years, CASC researchers and collaborators will integrate LLMs into HPC software to boost performance and sustainability.
Highlights include ML techniques for computed tomography, a scalable Gaussian process framework, safe and trustworthy AI, and autonomous multiscale simulations.
The latest issue of LLNL's magazine explains how the world’s most powerful supercomputer helps scientists safeguard the U.S. nuclear stockpile.
LLNL's Bruce Hendrickson joins other HPC luminaries in this op-ed about the future of the field.
The latest episode of the Big Ideas Lab podcast investigates the use of artificial intelligence for drug discovery and other applications.
Todd Gamblin has a well-deserved reputation in the HPC software community as a passionate engineer who enjoys rolling up his sleeves and diving into technical problems. It’s not a stretch to see how he got hooked on HPC.
Presented last fall at a conference, a new approach to software binary analysis incorporates large-scale training data and hierarchical embeddings.
The DarkStar inverse design technique blends AI, machine learning, and advanced hydrodynamics simulations to optimize science and engineering solutions starting from the final state.
This interview with HPC-AI Vanguard Kathryn Mohror covers her thoughts on teamwork, her projects, the field, and more.
SC24, held recently in Atlanta, was a landmark event, setting new records and demonstrating LLNL's unparalleled contributions to HPC innovation and impact.
The Generative Unconstrained Intelligent Drug Engineering (GUIDE) program accelerates development of medical countermeasure candidates to redefine biological defense.
LLNL is participating in the 36th annual Supercomputing Conference (SC24) in Atlanta on November 17–22, 2024.
Learn about the game-changing potential of El Capitan and discover how it will not only transform HPC and AI but also revolutionize scientific research across multiple domains.
A groundbreaking multidisciplinary team is combining the power of exascale computing with AI, advanced workflows, and GPU acceleration to advance scientific innovation and revolutionize digital design.
A CASC researcher and collaborators study model failure and resilience in a paper accepted to the 2024 International Conference on Machine Learning.
LLNL researchers study model robustness in a paper accepted to the 2024 International Conference on Machine Learning.
The collaboration has enabled the expansion of systems built on the same architecture as LLNL’s upcoming exascale supercomputer, El Capitan, featuring AMD’s cutting-edge MI300A processors.
In two papers from the 2024 International Conference on Machine Learning, Livermore researchers investigate how LLMs perform under rigorous, quantifiable scrutiny.
To keep employees abreast of the latest tools, two data science–focused projects are under way as part of Lawrence Livermore’s Institutional Scientific Capability Portfolio.
The proposed Frontiers in Artificial Intelligence for Science, Security and Technology (FASST) initiative will advance national security; attract and build a talented workforce; harness AI for scientific discovery; address energy challenges; and develop the technical expertise necessary for AI governance.
This issue highlights some of CASC’s contributions to the DOE's Exascale Computing Project.
LLNL is applying ML to real-world applications on multiple scales. Researchers explain why water filtration, wildfires, and carbon capture are becoming more solvable thanks to groundbreaking data science methodologies on some of the world’s fastest computers.