Over the next three years, CASC researchers and collaborators will integrate LLMs into HPC software to boost performance and sustainability.
Highlights include ML techniques for computed tomography, a scalable Gaussian process framework, safe and trustworthy AI, and autonomous multiscale simulations.
The latest issue of LLNL's magazine explains how the world’s most powerful supercomputer helps scientists safeguard the U.S. nuclear stockpile.
LLNL's Bruce Hendrickson joins other HPC luminaries in this op-ed about the future of the field.
The latest episode of the Big Ideas Lab podcast explores how artificial intelligence is being applied to drug discovery and other applications.
Todd Gamblin has a well-deserved reputation in the HPC software community as a passionate engineer who enjoys rolling up his sleeves and diving into technical problems. It’s not a stretch to see how he got hooked on HPC.
The team used a Bayesian approach to quantify uncertainty in metal strength for tantalum and two common explosive materials, then integrated the results into a coupled metal/high-explosive model.
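The article itself does not include code, but a minimal sketch helps illustrate the kind of Bayesian uncertainty quantification described. The strength model, parameter values, and synthetic data below are purely hypothetical stand-ins (not the team's actual model); a simple Metropolis sampler draws posterior samples and reports credible intervals for the model parameters.

```python
# Minimal sketch (hypothetical model and synthetic data): Bayesian quantification
# of uncertainty in strength-model parameters via Metropolis-Hastings sampling.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical power-law strength model: flow stress as a function of strain.
def flow_stress(strain, sigma0, n):
    return sigma0 * (1.0 + strain) ** n

# Synthetic "experimental" data standing in for tantalum measurements.
strain = np.linspace(0.0, 0.5, 20)
true_params = (0.8, 0.25)          # illustrative values: GPa, hardening exponent
noise = 0.02
data = flow_stress(strain, *true_params) + rng.normal(0.0, noise, strain.size)

def log_posterior(theta):
    sigma0, n = theta
    if sigma0 <= 0.0 or not (0.0 < n < 1.0):     # flat priors with bounds
        return -np.inf
    resid = data - flow_stress(strain, sigma0, n)
    return -0.5 * np.sum((resid / noise) ** 2)   # Gaussian likelihood

# Random-walk Metropolis sampler.
theta = np.array([1.0, 0.3])
logp = log_posterior(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [0.01, 0.005])
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:
        theta, logp = prop, logp_prop
    samples.append(theta.copy())

samples = np.array(samples[5000:])               # discard burn-in
lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)
print("95% credible intervals:", list(zip(lo, hi)))
```

The resulting parameter distributions, rather than single best-fit values, are what would be propagated into a coupled metal/high-explosive model.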
Presented last fall at a conference, a new approach to software binary analysis incorporates large-scale training data and hierarchical embeddings.
The DarkStar inverse design technique blends AI, machine learning, and advanced hydrodynamics simulations to optimize science and engineering solutions starting from the final state.
This interview with HPC-AI Vanguard Kathryn Mohror covers her thoughts on teamwork, her projects, the field, and more.
SC24, held recently in Atlanta, was a landmark event, setting new records and demonstrating LLNL's unparalleled contributions to HPC innovation and impact.
Drawing more than 300 attendees, this year’s “D3” workshop focused on pressing data challenges in nuclear security, energy, and collaborative scientific discovery, and featured a host of talks, presentations, and panels.
The Generative Unconstrained Intelligent Drug Engineering (GUIDE) program accelerates development of medical countermeasure candidates to redefine biological defense.
LLNL is participating in the 36th annual Supercomputing Conference (SC24) in Atlanta on November 17–22, 2024.
AMS is a machine learning solution embedded into scientific applications to automatically replace fine-scale simulations with surrogate models.
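The teaser does not show AMS's interface, so the following is a conceptual sketch only (not the actual AMS API): it illustrates the kind of automatic substitution described, where an application falls back to the expensive fine-scale simulation whenever a toy surrogate's own uncertainty estimate is too high. All functions and thresholds here are illustrative assumptions.

```python
# Conceptual sketch (assumed interface, not the AMS API): dispatch between a
# fine-scale physics call and an ML surrogate based on surrogate uncertainty.
import numpy as np

def fine_scale_simulation(x):
    """Stand-in for an expensive fine-scale physics evaluation."""
    return np.sin(3.0 * x) + 0.1 * x**2

class Surrogate:
    """Toy ensemble surrogate whose member spread serves as an uncertainty proxy."""
    def __init__(self, xs, ys, n_members=5, seed=0):
        rng = np.random.default_rng(seed)
        # Each member fits a small polynomial to a bootstrap resample of the data.
        self.members = []
        for _ in range(n_members):
            idx = rng.integers(0, len(xs), len(xs))
            self.members.append(np.polyfit(xs[idx], ys[idx], deg=4))

    def predict(self, x):
        preds = np.array([np.polyval(c, x) for c in self.members])
        return preds.mean(), preds.std()

# "Train" the surrogate on a handful of fine-scale evaluations.
xs = np.linspace(-1.0, 1.0, 30)
surrogate = Surrogate(xs, fine_scale_simulation(xs))

def evaluate(x, tol=0.05):
    """Use the surrogate when it is confident; otherwise run the fine-scale model."""
    mean, std = surrogate.predict(x)
    if std < tol:
        return mean, "surrogate"
    return fine_scale_simulation(x), "fine-scale"

for x in (0.2, 1.8):   # example queries inside and outside the training range
    value, source = evaluate(x)
    print(f"x={x:+.1f} -> {value:+.3f} ({source})")
```

The key design idea is that the substitution is automatic and self-policing: the application never chooses between models by hand, and untrusted inputs are routed back to the fine-scale simulation.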
A groundbreaking multidisciplinary team is combining the power of exascale computing with AI, advanced workflows, and GPU acceleration to advance scientific innovation and revolutionize digital design.
Learn about the game-changing potential of El Capitan and discover how it will not only transform HPC and AI but also revolutionize scientific research across multiple domains.
A CASC researcher and collaborators study model failure and resilience in a paper accepted to the 2024 International Conference on Machine Learning.
LLNL researchers study model robustness in a paper accepted to the 2024 International Conference on Machine Learning.
The collaboration has enabled the expansion of systems built on the same architecture as LLNL’s upcoming exascale supercomputer, El Capitan, which features AMD’s cutting-edge MI300A processors.
In two papers from the 2024 International Conference on Machine Learning, Livermore researchers investigate how LLMs perform under rigorous, quantitative scrutiny.
To keep employees abreast of the latest tools, two data science–focused projects are under way as part of Lawrence Livermore’s Institutional Scientific Capability Portfolio.
The proposed Frontiers in Artificial Intelligence for Science, Security and Technology (FASST) initiative will advance national security; attract and build a talented workforce; harness AI for scientific discovery; address energy challenges; and develop the technical expertise necessary for AI governance.
This issue highlights some of CASC’s contributions to the DOE's Exascale Computing Project.
LLNL is applying ML to real-world applications on multiple scales. Researchers explain why water filtration, wildfires, and carbon capture are becoming more solvable thanks to groundbreaking data science methodologies on some of the world’s fastest computers.