Held May 7–8 in Washington, DC, the Special Competitive Studies Project (SCSP) AI Expo showcased groundbreaking initiatives in AI and emerging technologies. Kim Budil and other Lab speakers presented on the center stage and at the DOE exhibition booth.
In a groundbreaking development for addressing future viral pandemics, a multi-institutional team involving LLNL researchers has successfully combined an AI-backed platform with supercomputing to redesign and restore the effectiveness of antibodies whose ability to fight viruses has been compromised.
Throughout the workshop, speakers, panelists, and attendees focused on algorithm development, the potential dangers of superhuman AI systems, and the urgent scientific and political measures needed to understand and mitigate the risks to humans.
LLNL’s fusion ignition breakthrough, more than 60 years in the making, was enabled by a combination of traditional fusion target design methods, HPC, and AI techniques.
By taking into consideration weather variables that directly impact the electrical grid, such as wildfire, flooding, wind, and sunlight, researchers can improve electrical grid model projections for a more stable future.
New research reveals subtleties in the performance of neural image compression methods, offering insights toward improving these models for real-world applications.
LLNL is participating in the 35th annual Supercomputing Conference (SC23), which will be held both virtually and in Denver on November 12–17, 2023.
Merlin is an open-source workflow orchestration and coordination tool that makes it easy to build, run, and process large-scale workflows.
Cindy Gonzales earned a bachelor’s degree and master’s degree and changed careers—all while working at the Lab. Meet the deputy director of LLNL’s Data Science Institute.
CASC computational mathematician Andrew Gillette has always been drawn to mathematics and says it’s about more than just crunching numbers.
Using explainable artificial intelligence techniques can help increase the reach of machine learning applications in materials science, making the process of designing new materials much more efficient.
Highlights include MFEM community workshops, compiler co-design, HPC standards committees, and AI/ML for national security.
In a time-trial competition, participants trained an autonomous race car with reinforcement learning algorithms.
LLNL participates in the International Parallel and Distributed Processing Symposium (IPDPS) from May 30 through June 3.
Winning the best paper award at PacificVis 2022, a research team has developed a resolution-precision-adaptive representation technique that reduces mesh sizes, thereby reducing the memory and storage footprints of large scientific datasets.
From molecular screening, a software platform, and an online data portal to the computing systems that power these projects.
LLNL’s cyber programs work across a broad sponsor space to develop technologies addressing sophisticated cyber threats directed at national security and civilian critical infrastructure.
This project advances research in physics-informed ML, invests in validated and explainable ML, creates an advanced data environment, builds ML expertise across the complex, and more.
Highlights include power grid challenges, performance analysis, complex boundary conditions, and a novel multiscale modeling approach.
Brian Gallagher works on applications of machine learning for a variety of science and national security questions. He’s also a group leader, student mentor, and the new director of LLNL’s Data Science Challenge.
New research debuting at ICLR 2021 demonstrates a learning-by-compressing approach to deep learning that outperforms traditional methods without sacrificing accuracy.
Highlights include scalable deep learning, high-order finite elements, data race detection, and reduced order models.
BUILD tackles the complexities of HPC software integration with dependency compatibility models, binary analysis tools, efficient logic solvers, and configuration optimization techniques.
Three papers address feature importance estimation under distribution shifts, attribute-guided adversarial training, and uncertainty matching in graph neural networks.
StarSapphire is a collection of scientific data mining projects focusing on the analysis of data from scientific simulations, observations, and experiments.
