CASC computational mathematician Andrew Gillette has always been drawn to mathematics and says it’s about more than just crunching numbers.
The event brought together 35 University of California students, from undergraduates to graduate students across a variety of majors, who worked in groups on four key tasks, using real electrocardiogram data to predict heart health.
Using explainable artificial intelligence techniques can help increase the reach of machine learning applications in materials science, making the process of designing new materials much more efficient.
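As a rough illustration of one common explainability technique (not necessarily the methods used in the work above), the sketch below computes permutation feature importance for a surrogate regression model; the feature names are hypothetical stand-ins for material descriptors.

```python
# Minimal sketch: permutation feature importance on a synthetic regression task.
# Hypothetical feature names stand in for material descriptors; this is an
# illustration of one explainability technique, not the article's method.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=4, noise=0.1, random_state=0)
feature_names = ["density", "band_gap", "formation_energy", "melting_point"]  # hypothetical

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the model's score drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean_drop in zip(feature_names, result.importances_mean):
    print(f"{name}: {mean_drop:.3f}")
```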
The “crystal ball” that gave increased pre-shot confidence in LLNL's fusion ignition breakthrough combined detailed HPC design with a suite of methods that pair physics-based simulation with machine learning, known as cognitive simulation, or CogSim.
The report lays out a comprehensive vision for the DOE Office of Science and NNSA to expand their work in scientific use of AI by building on existing strengths in world-leading high performance computing systems and data infrastructure.
The new model addresses a problem in simulating RAS behavior: conventional methods fall short of the time and length scales needed to observe the biological processes of RAS-related cancers.
A principal investigator at LLNL shares how machine learning on the world’s fastest systems catalyzed the lab’s breakthrough.
Collaborative autonomy software apps allow networked devices to detect, gather, identify and interpret data; defend against cyber-attacks; and continue to operate despite infiltration.
Watch participants at our fall 2022 hackathon train an autonomous race car with reinforcement learning algorithms.
A new collaboration will leverage advanced LLNL-developed software to create a “digital twin” of the near-net shape mill-products system for producing aerospace parts.
Adding machine learning and other artificial intelligence methods to the feedback cycle of experimentation and computer modeling can accelerate scientific discovery.
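One common pattern for putting ML in the experiment-and-modeling feedback cycle is active learning, where a cheap surrogate model chooses the next simulation or experiment to run. The sketch below is a minimal, hypothetical example (a toy function stands in for the expensive simulation); it is not LLNL's actual workflow.

```python
# Minimal active-learning sketch: a Gaussian-process surrogate picks the next
# "experiment" where its prediction is least certain. Toy example only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def run_experiment(x):
    """Stand-in for an expensive simulation or physical experiment."""
    return np.sin(3 * x) + 0.1 * np.random.randn()

candidates = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
X = candidates[[0, -1]]                            # start with two endpoint samples
y = np.array([run_experiment(v[0]) for v in X])

gp = GaussianProcessRegressor()
for _ in range(10):
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    nxt = candidates[np.argmax(std)]               # query where the surrogate is least certain
    X = np.vstack([X, nxt])
    y = np.append(y, run_experiment(nxt[0]))

print("Sampled points:", X.ravel().round(2))
```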
Highlights include MFEM community workshops, compiler co-design, HPC standards committees, and AI/ML for national security.
In a time-trial competition, participants trained an autonomous race car with reinforcement learning algorithms.
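For readers unfamiliar with the reinforcement learning approach mentioned above, the sketch below shows tabular Q-learning on a toy one-dimensional "track" where an agent learns to reach the finish line quickly. It is a hypothetical illustration, not the hackathon's actual racing setup.

```python
# Minimal tabular Q-learning sketch: the agent learns to drive from the start
# cell to the finish cell of a toy 1-D track. Hypothetical environment only.
import numpy as np

n_cells, actions = 10, [-1, +1]          # track positions; move backward or forward
Q = np.zeros((n_cells, len(actions)))
alpha, gamma, epsilon = 0.5, 0.95, 0.1   # learning rate, discount, exploration rate

rng = np.random.default_rng(0)
for episode in range(500):
    state = 0
    while state != n_cells - 1:          # finish line is the last cell
        a = rng.integers(len(actions)) if rng.random() < epsilon else int(np.argmax(Q[state]))
        nxt = int(np.clip(state + actions[a], 0, n_cells - 1))
        reward = 1.0 if nxt == n_cells - 1 else -0.01   # small time penalty rewards speed
        Q[state, a] += alpha * (reward + gamma * np.max(Q[nxt]) - Q[state, a])
        state = nxt

print("Learned actions:", [actions[int(np.argmax(Q[s]))] for s in range(n_cells - 1)])
```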
LLNL participates in the International Parallel and Distributed Processing Symposium (IPDPS), held May 30 through June 3.
A research team won the Best Paper Award at PacificVis 2022 for a resolution-precision-adaptive representation technique that reduces mesh sizes, shrinking the memory and storage footprints of large scientific datasets.
From molecular screening, a software platform, and online data to the computing systems that power these projects.
LLNL’s cyber programs work across a broad sponsor space to develop technologies addressing sophisticated cyber threats directed at national security and civilian critical infrastructure.
This project advances research in physics-informed ML, invests in validated and explainable ML, creates an advanced data environment, builds ML expertise across the complex, and more.
Highlights include power grid challenges, performance analysis, complex boundary conditions, and a novel multiscale modeling approach.
Brian Gallagher works on applications of machine learning for a variety of science and national security questions. He’s also a group leader, student mentor, and the new director of LLNL’s Data Science Challenge.
New research debuting at ICLR 2021 demonstrates a learning-by-compressing approach to deep learning that outperforms traditional methods without sacrificing accuracy.
Highlights include scalable deep learning, high-order finite elements, data race detection, and reduced order models.
BUILD tackles the complexities of HPC software integration with dependency compatibility models, binary analysis tools, efficient logic solvers, and configuration optimization techniques.
Three papers address feature importance estimation under distribution shifts, attribute-guided adversarial training, and uncertainty matching in graph neural networks.
StarSapphire is a collection of scientific data mining projects focusing on the analysis of data from scientific simulations, observations, and experiments.