The proposed Frontiers in Artificial Intelligence for Science, Security and Technology (FASST) initiative will advance national security, attract and build a talented workforce, harness AI for scientific discovery, address energy challenges, and develop the technical expertise necessary for AI governance.
Topic: Computational Science
This issue highlights some of CASC’s contributions to the DOE's Exascale Computing Project.
LLNL is applying ML to real-world applications on multiple scales. Researchers explain why water filtration, wildfires, and carbon capture are becoming more solvable thanks to groundbreaking data science methodologies on some of the world’s fastest computers.
Developed by LLNL and Portland State University researchers, innovative matrix-free solvers offer performance gains for complex multiphysics simulations.
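The core idea behind matrix-free methods can be illustrated with a minimal sketch (this is a generic example, not the LLNL/Portland State solvers): the operator's action on a vector is computed directly from the stencil, so the matrix is never assembled or stored.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 100

def laplacian_apply(x):
    # Apply the 1D Laplacian stencil [-1, 2, -1] without forming the matrix.
    y = 2.0 * x
    y[:-1] -= x[1:]
    y[1:] -= x[:-1]
    return y

# Wrap the stencil application as a matrix-free linear operator.
A = LinearOperator((n, n), matvec=laplacian_apply)
b = np.ones(n)

# Conjugate gradient only needs matrix-vector products, so it works
# unchanged with the matrix-free operator.
x, info = cg(A, b)
residual = np.linalg.norm(laplacian_apply(x) - b)
```

For high-order finite elements, this on-the-fly evaluation avoids the memory and bandwidth cost of storing dense element matrices, which is where the performance gains come from.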
In a milestone for supercomputing-aided drug design, LLNL and BridgeBio Oncology Therapeutics today announced clinical trials have begun for a first-in-class medication that targets specific genetic mutations implicated in many types of cancer.
LLNL’s HPC capabilities play a significant role in international science research and innovation, and Lab researchers have won 10 R&D 100 Awards in the Software–Services category in the past decade.
Randles, a former Lawrence fellow and current LLNL collaborator, was recognized for “groundbreaking contributions to computational health through innovative algorithms, tools and high performance computing methods for diagnosing and treating a variety of human diseases.”
In a groundbreaking development for addressing future viral pandemics, a multi-institutional team involving LLNL researchers has successfully combined an AI-backed platform with supercomputing to redesign and restore the effectiveness of antibodies whose ability to fight viruses has been compromised.
LLNL researchers have achieved a milestone in accelerating and adding features to complex multiphysics simulations run on GPUs, a development that could advance HPC and engineering.
LLNL’s fusion ignition breakthrough, more than 60 years in the making, was enabled by a combination of traditional fusion target design methods, HPC, and AI techniques.
By accounting for environmental variables that directly impact the electrical grid, such as wildfire, flooding, wind, and sunlight, researchers can improve electrical grid model projections for a more stable future.
MuyGPs helps complete and forecast the brightness data of objects viewed by Earth-based telescopes.
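The general approach can be sketched with a plain Gaussian process interpolation (an illustrative example only, not the MuyGPs library itself): missing brightness samples are inferred from observed ones via the GP posterior mean.

```python
import numpy as np

def rbf(a, b, length=1.0):
    # Squared-exponential kernel between two sets of time points.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

t_obs = np.array([0.0, 1.0, 2.0, 4.0, 5.0])
y_obs = np.sin(t_obs)        # stand-in for observed brightness values
t_new = np.array([3.0])      # gap in the light curve to fill

K = rbf(t_obs, t_obs) + 1e-8 * np.eye(len(t_obs))  # jitter for stability
k_star = rbf(t_new, t_obs)
y_pred = k_star @ np.linalg.solve(K, y_obs)        # GP posterior mean
```

MuyGPs itself gains scalability by restricting each prediction to nearest neighbors rather than conditioning on all observations at once.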
The Enabling Technologies for High-Order Simulations (ETHOS) project performs research on fundamental mathematical technologies for next-generation high-order simulation algorithms.
Thirteen students traveled to Livermore in early December for a computer science course simulating pond ecology and evolution.
Carolyn Albiston is a research software engineer in NIF Shot Data Systems. Her career is the culmination of her wide-ranging interests and skills.
The MFEM virtual workshop highlighted the project’s development roadmap and users’ scientific applications. The event also included Q&A, student lightning talks, and a visualization contest.
LLNL is participating in the 35th annual Supercomputing Conference (SC23), which will be held both virtually and in Denver on November 12–17, 2023.
NIF Computing deploys regular updates to its computer control systems to ensure NIF continues to achieve ignition.
Hosted at LLNL, the Center for Efficient Exascale Discretizations’ annual event featured breakout discussions, more than two dozen speakers, and an evening of bocce ball.
Using explainable artificial intelligence techniques can help increase the reach of machine learning applications in materials science, making the process of designing new materials much more efficient.
With simple mathematical modifications to a common model of clouds and turbulence, LLNL scientists and their collaborators helped minimize nonphysical results.
Responding to a DOE grid optimization challenge, an LLNL-led team developed the mathematical, computational, and software components needed to solve problems of the real-world power grid.
libROM is a library designed to facilitate Proper Orthogonal Decomposition (POD) based Reduced Order Modeling (ROM).
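The POD technique at the heart of libROM can be sketched in a few lines (a minimal standalone example, not libROM's own API): collect solution snapshots, take an SVD, and keep the leading left singular vectors as a reduced basis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 20   # full state dimension, number of snapshots

# Synthetic snapshot matrix with an exact rank of 3, mimicking solution
# states that live on a low-dimensional manifold.
snapshots = rng.standard_normal((n, 3)) @ rng.standard_normal((3, m))

# POD basis = leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = int(np.sum(s > 1e-8 * s[0]))   # effective rank (3 here)
basis = U[:, :r]

# Project a full-order state to r coefficients and reconstruct it.
x = snapshots[:, 0]
x_reduced = basis.T @ x
x_approx = basis @ x_reduced
```

Because the snapshots span only a 3-dimensional subspace, the reduced representation reconstructs the full state essentially exactly; in practice the basis size is chosen by truncating the singular value spectrum at a tolerance.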
A new component-wise reduced order modeling method enables high-fidelity lattice design optimization.
A high-fidelity, specialized code solves partial differential equations for plasma simulations.
