An LLNL-led effort that performed an unprecedented global climate model simulation on the world’s first exascale supercomputer has won the first-ever Association for Computing Machinery (ACM) Gordon Bell Prize for Climate Modelling, ACM officials announced.
The MFEM virtual workshop highlighted the project’s development roadmap and users’ scientific applications. The event also included Q&A, student lightning talks, and a visualization contest.
LLNL is participating in the 35th annual Supercomputing Conference (SC23), which will be held both virtually and in Denver on November 12–17, 2023.
In recent years, the Lab has further boosted its recruiting profile by offering the prestigious Sidney Fernbach Postdoctoral Fellowship in the Computing Sciences. The fellowship fosters creative partnerships between early-career and experienced scientists, ensuring an annual infusion of fresh ideas into advanced computing research at the Lab.
NIF Computing deploys regular updates to its computer control systems to ensure NIF continues to achieve ignition.
Hosted at LLNL, the Center for Efficient Exascale Discretizations’ annual event featured breakout discussions, more than two dozen speakers, and an evening of bocce ball.
Using explainable artificial intelligence techniques can help increase the reach of machine learning applications in materials science, making the process of designing new materials much more efficient.
With simple mathematical modifications to a common model of clouds and turbulence, LLNL scientists and their collaborators helped minimize nonphysical results.
From wind tunnels and cardiovascular electrodes to the futuristic world of exascale computing, Brian Gunney has been finding solutions to seemingly unsolvable problems.
Responding to a DOE grid optimization challenge, an LLNL-led team developed the mathematical, computational, and software components needed to solve problems of the real-world power grid.
libROM is a library designed to facilitate Proper Orthogonal Decomposition (POD)-based Reduced Order Modeling (ROM).
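The core step in POD-based ROM is extracting a low-dimensional basis from simulation snapshots, typically via a singular value decomposition, then projecting full states onto that basis. The NumPy sketch below is a minimal illustration of that idea only; it does not reflect libROM's actual C++ API, and all names and data in it are hypothetical.

```python
# Minimal sketch of POD-based model reduction (illustrative; not libROM's API).
import numpy as np

def pod_basis(snapshots: np.ndarray, num_modes: int) -> np.ndarray:
    """Compute a POD basis from a snapshot matrix whose columns are solution states."""
    # Thin SVD of the snapshot matrix; the leading left singular vectors are the POD modes.
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :num_modes]

# Hypothetical snapshot data: 50 states of dimension 1000 with 5 underlying modes plus noise.
rng = np.random.default_rng(0)
modes = rng.standard_normal((1000, 5))
coeffs = rng.standard_normal((5, 50))
X = modes @ coeffs + 1e-3 * rng.standard_normal((1000, 50))

basis = pod_basis(X, num_modes=5)   # 1000 x 5 reduced basis
x = X[:, 0]
x_reduced = basis.T @ x             # project a full state into reduced coordinates
x_approx = basis @ x_reduced        # lift back to the full space
print(np.linalg.norm(x - x_approx) / np.linalg.norm(x))  # small relative error
```

Because the snapshots have low-rank structure, a handful of POD modes reconstructs each state accurately, which is what makes the subsequent reduced-order model far cheaper than the full simulation.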
A new component-wise reduced order modeling method enables high-fidelity lattice design optimization.
A high-fidelity, specialized code solves partial differential equations for plasma simulations.
Combining specialized software tools with heterogeneous HPC hardware requires an intelligent workflow performance optimization strategy.
Highlights include MFEM community workshops, compiler co-design, HPC standards committees, and AI/ML for national security.
The second annual MFEM workshop brought together the project’s global user and developer community for technical talks, Q&A, and more.
Presented at the 2022 International Conference on Computational Science, the team’s research introduces metrics that can improve the accuracy of blood flow simulations.
The Earth System Grid Federation is a web-based set of tools that manages and distributes the climate simulation data behind much of the world's Earth system research.
The latest generation of a laser beam–delay technique owes its success to collaboration, dedication, and innovation.
Kevin McLoughlin has always been fascinated by the intersection of computing and biology. His LLNL career encompasses award-winning microbial detection technology, a COVID-19 antiviral drug design pipeline, and work with the ATOM consortium.
As group leader and application developer in the Global Security Computing Applications Division, Jarom Nelson develops intrusion detection and access control software.
One of the most widely used tactical simulations in the world, JCATS is installed in hundreds of U.S. military and civilian organizations, in NATO, and in more than 30 countries.
From molecular screening, a software platform, and an online data portal to the computing systems that power these projects.
LLNL’s cyber programs work across a broad sponsor space to develop technologies addressing sophisticated cyber threats directed at national security and civilian critical infrastructure.
This project advances research in physics-informed ML, invests in validated and explainable ML, creates an advanced data environment, builds ML expertise across the complex, and more.
