Topic: AI/ML

The 2022 International Conference for High Performance Computing, Networking, Storage, and Analysis (SC22) returned to Dallas, where a large contingent of LLNL staff participated in sessions, panels, paper presentations, and workshops centered on HPC.

News Item

The award recognizes progress in the team's ML-based approach to modeling ICF experiments, which has led to the creation of faster and more accurate models of ICF implosions.

News Item

In a time-trial competition, participants trained an autonomous race car with reinforcement learning algorithms. A minimal reinforcement learning sketch follows this item.

News Item
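The competition's actual environment and algorithm are not described in the item above, so the following is only a generic sketch of reinforcement learning: tabular Q-learning on a toy one-dimensional "track." The environment, rewards, and hyperparameters are illustrative assumptions, not the competition's setup.

```python
# Minimal tabular Q-learning on a toy 1-D "track" (illustrative only; not the
# competition's actual racing environment or algorithm).
import random

TRACK_LEN = 10            # states 0..9; state 9 is the finish line
ACTIONS = [0, 1]          # 0 = coast (stay put), 1 = accelerate (move forward)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

# Q-table: estimated return for each (state, action) pair
Q = [[0.0 for _ in ACTIONS] for _ in range(TRACK_LEN)]

def step(state, action):
    """Advance the car; +10 reward at the finish line, -1 per time step otherwise."""
    next_state = min(state + action, TRACK_LEN - 1)
    if next_state == TRACK_LEN - 1:
        return next_state, 10.0, True
    return next_state, -1.0, False

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection: mostly exploit, occasionally explore
        if random.random() < EPS:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # Standard Q-learning update
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

print("Learned action per state:", [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(TRACK_LEN)])
```

After enough episodes the greedy policy settles on "accelerate" in every state, which is the point of the exercise: the agent discovers the fastest run purely from reward feedback rather than from explicit programming.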

The second article in a series about the Lab's stockpile stewardship mission highlights computational models, parallel architectures, and data science techniques.

News Item

The Adaptive Computing Environment and Simulations (ACES) project will advance fissile materials production models and reduce risk of nuclear proliferation.

News Item

More than 100 million smart meters have been installed in the U.S. to record and communicate electric consumption, voltage, and current to consumers and grid operators. LLNL has developed GridDS to help make the most of this data. A simple analysis sketch follows this item.

News Item
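GridDS itself is not shown here; as a stand-in, the sketch below uses plain pandas to generate synthetic hourly meter readings and flag anomalous consumption with a rolling z-score, one simple example of the kind of analysis smart-meter data supports. The column names, window size, injected spike, and threshold are all illustrative assumptions, and this is not GridDS's API.

```python
# Rolling z-score anomaly flagging on synthetic smart-meter readings
# (generic pandas sketch; this does not use GridDS or its API).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical hourly consumption (kWh) for one meter over 30 days
index = pd.date_range("2022-01-01", periods=24 * 30, freq="H")
kwh = 1.0 + 0.5 * np.sin(np.arange(len(index)) * 2 * np.pi / 24)  # daily cycle
kwh += rng.normal(scale=0.1, size=len(index))                     # measurement noise
kwh[500] += 5.0                                                    # injected anomaly

df = pd.DataFrame({"kwh": kwh}, index=index)

# Deviation from the local (24-hour) mean, in units of the local standard deviation
window = df["kwh"].rolling(24, min_periods=24)
df["zscore"] = (df["kwh"] - window.mean()) / window.std()

# Flag readings more than 3 standard deviations from the local mean
print(df[df["zscore"].abs() > 3.0])
```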

An LLNL team will be among the first researchers to perform work on the world’s first exascale supercomputer—Oak Ridge National Laboratory’s Frontier—when they use the system to model cancer-causing protein mutations.

News Item

Livermore’s machine learning experts aim to provide performance assurances and enable trust in machine learning technology through innovative validation and verification techniques.

News Item

The Accelerating Therapeutic Opportunities in Medicine (ATOM) consortium is showing “significant” progress in demonstrating that HPC and machine learning tools can speed up the drug discovery process, ATOM co-lead Jim Brase said at a recent webinar.

News Item

LLNL participates in the International Parallel and Distributed Processing Symposium (IPDPS), held May 30 through June 3.

News Item

Winning the best paper award at PacificVis 2022, a research team has developed a resolution-precision-adaptive representation technique that reduces mesh sizes, thereby reducing the memory and storage footprints of large scientific datasets. A toy illustration of precision adaptivity follows this item.

News Item
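The paper's resolution-precision-adaptive representation is not reproduced here; the sketch below only illustrates the underlying idea of precision adaptivity via block-wise quantization of a scalar field, spending fewer bits where values vary little so the storage footprint shrinks while a per-block error bound holds. The block size, tolerance, and data are illustrative assumptions, not the authors' method.

```python
# Block-wise adaptive-precision quantization of a 1-D scalar field (a toy
# illustration of precision adaptivity, not the PacificVis paper's technique).
import numpy as np

def quantize_blocks(values, block_size=64, tol=1e-3):
    """Quantize each block with the fewest bits that keep its error below tol."""
    compressed = []
    for start in range(0, len(values), block_size):
        block = values[start:start + block_size]
        lo, span = block.min(), block.max() - block.min()
        # Smooth blocks need few bits: pick the smallest width whose
        # quantization step (span / 2**bits) stays within the tolerance.
        bits = 1
        while span / (2 ** bits) > tol and bits < 32:
            bits += 1
        if span > 0:
            codes = np.round((block - lo) / span * (2 ** bits - 1)).astype(np.uint32)
        else:
            codes = np.zeros(len(block), dtype=np.uint32)
        compressed.append((lo, span, bits, codes))
    return compressed

def dequantize_blocks(compressed):
    return np.concatenate([lo + codes / max(2 ** bits - 1, 1) * span
                           for lo, span, bits, codes in compressed])

field = np.sin(np.linspace(0, 4 * np.pi, 4096)) + 0.01 * np.random.rand(4096)
packed = quantize_blocks(field)
restored = dequantize_blocks(packed)
total_bits = sum(bits * len(codes) for _, _, bits, codes in packed)
print("max reconstruction error:", np.abs(field - restored).max())
print("bits per value:", total_bits / len(field), "(vs. 64 for float64)")
```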

Technologies developed through the Next-Generation High Performance Computing Network project are expected to support mission-critical applications for HPC, AI and ML, and high performance data analytics.

News Item

Sponsored by the DSI, LLNL’s winter hackathon took place on February 16–17. In addition to traditional hacking, the hackathon included a special datathon competition in anticipation of the Women in Data Science (WiDS) conference on March 7.

News Item

From molecular screening, a software platform, and an online data portal to the computing systems that power these projects.

Project

LLNL’s cyber programs work across a broad sponsor space to develop technologies addressing sophisticated cyber threats directed at national security and civilian critical infrastructure.

Project

This project advances research in physics-informed ML, invests in validated and explainable ML, creates an advanced data environment, builds ML expertise across the complex, and more.

Project

LC sited two different AI accelerators in 2020: the Cerebras wafer-scale AI engine, attached to Lassen, and an AI accelerator from SambaNova Systems, integrated into the Corona cluster.

Project

LLNL researchers and collaborators have developed a highly detailed, ML-backed multiscale model revealing the importance of lipids to RAS, a family of proteins whose mutations are linked to many cancers.

News Item

Brian Gallagher works on applications of machine learning for a variety of science and national security questions. He’s also a group leader, student mentor, and the new director of LLNL’s Data Science Challenge.

People Highlight

New research debuting at ICLR 2021 demonstrates a learning-by-compressing approach to deep learning that outperforms traditional methods without sacrificing accuracy.

News Item

BUILD tackles the complexities of HPC software integration with dependency compatibility models, binary analysis tools, efficient logic solvers, and configuration optimization techniques. A toy solver example follows this item.

Project
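The logic-solver angle mentioned above can be illustrated with a small example: the sketch below encodes a made-up package-compatibility problem as Boolean constraints and asks the Z3 solver for a consistent install set. The package names and constraints are invented for illustration and are not BUILD's actual dependency models or tooling.

```python
# Toy dependency-compatibility check with an SMT solver (illustrative only;
# the packages and constraints are invented and this is not BUILD's tooling).
from z3 import And, Bool, Implies, Not, Or, Solver, is_true, sat

# One Boolean per package@version: True means "install this version"
app       = Bool("app@1.0")
libfoo_v1 = Bool("libfoo@1.0")
libfoo_v2 = Bool("libfoo@2.0")
libbar_v3 = Bool("libbar@3.0")

s = Solver()
s.add(app)                                      # we want the application installed
s.add(Implies(app, Or(libfoo_v1, libfoo_v2)))   # app needs some version of libfoo
s.add(Implies(app, libbar_v3))                  # app needs libbar 3.0
s.add(Not(And(libfoo_v1, libfoo_v2)))           # at most one libfoo version at a time
s.add(Implies(libbar_v3, Not(libfoo_v1)))       # conflict: libbar 3.0 vs. libfoo 1.0

if s.check() == sat:
    model = s.model()
    chosen = [str(p) for p in (app, libfoo_v1, libfoo_v2, libbar_v3)
              if is_true(model.evaluate(p))]
    print("Compatible install set:", chosen)
else:
    print("No compatible configuration exists")
```

Real dependency problems involve thousands of such constraints; the point here is only that compatibility questions reduce naturally to satisfiability, which is what makes efficient logic solvers relevant.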

Three papers address feature importance estimation under distribution shifts, attribute-guided adversarial training, and uncertainty matching in graph neural networks.

News Item