#### MFEM

The open-source MFEM library enables application scientists to quickly prototype parallel physics application codes based on PDEs discretized with high-order finite elements.

#### ETHOS

The Enabling Technologies for High-Order Simulations (ETHOS) project performs research on fundamental mathematical technologies for next-generation high-order simulation algorithms.

#### SUNDIALS

SUNDIALS is a SUite of Nonlinear and DIfferential/ALgebraic equation Solvers for initial value problems in ordinary differential equation systems, with sensitivity analysis capabilities, additive Runge-…

#### Steven Roberts

As Computing’s fifth Fernbach Fellow, postdoctoral researcher Steven Roberts will develop, analyze, and implement new time integration methods.

#### Stefanie Guenther

Lawrence Livermore National Lab has named Stefanie Guenther as Computing’s fourth Sidney Fernbach Postdoctoral Fellow in the Computing Sciences. This highly competitive fellowship is named after…

#### Alyson Fox

Alyson Fox is a math geek. She has three degrees in the subject—including a Ph.D. in Applied Mathematics from the University of Colorado at Boulder—and her passion for solving complex challenges…

#### Inaugural industry forum inspires ML community

LLNL held its first-ever Machine Learning for Industry Forum (ML4I) on August 10–12. Co-hosted by the Lab’s High-Performance Computing Innovation Center and Data Science Institute, the virtual event brought together more than 500 attendees from the Department of Energy (DOE) complex, commercial companies, professional societies, and academia.

#### Conference papers highlight importance of data security to machine learning

The 2021 Conference on Computer Vision and Pattern Recognition, the premier conference of its kind, will feature two papers co-authored by an LLNL researcher that aim to improve the understanding of robust machine learning models.

#### A winning strategy for deep neural networks

New research debuting at ICLR 2021 demonstrates a learning-by-compressing approach to deep learning that outperforms traditional methods without sacrificing accuracy.