New research debuting at ICLR 2021 demonstrates a learning-by-compressing approach to deep learning that outperforms traditional methods without sacrificing accuracy.

# Topic: *Computational Math*

The latest issue of LLNL's *Science & Technology Review* magazine showcases the Computing directorate in its cover story, alongside a commentary by Bruce Hendrickson.

Highlights include scalable deep learning, high-order finite elements, data race detection, and reduced-order models.

The *hypre* team's latest work enables scientists to efficiently exploit modern extreme-scale, GPU-accelerated supercomputers across a wide range of scientific problems.

SIAM announced its 2021 Class of Fellows, including LLNL computational mathematician Rob Falgout. Falgout is best known for his development of multigrid methods and for hypre, one of the world’s most popular parallel multigrid codes.
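
Multigrid methods like those in hypre accelerate solvers by combining cheap smoothing on a fine grid with corrections computed on coarser grids. As a language-agnostic illustration (not hypre's actual C API), here is a minimal NumPy sketch of a geometric multigrid V-cycle for the 1D Poisson problem; the function and variable names are ours, chosen for clarity.

```python
import numpy as np

def apply_A(u, h):
    """Apply the standard 1D Laplacian stencil (-u[i-1] + 2u[i] - u[i+1]) / h^2
    with zero Dirichlet boundary values."""
    Au = 2.0 * u.copy()
    Au[:-1] -= u[1:]
    Au[1:] -= u[:-1]
    return Au / (h * h)

def v_cycle(u, f, h, n_smooth=3):
    """One multigrid V-cycle for -u'' = f on the interior points, spacing h."""
    n = u.size
    if n == 1:                       # coarsest grid: solve the 1x1 system exactly
        return np.array([0.5 * h * h * f[0]])
    omega = 2.0 / 3.0                # weighted-Jacobi damping factor
    for _ in range(n_smooth):        # pre-smoothing
        u = u + omega * 0.5 * h * h * (f - apply_A(u, h))
    r = f - apply_A(u, h)
    # Full-weighting restriction of the residual to the coarse grid.
    rc = 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h, n_smooth)
    # Linear interpolation of the coarse-grid correction back to the fine grid.
    e = np.empty(n)
    e[1::2] = ec
    ecp = np.concatenate(([0.0], ec, [0.0]))
    e[0::2] = 0.5 * (ecp[:-1] + ecp[1:])
    u = u + e
    for _ in range(n_smooth):        # post-smoothing
        u = u + omega * 0.5 * h * h * (f - apply_A(u, h))
    return u

# Demo: -u'' = pi^2 sin(pi x) on (0, 1), whose exact solution is u = sin(pi x).
n = 127                              # 2^7 - 1 interior points
h = 1.0 / (n + 1)
x = h * np.arange(1, n + 1)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(20):
    u = v_cycle(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))
rel_res = np.linalg.norm(f - apply_A(u, h)) / np.linalg.norm(f)
```

The appeal of this structure is that the cost per cycle is linear in the number of unknowns and the error reduction per cycle is mesh-independent, which is what makes multigrid attractive at extreme scale.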

Our researchers will be well represented at the virtual SIAM Conference on Computational Science and Engineering (CSE21), held March 1–5. SIAM, the Society for Industrial and Applied Mathematics, has an international community of more than 14,500 individual members.

Three papers address feature importance estimation under distribution shifts, attribute-guided adversarial training, and uncertainty matching in graph neural networks.

An LLNL team has developed a “Learn-by-Calibrating” method for creating powerful scientific emulators that could be used as proxies for far more computationally intensive simulators.

Proxy apps serve as specific targets for testing and simulation without the time, effort, and expertise that porting or changing most production codes would require.

The Association for Women in Mathematics has named computational scientist Carol Woodward as a 2021 fellow, recognizing her commitment to supporting and advancing women in the mathematical sciences.

The SAMRAI library is the code base in CASC for exploring application, numerical, parallel computing, and software issues associated with structured adaptive mesh refinement.

This summer, the Computing Scholar Program welcomed 160 undergraduate and graduate students into virtual internships. The Lab’s open-source community was already primed for student participation.

Lawrence Livermore National Laboratory has named Stefanie Guenther as Computing’s fourth Sidney Fernbach Postdoctoral Fellow in the Computing Sciences. This highly competitive fellowship is named after LLNL’s former Director of Computation and is awarded to exceptional candidates who demonstrate the potential for significant achievements in computational mathematics, computer science, data science, or scientific computing.

This video describes MFEM (Modular Finite Element Methods), an open-source software library that provides advanced mathematical algorithms for use by scientific applications.
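
To give a flavor of what a finite element library such as MFEM automates, here is a minimal NumPy sketch (not MFEM's C++ API) of the lowest-order case: assembling and solving the 1D Poisson problem with linear elements on a uniform mesh. All names here are illustrative.

```python
import numpy as np

# Mesh: n_el linear elements on [0, 1], zero Dirichlet boundary conditions.
n_el = 16
h = 1.0 / n_el
n = n_el - 1                       # interior nodes

# Each element contributes the local stiffness (1/h) [[1, -1], [-1, 1]];
# assembled over the mesh, this yields the tridiagonal global matrix below.
A = (1.0 / h) * (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))

# Load vector for f(x) = 1: the integral of f times each hat function is h.
b = h * np.ones(n)

u = np.linalg.solve(A, b)

x = h * np.arange(1, n_el)
exact = 0.5 * x * (1.0 - x)        # exact solution of -u'' = 1, u(0) = u(1) = 0
```

A library like MFEM generalizes every line of this sketch: arbitrary meshes, high polynomial orders, and parallel assembly, with the same conceptual pipeline of local element matrices assembled into a global system.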

The Center for Efficient Exascale Discretizations recently released MFEM v4.1, which introduces features important for the nation’s first exascale supercomputers. LLNL's Tzanio Kolev explains.

Highlights include response to the COVID-19 pandemic, high-order matrix-free algorithms, and managing memory spaces.

Alyson Fox is a math geek. She has three degrees in the subject—including a Ph.D. in Applied Mathematics from the University of Colorado at Boulder—and her passion for solving complex challenges drives her work at LLNL’s Center for Applied Scientific Computing (CASC).

The third annual WiDS Livermore event, held in early March, featured speakers, a career panel, mentoring, and a livestream.

LLNL bested more than two dozen teams to place first overall in Challenge 1 of the DOE Grid Optimization Competition, aimed at developing a more reliable, resilient, and secure U.S. electrical grid.

The extreme-scale scientific software development kit (xSDK) is an ecosystem of independently developed math libraries and scientific domain components.

Researchers develop innovative data representations and algorithms to provide faster, more efficient ways to preserve information encoded in data.

Highlights include perspectives on machine learning and artificial intelligence in science, data-driven models, autonomous vehicle operations, and the OpenMP 5.0 standard.

Simulation workflows for arbitrary Lagrangian–Eulerian (ALE) methods often require manual tuning. We are developing novel predictive analytics for these simulations, along with an infrastructure for integrating the analytics into workflows.

The *hypre* library's comprehensive suite of scalable parallel linear solvers makes large-scale scientific simulations possible by solving problems faster.
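
Among the Krylov methods such libraries provide, conjugate gradient is the workhorse for symmetric positive definite systems. As a self-contained illustration (again not hypre's C interface), here is a minimal matrix-free conjugate gradient in NumPy; the `A_mul` callback mirrors the operator-application style that large solver libraries favor. Names are ours.

```python
import numpy as np

def conjugate_gradient(A_mul, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive definite A, given only a
    function A_mul(v) that applies the operator to a vector."""
    x = np.zeros_like(b)
    r = b - A_mul(x)               # initial residual
    p = r.copy()                   # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A_mul(p)
        alpha = rs / (p @ Ap)      # step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p  # new A-conjugate search direction
        rs = rs_new
    return x

# Demo on a small SPD system: the 1D Laplacian matrix.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(lambda v: A @ v, b)
rel_res = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

In practice, scalability comes from pairing a Krylov iteration like this with a strong preconditioner, which is where hypre's multigrid methods come in.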

Highlights include debris and shrapnel modeling at NIF, scalable algorithms for complex engineering systems, magnetic fusion simulation, and data placement optimization on GPUs.