LLNL is participating in the 35th annual Supercomputing Conference (SC23), which will be held both virtually and in Denver on November 12–17, 2023.
The Data and Visualization efforts in the DOE’s Exascale Computing Project provide an ecosystem of capabilities for data management, analysis, lossy compression, and visualization.
Using explainable artificial intelligence techniques can help increase the reach of machine learning applications in materials science, making the process of designing new materials much more efficient.
The Lab’s workhorse visualization tool provides expanded color map features, including options designed for visually impaired users.
This issue highlights some of CASC’s contributions to making controlled laboratory fusion possible at the National Ignition Facility.
Two LLNL-led teams received SciVis Test of Time awards at the 2022 IEEE VIS conference for papers that have achieved lasting relevancy in the field of scientific visualization.
Researchers are starting a three-year project aimed at improving methods for visual analysis of large heterogeneous datasets as part of a recent DOE funding opportunity.
LLNL held its first-ever Machine Learning for Industry Forum (ML4I) on August 10–12, co-hosted by the Lab’s High-Performance Computing Innovation Center and Data Science Institute.
The Livermore-led VisIt visualization and analysis tool has supported scalable, high-quality evaluation of simulation results for over 20 years.
Our use of supercomputers is enabled by the codes developed to model and simulate complex physical phenomena on massively parallel architectures.
Highlights include perspectives on machine learning and artificial intelligence in science, data-driven models, autonomous vehicle operations, and the OpenMP 5.0 standard.
Rushil Anirudh describes the machine learning field as undergoing a “gold rush.”
SOAR (Stateless, One-pass Adaptive Refinement) is a view-dependent mesh refinement and rendering algorithm.
This project’s techniques reduce bandwidth requirements for large unstructured data through data compression and by optimizing data layout for better locality and cache reuse.
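One common way to improve locality for unstructured data, in the spirit of the layout optimization described above, is to renumber mesh vertices so that vertices referenced together sit close in memory. The sketch below uses a simple breadth-first traversal of the vertex adjacency graph; the function name and approach are illustrative assumptions, not the project’s actual algorithm.

```python
from collections import deque

def bfs_reorder(num_vertices, triangles):
    """Compute a vertex permutation via breadth-first traversal of the
    vertex adjacency graph, so that neighboring vertices land near each
    other in the new layout (hypothetical illustration only)."""
    # Build vertex adjacency from triangle connectivity.
    adj = [set() for _ in range(num_vertices)]
    for a, b, c in triangles:
        adj[a].update((b, c))
        adj[b].update((a, c))
        adj[c].update((a, b))
    order, seen = [], [False] * num_vertices
    for start in range(num_vertices):
        if seen[start]:
            continue
        seen[start] = True
        queue = deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in sorted(adj[v]):
                if not seen[w]:
                    seen[w] = True
                    queue.append(w)
    # new_index[old] gives each vertex's position in the new layout.
    new_index = [0] * num_vertices
    for pos, v in enumerate(order):
        new_index[v] = pos
    return new_index

# Example: remap triangle indices to the new vertex layout.
tris = [(0, 5, 2), (2, 5, 3), (3, 5, 4), (0, 2, 1)]
perm = bfs_reorder(6, tris)
remapped = [tuple(perm[v] for v in t) for t in tris]
```

After remapping, triangles that share vertices reference nearby indices, which tends to improve cache reuse when vertex attributes are streamed during rendering or analysis. Production tools typically use more sophisticated orderings (e.g., space-filling curves) toward the same end.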
LLNL and University of Utah researchers have developed an advanced, intuitive method for analyzing and visualizing complex data sets.