LLNL's Ian Lee joins a Dots and Bridges panel to discuss HPC as a critical resource for data assimilation and numerical weather prediction research.
Topic: Storage, File Systems, and I/O
Innovative hardware provides near-node local storage alongside large-capacity storage.
A research team from Oak Ridge and Lawrence Livermore national labs won the first IPDPS Best Open-Source Contribution Award for the paper “UnifyFS: A User-level Shared File System for Unified Access to Distributed Local Storage.”
LC’s adaptation of the OpenZFS software provides parallel file systems with improved performance and scalability.
A multidecade, multi-laboratory collaboration evolves scalable long-term data storage and retrieval solutions to survive the march of time.
LLNL is home to the world’s largest Spectra TFinity™ system, which offers the speed, agility, and capacity required to take LLNL into the exascale era.
As Computing’s sixth Fernbach Fellow, postdoctoral researcher Chen Wang will work on a new I/O programming paradigm and improve HPC storage consistency models under the mentorship of Kathryn Mohror.
Employees gathered for the Lab’s first-ever Employee Engagement Day, held Oct. 11. The event featured food, drink, informative displays, historical films and more.
Computer scientist Kathryn Mohror is among LLNL's recipients of the Department of Energy’s Early Career Research Program awards.
After 30 years, the High Performance Storage System (HPSS) collaboration continues to lead and adapt to the needs of the time while honoring its primary mission: long-term stewardship of the crown-jewel data of government, academic, and commercial organizations around the world.
Livermore’s archive leverages a hierarchical storage management application running on a cluster architecture that is user-friendly, extremely scalable, and lightning fast.
This year marks the 30th anniversary of the High Performance Storage System (HPSS) collaboration, comprising five DOE HPC national laboratories: LLNL, Lawrence Berkeley, Los Alamos, Oak Ridge, and Sandia, along with industry partner IBM.
LLNL participates in the International Parallel and Distributed Processing Symposium (IPDPS), held May 30 through June 3.
The Exascale Computing Project (ECP) 2022 Community Birds-of-a-Feather Days will take place May 10–12 via Zoom. The event gives the HPC community an opportunity to engage with ECP teams and discuss their latest development efforts.
From molecular screening, a software platform, and an online data portal to the computing systems that power these projects.
El Capitan will have a peak performance of more than 2 exaflops—roughly 16 times faster on average than the Sierra system—and is projected to be several times more energy efficient than Sierra.
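For context, the 16x figure follows from comparing El Capitan’s expected peak to Sierra’s published peak of roughly 125 petaflops (the exact ratio depends on which peak figures are compared):

```latex
\frac{2\,\text{exaflops}}{125\,\text{petaflops}} = \frac{2{,}000\,\text{PF}}{125\,\text{PF}} = 16
```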
Supported by the Advanced Simulation and Computing program, Axom focuses on developing software infrastructure components that can be shared by HPC apps running on diverse platforms.
CTO Bronis de Supinski discusses the integrated storage strategy of the future El Capitan exascale supercomputing system, which will have more than 2 exaflops of raw computing power spread across its nodes.
A near node local storage innovation called Rabbit factored heavily into LLNL’s decision to select Cray’s proposal for its CORAL-2 machine, the lab’s first exascale-class supercomputer, El Capitan.
Proxy apps serve as specific targets for testing and simulation without the time, effort, and expertise that porting or changing most production codes would require.
This open-source file system framework supports hierarchical HPC storage systems by utilizing node-local burst buffers.
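As a rough illustration of the usage model (a minimal sketch under stated assumptions, not UnifyFS’s client API): once the UnifyFS daemons are running for a job and expose a shared namespace under a prefix such as /unifyfs (the prefix and file names below are illustrative), each rank performs ordinary POSIX I/O against that prefix, and the framework services the requests from node-local burst buffers while presenting one namespace across all nodes.

```c
/* Illustrative sketch only, assuming a UnifyFS-managed prefix of /unifyfs:
 * each MPI rank writes a checkpoint file with ordinary POSIX calls. The
 * bytes land in node-local burst buffers, but every rank sees one shared
 * namespace. */
#include <fcntl.h>
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char path[256];
    snprintf(path, sizeof(path), "/unifyfs/ckpt.%d", rank); /* assumed mount prefix */

    char data[64];
    int len = snprintf(data, sizeof(data), "checkpoint from rank %d\n", rank);

    int fd = open(path, O_CREAT | O_WRONLY, 0644);
    if (fd >= 0) {
        write(fd, data, (size_t)len);
        close(fd);
    }

    /* After a barrier, any rank could read any checkpoint in the shared
     * namespace, even though the data lives on other nodes' local storage. */
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}
```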
Highlights include complex simulation codes, uncertainty quantification, discrete event simulation, and the Unify file system.
“If applications don’t read and write files in an efficient manner,” system software developer Elsa Gonsiorowski warns, “entire systems can crash.”
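A toy example of the kind of pattern she is describing (not from the article): issuing one tiny, flushed write per record generates orders of magnitude more I/O requests than aggregating records in memory and writing them out in large chunks, and at scale that difference is what pushes a parallel file system toward trouble.

```c
/* Toy illustration (not from the article): many tiny writes vs. one
 * buffered write. On a parallel file system, the second pattern issues
 * far fewer I/O operations for the same data. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NRECORDS 100000
#define RECLEN   32

static void write_tiny(const char *path)
{
    FILE *f = fopen(path, "w");
    char rec[RECLEN];
    for (int i = 0; i < NRECORDS; i++) {
        snprintf(rec, sizeof(rec), "record %d\n", i);
        fwrite(rec, 1, strlen(rec), f);   /* one small request per record */
        fflush(f);                        /* forces a syscall each time */
    }
    fclose(f);
}

static void write_buffered(const char *path)
{
    FILE *f = fopen(path, "w");
    /* Aggregate records in memory, then let large writes flow out. */
    char *buf = malloc((size_t)NRECORDS * RECLEN);
    size_t off = 0;
    for (int i = 0; i < NRECORDS; i++)
        off += (size_t)snprintf(buf + off, RECLEN, "record %d\n", i);
    fwrite(buf, 1, off, f);               /* few large requests */
    free(buf);
    fclose(f);
}

int main(void)
{
    write_tiny("tiny.out");
    write_buffered("buffered.out");
    return 0;
}
```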
Livermore Computing staff is enhancing the high-speed InfiniBand data network used in many of its high performance computing clusters and file systems.
Spindle improves the library-loading performance of dynamically linked HPC applications by plugging into the system’s dynamic linker and intercepting its file operations.
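Spindle itself hooks the dynamic loader and serves cached library copies so that thousands of processes do not hammer the shared file system at startup. The snippet below is not Spindle code, just a minimal LD_PRELOAD-style sketch of what “intercepting file operations” means in practice: it wraps open(), where a real loader cache would redirect the path to a node-local copy.

```c
/* Minimal interposition sketch (illustrative, not Spindle's implementation).
 * Build:  gcc -shared -fPIC -o libintercept.so intercept.c -ldl
 * Run:    LD_PRELOAD=./libintercept.so ./app
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>

typedef int (*open_fn)(const char *, int, ...);

int open(const char *path, int flags, ...)
{
    static open_fn real_open = NULL;
    if (!real_open)
        real_open = (open_fn)dlsym(RTLD_NEXT, "open");

    /* A loader cache would rewrite 'path' here, pointing library opens at
     * a node-local copy instead of the shared parallel file system. */
    fprintf(stderr, "[intercept] open(%s)\n", path);

    if (flags & O_CREAT) {
        va_list ap;
        va_start(ap, flags);
        mode_t mode = va_arg(ap, mode_t);
        va_end(ap);
        return real_open(path, flags, mode);
    }
    return real_open(path, flags);
}
```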