I/O, Networking, and Storage
Disk- and tape-delivered I/O bandwidths are being rapidly outpaced by capacity increases, which means valuable processor time is being wasted while waiting for data delivery. For extreme-scale machines to be productive, bandwidth challenges throughout the entire I/O stack must be addressed. We’re working on techniques and technologies that leverage node-local or near-node storage, refactor parallel file systems, and evolve tertiary storage software to enable efficient extreme-scale computing environments.
Lawrence Livermore heads to the ISC High Performance Conference (ISC19) in Frankfurt, Germany, on June 16–20. The event will draw more than 3,500 participants from the research and commercial communities, and 160 exhibitors will share the latest technology and products of interest to HPC developers and users. Be sure to follow LLNL Computing on Twitter with the #ISC19 hashtag.
When it comes to IT needs, a 6,500-employee national laboratory is effectively a small city. Computation’s enterprise computing–focused divisions provide technical support and customer service for LLNL’s vast inventory of desktop computing hardware and software, servers, and mobile devices, and they deliver the other IT services that fulfill Livermore’s requirements in communication, collaboration, and cyber security.
Dozens of members of LLNL’s Computation Directorate will attend the 2018 Supercomputing Conference. The Laboratory’s presence includes tutorials, poster and paper sessions, and the Job Fair.
I/O and application-level benchmarks put Intel’s Optane 3D XPoint non-volatile memory technology to the test.
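The benchmarks themselves are not reproduced here, but a sequential-write microbenchmark of the kind used in such tests can be sketched in a few lines of C. Everything below (the file path, block size, and total size) is illustrative and is not taken from LLNL’s benchmark suite.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define BLOCK_SIZE (1 << 20)   /* 1 MiB per write call (illustrative) */
#define NUM_BLOCKS 1024        /* 1 GiB written in total (illustrative) */

int main(void) {
    char *buf = malloc(BLOCK_SIZE);
    if (!buf) { perror("malloc"); return 1; }
    memset(buf, 0xAB, BLOCK_SIZE);

    FILE *fp = fopen("testfile.bin", "wb");  /* placeholder path */
    if (!fp) { perror("fopen"); free(buf); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < NUM_BLOCKS; i++) {
        if (fwrite(buf, 1, BLOCK_SIZE, fp) != BLOCK_SIZE) {
            perror("fwrite"); return 1;
        }
    }
    fflush(fp);             /* drain stdio's user-space buffer */
    fsync(fileno(fp));      /* force data out of the OS page cache */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    fclose(fp);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double mib  = (double)NUM_BLOCKS * BLOCK_SIZE / (1 << 20);
    printf("wrote %.0f MiB in %.2f s -> %.1f MiB/s\n", mib, secs, mib / secs);
    free(buf);
    return 0;
}
```

Varying the block size and comparing buffered against direct I/O is typically how such benchmarks expose where a device like Optane behaves differently from conventional SSDs.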
“If applications don’t read and write files in an efficient manner,” system software developer Elsa Gonsiorowski warns, “entire systems can crash.”
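The quote does not come with code, but one common example of “an efficient manner” on a parallel file system is issuing a few large, contiguous requests rather than many tiny ones. The sketch below, which assumes a placeholder filename, uses an MPI-IO collective write so the MPI library can aggregate each rank’s data before it reaches the file system; it is a generic illustration, not code from Gonsiorowski’s work.

```c
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK (4 << 20)  /* 4 MiB per rank: one large, contiguous request */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char *buf = malloc(CHUNK);
    if (!buf) MPI_Abort(MPI_COMM_WORLD, 1);
    memset(buf, rank & 0xFF, CHUNK);

    /* All ranks open one shared file and write disjoint regions of it. */
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "shared.out",  /* placeholder path */
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Collective write: the MPI-IO layer may merge the ranks' requests so
     * the file system sees a few large streams instead of N small ones. */
    MPI_Offset offset = (MPI_Offset)rank * CHUNK;
    MPI_File_write_at_all(fh, offset, buf, CHUNK, MPI_CHAR, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}
```

Compiled with mpicc and launched with mpirun, each rank writes one disjoint 4 MiB region of the shared file; the collective call gives the underlying I/O layer the opportunity to aggregate those regions into a handful of large, well-aligned writes, exactly the access pattern parallel file systems handle best.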
LLNL is home to one of the world’s preeminent data archives, run by Livermore Computing’s Data Storage Group.