I/O, Networking, and Storage
Disk- and tape-delivered I/O bandwidths are being rapidly outpaced by capacity increases, which means valuable processor time is wasted waiting for data delivery. For extreme-scale machines to be productive, bandwidth challenges must be addressed throughout the entire I/O stack. We’re working on techniques and technologies that leverage node-local or near-node storage, refactor parallel file systems, and evolve tertiary storage software to enable efficient extreme-scale computing environments.
Lawrence Livermore heads to the ISC High Performance Conference (ISC19) in Frankfurt, Germany, on June 16–20. The event will draw more than 3,500 participants from the research and commercial communities, and 160 exhibitors will share the latest technology and products of interest to HPC developers and users. Be sure to follow LLNL Computing on Twitter with the #ISC19 hashtag.
When it comes to IT needs, a 6,500-employee national laboratory is effectively a small city. Computation’s enterprise computing–focused divisions provide technical support and customer service for LLNL’s vast inventory of desktop computing hardware and software, servers, and mobile devices, as well as other IT services fulfilling Livermore’s requirements in communication, collaboration, and cybersecurity.
“If applications don’t read and write files in an efficient manner,” system software developer Elsa Gonsiorowski warns, “entire systems can crash.”
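The I/O pattern behind this warning is easy to demonstrate. A minimal, illustrative sketch (not LLNL code, and simplified to a single process): issuing one tiny write per record generates far more system calls and metadata traffic than aggregating records in memory and writing them in one large operation, which is the access pattern parallel file systems handle well.

```python
import os
import tempfile

def write_per_record(path, records):
    # Anti-pattern: one small, flushed write per record.
    # At scale, millions of such operations can overwhelm a
    # shared parallel file system.
    with open(path, "w") as f:
        for rec in records:
            f.write(rec + "\n")
            f.flush()  # forces a small write each iteration

def write_aggregated(path, records):
    # Preferred pattern: buffer in memory, then issue a single
    # large sequential write.
    with open(path, "w") as f:
        f.write("\n".join(records) + "\n")

if __name__ == "__main__":
    records = [f"record-{i}" for i in range(1000)]
    with tempfile.TemporaryDirectory() as d:
        slow_path = os.path.join(d, "slow.txt")
        fast_path = os.path.join(d, "fast.txt")
        write_per_record(slow_path, records)
        write_aggregated(fast_path, records)
        # Both produce identical file contents; only the I/O
        # pattern differs.
        with open(slow_path) as a, open(fast_path) as b:
            assert a.read() == b.read()
```

On real HPC systems the same idea appears as collective buffering in MPI-IO and as burst buffers or node-local storage that absorb small writes before draining them to the parallel file system.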
Livermore’s archive leverages High Performance Storage System (HPSS), a hierarchical storage management (HSM) application that runs on a cluster architecture that is user-friendly, extremely scalable, and lightning fast. The result: vast amounts of data can be both stored securely and accessed quickly for decades to come.