LLNL is home to one of the world’s preeminent data archives, run by Livermore Computing’s Data Storage Group.
I/O, Networking, and Storage
Disk- and tape-delivered I/O bandwidth is being rapidly outpaced by capacity growth, which means valuable processor time is wasted waiting for data to arrive. For extreme-scale machines to be productive, bandwidth challenges must be addressed throughout the entire I/O stack. We are developing techniques and technologies that leverage node-local or near-node storage, refactor parallel file systems, and evolve tertiary storage software to enable efficient extreme-scale computing environments.
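The bandwidth-versus-capacity gap can be made concrete with a back-of-the-envelope calculation. The drive figures below are hypothetical, chosen only for illustration: if capacity grows roughly 10x per generation while bandwidth only doubles, the time to read (or write) a full device keeps climbing.

```python
# Illustrative sketch: "drain time" = capacity / bandwidth, i.e. how long
# it takes to read an entire device at full sequential speed. The drive
# specs here are hypothetical, not measurements of any real hardware.

def drain_time_hours(capacity_tb: float, bandwidth_mb_s: float) -> float:
    """Hours needed to read an entire device at full sequential bandwidth."""
    capacity_mb = capacity_tb * 1_000_000  # 1 TB = 10^6 MB (decimal units)
    return capacity_mb / bandwidth_mb_s / 3600

# Hypothetical generations: capacity grows 10x, bandwidth only 2x.
old_drive = drain_time_hours(capacity_tb=1, bandwidth_mb_s=100)
new_drive = drain_time_hours(capacity_tb=10, bandwidth_mb_s=200)

print(f"old: {old_drive:.1f} h, new: {new_drive:.1f} h")
# → old: 2.8 h, new: 13.9 h
```

When capacity outpaces bandwidth like this, every operation proportional to device size (checkpointing, restart, migration to tape) stretches out with each hardware generation, which is exactly the wasted processor time described above.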
Livermore’s archive leverages the High Performance Storage System (HPSS), a hierarchical storage management (HSM) application running on a cluster architecture that is user-friendly, extremely scalable, and lightning fast. The result: vast amounts of data can be both stored securely and accessed quickly for decades to come.
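The core idea behind hierarchical storage management can be sketched in a few lines: hot files live on a fast disk cache, cold files migrate down to tape, and reads transparently recall them. The class below is a toy illustration under that assumption; its names and eviction policy are hypothetical, not HPSS's actual interfaces.

```python
# Toy sketch of hierarchical storage management (HSM). This is NOT the
# HPSS API; it only illustrates the disk-cache-over-tape idea in miniature.

class ToyHSM:
    def __init__(self, cache_limit: int):
        self.cache_limit = cache_limit  # max files kept on the disk tier
        self.disk = {}                  # file name -> last-access tick
        self.tape = set()               # file names migrated to tape
        self._clock = 0                 # logical clock for recency tracking

    def _tick(self) -> int:
        self._clock += 1
        return self._clock

    def write(self, name: str) -> None:
        """New data always lands on the fast disk tier first."""
        self.disk[name] = self._tick()
        self._migrate_cold_files()

    def read(self, name: str) -> str:
        """Reads hit the disk cache when possible, else recall from tape."""
        if name in self.tape:
            self.tape.discard(name)          # stage the file back to disk
            self.disk[name] = self._tick()
            self._migrate_cold_files()
            return "recalled from tape"
        self.disk[name] = self._tick()       # refresh recency on disk
        return "read from disk cache"

    def _migrate_cold_files(self) -> None:
        """Evict least-recently-used files from disk down to tape."""
        while len(self.disk) > self.cache_limit:
            coldest = min(self.disk, key=self.disk.get)
            del self.disk[coldest]
            self.tape.add(coldest)
```

For example, with `cache_limit=2`, writing files "a", "b", and "c" migrates "a" to tape; a later `read("a")` recalls it to disk and pushes the next-coldest file down. Real HSM systems apply far richer policies (file size, storage classes, operator hints), but the tiering principle is the same.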
First use of Amazon Web Services promises lower-cost, higher-performance IT services.
LLNL studies in networking and noise reduction suggest a better way to configure systems to enable cost-effective scalability and more consistent performance.
Livermore computer scientists are incorporating ZFS into their high-performance parallel file systems for better performance and scalability.
Livermore Computing staff is enhancing the high-speed InfiniBand data network used in many of its high-performance computing systems and file systems.
More than 60 LLNL staff members are contributing to the intellectual vitality and smooth operations of the 2015 International Conference for High Performance Computing, Networking, Storage, and Analysis.
Former LLNL computer scientist Martin Casado, developer of software-defined networking, was inducted into the Laboratory’s Entrepreneurs’ Hall of Fame.