An LLNL Distinguished Member of Technical Staff, Gokhale is considered an expert in her field and continues to enjoy the fast pace of innovation and change in computing.
Supercomputers have broken the exascale barrier, marking a new era in processing power, but the energy consumption of these machines must be kept in check.
LC’s adaptation of OpenZFS software provides parallel file systems with improved performance and scalability.
LLNL’s archives recount the contributions of women who developed code during the Lab's early decades.
UCLA's Institute for Pure & Applied Mathematics hosted LLNL's Erik Draeger for a talk about the challenges and possibilities of exascale computing.
“I am delighted to be recognized by HPCwire,” Quinn said. “I feel the recognition has as much to do with the stature of Livermore Computing as the opportunity I’ve had to contribute.”
LLNL’s archives provide a glimpse into the career and contributions of a computing pioneer.
This year, the DOE honored 44 teams, including LLNL’s Exascale Computing Facility Modernization Project team for its significant power and cooling upgrades to support upcoming exascale supercomputers.
LLNL’s popular lecture series, “Science on Saturday,” runs February 4–25. The February 18 lecture is titled “Supersizing Computing: 70 Years of HPC.”
Computer scientist Johannes Doerfert was recognized as a 2023 BSSw fellow. He plans to use the funding to create videos about best practices for interacting with compilers.
A multidecade, multi-laboratory collaboration evolves scalable long-term data storage and retrieval solutions to survive the march of time.
High performance computing was key to the December 5 breakthrough at the National Ignition Facility.
Two supercomputers powered the research of hundreds of scientists at Livermore’s National Ignition Facility, an NNSA facility that recently achieved ignition.
ASC’s Advanced Memory Technology research projects are developing technologies that will impact future computer system architectures for complex modeling and simulation workloads.
LLNL is home to the world’s largest Spectra TFinity™ system, which offers the speed, agility, and capacity required to take LLNL into the exascale era.
Combining specialized software tools with heterogeneous HPC hardware requires an intelligent workflow performance optimization strategy.
The 2022 International Conference for High Performance Computing, Networking, Storage, and Analysis (SC22) returned to Dallas, where a large contingent of LLNL staff participated in sessions, panels, paper presentations, and workshops centered on HPC.
Highlights include MFEM community workshops, compiler co-design, HPC standards committees, and AI/ML for national security.
As Computing’s sixth Fernbach Fellow, postdoctoral researcher Chen Wang will work on a new I/O programming paradigm and improve HPC storage consistency models under the mentorship of Kathryn Mohror.
LLNL is participating in the 34th annual Supercomputing Conference (SC22), which will be held both virtually and in Dallas on November 13–18, 2022.
The latest issue of Science & Technology Review highlights the R&D 100 award–winning Flux software framework.
This 2021 R&D 100 award-winning software solves data center bottlenecks by enabling resource types, schedulers, and framework services to be deployed as data centers evolve.
Science & Technology Review highlights the Exascale Computing Facility Modernization project that delivered the infrastructure required to bring exascale computing online in 2023.
An LLNL Distinguished Member of Technical Staff, Todd Gamblin leads the Spack project, an open-source package manager with a rapidly growing global community that has changed the way people use HPC software.
The Exascale Computing Project has compiled a playlist of videos from multiple national labs to highlight the impacts of exascale computing.