Beginning with a carefully reviewed proposal, NARAC’s software development team will rebuild its Central System GUI framework with web-based technologies.
We’re creating an LLNL commodity cluster system software environment based on Linux and open-source software. We use the Red Hat Enterprise Linux distribution, stripping out the modules we don’t need and adding or modifying components as required. Working in open source allows for important HPC customizations and builds in-house expertise. Having in-house software developers is necessary to quickly resolve problems (especially at scale) on our cutting-edge hardware without having to wait for vendors. The environment includes Linux kernel modifications, cluster management tools, monitoring and failure detection, resource management, authentication and access control, and parallel file system software (detailed elsewhere). These clusters provide users with a production solution capable of running MPI jobs at scale.
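To make the monitoring and failure-detection piece concrete, here is a minimal illustrative sketch of a heartbeat-based node failure detector. This is not NARAC or LLNL code; all names (`NodeMonitor`, `dead_after`, the node names) are hypothetical, and a production system would layer this over real telemetry and the resource manager.

```python
import time

# Hypothetical sketch: flag compute nodes as failed when their last
# heartbeat is older than a threshold. Illustrative only.
DEAD_AFTER = 30.0  # seconds without a heartbeat before a node is flagged


class NodeMonitor:
    def __init__(self, dead_after=DEAD_AFTER):
        self.dead_after = dead_after
        self.last_seen = {}  # node name -> timestamp of last heartbeat

    def heartbeat(self, node, now=None):
        """Record a heartbeat from a compute node."""
        self.last_seen[node] = time.time() if now is None else now

    def failed_nodes(self, now=None):
        """Return nodes whose last heartbeat exceeds the threshold."""
        now = time.time() if now is None else now
        return sorted(
            node for node, seen in self.last_seen.items()
            if now - seen > self.dead_after
        )
```

In practice a detector like this would feed the resource manager so that failed nodes are drained from the schedulable pool rather than assigned to new MPI jobs.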
The HPC Cluster Engineer Academy is a paid internship that gives participants direct experience running and maintaining high performance computing (HPC) systems.
In the early hours of the morning on July 15, 2016, participants from around the Lab began to gather to continue their projects on the second day of Computation’s summer hackathon.
Tammy Dahlgren has worked primarily in software development and research, on efforts ranging from systems and middleware to applications development and software quality assurance. “I like challenges, trying different things, and the opportunity to make a positive impact,” she says.
Lawrence Livermore’s 2016 spring hackathon drew 68 participants, an all-time high, and matched the previous spring’s record of 39 different projects.
For experiments performed at the National Ignition Facility (NIF), the goal is not just to measure the interaction between the target and the laser beams. Researchers also need to understand, on a fundamental level, what is happening to the material in the target and how it’s changing over time. This capability to see inside the target as it interacts with the laser beams is especially useful in high-energy-density experiments and those exploring material strength.
Livermore researchers have developed a toolset for solving data center bottlenecks.
On November 12 and 13, staff members from nearly every division of the Computation Directorate turned out for the fall hackathon, held at the High Performance Computing Innovation Center on the Livermore Valley Open Campus.
Julia Ramirez helps automate and streamline LLNL processes for preparing reports and responding to audits.
Cab is a large capacity-computing resource shared by the M&IC and ASC programs for running small to moderately sized parallel jobs.