Interweaving Timelines to Save Time
Computer simulations in numerous areas rely on the solution of time-dependent problems, from the evolution of a supernova to the propagation of seismic waves to the metabolism of pharmaceuticals in the human body. Generally, solutions to these problems are reached by addressing one time step after another, in order. The sequential nature of traditional time stepping has challenged programmers seeking to coax more efficiency from their codes.
For decades, these codes have benefited from a steady increase in computer chip speeds with each new generation of computer system. So long as the chips continued to get faster, the time it took to compute each individual time step was reduced. Spatial refinements made to a code to increase accuracy often must be balanced by temporal refinements, but chip speed increases enabled more time steps to be calculated without increasing the total compute time. Chip speeds have plateaued in recent years, though, meaning that any further increases in the number of time steps will simply increase the total compute time.
“Now, to take advantage of bigger machines and see speed-ups, all parts of these codes need to be able to take advantage of parallelism,” explains Lawrence Livermore computational mathematician Jacob Schroder. This includes time, one of the final frontiers in algorithm parallelization. Schroder and a team of Livermore scientists, in collaboration with researchers at Memorial University and Belgium’s Katholieke Universiteit Leuven, have developed a method for solving all of the time steps simultaneously, with the help of a new multilevel algorithm called XBraid and the massively parallel processing capabilities of today’s high-performance computing (HPC) systems. The approach has already been shown to dramatically decrease solution time for various simulations, some by as much as tenfold.
While they are not the first researchers to explore the idea of solving time steps out of order, their method offers advantages over most other methods. In particular, XBraid obviates the need for a complete code rewrite, which for a complex simulation code could be an enormous time investment. Schroder observes, “One of the real novelties of our approach is that it’s largely nonintrusive. Our goal is for users to come to us with an existing sequential time-stepping application they’ve been working on for maybe 10 or 20 years. Then, all they have to do is wrap it with some code using a conceptually simple interface.”
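To make the wrapping idea concrete, here is a minimal Python sketch of what handing a legacy time stepper to a parallel-in-time driver might look like. The names (`my_step`, `TimeParallelApp`) are hypothetical illustrations of the concept, not XBraid's actual C interface; the point is that the user's existing step routine is passed in unchanged as a callback.

```python
# Hypothetical sketch: a legacy sequential stepper is "wrapped" by handing
# it to a driver as a callback. Names are illustrative, not XBraid's API.

def my_step(u, t, t_next):
    """Existing application code: advance state u from time t to t_next.
    Here, one forward-Euler step for the model problem y' = -y."""
    return u + (t_next - t) * (-u)

class TimeParallelApp:
    """Driver that owns the time grid and calls the wrapped stepper."""

    def __init__(self, step, t0, t_final, n_steps):
        self.step = step  # the wrapped legacy stepper
        self.times = [t0 + i * (t_final - t0) / n_steps
                      for i in range(n_steps + 1)]

    def run_sequential(self, u0):
        # Reference sequential sweep; a parallel-in-time driver would
        # instead invoke self.step on many intervals at once.
        u = u0
        for t, t_next in zip(self.times, self.times[1:]):
            u = self.step(u, t, t_next)
        return u
```

The only application-specific code is `my_step`; everything the driver needs is the ability to advance a state vector over one interval, which is exactly what a 10- or 20-year-old time-stepping code already provides.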
For this effort, the XBraid team applied their extensive experience in developing scalable multigrid spatial solvers to the time dimension. Multigrid approaches solve systems at various levels of granularity. “We’re combining multiple timelines of differing accuracies to get the solution significantly faster. The key point to the algorithm is that by ‘braiding’ together the timelines, you don’t have to solve any individual timeline sequentially,” explains Schroder. The solver begins with a “guess” of the solution and then uses an algorithm to generate an improved guess. This procedure repeats until the iterations converge to the solution.
Solving coarser (and less computationally expensive) versions of the same problem and feeding those solutions back into the finer-scale version accelerates the iterative process. Since the computational cost is proportional to the number of equations, a large problem can be solved in the same time as a smaller one simply by increasing the number of processors working on the calculation. Importantly, the solutions from XBraid and sequential time stepping are identical, up to a user-defined tolerance.
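The coarse-to-fine feedback loop can be illustrated with parareal, a classic two-level parallel-in-time iteration related to (though simpler than) XBraid's multilevel algorithm. In this sketch for the model problem y' = -y, the expensive fine sweeps over each time interval are mutually independent, so they could run on separate processors, while a cheap sequential coarse sweep propagates corrections; the iterates converge to the same answer sequential fine stepping would produce. This is an illustrative sketch, not XBraid's implementation.

```python
def fine(y, dt, m=20):
    """Fine propagator: m forward-Euler substeps for y' = -y (expensive)."""
    h = dt / m
    for _ in range(m):
        y += h * (-y)
    return y

def coarse(y, dt):
    """Coarse propagator: a single forward-Euler step (cheap)."""
    return y + dt * (-y)

def sequential(y0, t_final, n):
    """Traditional time stepping: fine steps taken one after another."""
    dt = t_final / n
    u = [y0]
    for _ in range(n):
        u.append(fine(u[-1], dt))
    return u

def parareal(y0, t_final, n, iters):
    """Two-level parallel-in-time iteration over n intervals."""
    dt = t_final / n
    # Initial guess: a fast sequential sweep on the coarse level only.
    u = [y0]
    for _ in range(n):
        u.append(coarse(u[-1], dt))
    for _ in range(iters):
        # These fine solves are independent -- the parallel-in-time payoff.
        f_new = [fine(u[i], dt) for i in range(n)]
        g_old = [coarse(u[i], dt) for i in range(n)]
        # Cheap sequential correction sweep on the coarse level.
        u_new = [y0]
        for i in range(n):
            u_new.append(coarse(u_new[i], dt) + f_new[i] - g_old[i])
        u = u_new
    return u
```

After k iterations the first k points of the parareal iterate match the sequential fine solution exactly, so with enough iterations the two agree to rounding error; in practice the method is useful because far fewer iterations than time steps are needed to reach a given tolerance.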
The benefits of this parallel-in-time approach are twofold. By incorporating XBraid into parallel codes solving for large numbers of time steps, researchers eliminate a performance bottleneck and reach a solution more quickly. They can also make better use of today’s and tomorrow’s HPC systems. “Exascale computers will be even more massively parallel than Sequoia,” says Schroder. “We need our algorithms to be able to make the best use of that level of parallelism.”
In the year and a half since the project’s genesis, the XBraid team has developed foundational theory for the methodology with support from the Department of Energy’s Office of Science and built proof-of-concept demonstrations and software using funding from Livermore’s Laboratory Directed Research and Development program. The team’s first target area, linear and nonlinear parabolic problems, has yielded excellent results, with XBraid solving the problems up to 10 times faster than sequential time stepping. XBraid also shows promise for fluid dynamics calculations. For instance, a project to develop a next-generation helicopter design code in collaboration with the Department of Defense, while still in early stages, has demonstrated an eightfold speedup with XBraid.
The team is continuing to expand and improve XBraid and to broaden the range of problems it can help solve. This includes validating the parallel-in-time method on hyperbolic problems such as wave propagation, a challenging problem type for multigrid methods. They also hope to expand awareness and usage of the open-source XBraid software in the broader scientific community. “Parallel in time is a timely topic right now,” Schroder says, “as people grapple with big changes in computer architecture.”