Contents

  Building Executables
  Environment Variables
  SMT and
  Performance Results
  Open Issues, Gotchas, and Recent Changes


Parallel programming (particularly message-passing parallel programming) is a vital component for solving the most challenging scientific computing problems. Over the past 20 years, the development of distributed-memory machines (including clusters) has increased both the use and the importance of message-passing programming techniques.

Message-passing programming coordinates multiple computing elements (processes) through primitives such as sending a message to one or more other computing elements, receiving a message from a computing element, and synchronizing with other computing elements, so that one process can exchange information (such as an array of floats) with another. The synchronizing primitives allow two or more processes to ensure that each is "ready" for the next step in a parallel algorithm.
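In MPI these primitives appear as calls such as MPI_Send, MPI_Recv, and MPI_Barrier; running real MPI code requires an MPI library and launcher. As a rough sketch of the same send/receive/synchronize pattern using only Python's standard-library multiprocessing module (the worker and rank names here are illustrative, not MPI API):

```python
# Sketch of message-passing primitives: Pipe plays the role of
# point-to-point send/receive, Barrier the role of synchronization.
from multiprocessing import Process, Pipe, Barrier

def worker(rank, conn, barrier):
    # Each process computes a partial result (here, an array of floats).
    partial = [float(rank * 10 + i) for i in range(3)]
    conn.send(partial)   # "send a message" to the coordinating process
    barrier.wait()       # "synchronize": all processes reach this point

def main():
    n = 4
    barrier = Barrier(n + 1)          # n workers plus the coordinator
    conns, procs = [], []
    for rank in range(n):
        parent_end, child_end = Pipe()
        p = Process(target=worker, args=(rank, child_end, barrier))
        p.start()
        conns.append(parent_end)
        procs.append(p)
    # "receive a message" from each worker, in rank order
    gathered = [c.recv() for c in conns]
    barrier.wait()                    # release all workers together
    for p in procs:
        p.join()
    return gathered

if __name__ == "__main__":
    print(main())
```

Each worker sends its partial array and then blocks at the barrier until the coordinator has received from every rank, mirroring a gather-then-synchronize step in a parallel algorithm.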

Message-passing allows cooperating processes to be remote from each other (i.e., each process of a multiple-process application may be on a separate computer), thus allowing programming on architectures composed of hundreds (or even thousands) of commodity processors connected via a common network. Whereas yesteryear's machines were usually implemented with a custom architecture using a high-performance memory system and a modest number of processors (e.g., fewer than 128), message-passing programming can efficiently be mapped onto a collection of commodity off-the-shelf hardware components without custom memory or processor designs and with large numbers of processors (e.g., 8,000 or more).

MPI, the de facto standard for message-passing, began as a new specification by a group of computer scientists calling themselves the MPI Forum around 1994, the result of which was called MPI-1. The group met again in 1997 to make a few minor corrections and to add significant new capabilities for parallel I/O and one-sided communication. This second, expanded specification, called MPI-2, is available electronically from the MPI Forum. ROMIO is a high-performance, portable implementation of MPI-IO, the I/O chapter of MPI-2.

LLNL is active in industrial collaborations to further the capabilities of MPI. One such effort is funded by the ASC PathForward Program, a DOE program whose key strategy is to construct future high-end computing systems and environments by scaling commercially viable building blocks, both hardware and software.

The MPI PathForward is a tri-lab industrial collaboration with Verari Systems, Inc. (formerly MPI Software Technology, Inc. and RackSaver Inc.). The primary goal of this PathForward is to provide a robust and high-performance MPI library and supporting MPI infrastructure across all tri-lab ultrascale platforms.

High Performance Computing at LLNL    Lawrence Livermore National Laboratory

Last modified September 7, 2006