README | brief text description of the code |
pdpta_pthreads.fm.ps | PDPTA'99 paper discussing Pthreads tests |
hpdc.2col.color.fm.ps | HPDC 8 paper discussing accurate tests for fan-out MPI collective operations |
autodist.c | code for main part of test harness, including automatic data point generation |
autodist.h | header file for autodist.c functions |
automeasure.c | code for determining if measurement is complete and computing basic result |
automeasure.h | header file for automeasure.c functions |
bind.c | code to make bind interface work correctly on Sun platforms (resolves a CPU numbering issue) |
col_test1.h | header file for MPI collective tests |
data_list.c | code for result lists functions |
mw.c | code for MPI master/worker tests (SKaMPI artifact; use at your own risk) |
mw_test1.h | header file for MPI master/worker tests |
p2p_test1.h | header file for MPI point-to-point tests |
pattern.h | header file for test patterns |
pqtypes.c | code for priority queue functions |
pqtypes.h | header file for priority queue functions |
simple_test1.h | header file for Pthreads tests |
sphinx.c | code for main function |
sphinx.h | primary header file |
sphinx_any.h | debugging and utility functions header file |
sphinx_aux.h | extern variables header file |
sphinx_aux_test.c | code for auxiliary measurements mechanism functions |
sphinx_aux_test.h | header file for auxiliary measurements mechanism functions |
sphinx_call.c | code for functions for setting up independent variable values in measurement structure |
sphinx_call.h | header file for functions for setting up independent variable values in measurement structure |
sphinx_col.c | code for MPI collective tests |
sphinx_error.c | code for error functions |
sphinx_error.h | header file for error functions |
sphinx_mem.c | code for memory allocation, message buffer set up functions |
sphinx_mem.h | header file for memory allocation, message buffer set up functions |
sphinx_omp.c | code for OpenMP tests |
sphinx_omp.h | header file for OpenMP tests |
sphinx_p2p.c | code for MPI point-to-point tests |
sphinx_params.c | code for reading in input file, setting up parameters and tests |
sphinx_params.h | header file for sphinx_params.c functions |
sphinx_post.c | code for post-processing (SKaMPI artifact; use at your own risk) |
sphinx_post.h | header file for sphinx_post.c functions |
sphinx_simple.c | code for simple pattern and tests |
sphinx_threads.c | code for Pthreads tests |
sphinx_threads.h | header file for processor binding routines needed for PThreads tests |
sphinx_tools.c | code for utility functions |
sphinx_tools.h | header file for utility functions |
yieldtest.c | code for simple stand-alone test to determine if thread scheduling is round robin when threads are bound to same CPU and call sched_yield |
sphinx_defaults | text file showing default values for Sphinx input file parameters |
test.sphinx | sample input file for MPI tests |
test.sphinx.old | sample old style input file for MPI tests |
test.sphinx.threads | sample input file for Pthreads tests |
test.sphinx.threads.old | sample old style input file for Pthreads tests |
test.sphinx.root.acker | sample input file for accurate fan-out MPI collective tests |
test.sphinx.pdpta.dec test.sphinx.pdpta.dec.timeslice test.sphinx.pdpta.ibm test.sphinx.pdpta.sgi test.sphinx.pdpta.sun |
input files corresponding to the PDPTA'99 paper |
test.sphinx.col.15per.white.MPICH.scale.new test.sphinx.col.15per.white.scale test.sphinx.col.15per.white.scale.new test.sphinx.col.15per.white.scale.shmem.new test.sphinx.col.15per.white.scale.shmem.new.fillin test.sphinx.col.16per.white.MPICH.scale.new test.sphinx.col.16per.white.scale test.sphinx.col.16per.white.scale.new test.sphinx.col.16per.white.scale.shmem.new test.sphinx.col.1per.blue test.sphinx.col.1per.blue.new test.sphinx.col.1per.snow test.sphinx.col.1per.snow.MPICH test.sphinx.col.1per.snow.MPICH.new test.sphinx.col.1per.snow.new test.sphinx.col.1per.white.MPICH.scale.new test.sphinx.col.1per.white.scale test.sphinx.col.1per.white.scale.new test.sphinx.col.3per.blue test.sphinx.col.3per.blue.new test.sphinx.col.4per.blue test.sphinx.col.4per.blue.new test.sphinx.col.7per.snow test.sphinx.col.7per.snow.MPICH test.sphinx.col.7per.snow.MPICH.new test.sphinx.col.7per.snow.new test.sphinx.col.7per.snow.shmem test.sphinx.col.7per.snow.shmem.new test.sphinx.col.8per.snow test.sphinx.col.8per.snow.MPICH test.sphinx.col.8per.snow.MPICH.new test.sphinx.col.8per.snow.new test.sphinx.col.8per.snow.shmem test.sphinx.col.8per.snow.shmem.new test.sphinx.col2.1per.blue test.sphinx.col2.1per.blue.16 test.sphinx.col2.1per.blue.MPICH test.sphinx.col2.1per.snow test.sphinx.col2.1per.snow.8 test.sphinx.col2.1per.snow.MPICH test.sphinx.col2.1per.snow.MPICH.8 test.sphinx.col2.3per.blue test.sphinx.col2.4per.blue test.sphinx.col2.4per.blue.8 test.sphinx.col2.7per.snow test.sphinx.col2.7per.snow.MPICH test.sphinx.col2.7per.snow.shmem test.sphinx.col2.7per.snow.shmem.long test.sphinx.col2.8per.snow test.sphinx.col2.8per.snow.MPICH test.sphinx.col2.8per.snow.shmem test.sphinx.col2.8per.snow.shmem.long test.sphinx.p2p.1node.blue test.sphinx.p2p.1node.blue.shmem test.sphinx.p2p.1node.frost test.sphinx.p2p.1node.frost.shmem test.sphinx.p2p.1node.snow test.sphinx.p2p.1node.snow.MPICH test.sphinx.p2p.1node.snow.MPICH.shmem test.sphinx.p2p.1node.snow.shmem 
test.sphinx.p2p.2nodes.blue test.sphinx.p2p.2nodes.frost test.sphinx.p2p.2nodes.snow test.sphinx.p2p.2nodes.snow.MPICH test.sphinx.p2p.2nodes.snow.new test.sphinx.threads.blue test.sphinx.threads.frost test.sphinx.threads.snow test.sphinx.threads.snow.new |
input files for ASCI PSE Milepost tests |
Building the Code
To build the code, type "make ARCH_COMPILER_MPI_OPTION", where ARCH_COMPILER_MPI_OPTION names the platform/compiler/MPI-implementation combination that you want. For example, "make IBM" builds the IBM SP version that uses IBM's MPI library, while "make IBM_MPICH" builds a version that uses MPICH instead. See the Makefile for the full list of currently supported ARCH_COMPILER_MPI_OPTION choices. Depending on the location of your compiler and similar site-specific details, you may need to edit the corresponding Makefile.ARCH.COMPILER.MPI_OPTION file, or add a new one for an unsupported configuration.
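As a sketch, a typical build session on an IBM SP might look like the following (the target names come from the Makefile; the exact list available on your system may differ):

```shell
# Build the IBM SP version that uses IBM's native MPI library
make IBM

# ...or build a version that uses MPICH instead
make IBM_MPICH
```

If either target fails because of compiler or library paths, edit the matching Makefile fragment for your platform before rebuilding.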
Running the Code
The basic command for running the code is:
sphinx_version input_filename
If input_filename is omitted, a simple MPI pingpong test is run. The exact command for launching the code depends on the system (e.g., use poe on IBM SPs). The number of MPI tasks is set through the machine's parallel job start-up mechanism. The limit on the number of OpenMP threads is determined by the OpenMP function omp_get_max_threads, although a lower limit can be specified in the input file.
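For illustration, two hypothetical launch commands are shown below; the binary name sphinx and the launcher options are assumptions for this sketch, since the actual executable name depends on how you built it and the launcher flags depend on your site:

```shell
# On an IBM SP, launch under poe; the launcher sets the number of MPI tasks
poe sphinx test.sphinx -nodes 4 -procs 16

# On a system with a generic mpirun, the equivalent invocation might be
mpirun -np 16 sphinx test.sphinx

# With no input file, a simple MPI pingpong test is run
mpirun -np 2 sphinx
```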