Sphinx: Integrated Parallel Microbenchmark Suite

Input File Format

This section details the Sphinx input file format and the tests included in this distribution. Sphinx determines the parameters for a run by reading and parsing an input file. These parameters select the tests to perform and set variables that control harness operation, such as the number of iterations per repetition. The input file consists of several different “modes”, each of which specifies a group of run parameters. The tests to perform are specified in one or more MEASUREMENTS modes; a MEASUREMENTS mode consists of a list of tests to perform, and test entries include test-specific parameters. The input file format may seem complex because of the flexibility the test harness provides; however, that same flexibility lets experienced users create the desired input file quickly.

In general, Sphinx input file processing is very forgiving: a mode or parameter identifier need only contain a string that uniquely determines it and may contain arbitrary other characters, so input file processing completes even in the presence of most typos. Matching is also case-insensitive. Each test in Sphinx has several possible parameters. A sensible default is used if the input file does not specify a value for a given mode or parameter, and the default values can be overridden for the entire input file or for a specific test.

The Sphinx input file format is fairly free form: an “@” as the first character of a line changes the input mode, and modes can occur in any order. Most modes are optional; the only requirement is at least one MEASUREMENTS mode. The last occurrence of a mode determines the value for that mode, except for the COMMENT and MEASUREMENTS modes; for these two modes, multiple occurrences are concatenated to form a single value. The following table describes the Sphinx input file modes (names listed in ALL CAPS for historic reasons).
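
As an illustration of the overall layout, the sketch below shows a small input file that sets a few global modes and then specifies a single test. It is only a sketch based on the descriptions in this section: the mode names come from the table below, the test description syntax comes from the MEASUREMENTS discussion later in this section, and the placement of each mode's value on the lines following its “@” line is an assumption.

  @USER
  J. Doe, performance group
  @OUTFILE
  pingpong.out
  @MEASUREMENTS
  simple_pingpong
  {
    Type = 1
  }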

Sphinx Input File Modes

MODE | DESCRIPTION | DEFAULT
COMMENT | any comments that you’d like to include in the input file; this mode can be used to omit test entries without deleting them from the file; not included in output file | NULL
USER | a text field with no semantic implications; can be used to provide a short description of the user running the tests; included in output file | NULL
MACHINE | a text field with no semantic implications; can be used to provide a short description of the machine on which the tests are run; included in output file | NULL
NETWORK | a text field with no semantic implications; can be used to provide a short description of the network on which the tests are run; included in output file | NULL
NODE | a text field with no semantic implications; can be used to provide a short description of the nodes of the machine on which the tests are run; included in output file | NULL
MPILIB_NAME | a text field with no semantic implications; can be used to provide a short description of the MPI library with which the tests are run; included in output file | NULL
OUTFILE | a text field that specifies the output filename | input_filename.out
LOGFILE | a text field that specifies the log filename | input_filename.log
CORRECT_FOR_OVERHEAD | a yes or no text field that specifies whether test results should be corrected for any test harness overhead incurred in the measurement; overhead is generally a function call but depends on the test | no
MEMORY | an integer field that specifies the size in kilobytes of the buffer to allocate in each task for message passing tests; maximum message lengths are a function of this parameter and the test being run; generally maximum message lengths are equal to this parameter or half of it or this parameter divided by the number of tasks | 4096 (i.e. 4MB)
MAXREPDEFAULT | an integer field that specifies default limit on the number of timings until a test is declared “UNSETTLED” | 20
MINREPDEFAULT | an integer field that specifies default minimum number of timings to average for a test result | 4
ITERSPERREPDEFAULT | an integer field that specifies default number of iterations per timing of the code being measured | 1
STANDARDDEVIATIONDEFAULT | a double field that specifies the default threshold, as a fraction of the mean of the timings, that the standard deviation must fall below for a test to be declared settled; Sphinx uses the standard deviation, which may never fall below this fraction, unlike SKaMPI, whose standard-error criterion is guaranteed to reach the threshold for a sufficiently large number of timings; thus MAXREPDEFAULT is more significant for Sphinx | 0.05
DIMENSIONS_DEFAULT | an integer field that specifies default number of independent variables for a test | 1
VARIATION | a text field that specifies the default independent variable; see below for valid independent variables | NO_VARIATION
VARIATION_LIST | a space-delimited text field that specifies the default independent variables | NO_VARIATION for all positions
SCALE | a text field that specifies the default scale to use for independent variable; see below for valid scale values | FIXED_LINEAR
SCALE_LIST | a space-delimited text field that specifies the default scales | FIXED_LINEAR for all positions
MAXSTEPSDEFAULT | an integer field that specifies default limit on the number of values for independent variables | 16
MAXSTEPSDEFAULT_LIST | a space-delimited integers field that specifies default limits on the numbers of values for independent variables | 16 for all positions
START | an integer field that specifies default minimum value to use for independent variables; MIN_ARGUMENT has Sphinx use the minimum value semantically allowed for the independent variable (e.g. 1 for number of tasks) | MIN_ARGUMENT
START_LIST | a space-delimited integers field that specifies default minimum values to use for independent variables | MIN_ARGUMENT for all positions
END | an integer field that specifies default maximum value to use for independent variables; MAX_ARGUMENT has Sphinx use the maximum value semantically allowed for the independent variable (e.g. size of MPI_COMM_WORLD for number of tasks) | MAX_ARGUMENT
END_LIST | a space-delimited integers field that specifies default maximum values to use for independent variables | MAX_ARGUMENT for all positions
STEPWIDTH | a double field that specifies default distance between independent variable values | 1.00
STEPWIDTH_LIST | a space-delimited doubles field that specifies default distances between the values of independent variables | 1.00 for all positions
MINDIST | SKaMPI artifact; an integer field that apparently was intended to specify a minimum distance between independent variable values; currently has no effect but may be supported in the future | 1
MINDIST_LIST | a space-delimited integers field that specifies MIN_DIST values | 1 for all positions
MAXDIST | SKaMPI artifact; an integer field that apparently was intended to specify a maximum distance between independent variable values; currently has no effect but may be supported in the future (less likely than MINDIST) | 10
MAXDIST_LIST | a space-delimited integers field that specifies MAX_DIST values | 10 for all positions
MESSAGELEN | an integer field that specifies default message length in bytes | 256
MAXOVERLAP | an integer field that specifies default maximum iterations of the overlap for loop | 0
THREADS | an integer field that specifies default number of threads | value returned by omp_get_max_threads
WORK_FUNCTION_DEFAULT | a text field that specifies the default function used inside OpenMP loops; see below for a list of valid work function values | SIMPLE_WORK
WORK_AMOUNT_DEFAULT | an integer field that specifies default duration of function used inside OpenMP loops | 10
SCHEDULE_DEFAULT | a text field that specifies default OpenMP schedule option; see below for a list of valid schedule options | STATIC_SCHED
SCHEDULE_CAP_DEFAULT | an integer field that specifies default schedule cap for OpenMP tests | 10
SCHEDULE_CHUNK_DEFAULT | an integer field that specifies default OpenMP schedule chunk size | 1
OVERLAP_FUNCTION | a text field that specifies the default overlap function used in mixed non-blocking MPI/OpenMP tests; see below for valid overlap function values | SEQUENTIAL
CHUNKS | an integer field that specifies default number of chunks | 6
MEASUREMENTS | a text field that determines the actual tests run including any default parameter overrides; see below for a description of the format of this field | NULL

Many of the modes have list variants, as indicated. These allow the specification of different defaults for the independent variables X0, X1, X2, … The variants are needed since Sphinx supports multiple independent variables per test, such as varying both the message size and the number of tasks for an MPI collective test. If a test uses more independent variables than a corresponding list specifies, the non-list default is used for the additional variables.
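
For example, the following mode settings (a sketch; the mode names are taken from the table above and the values are illustrative) establish defaults for tests with two independent variables, giving the first a fixed logarithmic scale with at most 8 values and the second a fixed linear scale with at most 16 values; a test with a third independent variable would fall back to the non-list SCALE and MAXSTEPSDEFAULT defaults.

  @VARIATION_LIST
  LENGTH NODES
  @SCALE_LIST
  FIXED_LOGARITHMIC FIXED_LINEAR
  @MAXSTEPSDEFAULT_LIST
  8 16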

The MEASUREMENTS mode is a structured text field. It describes the tests that will be run for the input file. The format is a series of test descriptions; blank lines between test descriptions are discarded. A test description consists of a name followed by a left curly brace ({), optionally on a new line, followed by parameters specific to the test; a right curly brace (}) marks the end of the description. Each parameter field of a test description must be on a separate line; the general format of a parameter field line is “parameter_name = value”. The following table describes the parameter fields of the test description.

Sphinx Test Description Fields

PARAMETER NAME | DESCRIPTION
Type | Type of test; this field determines the actual test run; see below for a description of the different test types available in Sphinx
Correct_for_overhead | See CORRECT_FOR_OVERHEAD mode
Max_Repetition | See MAXREPDEFAULT mode
Min_Repetition | See MINREPDEFAULT mode
Standard_Deviation | See STANDARDDEVIATIONDEFAULT mode
Dimensions | See DIMENSIONS_DEFAULT mode
Variation | See VARIATION_LIST mode
Scale | See SCALE_LIST mode
Max_Steps | See MAXSTEPSDEFAULT_LIST mode
Start | See START_LIST mode
End | See END_LIST mode
Stepwidth | See STEPWIDTH_LIST mode
Min_Distance | See MINDIST_LIST mode
Max_Distance | See MAXDIST_LIST mode
Default_Message_length | See MESSAGELEN mode
Default_Chunks | See CHUNKS mode

All fields of a test description other than the type field are optional. Test descriptions often consist only of a name, a {, a Type = X line and a }. Properly specified defaults enable this simplicity.
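
A minimal MEASUREMENTS entry therefore looks like the following sketch, which relies entirely on the defaults; writing Type = 17 assumes, as the “Type = X” convention and the numbered test type table below suggest, that the Type field takes the test number.

  bcast_round
  {
    Type = 17
  }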

A measurement name is a text string without spaces and need not have any relation to the test type. This is unfortunate; future versions may augment names in the output file with a type-specific string.

If the same name is used for several test descriptions, Sphinx will automatically extend the second and later occurrences with a unique integer. This mechanism ensures that all test descriptions result in a test run.

Sphinx Independent Variable Types

NAME | SEMANTIC MEANING
NO_VARIATION | No independent variable
ITERS | Number of iterations per timing
NODES | Number of MPI tasks
LENGTH | Message length (output format is in bytes)
ROOT | Root task (relevant to asynchronous MPI collective tests)
ACKER | Task that sends acknowledgement message (relevant to fan-out MPI collective tests)
OVERLAP | Computational overlap time
SECOND_OVERLAP | Second computational overlap time
MASTER_BINDING | CPU to which master (i.e. first/timing) thread is bound
SLAVE_BINDING | CPU to which slave (i.e. second) thread is bound
THREADS | Number of threads
SCHEDULE | OpenMP scheduling option
SCHEDULE_CAP | Iterations per OpenMP thread(?)
SCHEDULE_CHUNK | OpenMP schedule chunk size option
WORK_FUNCTION | Function used inside OpenMP loops
WORK_AMOUNT | Parameter that determines duration of function used inside OpenMP loops
OVERLAP_FUNCTION | Overlap function for mixed non-blocking MPI/OpenMP tests
CHUNKS | Number of chunks for master/worker tests (SKaMPI artifact; use at your own risk)

Some independent variables do not alter anything for some test types. In general, an effort has been made to allow variation of these variables although some may lead to internally detected errors. In any event, care should be taken with independent variable selections to ensure that test descriptions test interesting variations and to ensure that overall run-time is not excessive.
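
For example, the sketch below describes a two-dimensional measurement of test type 17 (a round of MPI_Bcast) that varies both the message length and the number of tasks. The parameter names come from the test description table; the assumption that per-test fields accept space-delimited lists, exactly as their corresponding *_LIST modes do, is based on the “See ..._LIST mode” cross-references, and the values are illustrative.

  bcast_scaling
  {
    Type = 17
    Dimensions = 2
    Variation = LENGTH NODES
    Scale = FIXED_LOGARITHMIC FIXED_LINEAR
    Start = 8 2
    End = 65536 16
  }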

Sphinx Test Types

NUMBER | DESCRIPTION | TIMING RESULT
1 | MPI Ping-pong using MPI_Send and MPI_Recv | Round trip latency
2 | MPI Ping-pong using MPI_Send and MPI_Recv with MPI_ANY_TAG | Round trip latency
3 | MPI Ping-pong using MPI_Send and MPI_Irecv | Round trip latency
4 | MPI Ping-pong using MPI_Send and MPI_Iprobe/MPI_Recv combination | Round trip latency
5 | MPI Ping-pong using MPI_Ssend and MPI_Recv | Round trip latency
6 | MPI Ping-pong using MPI_Isend and MPI_Recv | Round trip latency
7 | MPI Ping-pong using MPI_Bsend and MPI_Recv | Round trip latency
8 | MPI bidirectional communication using MPI_Sendrecv in both tasks | Operation latency
9 | MPI bidirectional communication using MPI_Sendrecv_replace in both tasks | Operation latency
10 | SKaMPI artifact: master/worker with MPI_Waitsome | Not clear (use at own risk)
11 | SKaMPI artifact: master/worker with MPI_Waitany | Not clear (use at own risk)
12 | SKaMPI artifact: master/worker with MPI_Recv with MPI_ANY_SOURCE | Not clear (use at own risk)
13 | SKaMPI artifact: master/worker with MPI_Send | Not clear (use at own risk)
14 | SKaMPI artifact: master/worker with MPI_Ssend | Not clear (use at own risk)
15 | SKaMPI artifact: master/worker with MPI_Isend | Not clear (use at own risk)
16 | SKaMPI artifact: master/worker with MPI_Bsend | Not clear (use at own risk)
17 | Round of MPI_Bcast over all tasks | Lower bound of operation latency
18 | Repeated MPI_Barrier calls (provides a reasonable lower bound of operation latency) | Per task overhead at task zero
19 | Round of MPI_Reduce over all tasks | Lower bound of operation latency
20 | Repeated MPI_Alltoall calls (provides a reasonable lower bound of operation latency) | Per task overhead at task zero
21 | Repeated MPI_Scan calls | Per task overhead at task zero
22 | Repeated MPI_Comm_split (provides a reasonable lower bound of operation latency) (note: “leaks” MPI_Comm results; future changes will eliminate this problem) | Per task overhead at task zero
23 | Repeated memcpy calls | Time per memcpy call
24 | Repeated MPI_Wtime calls | Clock overhead
25 | Repeated MPI_Comm_rank calls | Time per MPI_Comm_rank call
26 | Repeated MPI_Comm_size calls | Time per MPI_Comm_size call
27 | Repeated MPI_Iprobe calls with no message expected | MPI_Iprobe call overhead
28 | Repeated MPI_Buffer_attach and MPI_Buffer_detach calls | MPI_Buffer_attach/detach call overhead
29 | Empty function call with point to point pattern | Function call overhead
30 | Empty function call with master/worker pattern | Function call overhead
31 | Empty function call with collective pattern | Function call overhead
32 | Empty function call with simple pattern | Function call overhead
33 | Round of MPI_Gather over all tasks | Lower bound of operation latency
34 | Round of MPI_Scatter over all tasks | Lower bound of operation latency
35 | Repeated MPI_Allgather calls (provides a reasonable lower bound of operation latency) | Per task overhead at task zero
36 | Repeated MPI_Allreduce calls (provides a reasonable lower bound of operation latency) | Per task overhead at task zero
37 | Round of MPI_Gatherv over all tasks | Lower bound of operation latency
38 | Round of MPI_Scatterv over all tasks | Lower bound of operation latency
39 | Repeated MPI_Allgatherv calls (provides a reasonable lower bound of operation latency) | Per task overhead at task zero
40 | Repeated MPI_Alltoallv calls (provides a reasonable lower bound of operation latency) | Per task overhead at task zero
41 | Repeated MPI_Reduce_scatter calls | Per task overhead at task zero
42 | Repeated calls to MPI_Bcast, each followed by an MPI_Barrier call | Upper bound of operation latency
43 | Repeated calls to MPI_Bcast | Per task overhead at root task
44 | Round of MPI_Bcast over all tasks (identical to type 17) | Lower bound of operation latency
45 | Repeated calls to MPI_Bcast, each followed by an acknowledgement from every other task to root task | Upper bound of operation latency
46 | Repeated calls to MPI_Bcast, each followed by an acknowledgement from one task to root task; tested over all acknowledgers provides accurate measure of operation latency | Operation latency to acknowledging task
47 | Repeated calls to MPI_Alltoall, each call followed by a barrier implemented with MPI_Send and MPI_Recv operations (provides a reasonable upper bound of operation latency) | Upper bound of operation latency
48 | Repeated calls to MPI_Gather, each call followed by a broadcast implemented with MPI_Send and MPI_Recv operations (provides a reasonable upper bound of operation latency) | Upper bound of operation latency
49 | Repeated calls to MPI_Scatter, each followed by an acknowledgement from one task to root task; tested over all acknowledgers provides accurate measure of operation latency | Operation latency to acknowledging task
50 | Repeated calls to MPI_Allgather, each call followed by a barrier implemented with MPI_Send and MPI_Recv operations (provides a reasonable upper bound of operation latency) | Upper bound of operation latency
51 | Repeated calls to MPI_Allreduce, each call followed by a barrier implemented with MPI_Send and MPI_Recv operations (provides a reasonable upper bound of operation latency) | Upper bound of operation latency
52 | Repeated calls to MPI_Gatherv, each call followed by a broadcast implemented with MPI_Send and MPI_Recv operations (provides a reasonable upper bound of operation latency) | Upper bound of operation latency
53 | Repeated calls to MPI_Scatterv, each followed by an acknowledgement from one task to root task; tested over all acknowledgers provides accurate measure of operation latency | Operation latency to acknowledging task
54 | Repeated calls to MPI_Allgatherv, each call followed by a barrier implemented with MPI_Send and MPI_Recv operations (provides a reasonable upper bound of operation latency) | Upper bound of operation latency
56 | Repeated calls to MPI_Reduce_scatter, each call followed by a barrier implemented with MPI_Send and MPI_Recv operations | Upper bound of operation latency
57 | Repeated calls to MPI_Alltoallv, each call followed by a barrier implemented with MPI_Send and MPI_Recv operations (provides a reasonable upper bound of operation latency) | Upper bound of operation latency
58 | Repeated calls to MPI_Reduce, each call followed by a broadcast implemented with MPI_Send and MPI_Recv operations (provides a reasonable upper bound of operation latency) | Upper bound of operation latency
59 | Function call with for loop of number of tasks iterations in collective pattern | Overhead of function call with for loop
60 | Computation used for non-blocking MPI tests | Time of overlap computation
61 | Overlap of computation with MPI_Isend (not fully tested; use at own risk) | Overlap potential of MPI_Isend
62 | Overlap of computation with MPI_Isend plus overlap of acknowledgement message (not fully tested; use at own risk) | Overlap potential of MPI_Isend
63 | Overlap of computation with MPI_Irecv (not fully tested; use at own risk) | Overlap potential of MPI_Irecv
64 | Repeated MPI_Reduce calls | Per task overhead at root task
65 | Repeated MPI_Gather calls | Per task overhead at root task
66 | Repeated MPI_Gatherv calls | Per task overhead at root task
67 | Repeated MPI_Comm_dup calls (internal difference improves scalability compared to test 69) (note: “leaks” MPI_Comm results; future changes will eliminate this problem) | Per task overhead at task zero
68 | Repeated MPI_Comm_split calls (internal difference improves scalability compared to test 22) (note: “leaks” MPI_Comm results; future changes will eliminate this problem) | Per task overhead at task zero
69 | Repeated MPI_Comm_dup calls (note: “leaks” MPI_Comm results; future changes will eliminate this problem) | Per task overhead at task zero
70 | Repeated calls to MPI_Scan, each followed by an acknowledgement from one task to task zero; tested over all acknowledgers provides accurate measure of operation latency | Operation latency to acknowledging task
71 | Repeated MPI_Scan calls (internal difference improves scalability compared to test 21) | Per task overhead at task zero
101 | Ping-pong using pthread_cond_signal and pthread_cond_wait | “Round trip” latency
102 | Repeated calls to pthread_cond_signal; as many as the slave thread can wait for are caught | Overhead of pthread_cond_signal
103 | Repeated uncaught calls to pthread_cond_signal | Overhead of pthread_cond_signal
104 | Repeated calls to pthread_cond_wait; matching calls to pthread_cond_signal are made “as quickly as possible” | Overhead of pthread_cond_wait
105 | Ping-pong using pthread_mutex_lock and pthread_mutex_unlock (four separate locks) | “Round trip” latency
106 | Repeated interleaved pthread_mutex_lock and pthread_mutex_unlock calls (four separate locks, round robin access order) | Overhead of pthread_mutex_lock and pthread_mutex_unlock
107 | Repeated pthread_mutex_lock and pthread_mutex_unlock calls (one lock) | Overhead of pthread_mutex_lock and pthread_mutex_unlock
108 | Repeated spin on shared variable; measures per thread time slice when bound to the same CPU | Per thread time slice
109 | Chain of pthread_create calls for detached process scope threads | Overhead of pthread_create
110 | Repeated calls to sched_yield (thr_yield for Suns); measures thread context switch time when bound to the same CPU (use with care, depends on OS thread scheduling) | Thread context switch time
111 | Repeated pthread_mutex_lock calls (large array of locks), then repeated pthread_mutex_unlock calls (large array of locks); each set of calls is timed separately | Overhead of pthread_mutex_lock and overhead of pthread_mutex_unlock (separate measurements)
112 | Repeated interleaved pthread_mutex_lock and pthread_mutex_unlock calls (two separate locks, round robin access order) | Overhead of pthread_mutex_lock and pthread_mutex_unlock
113 | Repeated interleaved pthread_mutex_lock and pthread_mutex_unlock calls (three separate locks, round robin access order) | Overhead of pthread_mutex_lock and pthread_mutex_unlock
114 | Repeated interleaved pthread_mutex_lock and pthread_mutex_unlock calls (five separate locks, round robin access order) | Overhead of pthread_mutex_lock and pthread_mutex_unlock
115 | Repeated interleaved pthread_mutex_lock and pthread_mutex_unlock calls (six separate locks, round robin access order) | Overhead of pthread_mutex_lock and pthread_mutex_unlock
116 | Ping-pong using pthread_mutex_lock and pthread_mutex_unlock (array of four locks) | “Round trip” latency
117 | Repeated interleaved pthread_mutex_lock and pthread_mutex_unlock calls (array of four locks, round robin access order) | Overhead of pthread_mutex_lock and pthread_mutex_unlock
118 | Repeated interleaved pthread_mutex_lock and pthread_mutex_unlock calls (large array of locks, round robin access order) | Overhead of pthread_mutex_lock and pthread_mutex_unlock
119 | Repeated interleaved pthread_mutex_lock and pthread_mutex_unlock calls (seven separate locks, round robin access order) | Overhead of pthread_mutex_lock and pthread_mutex_unlock
120 | Repeated interleaved pthread_mutex_lock and pthread_mutex_unlock calls (eight separate locks, round robin access order) | Overhead of pthread_mutex_lock and pthread_mutex_unlock
121 | Repeated interleaved pthread_mutex_lock and pthread_mutex_unlock calls (nine separate locks, round robin access order) | Overhead of pthread_mutex_lock and pthread_mutex_unlock
122 | Repeated interleaved pthread_mutex_lock and pthread_mutex_unlock calls (ten separate locks, round robin access order) | Overhead of pthread_mutex_lock and pthread_mutex_unlock
123 | Repeated interleaved pthread_mutex_lock and pthread_mutex_unlock calls (large array of locks, round robin access order) | Overhead of pthread_mutex_lock and pthread_mutex_unlock
124 | Repeated pthread_mutex_lock and pthread_mutex_unlock calls (one lock, two tight pairs of calls) | Overhead of pthread_mutex_lock and pthread_mutex_unlock
125 | Repeated pthread_mutex_lock and pthread_mutex_unlock calls (one lock, three tight pairs of calls) | Overhead of pthread_mutex_lock and pthread_mutex_unlock
126 | Repeated pthread_mutex_lock and pthread_mutex_unlock calls (one lock, four tight pairs of calls) | Overhead of pthread_mutex_lock and pthread_mutex_unlock
127 | Repeated calls to sched_yield (thr_yield for Suns); measures thread context switch time when bound to the same CPU (uses two “new” threads; can overcome some scheduling quirks) (use with care, depends on OS thread scheduling) | Thread context switch time
128 | Chain of pthread_create calls for detached system scope threads | Overhead of pthread_create
129 | Chain of pthread_create calls for undetached process scope threads | Overhead of pthread_create
130 | Chain of pthread_create calls for undetached system scope threads | Overhead of pthread_create
131 | Function call with for loop of number of tasks iterations in simple pattern | Overhead of function call with for loop
201 | Repeated calls to work function (not fully tested, use at own risk) | Reference measurement for OpenMP parallel construct
202 | Repeated calls to an OpenMP parallel region of work function | Overhead of OpenMP parallel construct
203 | Repeated calls to for loop over work function (not fully tested, use at own risk) | Reference measurement for OpenMP parallel for construct
204 | Repeated calls to an OpenMP parallel for over work function | Overhead of OpenMP parallel for construct
205 | Repeated calls to an OpenMP parallel for with variable chunk sizes over work function | Overhead of OpenMP parallel for with variable chunk sizes construct
206 | Repeated calls to an OpenMP parallel for loop over work function (not fully tested, use at own risk) | Reference measurement for OpenMP ordered construct
207 | Repeated calls to an OpenMP parallel for with ordered clause over work function | Overhead of OpenMP parallel for with ordered clause
208 | Repeated calls to an OpenMP parallel for with ordered work function calls | Overhead of OpenMP ordered construct
209 | Repeated calls to for loop over work function (not fully tested, use at own risk) | Reference measurement for OpenMP single and barrier constructs
210 | Repeated calls to for loop over work function inside OpenMP single construct (not fully tested, use at own risk) | Overhead of OpenMP single construct
211 | Repeated calls to for loop over work function following an OpenMP barrier construct (not fully tested, use at own risk) | Overhead of OpenMP barrier construct
212 | Repeated calls to for loop over work function, results summed (not fully tested, use at own risk) | Reference measurement for OpenMP reduction construct
213 | Repeated calls to an OpenMP parallel for loop with reduction clause over work function (not fully tested, use at own risk) | Overhead of OpenMP reduction construct
214 | Repeated calls to integer increment and work function (not fully tested, use at own risk) | Overhead of OpenMP single construct
215 | Repeated calls to for loop over integer increment inside an OpenMP atomic construct and work function (not fully tested, use at own risk) | Overhead of OpenMP barrier construct
301 | Repeated calls to a mixed OpenMP/MPI barrier followed by work function call (provides a reasonable lower bound of operation latency) | OpenMP-test-style overhead of mixed OpenMP/MPI barrier
302 | Repeated mixed OpenMP/MPI barrier calls (provides a reasonable lower bound of operation latency) | Per task overhead at task zero
303 | Repeated calls to a mixed OpenMP/MPI reduce across all threads in all tasks followed by work function call (provides a reasonable lower bound of operation latency) | OpenMP-test-style overhead of mixed OpenMP/MPI all reduce
304 | Repeated calls to mixed OpenMP/MPI reduce across all threads in all tasks (provides a reasonable lower bound of operation latency) | Per task overhead at task zero
305 | Repeated calls to mixed OpenMP/MPI reduce across all threads in all tasks (essentially) followed by a mixed OpenMP/generic MPI barrier | Upper bound of operation latency
306 | Computation used for non-blocking MPI mixed with OpenMP tests | Time of threaded overlap computation
307 | Overlap of OpenMP threaded computation with MPI_Isend (not fully tested; use at own risk) | Overlap potential of MPI_Isend
308 | Overlap of OpenMP threaded computation with MPI_Isend plus overlap of acknowledgement message (not fully tested; use at own risk) | Overlap potential of MPI_Isend
309 | Overlap of OpenMP threaded computation with MPI_Irecv (not fully tested; use at own risk) | Overlap potential of MPI_Irecv

The preceding table provides only a brief description of the tests and their results. The referenced papers provide further detail. Of course, a complete understanding can only result from careful consideration of the code. For details of the corrections used when the correct for overhead mode is used, consult the code.

Sphinx Independent Variable Scales

NAME | DESCRIPTION
FIXED_LINEAR | Fixed linear scale; use up to MAXSTEPS values exactly STEPWIDTH apart
DYNAMIC_LINEAR | Dynamic linear scale; use values exactly STEPWIDTH apart, then fill in until there are either exactly MAXSTEPS values or no “holes”
FIXED_LOGARITHMIC | Fixed logarithmic scale; use up to MAXSTEPS values “logarithmically” exactly STEPWIDTH apart
DYNAMIC_LOGARITHMIC | Dynamic logarithmic scale; use values “logarithmically” exactly STEPWIDTH apart, then fill in until there are either exactly MAXSTEPS values or no “holes”

Linear scales are reasonably intuitive; with a fixed logarithmic scale, a STEPWIDTH of the square root of 2 results in each value doubling the previous one. The default STEPWIDTH actually varies with the scale, since a STEPWIDTH of 1.00 would not produce any variation with logarithmic scales; the default is the square root of two for logarithmic scales.
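
As a concrete illustration (the parameter values are assumed for the sake of the example): with Variation = LENGTH, Start = 1, End = 256, a FIXED_LOGARITHMIC scale, and the default STEPWIDTH of the square root of two, successive message lengths double, giving 1, 2, 4, 8, 16, 32, 64, 128, and 256 bytes; all nine values are used only if MAXSTEPS is at least 9, otherwise the list is capped at MAXSTEPS values.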

Sphinx Work Functions

NAME | DESCRIPTION
SIMPLE_WORK | A simple for loop of WORK_AMOUNT iterations, each iteration has a single FMA plus a few branch statements based on mod tests and possibly an integer shift
BORS_WORK | Complex set of array operations; duration per WORK_AMOUNT unit is relatively long
SPIN_TIMED_WORK | Loop over checks to see if work function has lasted WORK_AMOUNT nanoseconds
SLEEP_TIMED_WORK | Loop over checks to see if work function has lasted WORK_AMOUNT nanoseconds followed by sleep and usleep of remaining time

The effect of varying work functions should be limited to cache effects. A future paper will present results addressing the validity of this expectation.
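
For instance, an OpenMP measurement can be pointed at a timed work function through the default modes described earlier (a sketch; the mode names and the work function value come from the tables above, while the work amount of 1000 is an illustrative assumption):

  @WORK_FUNCTION_DEFAULT
  SPIN_TIMED_WORK
  @WORK_AMOUNT_DEFAULT
  1000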

Sphinx Schedule Values

VALUE | DESCRIPTION
STATIC_SCHED | static
DYNAMIC_SCHED | dynamic
GUIDED_SCHED | guided

The standard OpenMP names for the scheduling options are actually sufficient, since Sphinx uses a case-insensitive minimum-string mechanism to determine the value. Support for the OpenMP runtime schedule option may be added in the future.
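
For example, a test description along the lines of the sketch below could compare the scheduling options by using SCHEDULE as the independent variable of an OpenMP parallel for measurement (test type 204); exactly which schedule values such a variation iterates over is not spelled out here and should be confirmed against the code.

  parallel_for_sched
  {
    Type = 204
    Variation = SCHEDULE
  }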

Sphinx Overlap Function Values

VALUE | DESCRIPTION
SEQUENTIAL | sequential work function (i.e. not in an OpenMP parallel region)
PARALLEL | work function inside OpenMP parallel region
PARALLEL_FOR | work function inside OpenMP parallel for construct
PARALLEL_FOR_CHUNKS | work function inside OpenMP parallel for construct with variable chunks

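As a final sketch (with illustrative values), the default overlap function for the mixed non-blocking MPI/OpenMP tests can be set globally and then exercised by one of the overlap tests, here type 307, varying the computational overlap time:

  @OVERLAP_FUNCTION
  PARALLEL_FOR
  @MEASUREMENTS
  isend_overlap_omp
  {
    Type = 307
    Variation = OVERLAP
  }
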
The log mechanism inherited from SKaMPI allows the full set of test descriptions to be run to completion across multiple runs of the same input file. If the log file contains an end-of-run message, then the log file and output file are moved to file names extended with an integer and a new run of the full set is started. Sphinx includes corrections for some bugs in this mechanism; these corrections ensure that each test description is run to completion exactly once, regardless of its name or the status of partial runs.