
UCRL-WEB-201386

SLURM Reference Manual


Affinity or NUMA Constraints

These SRUN constraints apply only to machines where the task-affinity or NUMA (Non-Uniform Memory Access) plugins have been enabled on the system.

-B (--extra-node-info)
adds detail to node allocations and hence is explained in the "SRUN Resource-Allocation Options" section above.
--cpu_bind=[quiet,|verbose,]type
binds tasks to CPUs (to prevent the operating system scheduler from moving the tasks and spoiling possible memory optimization arrangements).
q[uiet]
(default) quietly binds CPUs before the tasks run.
v[erbose]
verbosely reports CPU binding before the tasks run.
Here type can be any one of these mutually exclusive alternatives:
no[ne]
(default) does not bind tasks to CPUs.
rank
binds tasks to CPUs by task rank.
map_cpu:idlist
binds by mapping CPU IDs to tasks as specified in idlist, a comma-delimited list cpuid1,cpuid2,...,cpuidn. SRUN interprets CPU IDs as decimal values unless you precede each with 0x to specify hexadecimal values.
mask_cpu:mlist
binds by setting CPU masks on tasks as specified in mlist, a comma-delimited list mask1,mask2,...,maskn. SRUN always interprets masks as hexadecimal values (so using the 0x prefix is optional).
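For example, to bind four tasks to CPUs 0 through 3 on one node and have SRUN report the resulting binding (the executable a.out and the CPU IDs here are only placeholders; valid IDs depend on each node's layout), you might type:
srun -N 1 -n 4 --cpu_bind=verbose,map_cpu:0,1,2,3 a.out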
To have SLURM always report on the selected CPU binding for all SRUN instances executed in a shell, you can enable verbose mode directly by typing:
setenv SLURM_CPU_BIND verbose
However, SLURM_CPU_BIND will not propagate to tasks (binding by default only affects the first execution of SRUN). To propagate --cpu_bind to successive SRUN instances, execute the following in each task:
setenv SLURM_CPU_BIND \
${SLURM_CPU_BIND_VERBOSE},${SLURM_CPU_BIND_TYPE}${SLURM_CPU_BIND_LIST}
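Note that SLURM_CPU_BIND_TYPE normally carries its own trailing delimiter for list-based types (for example, map_cpu:), so the simple concatenation above yields a value in the same form you would pass to --cpu_bind, such as verbose,map_cpu:0,1 (the IDs shown are only illustrative).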
--hint=type
binds tasks to suit the needs of different applications, according to the type that you specify. Here the type choices are:
compute_bound
selects settings for compute-bound applications (for example, uses all the cores in each physical CPU).
memory_bound
selects settings for memory-bound applications (for example, uses only one core in each physical CPU).
[no]multithread
[does not] deploy extra threads for in-core multithreading (can benefit communication-intensive applications).
help
lists available hint types as a reminder.
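For example, to request the --hint settings suited to a memory-bound code (a.out is only a placeholder executable), you might type:
srun -n 16 --hint=memory_bound a.out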
--mem_bind=[quiet,|verbose,]type
binds tasks to memory (to stabilize possible memory optimization arrangements). WARNING: the resolution of CPU and memory binding may differ on some architectures. CPU binding may occur at the level of cores within a processor, while memory binding may occur at the level of nodes (whose definition may vary from one system to another). Hence, the use of any type other than none or local is not recommended.
q[uiet]
(default) quietly binds memory before the tasks run.
v[erbose]
verbosely reports memory binding before the tasks run.
Here type can be any one of these mutually exclusive alternatives:
no[ne]
(default) does not bind tasks to memory.
rank
binds tasks to memory by task rank.
local
uses memory local to the processor on which each task runs.
map_mem:idlist
binds by mapping a node's memory to tasks as specified in idlist, a comma-delimited list id1,id2,...,idn. SRUN interprets the IDs as decimal values unless you precede each with 0x to specify hexadecimal values.
mask_mem:mlist
binds by setting memory masks on tasks as specified in mlist, a comma-delimited list mask1,mask2,...,maskn. SRUN always interprets masks as hexadecimal values (so using the 0x prefix is optional).
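For example, to run two tasks that each use only memory local to the processor on which they run, with the chosen binding reported (a.out is only a placeholder executable), you might type:
srun -N 1 -n 2 --mem_bind=verbose,local a.out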
To have SLURM always report on the selected memory binding for all SRUN instances executed in a shell, you can enable verbose mode directly by typing:
setenv SLURM_MEM_BIND verbose
However, SLURM_MEM_BIND will not propagate to tasks (binding by default only affects the first execution of SRUN). To propagate --mem_bind to successive SRUN instances, execute the following in each task:
setenv SLURM_MEM_BIND \
${SLURM_MEM_BIND_VERBOSE},${SLURM_MEM_BIND_TYPE}${SLURM_MEM_BIND_LIST}
--ntasks-per-node=ntasks
requests that no more than ntasks be invoked on each node. This option yields results similar to using --cpus-per-task (in the "SRUN Resource-Allocation Options" section above) but without needing to know in advance how many CPUs each allocated node has. This can be useful for mixed MPI/OpenMP applications. See also --ntasks-per-socket and --ntasks-per-core.
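For example, a mixed MPI/OpenMP job might place two MPI tasks on each of four nodes and let each task spawn its own OpenMP threads (a.out and the thread count are only placeholders):
setenv OMP_NUM_THREADS 4
srun -N 4 --ntasks-per-node=2 a.out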
--ntasks-per-socket=ntasks
requests that no more than ntasks be invoked on each socket. (This is similar to --ntasks-per-node but works at the socket level instead.) Tasks will be bound to sockets unless you also specify --cpu_bind=none. (This option requires the CR_Socket or CR_Socket_Memory SLURM configuration.) See also --ntasks-per-node and --ntasks-per-core.
--ntasks-per-core=ntasks
requests that no more than ntasks be invoked on each core. (This is similar to --ntasks-per-node but works at the core level instead.) Tasks will be bound to cores unless you also specify --cpu_bind=none. (This option requires the CR_Core or CR_Core_Memory SLURM configuration.) See also --ntasks-per-node and --ntasks-per-socket.
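For example, to spread eight tasks so that no socket (or no core) receives more than one of them (a.out is only a placeholder, and your system must have the corresponding CR_* configuration noted above), you might type one of:
srun -n 8 --ntasks-per-socket=1 a.out
srun -n 8 --ntasks-per-core=1 a.out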


