Configuring SLURM
The HP XC system provides global and local directories for SLURM files:
The /hptc_cluster/slurm directory is the shared location for SLURM files that must be available to all nodes. The slurmctld state files, the job logging files, and the slurm.conf configuration file reside there.
The /var/slurm directory is the location for SLURM files that remain local to a given node; the files in this directory are not shared between nodes. All SLURM daemon logs and slurmd state information are stored there.
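A quick way to confirm this layout is to list both locations; the following is a minimal sketch (the paths are taken from the settings listed later in this section, but the exact contents on a given node can vary):

# Shared location: one copy, visible from every node
ls /hptc_cluster/slurm/etc /hptc_cluster/slurm/state

# Local location: per-node daemon logs and slurmd state, not shared
ls /var/slurm/log /var/slurm/state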
As a resource manager on an HP XC system, SLURM allocates exclusive or nonexclusive access to resources
on compute nodes for users to perform work; it provides a framework to start, execute, and monitor work
(normally parallel jobs) on the set of allocated nodes.
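For example, a user can ask SLURM for a two-node allocation and run a command across it; this is a minimal illustration using the standard SLURM srun command, and the node count shown is arbitrary:

# Allocate two compute nodes and run hostname on each
srun -N2 hostname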
All SLURM configuration options are set and stored in the /hptc_cluster/slurm/etc/slurm.conf
file. For information about available options, see slurm.conf(5). The slurm.conf file also contains
useful commentary on the purpose of each setting.
The following SLURM configuration settings are preset statically on HP XC systems:
StateSaveLocation=/hptc_cluster/slurm/state
SlurmdSpoolDir=/var/slurm/state
SlurmUser=slurm
SlurmctldLogFile=/var/slurm/log/slurmctld.log
SlurmdLogFile=/var/slurm/log/slurmd.log
SlurmctldPidFile=/var/slurm/run/slurmctld.pid
SlurmdPidFile=/var/slurm/run/slurmd.pid
AuthType=auth/munge
JobCompType=jobcomp/filetxt
SwitchType=switch/none
JobCompLoc=/hptc_cluster/slurm/job/slurm.job.log
JobAcctType=jobacct/log
JobAcctLoc=/hptc_cluster/slurm/job/jobacct.log
ReturnToService=1
PropagateResourceLimitsExcept=RLIMIT_NPROC
The slurm.conf file must be available on each node of the HP XC system.
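Because /hptc_cluster is shared cluster-wide, the single copy in /hptc_cluster/slurm/etc normally satisfies this requirement. If you want to verify that every node sees the same file, one possible check (assuming the pdsh utility is installed and configured to reach all nodes on your system) is:

# Compare the slurm.conf checksum as seen from each node
pdsh -a md5sum /hptc_cluster/slurm/etc/slurm.conf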
Table 12-1 displays the SLURM configuration settings that are set (and, if necessary, adjusted) during the
execution of the cluster_config utility.
Table 12-1. SLURM Configuration Settings

Setting            Default Value*
ControlMachine     Lowest-numbered resource management node
BackupController   Second-lowest resource management node (if available)
SlurmUser          'slurm'
NodeName           All compute nodes, plus 'Procs=2'
PartitionName      'lsf RootOnly=YES Shared=FORCE Nodes=compute_nodes'
SwitchType         switch/elan for systems with the Quadrics interconnect;
                   switch/none for systems with any other interconnect

* Default values can be adjusted during installation.
You can also use the scontrol show config command to examine the current SLURM configuration.
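For example, to check a single setting from the running configuration, you can filter the output; the setting shown here is just one possible choice:

# Show the active interconnect plugin setting
scontrol show config | grep SwitchType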