HP XC System Software Administration Guide Version 3.0

Example 12-1. Using a SLURM Feature to Manage Multiple Node Types
1. Use the text editor of your choice to edit the slurm.conf file to change the node configuration to the
following:
NodeName=exn[1-64] Procs=2 Feature=single,compute
NodeName=exn[65-96] Procs=4 Feature=dual,compute
NodeName=exn[97-98] Procs=4 Feature=service
Save the file.
2. Update SLURM with the new configuration:
# scontrol reconfig
3. Verify the configuration with the sinfo command. The output has been edited to fit on the page.
# sinfo --long --exact --Node
NODELIST   NODES PARTITION ... CPUS ... FEATURES   REASON
exn[1-64]     64 lsf       ...    2 ... single,com none
exn[65-96]    32 lsf       ...    4 ... dual,compu none
exn[97-98]     2 lsf       ...    4 ... service    none
Alternatively, you could enter the short form of this command:
# sinfo -lNe
4. Launch a job with the srun command; use the --constraint option to request the nodes in dual:
# srun -n5 --constraint="dual" hostname
exn65
exn65
exn66
exn66
exn67
If all the nodes in dual are busy in this example, the job waits until nodes become available. Specifying
the --immediate option in addition causes the job to fail, instead of waiting, if the nodes are busy.
You can execute a command on any compute node, regardless of the number of cores, with either of the
following command lines:
# srun -n5 --constraint="single|dual" hostname
# srun -n5 --constraint="compute" hostname
You can use the --constraint option under LSF-HPC by passing it through the External Scheduler. See
the HP XC System Software User's Guide for more information.
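The mapping from a Feature list to a set of nodes can be sketched in a few lines. The following Python fragment is illustrative only and is not part of the HP XC or SLURM tooling: it expands a single prefix[start-end] node range (not the full SLURM hostlist grammar) and selects the nodes whose Feature field contains a requested feature, mirroring what --constraint matches against.

```python
import re

def expand_nodelist(spec):
    """Expand a SLURM-style range such as 'exn[65-96]' into host names.

    Minimal sketch: handles only a single 'prefix[start-end]' range,
    not the full SLURM hostlist grammar.
    """
    m = re.fullmatch(r"([A-Za-z]+)\[(\d+)-(\d+)\]", spec)
    if not m:
        return [spec]  # plain host name, no bracketed range
    prefix, start, end = m.group(1), int(m.group(2)), int(m.group(3))
    return [f"{prefix}{i}" for i in range(start, end + 1)]

def nodes_with_feature(config_lines, feature):
    """Return every node whose Feature list contains the requested feature."""
    matches = []
    for line in config_lines:
        fields = dict(f.split("=", 1) for f in line.split())
        features = fields.get("Feature", "").split(",")
        if feature in features:
            matches.extend(expand_nodelist(fields["NodeName"]))
    return matches

conf = [
    "NodeName=exn[1-64] Procs=2 Feature=single,compute",
    "NodeName=exn[65-96] Procs=4 Feature=dual,compute",
    "NodeName=exn[97-98] Procs=4 Feature=service",
]
print(len(nodes_with_feature(conf, "dual")))     # 32 nodes, exn65-exn96
print(len(nodes_with_feature(conf, "compute")))  # 96 nodes, single or dual
```

A constraint such as "single|dual" or "compute" in the srun examples above resolves, in the same way, to the union of the node sets carrying those features.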
Propagating Resource Limits
Some systems may have compute nodes with varying system resources; for example, one compute node
may permit fewer open files per user than the submit node. You can establish resource limits so that user
applications have the same resources wherever they run; all or some of the user's soft resource limits are
propagated to the compute nodes when the application is dispatched.
You can examine the resources and their soft limits with the following bash shell command:
$ ulimit -Sa
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
pending signals                 (-i) 1024
max locked memory       (kbytes, -l) 128
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
stack size              (kbytes, -s) 10240
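The same soft limits can be read programmatically. The following Python sketch, which assumes a Unix system, uses the standard resource module to report the soft limit for a few of the resources shown by ulimit -Sa; it is illustrative only, since SLURM performs its own limit propagation at dispatch time.

```python
import resource

# Soft limits for a subset of the resources listed by `ulimit -Sa`.
# RLIM_INFINITY corresponds to "unlimited" in the ulimit output.
LIMITS = {
    "core file size": resource.RLIMIT_CORE,
    "open files": resource.RLIMIT_NOFILE,
    "stack size": resource.RLIMIT_STACK,
}

def soft_limits():
    report = {}
    for name, rlim in LIMITS.items():
        soft, _hard = resource.getrlimit(rlim)
        report[name] = "unlimited" if soft == resource.RLIM_INFINITY else soft
    return report

for name, value in soft_limits().items():
    print(f"{name}: {value}")
```

Comparing this report on the submit node and on a compute node shows whether a given limit was propagated with the job.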