Table 14-2 SLURM Partition Characteristics (continued)
Characteristic   Description

MinNodes         Specifies the minimum number of nodes that can be allocated to any
                 single job. The default is 1.

Shared           A text string that indicates whether node sharing for jobs is allowed:

                 YES     The node may be shared or not, depending on the allocation.
                 FORCE   The node is always available to be shared.
                 NO      The node is never available to be shared.

State            The state of the partition. The possible values are UP or DOWN.
Consider a system that has 128 nodes. The following line in the
/hptc_cluster/slurm/etc/slurm.conf file indicates that partition lsf controls all 128 nodes:
PartitionName=lsf RootOnly=yes Shared=FORCE Nodes=n[1-128]
However, you might prefer not to run jobs on the head node, n128. Simply modify the line to the following:
PartitionName=lsf RootOnly=yes Shared=FORCE Nodes=n[1-127]
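After SLURM has been updated with the new configuration, you can confirm that the head node is
no longer part of the partition by listing the partition's nodes. For example:

# sinfo -p lsf

The lsf partition should now report only nodes n[1-127].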
Consider an academic system with 256 nodes. Suppose you would like to allocate half the system for
faculty use and half for student use. Furthermore, the faculty prefers the order and control imposed by
LSF, while the students prefer to use the srun command. You might set up your partitions as follows:
PartitionName=lsf RootOnly=yes Shared=FORCE Nodes=n[1-128]
PartitionName=cs Default=YES Shared=YES Nodes=n[129-256]
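Because the cs partition is defined with Default=YES, student jobs launched with the srun command
run there without any extra options; a partition can also be requested explicitly with the -p
option. For example:

$ srun -n 16 hostname
$ srun -n 16 -p cs hostname

Both commands run on nodes in the cs partition; the hostname command is only a placeholder for a
real application.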
If you make any changes, be sure to run the scontrol reconfigure command to update SLURM with
these new settings.
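For example, after editing the partition definitions in the slurm.conf file:

# scontrol reconfigure
# scontrol show partition

The scontrol show partition command displays the partition definitions currently in effect, so you
can verify that your changes were applied.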
14.2.5 Configuring SLURM Features
A standard element of SLURM is the ability to configure and subsequently use a feature. You can use
features to assign characteristics to nodes to manage multiple node types.
SLURM features are specified in the slurm.conf file. After SLURM is updated with the new configuration,
a user can specify the --constraint option so that a job employs those features.
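For example, a user might direct a job to nodes that provide a given feature; in the following
sketch, the feature name dual is only an illustration:

$ srun -n 8 --constraint=dual hostname

The job is allocated only nodes that were assigned the dual feature in the slurm.conf file.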
“Using a SLURM Feature to Manage Multiple Node Types” (page 163) shows how to configure a SLURM
feature to differentiate node types, update the configuration, and launch a sample command. The
following provides background information for this example.
The HP XC system, xmp, contains 98 nodes:
xmp[1-64]    Compute nodes; two single-core processors per node
xmp[65-96]   Compute nodes; two dual-core processors per node
xmp[97,98]   Service nodes; two dual-core processors per node
These nodes were initially configured as follows in the slurm.conf file:
NodeName=xmp[1-64] Procs=2
NodeName=xmp[65-98] Procs=4
Example 14-1 configures the compute nodes into two separate groups: single and dual. The two service
nodes are configured into their own group.
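Such a grouping might be expressed with slurm.conf entries similar to the following sketch; the
Feature= keyword and the service feature name are assumptions for illustration:

NodeName=xmp[1-64] Procs=2 Feature=single
NodeName=xmp[65-96] Procs=4 Feature=dual
NodeName=xmp[97-98] Procs=4 Feature=service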