However, you might prefer not to run jobs on the head node, n128. Simply modify the line to the following:
PartitionName=lsf RootOnly=yes Shared=FORCE Nodes=n[1-127]
Consider an academic system with 256 nodes. Suppose you would like to allocate half the system for faculty
use and half for student use. Furthermore, the faculty prefers the order and control imposed by LSF, while
the students prefer to use the srun command. You might set up your partitions as follows:
PartitionName=lsf RootOnly=yes Shared=FORCE Nodes=n[1-128]
PartitionName=cs Default=YES Shared=YES Nodes=n[129-256]
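With this configuration, faculty jobs are submitted through LSF, while students can launch work directly with srun. For instance, a student could run a job on the cs partition as follows (the command itself is only illustrative, and because cs is the default partition, the -p option is optional):

srun -p cs -n 16 hostname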
If you make any changes, be sure to run the scontrol reconfigure command to update SLURM with
these new settings.
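For example, after saving your edits to the slurm.conf file, run the following as superuser:

# scontrol reconfigure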
Configuring SLURM Features
A standard element of SLURM is the ability to configure and subsequently use a feature. You can use features to assign characteristics to nodes to manage multiple node types.
SLURM features are specified in the slurm.conf file. After SLURM is updated with the new configuration,
a user can specify the --constraint option to restrict a job to nodes that have those features.
“Using a SLURM Feature to Manage Multiple Node Types” (page 107) shows how to configure a SLURM feature to differentiate node types, update the configuration, and launch a sample command. The following
provides background information on this example.
The HP XC system, xmp, contains 98 nodes:
xmp[1-64]     Two single-core processors per node    Compute nodes
xmp[65-96]    Two dual-core processors per node      Compute nodes
xmp[97,98]    Two dual-core processors per node      Service nodes
These nodes were initially configured as follows in the slurm.conf file:
NodeName=xmp[1-64] Procs=2
NodeName=xmp[65-98] Procs=4
Example 12-1 configures the compute nodes into two separate groups: single and dual. The two service
nodes are configured into their own group.
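The following is a minimal sketch of how this grouping might be expressed in the slurm.conf file; the feature names single, dual, and service are illustrative assumptions, not necessarily the names used in Example 12-1:

NodeName=xmp[1-64] Procs=2 Feature=single
NodeName=xmp[65-96] Procs=4 Feature=dual
NodeName=xmp[97,98] Procs=4 Feature=service

After updating SLURM with the scontrol reconfigure command, a user could target the dual-core compute nodes with a command such as:

srun --constraint=dual -n 8 hostname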