# chkconfig --add slsf
# chkconfig --list slsf
slsf            0:off   1:off   2:off   3:on    4:on    5:on    6:off
f. Edit the /opt/hptc/systemimager/etc/chkconfig.map file to add the following line to
enable this new "service" on all nodes in the HP XC system:
slsf 0:off 1:off 2:off 3:on 4:on 5:on 6:off
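You can optionally confirm that the entry is present; this grep check is an illustration only and is not part of the documented procedure:
# grep slsf /opt/hptc/systemimager/etc/chkconfig.map
slsf 0:off 1:off 2:off 3:on 4:on 5:on 6:off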
8. Update the node roles and re-image:
a. Use the stopsys command to shut down the other nodes of the HP XC system.
b. Change directory to /opt/hptc/config/sbin.
c. Execute the cluster_config utility.
• Select Modify Nodes. Remove the compute and resource_management role assignments
for the fat nodes. Ensure that at least one resource management role remains in
the HP XC system (HP recommends two resource management nodes).
• Do not reinstall LSF.
d. When the cluster_config utility completes, edit the
/hptc_cluster/slurm/etc/slurm.conf file to remove the names of the fat nodes from:
• The NodeName parameter assignment
• The PartitionName parameter assignment
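For example, based on the output shown in step 10, where the fat nodes are xc1 through xc6 and the lsf partition retains nodes xc[7-120], the edited entries might resemble the following; the other parameters on each line, indicated here by ..., are left unchanged:
NodeName=xc[7-120] ...
PartitionName=lsf Nodes=xc[7-120] ...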
e. Run the following command to update SLURM with the new information:
# scontrol reconfig
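Optionally, confirm that the fat nodes no longer appear in the partition definition; the partition name lsf matches the output shown in step 10:
# scontrol show partition lsf
The Nodes value in the output should no longer list the fat nodes.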
9. Use the startsys command to restart the HP XC system.
The nodes are reimaged after the startsys command completes.
Note
Only the nodes on which role changes were made are reimaged.
The standard LSF binaries, the slsf script, and its soft link are not on the thin nodes. For information
on using the updateclient command to update the thin nodes with these latest file changes, see Chapter 8:
Distributing Software Throughout the System (page 79).
The thin nodes do not need to be updated with these files to complete this procedure; doing so is only a
matter of consistency among all the nodes in the cluster. The thin nodes can be brought up to date with
these changes at a later time.
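As an illustration only, a SystemImager-style update of a thin node might resemble the following command; the head node name (nh) and image name (base_image) used here are assumptions, and Chapter 8 describes the supported HP XC procedure:
# updateclient -server nh -image base_image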
10. Use the sinfo and lshosts commands to verify the SLURM nodes and partitions and LSF hosts,
respectively:
# sinfo
PARTITION AVAIL  TIMELIMIT NODES STATE NODELIST
lsf       up      infinite     6 idle  xc[7-120]
# lshosts
HOST_NAME    type     model     cpuf  ncpus  maxmem  maxswp  server  RESOURCES
lsfhost.loc  SLINUX6  Itanium2  60.0    228   1973M       -  Yes     (slurm)
xc1          LINUX64  Itanium2  60.0      8   3456M   6143M  Yes     ()
xc2          LINUX64  Itanium2  60.0      8   3456M   6143M  Yes     ()
xc3          LINUX64  Itanium2  60.0      8   3456M   6143M  Yes     ()
xc4          LINUX64  Itanium2  60.0      8   3456M   6143M  Yes     ()
xc5          LINUX64  Itanium2  60.0      8   3456M   6143M  Yes     ()
xc6          LINUX64  Itanium2  60.0      8   3456M   6143M  Yes     ()
11. Verify the procedure by switching to a user (other than superuser) and running some test jobs:
# su lsfadmin
$ bsub -I -n1 -R type=LINUX64 hostname
Job <176> is submitted to default queue <normal>.
<<Waiting for dispatch ...>>
<<Starting on xc1>>
xc1
$ bsub -I -n1 -R type=SLINUX64 hostname
Job <177> is submitted to default queue <normal>.