The LSF with SLURM installation directory, /opt/hptc/lsf/top, remains in place and is imaged to each node of the HP XC system.
The SLURM resource is added to the configured LSF execution host.
HP OEM licensing is configured.
HP OEM licensing is enabled in LSF with SLURM by adding the following line to the
configuration file /opt/hptc/lsf/top/conf/lsf.conf; this line tells LSF with SLURM
where to find the shared object that interfaces with HP OEM licensing.
XC_LIBLIC=/opt/hptc/lib/libsyslic.so
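You can confirm the setting and verify that the shared object is present with a quick
check like the following (the grep and ls invocations are illustrative, not part of the
documented procedure):
# grep XC_LIBLIC /opt/hptc/lsf/top/conf/lsf.conf
XC_LIBLIC=/opt/hptc/lib/libsyslic.so
# ls -l /opt/hptc/lib/libsyslic.so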
Access to LSF with SLURM from every node in the cluster is configured.
Configuring all nodes in the HP XC system as LSF with SLURM floating client nodes makes
LSF with SLURM accessible from every node. Two files are edited to perform this
configuration, as shown in the sketch after this list:
• Adding LSF_SERVER_HOSTS="lsfhost.localdomain" to the lsf.conf
configuration file.
• Adding FLOAT_CLIENTS_ADDR_RANGE=172.20 on its own line in the Parameters
section of the file /opt/hptc/lsf/top/conf/lsf.cluster.clustername.
The FLOAT_CLIENTS_ADDR_RANGE value (in this case, 172.20) must be the management
network IP address range that is configured for the HP XC system. This value should
equal the value of nodeBase in the /opt/hptc/config/base_addr.ini file.
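A minimal sketch of the two resulting entries follows; 172.20 is the example range
from above and must match the management network of the particular system.
In /opt/hptc/lsf/top/conf/lsf.conf:
LSF_SERVER_HOSTS="lsfhost.localdomain"
In the Parameters section of /opt/hptc/lsf/top/conf/lsf.cluster.clustername:
Begin Parameters
FLOAT_CLIENTS_ADDR_RANGE=172.20
End Parameters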
An initial LSF with SLURM hosts file that maps the virtual host name
(lsfhost.localdomain) to an actual node name is provided.
The HP XC system /etc/hosts file has an entry for lsfhost.localdomain, which
allows the LSF installation to install itself under that name. The
/opt/hptc/lsf/top/conf/hosts file maps lsfhost.localdomain and its virtual
IP address to the designated LSF execution host.
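For illustration only (the virtual IP address shown here is hypothetical), the mapping
in /opt/hptc/lsf/top/conf/hosts takes the form of an ordinary hosts-file entry:
172.20.0.99    lsfhost.localdomain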
The default LSF with SLURM environment is set for all users who log in to the HP XC system.
Files named lsf.sh and lsf.csh are added to the /etc/profile.d/ directory; these
files source the /opt/hptc/lsf/top/conf/profile.lsf and
/opt/hptc/lsf/top/conf/cshrc.lsf files, respectively.
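As a rough sketch (the files are generated by the installation, so the actual contents
may differ), /etc/profile.d/lsf.sh amounts to a single line that sources the LSF
profile:
. /opt/hptc/lsf/top/conf/profile.lsf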
The JOB_ACCEPT_INTERVAL entry in the lsb.params file is set to 0 (zero) to allow more
than one job to be dispatched to the LSF execution host per dispatch cycle. If this setting is
nonzero, jobs are dispatched at a slower rate.
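For reference, the entry lives in the Parameters section of lsb.params and takes the
following form (standard LSF syntax, shown here only for illustration):
Begin Parameters
JOB_ACCEPT_INTERVAL = 0
End Parameters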
A soft link from /etc/init.d/lsf to /opt/hptc/sbin/controllsf is created.
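This link is equivalent to the following command (shown for illustration):
# ln -s /opt/hptc/sbin/controllsf /etc/init.d/lsf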
A scratch area for LSF with SLURM is created in the /hptc_cluster/lsf/tmp/ directory.
It must be readable and writable by all.
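A directory with these permissions is typically created with commands like the
following; the mode shown (1777, world-writable with the sticky bit) is an assumption
for illustration, not the documented setting:
# mkdir -p /hptc_cluster/lsf/tmp
# chmod 1777 /hptc_cluster/lsf/tmp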
The controllsf set primary command is invoked with the highest-numbered node
that has the resource management role. If this is not done, LSF with SLURM starts on the
head node even if the head node is not a resource management node.
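For example, if node n16 were the highest-numbered node with the resource management
role (the node name is hypothetical):
# controllsf set primary n16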
16.5 LSF with SLURM Startup and Shutdown
This section discusses starting up and shutting down LSF with SLURM.
16.5.1 Starting Up LSF with SLURM
LSF with SLURM is configured to start automatically when the HP XC system starts,
by means of the /etc/init.d/lsf script.
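On a system that uses the standard Red Hat init tooling (an assumption here), you can
verify that the script is registered for automatic startup as follows:
# chkconfig --list lsf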
If LSF with SLURM stops running, you can start it with the controllsf command, as shown
here:
# controllsf start