
work The work directory is moved to /hptc_cluster/lsf/work; it is linked through a soft link
to /opt/hptc/lsf/top/work.
log The log directory is moved to /var/lsf/log; it is linked through a soft link to
/opt/hptc/lsf/top/log.
This ensures that all LSF-HPC with SLURM logging remains local to the node currently
running LSF-HPC with SLURM.
6.2 The 6.2 (LSF version) directory remains in place and is imaged to each node of the HP XC system.
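You can verify the relocated directories by listing them; the following is a verification sketch (the exact output varies by system):

# ls -ld /opt/hptc/lsf/top/work /opt/hptc/lsf/top/log

Each entry appears as a symbolic link that points to /hptc_cluster/lsf/work or /var/lsf/log, respectively.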
The SLURM resource is added to the configured LSF execution host.
HP OEM licensing is configured.
HP OEM licensing is enabled in LSF-HPC with SLURM by adding the following string to the
configuration file, /opt/hptc/lsf/top/conf/lsf.conf. This tells LSF-HPC with SLURM where
to look for the shared object to interface with HP OEM licensing.
XC_LIBLIC=/opt/hptc/lib/libsyslic.so
Access to LSF-HPC with SLURM from every node in the cluster is configured.
Configuring all nodes in the HP XC system as LSF-HPC with SLURM floating client nodes makes
LSF-HPC with SLURM accessible from every node. Two files are edited to perform this
configuration:
Adding LSF_SERVER_HOSTS="lsfhost.localdomain" to the lsf.conf configuration file.
Adding FLOAT_CLIENTS_ADDR_RANGE=172.20 on its own line in the Parameters Section of
the file /opt/hptc/lsf/top/conf/lsf.cluster.clustername.
The FLOAT_CLIENTS_ADDR_RANGE value (in this case 172.20) must be the management network
IP address range that is configured for the HP XC system. This value should equal the value of
nodeBase in the /opt/hptc/config/base_addr.ini file.
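For reference, the resulting entries resemble the following sketch; 172.20 is the example address range described above, and the Begin/End lines reflect the standard LSF section syntax for the cluster file:

In /opt/hptc/lsf/top/conf/lsf.conf:

LSF_SERVER_HOSTS="lsfhost.localdomain"

In /opt/hptc/lsf/top/conf/lsf.cluster.clustername:

Begin Parameters
FLOAT_CLIENTS_ADDR_RANGE=172.20
End Parameters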
The HP XC system /etc/hosts file has an entry for lsfhost.localdomain, which allows the
LSF-HPC installation to install itself with the name lsfhost.localdomain. The
/opt/hptc/lsf/top/conf/hosts file maps lsfhost.localdomain and its virtual IP to the
designated LSF execution host.
An initial LSF-HPC with SLURM hosts file is provided to map the virtual host name
(lsfhost.localdomain) to an actual node name.
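For illustration only, a mapping in the /opt/hptc/lsf/top/conf/hosts file might look like the following line, where 172.20.0.99 is a hypothetical virtual IP address and n16 is a hypothetical designated LSF execution host; the actual values depend on your configuration:

172.20.0.99   lsfhost.localdomain   n16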
The default LSF-HPC with SLURM environment is set for all users who log in to the HP XC system.
Files named lsf.sh and lsf.csh are added to the /etc/profile.d/ directory; these files source
the respective /opt/hptc/lsf/top/conf/profile.lsf and
/opt/hptc/lsf/top/conf/cshrc.lsf files.
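As a sketch of what these wrapper files contain (the files shipped with your system may include additional logic), they reduce to the following:

/etc/profile.d/lsf.sh:
. /opt/hptc/lsf/top/conf/profile.lsf

/etc/profile.d/lsf.csh:
source /opt/hptc/lsf/top/conf/cshrc.lsf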
The JOB_ACCEPT_INTERVAL entry in the lsb.params file is set to 0 (zero) to allow more than
one job to be dispatched to the LSF execution host in each dispatch cycle. If this setting is nonzero,
jobs are dispatched at a slower rate.
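Assuming the standard LSF location for the batch parameters file (conf/lsbatch/clustername/configdir/ under the LSF top directory), the entry resembles the following:

Begin Parameters
JOB_ACCEPT_INTERVAL = 0
End Parameters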
A soft link from /etc/init.d/lsf to /opt/hptc/sbin/controllsf is created.
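You can confirm the link with the ls command; the listing should show /etc/init.d/lsf pointing to /opt/hptc/sbin/controllsf:

# ls -l /etc/init.d/lsf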
A scratch area for LSF-HPC with SLURM is created in the /hptc_cluster/lsf/tmp/ directory.
It must be readable and writable by all.
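If the scratch area ever needs to be recreated, permissions along the lines of the following sketch satisfy the readable-and-writable-by-all requirement (mode 1777, as used for /tmp, is one reasonable choice):

# mkdir -p /hptc_cluster/lsf/tmp
# chmod 1777 /hptc_cluster/lsf/tmp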
The controllsf set primary command is invoked with the highest-numbered node that has
the resource management role. If this is not done, LSF-HPC with SLURM starts on the head node
even if the head node is not a resource management node.
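For example, if node n16 were the highest-numbered node with the resource management role (the node name here is hypothetical), the command would be:

# controllsf set primary n16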
15.5 LSF-HPC with SLURM Startup and Shutdown
This section discusses starting up and shutting down LSF-HPC with SLURM.
15.5.1 Starting Up LSF-HPC with SLURM
LSF-HPC with SLURM is configured to start up automatically when the HP XC system starts up, through
the use of the /etc/init.d/lsf script.
If LSF-HPC with SLURM stops running, you can start it with the controllsf command, as shown here:
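# controllsf start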