Installation of LSF-HPC on SLURM
When selected, LSF-HPC is automatically installed during cluster_config execution. This installation is
optimized for scalability and efficient operation within the HP XC system. Depending on how you manage
your overall LSF cluster file system, this installation may also be sufficient for adding the HP XC system
to an existing LSF cluster. For more information, see “Installing LSF-HPC for SLURM into an Existing
Standard LSF Cluster” (page 171).
The LSF-HPC tar files to be installed are located in the /opt/hptc/lsf/files directory. Before the
installation begins, you are prompted for the following information:
• Primary LSF administrator
This user account is necessary for establishing ownership of the LSF-HPC configuration file. If the
lsfadmin user account does not exist, it will be created locally within HP XC. You can configure other
LSF administrators after the installation has completed. For more information, see Administering
Platform LSF on the HP XC Documentation CD.
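For example, additional administrators can later be listed after the primary administrator in the
ClusterAdmins section of the lsf.cluster.clustername file. The following is an illustrative
sketch; the user name operator1 is hypothetical:

    Begin ClusterAdmins
    Administrators = lsfadmin operator1
    End ClusterAdmins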
• The name of the LSF cluster
This name must not match any network host name. It must also be unique unless the intent is to add
the HP XC system to an existing LSF cluster; in that case, the name must match the name of the
existing LSF cluster.
The default name is hptclsf.
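After installation, you can verify the cluster name with the standard LSF lsid command. The output
below is abbreviated (the version banner is omitted) and assumes the default name:

    # lsid
    My cluster name is hptclsf
    My master name is lsfhost.localdomain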
After these values are obtained and verified, the LSF-HPC installation runs, installing the appropriate
files under /opt/hptc/lsf/top/. On completion, the following post-installation procedures are performed:
• LSF-HPC directories are relocated to take advantage of the HP XC file system hierarchy.
The location of the LSF-HPC installation is /opt/hptc/lsf/top, which contains four directories:
conf The conf directory is moved to /hptc_cluster/lsf/conf; it is linked through a soft link
to /opt/hptc/lsf/top/conf.
work The work directory is moved to /hptc_cluster/lsf/work; it is linked through a soft link
to /opt/hptc/lsf/top/work.
log The log directory is moved to /var/lsf/log; it is linked through a soft link to
/opt/hptc/lsf/top/log.
This ensures that all LSF-HPC logging remains local to the node currently running LSF-HPC.
6.1 This directory, named for the installed LSF version, remains in place and is imaged to each node of the HP XC system.
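A minimal sketch of the resulting layout (listing abbreviated; the version directory name depends
on the installed LSF release):

    # ls -l /opt/hptc/lsf/top
    6.1
    conf -> /hptc_cluster/lsf/conf
    log -> /var/lsf/log
    work -> /hptc_cluster/lsf/work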
• The SLURM resource is added to the configured LSF execution host.
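In standard LSF, a Boolean resource of this kind is declared in the Resource section of the
lsf.shared file. The entry below is an illustrative sketch, not necessarily the exact text written
by the installer:

    Begin Resource
    RESOURCENAME   TYPE      INTERVAL   INCREASING   DESCRIPTION
    slurm          Boolean   ()         ()           (SLURM)
    End Resource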
• HP OEM licensing is configured.
HP OEM licensing is enabled in LSF-HPC by adding the following line to the LSF-HPC configuration
file, /opt/hptc/lsf/top/conf/lsf.conf. This line tells LSF-HPC where to find the shared object
that interfaces with HP OEM licensing.
XC_LIBLIC=/opt/hptc/lib/libsyslic.so
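You can confirm that the setting is present after installation with a simple check such as:

    # grep XC_LIBLIC /opt/hptc/lsf/top/conf/lsf.conf
    XC_LIBLIC=/opt/hptc/lib/libsyslic.so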
• Access to LSF-HPC from every node in the cluster is configured.
Configuring all nodes in the HP XC system as LSF-HPC floating client nodes makes LSF-HPC accessible
from every node. Two files are edited to perform this configuration:
• Adding LSF_SERVER_HOSTS="lsfhost.localdomain" to the lsf.conf configuration file.
• Adding FLOAT_CLIENTS_ADDR_RANGE=172.20 on its own line in the Parameters Section of
the file /opt/hptc/lsf/top/conf/lsf.cluster.clustername.
The FLOAT_CLIENTS_ADDR_RANGE value (in this case 172.20) must be the management network
IP address range that is configured for the HP XC system. This value should be equal to the value
of nodeBase in the /opt/hptc/config/base_addr.ini file.
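For example, the resulting Parameters section might read as follows; the 172.20 value is the
default illustrated above, and the exact key format of base_addr.ini may differ on your system:

    Begin Parameters
    FLOAT_CLIENTS_ADDR_RANGE=172.20
    End Parameters

    # grep nodeBase /opt/hptc/config/base_addr.ini
    nodeBase=172.20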
The HP XC system /etc/hosts file has an entry for lsfhost.localdomain, which allows the
LSF-HPC installation to install itself with the name lsfhost.localdomain. The