Installing LSF-HPC With SLURM Into an Existing Standard LSF Cluster
Introduction
This HP XC How To describes how to install LSF-HPC with SLURM into an existing, standard LSF
cluster.
An understanding of standard LSF installation and configuration procedures is required. You
should be familiar with the LSF installation documentation and the README file provided in the
LSF installation tar file. You should also be familiar with the normal procedures for adding a
node to an existing LSF cluster, such as establishing default communications (.rhosts or
ssh keys), setting up shared directories, and adding common users.
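For example, a minimal sketch of establishing passwordless ssh between the two clusters might
look like the following; the hostnames plain and xc-head belong to the sample cluster described
later in this document, so substitute your own:

    # On the existing LSF node (plain), generate a key pair if one does
    # not already exist:
    ssh-keygen -t rsa

    # Install the public key on the XC head node; on systems without
    # ssh-copy-id, append ~/.ssh/id_rsa.pub to the remote
    # ~/.ssh/authorized_keys by hand:
    ssh-copy-id root@xc-head

    # Verify that passwordless login now works:
    ssh xc-head hostname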
Before you begin, read the documentation for LSF-HPC with SLURM for XC, located in the HP XC
System Software Administration Guide.
An existing LSF cluster typically has a single common LSF_TOP location, also known as the LSF root
or LSF tree. In this location, multiple versions of LSF are installed and centrally maintained.
NOTE: LSF does not create an actual LSF_TOP environment variable; the term is used only to identify
the LSF root in the documentation.
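For orientation, a typical shared LSF tree might be laid out as follows; the /shared/lsf path
and the version numbers are illustrative only:

    /shared/lsf/        the LSF root (LSF_TOP)
        conf/           cluster-wide configuration (lsf.conf, lsf.cluster.<name>)
        work/           per-cluster working directories
        6.0/            LSF V6.0 binaries and libraries
        6.2/            LSF V6.2 binaries and libraries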
The procedure described in this document assumes a single LSF_TOP location, shared by both the
existing LSF cluster and the XC cluster. The LSF_TOP location resides on the existing LSF cluster
and is exported to the XC cluster, which mounts it through NFS. Any changes that you make to the
configuration files in LSF_TOP/conf are therefore visible to both clusters.
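As a sketch, assuming LSF_TOP is /shared/lsf (an illustrative path) and the hostnames from the
sample cluster described later in this document:

    # /etc/exports entry on the existing LSF node (plain); run
    # exportfs -a afterward to publish it:
    /shared/lsf    xc-head(rw,sync,no_root_squash)

    # On the XC side, mount the exported tree at the same path:
    mkdir -p /shared/lsf
    mount -t nfs plain:/shared/lsf /shared/lsf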
You will install LSF-HPC for XC into the LSF_TOP location on the existing LSF cluster, preserving
the existing LSF installation while adding the SLURM support needed by the HP XC cluster.
Requirements
The procedure used in this HP XC How To has the following requirements:
• You can add LSF-HPC for XC only to an existing, standard LSF cluster running LSF V6.0 or later.
Recent versions of standard LSF contain the required schmod_slurm interface module (see the
configuration sketch after this list).
• HP XC V2.1 contains the necessary support for this procedure. To perform this procedure on an
earlier version of HP XC, obtain the latest hptc-lsf RPM package.
• LSF daemons communicate through ports that are pre-configured in lsf.conf (also shown in the
sketch after this list). However, LSF commands open random ports for receiving information when
they communicate with the LSF daemons. Because an LSF cluster needs this open network
environment, configuring a firewall is beyond the scope of this HP XC How To. You can attempt
the procedure with a firewall in place, but it is not guaranteed to work.
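As a quick way to check the first and third requirements, you can inspect the shared
configuration files. The entries below show stock LSF values and are illustrative; exact paths
depend on your LSF_TOP, and the schmod_slurm module is typically enabled in lsb.modules as part
of the LSF-HPC installation:

    # The SLURM scheduler plugin is enabled by listing it in
    # LSF_TOP/conf/lsbatch/<cluster>/configdir/lsb.modules:
    Begin PluginModule
    SCH_PLUGIN           RB_PLUGIN     SCH_DISABLE_PHASES
    schmod_default       ()            ()
    schmod_slurm         ()            ()
    End PluginModule

    # Daemon ports are fixed in LSF_TOP/conf/lsf.conf
    # (LSF defaults shown):
    LSF_LIM_PORT=6879
    LSF_RES_PORT=6878
    LSB_MBD_PORT=6881
    LSB_SBD_PORT=6882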
The examples in this HP XC How To are based on a sample cluster with the following characteristics:
• The cluster consists of an existing LSF cluster installed on a single node with a hostname of
plain.
• An HP XC LSF node with the hostname xclsf is added to the cluster.
• The name xclsf is the LSF alias associated with the HP XC cluster. Using an independent IP
address and hostname as the LSF alias enables LSF-HPC failover: in the event of a failure,
another node automatically hosts the alias. (See the controllsf man page for more details on
setting up LSF failover, and the sanity checks after this list.)
• The head node of the example HP XC cluster is xc-head.
• The node plain also serves the LSF tree to the HP XC cluster as an NFS file system.
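With this layout in place, two quick sanity checks are possible using standard Linux commands;
the output varies by site:

    # The LSF alias should resolve to its own IP address, independent
    # of any physical node:
    getent hosts xclsf

    # The node plain should export the LSF tree to the XC cluster:
    showmount -e plain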