HP XC System Software Administration Guide Version 3.0
"/shared/lsf/hpctmp/hpc6.1_hpcinstall/hpc_getting_started.html".
After setting up your LSF server hosts and verifying
your cluster "corplsf" is running correctly,
see "/shared/lsf/6.1/hpc_quick_admin.html"
to learn more about your new LSF cluster.
Perform Post Installation Tasks
The LSF documentation and instructions mentioned at the end of the hpc_install script are generic and
have not been tuned for the HP XC system. The following manual procedures cover every task that must
be performed on the HP XC system:
1. Restore the original environment setup files.
Change directory back to the existing LSF_TOP/conf directory, rename the environment setup
files so that the HP XC versions are distinguishable, and restore the original files. Using our example:
# cd /shared/lsf/conf
# mv profile.lsf profile.lsf.xc
# mv cshrc.lsf cshrc.lsf.xc
# mv profile.lsf.orig profile.lsf
# mv cshrc.lsf.orig cshrc.lsf
Note that the HP XC environment setup files now match the files configured in
/etc/profile.d/lsf.sh and /etc/profile.d/lsf.csh from the earlier step.
2. Obtain the HP XC internal network base.
Open /opt/hptc/config/base_addr.ini on the HP XC system and note the nodeBase setting.
By default this value is 172.20, which represents the internal HP XC system network. You will need
this setting in the next step.
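If you script this lookup, the nodeBase value can be extracted with a short awk command. The following is only a sketch: the scratch file created here stands in for /opt/hptc/config/base_addr.ini, and the "nodeBase = 172.20" line format is an assumption — confirm it against the real file on your system first.

```shell
# Sketch only: parse a nodeBase entry of the assumed form
# "nodeBase = 172.20". The scratch file is a stand-in for
# /opt/hptc/config/base_addr.ini.
ini=$(mktemp)
printf '[network]\nnodeBase = 172.20\n' > "$ini"

# Split on '=', then strip spaces from the value field.
node_base=$(awk -F= '/^[[:space:]]*nodeBase/ {gsub(/ /, "", $2); print $2}' "$ini")
echo "nodeBase is $node_base"
```

The extracted value can then be pasted directly into the FLOAT_CLIENTS_ADDR_RANGE entry in the next step.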
3. Edit the LSF_TOP/conf/lsf.cluster.clustername file using the text editor of your choice:
a. In the Host section, find the HP XC "node" and add slurm in the RESOURCES column. For our
example, the new entry resembles the following:
Begin Host
HOSTNAME model type server r1m mem swp RESOURCES #Keywords
...
xclsf ! ! 1 3.5 () () (slurm)
End Host
b. In the Parameters section, set up the floating client address range
(FLOAT_CLIENTS_ADDR_RANGE) using the nodeBase entry from Step 2. Using the default, the
new entry resembles the following:
Begin Parameters
PRODUCTS=LSF_Base ... Platform_HPC
FLOAT_CLIENTS_ADDR_RANGE=172.20
End Parameters
c. Save the file and exit the text editor.
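For repeated installs, the two edits in this step can be scripted instead of made by hand. The following is a minimal sketch, not part of the product: it operates on a scratch stand-in for lsf.cluster.clustername, the xclsf host name and 172.20 range are carried over from the examples above, and GNU sed is assumed for the -i and one-line i commands.

```shell
# Sketch: apply the Step 3 edits to a scratch copy of an
# lsf.cluster file (minimal stand-in content; GNU sed assumed).
f=$(mktemp)
cat > "$f" <<'EOF'
Begin Host
HOSTNAME model type server r1m mem swp RESOURCES #Keywords
xclsf ! ! 1 3.5 () () ()
End Host
Begin Parameters
PRODUCTS=LSF_Base
End Parameters
EOF

# Step 3a: mark the HP XC node with the slurm resource.
sed -i 's/^\(xclsf.*\) ()$/\1 (slurm)/' "$f"

# Step 3b: add the floating-client range inside the
# Parameters section, just before End Parameters.
sed -i '/^End Parameters/i FLOAT_CLIENTS_ADDR_RANGE=172.20' "$f"

grep -E 'slurm|FLOAT' "$f"
```

Run the script against a copy of the real file first and diff the result before replacing the live configuration.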
4. Edit the LSF_TOP/conf/lsf.conf file using the text editor of your choice:
a. Create or modify the LSF_SERVER_HOSTS variable to add the HP XC LSF "node", along with
the other LSF execution hosts in the cluster:
LSF_SERVER_HOSTS="plain xclsf"
b. Enable HP OEM licensing by adding the following variable:
XC_LIBLIC=/opt/hptc/lib/libsyslic.so
c. Make sure the LSF_NON_PRIVILEGED_PORTS option is disabled or removed from this file (it is
'N' by default).
Standard LSF v6.1 does not support this option; if it is enabled, the sbatchd and mbatchd
daemons on non-HP XC system nodes report "bad port" messages.
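Before restarting the daemons, the edits from Steps 4a through 4c can be sanity-checked with a few grep tests. This is a hedged sketch: the scratch file below stands in for your edited LSF_TOP/conf/lsf.conf, with values taken from the examples above.

```shell
# Sketch: verify the Step 4a-4c settings in a scratch stand-in
# for LSF_TOP/conf/lsf.conf (values from the examples above).
conf=$(mktemp)
cat > "$conf" <<'EOF'
LSF_SERVER_HOSTS="plain xclsf"
XC_LIBLIC=/opt/hptc/lib/libsyslic.so
LSF_NON_PRIVILEGED_PORTS=N
EOF

grep -q '^LSF_SERVER_HOSTS=' "$conf" || echo "missing LSF_SERVER_HOSTS"
grep -q '^XC_LIBLIC=' "$conf"        || echo "missing XC_LIBLIC"

# Step 4c: the option must be N or absent; Standard LSF v6.1
# does not support it.
if grep -q '^LSF_NON_PRIVILEGED_PORTS=Y' "$conf"; then
    echo "disable LSF_NON_PRIVILEGED_PORTS"
fi
```

A silent run indicates all three settings are in the expected state.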
d. If you use ssh for node-to-node communication, set the following variable in lsf.conf (assuming
the ssh keys have been set up to allow access without a password):
178 Installing LSF-HPC for SLURM into an Existing Standard LSF Cluster