Installing Standard LSF on a Subset of HP XC Nodes
1. Create the file /opt/hptc/lsf/etc/slsf with the following content:

#!/bin/sh
# chkconfig: 345 99 10
# description: Start standard LSF daemons on designated nodes only.
# (The chkconfig header is required by the chkconfig --add step below;
# the start/stop priorities shown are examples.)

# source LSF
. /opt/hptc/lsf/top/conf/profile.lsf.notxc

# valid hosts for standard LSF on this cluster
hosts="xc1 xc2 xc3 xc4 xc5 xc6"

hostname=`hostname`
valid=0
for i in $hosts; do
    if [ "$hostname" = "$i" ]; then
        valid=1
    fi
done

# exit quietly on nodes that are not in the list
if [ "$valid" = "0" ]; then
    exit 0
fi

# pass the init argument (start, stop, and so on) to the LSF daemons
lsf_daemons "$1"
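Adjust the hosts line so that it names only the nodes that should run standard LSF. For example, to limit standard LSF to two nodes, the line might read:

hosts="xc5 xc6"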
2. Save and exit the file. Then set permissions, create the appropriate softlink, and enable it:
# chmod 555 /opt/hptc/lsf/etc/slsf
# ln -s /opt/hptc/lsf/etc/slsf /etc/init.d/slsf
# chkconfig --add slsf
# chkconfig --list slsf
slsf 0:off 1:off 2:off 3:on 4:on 5:on 6:off
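At this point the service can be exercised by hand on one of the listed nodes; a quick check, assuming lsf_daemons accepts the usual start and stop arguments:

# /etc/init.d/slsf start
# /etc/init.d/slsf stop

On a node that is not in the hosts list, the script exits immediately without starting any daemons.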
3. Edit /opt/hptc/systemimager/etc/chkconfig.map and add the following line to
enable this new "service" on all nodes in the cluster:
slsf 0:off 1:off 2:off 3:on 4:on 5:on 6:off
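A quick way to confirm that the entry was added (the path and line are the ones from the previous step):

# grep slsf /opt/hptc/systemimager/etc/chkconfig.map
slsf 0:off 1:off 2:off 3:on 4:on 5:on 6:off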
Adjusting the JOB_STARTER Script for LSF-HPC for SLURM
If the XC cluster version is earlier than v2.1 and LSF-HPC is configured with the recommended JOB_STARTER script, make the following small change to the JOB_STARTER script. At the top of the file, change:

which srun > /dev/null 2> /dev/null
if [ "$?" != "0" ]; then

to the following:

if [ -z "$SLURM_JOBID" ]; then

This change prevents the JOB_STARTER script from trying to invoke srun on the fat nodes.
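For context, the following is a minimal sketch of how the modified test might sit at the top of a JOB_STARTER script; everything other than the SLURM_JOBID test is illustrative and the actual recommended script may differ:

#!/bin/sh
# JOB_STARTER sketch (illustrative): launch the job through srun only when
# it is running inside a SLURM allocation.
if [ -z "$SLURM_JOBID" ]; then
    # No SLURM allocation (for example, on a fat node): run the command directly.
    exec "$@"
fi
# Inside a SLURM allocation: launch the command through srun.
exec srun "$@"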