# controllsf show
LSF is currently shut down, and assigned to node .
Failover is disabled.
Head node is preferred.
The primary LSF host node is xc128.
SLURM affinity is enabled.
The virtual hostname is "xclsf".
A.8 Starting LSF on the HP XC System
At this point, lsadmin reconfig followed by badmin reconfig can be run within the existing
LSF cluster (on plain in our example) to update LSF with the latest configuration changes. A
subsequent lshosts or bhosts command displays the new HP XC "node", although lshosts
reports it as UNKNOWN and bhosts reports it as unavailable.
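For example, the reconfiguration commands are run on a host in the existing LSF cluster
(plain in our example):
# lsadmin reconfig
# badmin reconfig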
LSF can now be started on the HP XC system:
# controllsf start
This command sets up the virtual LSF alias on the appropriate node and then starts the LSF
daemons. It also creates a $LSF_ENVDIR/hosts file (in our example, $LSF_ENVDIR =
/shared/lsf/conf). This hosts file is used by LSF to map the LSF alias to the actual host name
of the node in the HP XC system that is running LSF. See the Platform LSF documentation for
information on hosts files.
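For illustration, an entry in the generated hosts file might look like the following, mapping
the virtual host name xclsf to the primary LSF host node xc128; the IP address shown here is
hypothetical:
172.20.0.128   xc128   xclsf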
When the LSF daemons have started up and synchronized their data with the rest of the LSF
cluster, the lshosts and bhosts commands display all the nodes with their appropriate values
and indicate that they are ready for use:
$ lshosts
HOST_NAME  type     model     cpuf  ncpus  maxmem  maxswp  server  RESOURCES
plain      LINUX86  PC1133    23.1      2    248M   1026M  Yes     ()
xclsf      SLINUX6  Intel_EM  60.0    256   3456M       -  Yes     (slurm)
$ bhosts
HOST_NAME  STATUS  JL/U  MAX  NJOBS  RUN  SSUSP  USUSP  RSV
plain      ok      -       2      0    0      0      0    0
xclsf      ok      -     256      0    0      0      0    0
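At this point a simple interactive job can be submitted through the virtual host to confirm
that scheduling works; this is a minimal sketch, assuming two free CPUs and that srun is in
the user's path:
$ bsub -I -n 2 srun hostname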