HP XC System Software Administration Guide Version 3.1

A.8 Starting LSF on the HP XC System
At this point, lsadmin reconfig followed by badmin reconfig can be run within the existing LSF
cluster (on plain in our example) to update LSF with the latest configuration changes. A subsequent
lshosts or bhosts command displays the new HP XC "node", although lshosts reports it as UNKNOWN
and bhosts reports it as unavailable until the LSF daemons are started on the HP XC system.
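For example, the reconfiguration described above could be run as follows from a node in the existing LSF cluster (plain in this appendix); the prompts assume a root shell on that node:

# lsadmin reconfig
# badmin reconfig

lsadmin reconfig instructs the LIM daemons to reread the configuration files, and badmin reconfig does the same for the batch system.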
LSF can now be started on the HP XC system:
# controllsf start
This command sets up the virtual LSF alias on the appropriate node and then starts the LSF daemons. It
also creates a $LSF_ENVDIR/hosts file (in our example, $LSF_ENVDIR = /shared/lsf/conf).
LSF uses this hosts file to map the LSF alias to the actual host name of the node in the HP XC system
that is currently running LSF. See the Platform LSF documentation for information on hosts files.
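For illustration only, the generated hosts file might contain an entry similar to the following; the IP address and node name shown here are placeholders, not values from this example installation:

172.20.0.128 xc128 xclsf

The entry maps the virtual LSF alias (xclsf) to the actual host name of the HP XC node on which the LSF daemons are running, so that LSF clients elsewhere in the cluster can resolve the alias.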
When the LSF daemons have started up and synchronized their data with the rest of the LSF cluster, the
lshosts and bhosts commands display all the nodes with their appropriate values and indicate that
they are ready for use:
$ lshosts
HOST_NAME type model cpuf ncpus maxmem maxswp server RESOURCES
plain LINUX86 PC1133 23.1 2 248M 1026M Yes ()
xclsf SLINUX6 Intel_EM 60.0 256 3456M - Yes (slurm)
$ bhosts
HOST_NAME STATUS JL/U MAX NJOBS RUN SSUSP USUSP RSV
plain ok - 2 0 0 0 0 0
xclsf ok - 256 0 0 0 0 0
A.9 Running Sample Jobs
Example A-1 Running Jobs as a User on an External Node Launching to a Linux x86 Resource
$ bsub -I -n1 -R type=LINUX86 hostname
Job <411> is submitted to default queue <normal>.
<<Waiting for dispatch ...>>
<<Starting on plain>>
plain
Example A-2 Running Jobs as a User on an External Node Launching to an HP XC Resource
$ bsub -I -n1 -R type=SLINUX64 hostname
Job <412> is submitted to default queue <normal>.
<<Waiting for dispatch ...>>
<<Starting on xclsf>>
xc127