
This value represents the maximum amount of disk space available on the node with the least disk space; a job that runs on all nodes must account for this value.
Here is an example of the LSF-HPC lshosts command.
$ lshosts
HOST_NAME    type     model     cpuf  ncpus  maxmem  maxswp  server  RESOURCES
lsfhost.loc  SLINUX6  Opteron8  16.0     22   2048M       -  Yes     (slurm)
The lshosts command reports a hyphen (-) for all of the other load index and resource
information. Note that initially SLURM is not configured with any memory or temporary disk
space, so LIM reports the default value of 1 MB for each index.
See the lshosts(1) manpage for more information on this command.
The lsload command reports the load index, that is, the number of current login users on the
LSF execution host.
Here is an example of the LSF-HPC lsload command.
$ lsload
HOST_NAME        status  r15s  r1m  r15m  ut  pg  ls  it  tmp  swp  mem
lsfhost.localdo      ok     -    -     -   -   -   1   -    -    -    -
See the lsload(1) manpage for more information on this command.
In these examples, there are 22 processors on this HP XC system available for use by LSF-HPC.
This information is obtained by LSF-HPC from SLURM, which can be verified with the
SLURM sinfo command:
$ sinfo --Node --long
NODELIST       NODES  PARTITION  STATE  CPUS  MEMORY  TMP_DISK  WEIGHT  FEATURES  REASON
xc5n[1-10,16]     11  lsf        idle      2    2048         1       1  (null)    none
The output of the sinfo command shows that there are eleven nodes available and that each
node has two processors, which accounts for the 22 processors reported to LSF-HPC.
Note that the LSF lshosts command and the SLURM sinfo command both report the
memory for each node as 2,048 MB. This memory value is configured for each node in
/hptc_cluster/slurm/etc/slurm.conf; it is not obtained directly from the nodes.
See the SLURM documentation for more information on configuring the slurm.conf file.
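As an illustration only, node and partition entries in slurm.conf that would produce the
values shown above might resemble the following sketch. The RealMemory and TmpDisk
keywords are standard SLURM configuration parameters, but the exact entries, node names,
and keyword spellings (for example, Procs versus CPUs in later SLURM releases) depend on
your SLURM version and site configuration:
# Illustrative node and partition definitions (values match the sinfo output above)
NodeName=xc5n[1-10,16] Procs=2 RealMemory=2048 TmpDisk=1 State=UNKNOWN
PartitionName=lsf Nodes=xc5n[1-10,16] Default=YES State=UP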
12.6 Launching Jobs with LSF-HPC for SLURM
The LSF-HPC daemons run on only one node, the LSF-HPC Execution Host; therefore, they
can dispatch jobs only on that node. The JOB_STARTER script, described in Section 12.1.1,
ensures that user jobs execute on their reserved nodes, and that these jobs do not contend
for the LSF-HPC Execution Host.
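As a minimal sketch only, a queue-level JOB_STARTER is normally defined in the LSF
lsb.queues file, where LSF substitutes the user command for the %USRCMD placeholder. The
script that HP XC installs is described in Section 12.1.1 and may differ from this
illustration, which simply wraps the user command with the SLURM srun command so that it
runs on the nodes reserved for the job:
Begin Queue
QUEUE_NAME   = normal
# Illustrative only: run the user command under srun in the job's SLURM allocation
JOB_STARTER  = srun %USRCMD
DESCRIPTION  = Jobs are launched on their reserved compute nodes through SLURM
End Queue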
Consider an HP XC system in which node n120 is the LSF-HPC Execution Host and nodes
n1 through n99 are compute nodes. The following series of examples shows jobs launched
without the JOB_STARTER script, with varied results.
Example 12-1 illustrates the launching of a job in its most basic form.
Example 12-1: A Basic Job Launch Without the JOB_STARTER Script Configured
$ bsub -I hostname
Job <20> is submitted to default queue <normal>.
<<Waiting for dispatch...>>
<<starting on lsfhost.localdomain>>
n120