LSF Version 7.3 - Using Platform LSF HPC

Operating Platform LSF HPC for Linux/QsNet
RMS hosts and RMS jobs
Platform LSF RMS topology support plugin
LSF scheduling policies and RMS topology support
LSF host preference and RMS allocation options
RMS rail allocation options
RMS hosts and RMS jobs
An RMS host has the rms Boolean resource in the RESOURCES column of the Host section in lsf.cluster.cluster_name.
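For example, an RMS host can be identified in the Host section of lsf.cluster.cluster_name as sketched below; the host names, load thresholds, and resource lists are illustrative and only a subset of the Host section columns is shown:

Begin Host
HOSTNAME   model   type   server   r1m   RESOURCES
hostA      !       !      1        3.5   (linux rms)      # RMS host: has the rms Boolean resource
hostB      !       !      1        3.5   (linux)          # non-RMS host
End Host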
An RMS job has appropriate external scheduler options specified at the command line (bsub -extsched) or at the queue level (DEFAULT_EXTSCHED or MANDATORY_EXTSCHED in the rms queue in lsb.queues).
RMS jobs only run on RMS hosts, and non-RMS jobs only run on non-RMS hosts.
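For example, either of the following marks a submission as an RMS job; the queue name, allocation option, slot count, and job command are illustrative of the syntax rather than required values:

# Job-level external scheduler option at submission time
bsub -n 8 -q rms -extsched "RMS[RMS_SNODE]" prun my_app

# Queue-level default in lsb.queues: jobs submitted to this queue become RMS jobs
Begin Queue
QUEUE_NAME       = rms
DEFAULT_EXTSCHED = RMS[RMS_SNODE]
DESCRIPTION      = Queue for jobs that run on RMS hosts
End Queue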
Platform LSF RMS topology support plugin
The Platform LSF RMS external scheduler plugin runs on each LSF host within an RMS partition. The RMS plugin is started by mbschd and handles all communication between the LSF scheduler and RMS. It translates LSF concepts (hosts and job slots) into RMS concepts (nodes, number of CPUs, allocation options, topology).
The Platform LSF topology adapter for RMS (RLA) is located on each LSF host within an RMS partition. RLA is started by sbatchd and is the interface between the LSF RMS plugin and the RMS system.
To schedule a job, the RMS external scheduler plugin calls RLA to:
Report the number of free CPUs on every host requested by the job
Allocate an RMS resource with the specified topology
Deallocate RMS resources when the job finishes
LSF scheduling policies and RMS topology support
Supported RMS prun allocation options

RMS_SLOAD or RMS_SNODE                                        Yes   Yes   Yes   Yes
RMS_SLOAD or RMS_SNODE with nodes/ptile/base specification    Yes   Yes   Yes   Yes
RMS_MCONT                                                     Yes   Yes   Yes   Yes
RMS_MCONT with nodes/ptile/base specification                 Yes   Yes   Yes   Yes
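For example, assuming the -extsched "RMS[...]" syntax shown in the option mapping below, an RMS_MCONT allocation combined with a nodes specification might be requested as follows; the slot count, node count, queue name, and job command are illustrative:

bsub -n 16 -q rms -extsched "RMS[RMS_MCONT; nodes=4]" prun my_app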
prun option                                                LSF equivalent
-B (base node index)                                       -extsched "RMS[base=base_node_name]"
-C (number of CPUs per node; -C is obsolete in RMS prun)   -extsched "RMS[ptile=cpus_per_node]"
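For example, the base and ptile options can take the place of the prun -B and -C options at submission time; the host name hostA and the value of 2 CPUs per node are illustrative:

bsub -n 8 -q rms -extsched "RMS[base=hostA; ptile=2]" prun my_app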