For example, consider an LSF-HPC configuration in which node n20 is the LSF-HPC execution host and nodes
n[1-10] are in the SLURM lsf partition. The default normal queue has the job starter script configured, but the
unscripted queue does not.
Example 13-2. Comparison of Queues and the Configuration of the Job Starter Script
$ bqueues -l normal | grep JOB_STARTER
JOB_STARTER: /opt/hptc/lsf/bin/job_starter.sh
$ bqueues -l unscripted | grep JOB_STARTER
JOB_STARTER:
$ bsub -Is hostname
Job <66> is submitted to the default queue <normal>.
<<Waiting for dispatch...>>
<<Starting on lsfhost.localdomain>>
n10
$ bsub -Is -q unscripted hostname
Job <67> is submitted to queue <unscripted>.
<<Waiting for dispatch...>>
<<Starting on lsfhost.localdomain>>
n20
This release of the HP XC System Software provides an LSF-HPC queue JOB_STARTER script, which is
configured for all default queues during HP XC installation. This JOB_STARTER script performs three tasks:
• It creates an accurate LSB_HOSTS environment variable.
• It creates an accurate LSB_MCPU_HOSTS environment variable.
• It uses a SLURM srun command to launch a user's interactive job on the first allocated compute node.
The LSB_HOSTS and LSB_MCPU_HOSTS environment variables, as initially established by LSF-HPC, do not
accurately reflect the host names of the HP XC system nodes that SLURM allocated for the user's job. This
JOB_STARTER script corrects these environment variables so that existing applications compatible with LSF
can use them without further adjustment.
The SLURM srun command used by the JOB_STARTER script ensures that every interactive job submitted
by a user begins on the first allocated node. Without the JOB_STARTER script, all interactive user jobs
would start on the LSF-HPC execution host. This behavior is not consistent with batch job submissions or
Standard LSF behavior in general, and creates the potential for a bottleneck in performance as both the
LSF-HPC daemons and local user tasks compete for processor cycles.
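The following is a minimal sketch of how a job starter of this kind could perform the three tasks listed above. It is not the HP-supplied script; the SLURM usage shown here (SLURM_NODELIST, scontrol show hostnames, srun -w) assumes the script runs inside the SLURM allocation that LSF-HPC created for the job.
#!/bin/sh
# Hypothetical job starter sketch; the actual
# /opt/hptc/lsf/bin/job_starter.sh shipped with HP XC may differ.

# Expand the compressed SLURM node list (for example, n[1-4])
# into one host name per line.
hosts=$(scontrol show hostnames "$SLURM_NODELIST")

# Task 1: rebuild LSB_HOSTS so it lists the allocated compute nodes
# instead of the LSF-HPC execution host.
LSB_HOSTS=$(echo $hosts)
export LSB_HOSTS

# Task 2: rebuild LSB_MCPU_HOSTS as "host1 ncpus1 host2 ncpus2 ..."
# (one task per node is assumed here for simplicity).
LSB_MCPU_HOSTS=$(for h in $hosts; do printf '%s 1 ' "$h"; done)
export LSB_MCPU_HOSTS

# Task 3: launch the user's command on the first allocated node only;
# LSF appends the user's command line, which arrives here as "$@".
first_node=$(echo "$hosts" | head -n 1)
exec srun -N1 -n1 -w "$first_node" "$@"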
The JOB_STARTER script has one drawback: all interactive I/O passes through the srun command in the
JOB_STARTER script. As a result, full tty support is not available for interactive sessions, so no shell prompt
is displayed when a shell is launched. The workaround is to set your DISPLAY environment variable so that
you can launch an xterm instead of a shell.
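For example, assuming an X server is running on a workstation named mydesktop (a placeholder host name) and that it accepts connections from the compute nodes, an interactive xterm can be requested instead of a shell:
$ export DISPLAY=mydesktop:0
$ bsub -Is xterm
The xterm runs on the first allocated compute node and displays back on the workstation, providing a usable interactive session despite the limited tty support.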
The JOB_STARTER script is located at /opt/hptc/lsf/bin/job_starter.sh, and is preconfigured
for all of the queues created during the default LSF-HPC installation on the HP XC system. HP recommends
that you configure the JOB_STARTER script for all queues.
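For example, a queue stanza in the lsb.queues file with the job starter configured might look similar to the following (only the relevant lines are shown; the DESCRIPTION text is illustrative):
Begin Queue
QUEUE_NAME   = normal
JOB_STARTER  = /opt/hptc/lsf/bin/job_starter.sh
DESCRIPTION  = Default queue with the HP XC job starter
End Queue
After you edit the lsb.queues file, reconfigure LSF (for example, with the badmin reconfig command) so that the change takes effect.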
To disable the JOB_STARTER script, remove it from the lsb.queues configuration file or comment it out.
For more information on the JOB_STARTER option and configuring queues, see Administering Platform LSF
on the HP XC Documentation CD. For more information on configuring JOB_STARTER scripts and how they
work, see the Standard LSF documentation.
SLURM External Scheduler
The integration of LSF-HPC with SLURM includes the addition of a SLURM-based external scheduler. Users
can submit SLURM parameters in the context of their jobs. This enables users to make specific topology-based
allocation requests. See the HP XC System Software User's Guide for more information.
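For example, a user might combine a processor count with a SLURM node-count request. The option name and keyword syntax shown here are illustrative only; the HP XC System Software User's Guide documents the supported form:
$ bsub -n 8 -ext "SLURM[nodes=4]" -I srun hostname
This sketch requests eight processors spread across exactly four nodes.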