
The bqueues -l command displays the full queue configuration, including whether a job starter
script has been configured. See the Platform LSF documentation or bqueues(1) for more information
about this command.
For example, consider an LSF-HPC with SLURM configuration in which node n20 is the LSF execution
host and nodes n[1-10] are in the SLURM lsf partition. The default normal queue is configured with
the job starter script; the unscripted queue is not.
Example 15-2 Comparison of Queues and the Configuration of the Job Starter Script
$ bqueues -l normal | grep JOB_STARTER
JOB_STARTER: /opt/hptc/lsf/bin/job_starter.sh
$ bqueues -l unscripted | grep JOB_STARTER
JOB_STARTER:
$ bsub -Is hostname
Job <66> is submitted to the default queue <normal>.
<<Waiting for dispatch...>>
<<Starting on lsfhost.localdomain>>
n10
$ bsub -Is -q unscripted hostname
Job <67> is submitted to queue <unscripted>.
<<Waiting for dispatch...>>
<<Starting on lsfhost.localdomain>>
n20
This release of the HP XC System Software provides an LSF queue JOB_STARTER script, which is configured
for all default queues during HP XC installation. This JOB_STARTER script performs three tasks:
•   It creates an accurate LSB_HOSTS environment variable.
•   It creates an accurate LSB_MCPU_HOSTS environment variable.
•   It uses a SLURM srun command to launch a user's interactive job on the first allocated compute
    node.
The LSB_HOSTS and LSB_MCPU_HOSTS environment variables, as initially established by LSF-HPC with
SLURM, do not accurately reflect the host names of the HP XC system nodes that SLURM allocated for
the user's job. This JOB_STARTER script corrects these environment variables so that existing applications
compatible with LSF can use them without further adjustment.
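The following fragment is a minimal sketch of this correction, not the shipped script; it assumes
the job starter runs inside the SLURM allocation that LSF-HPC with SLURM created for the job, so
that srun can report one host name per allocated task:
#!/bin/sh
# Hypothetical sketch only; the shipped /opt/hptc/lsf/bin/job_starter.sh may differ.
# Ask SLURM for one short host name per allocated task in this job's allocation.
hosts=$(srun hostname -s)
# LSB_HOSTS lists one host name per job slot, separated by spaces.
LSB_HOSTS=$(echo "$hosts" | tr '\n' ' ')
export LSB_HOSTS
# LSB_MCPU_HOSTS pairs each host name with its slot count, for example "n1 2 n2 2".
LSB_MCPU_HOSTS=$(echo "$hosts" | sort | uniq -c | awk '{printf "%s %d ", $2, $1}')
export LSB_MCPU_HOSTS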
The SLURM srun command used by the JOB_STARTER script ensures that every interactive job submitted
by a user begins on the first allocated node. Without the JOB_STARTER script, all interactive user jobs
would start on the LSF execution host. This behavior is inconsistent with batch job submissions and
with Standard LSF-HPC behavior in general, and it creates a potential performance bottleneck because
the LSF-HPC with SLURM daemons and local user tasks compete for processor cycles.
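Continuing the hypothetical sketch above, the launch step can reduce to a single srun call that
starts the user's command (passed to the job starter as its arguments) on a single node of the
allocation:
# Run the user's command on one node of the SLURM allocation.
exec srun -N1 -n1 "$@"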
The JOB_STARTER script has one drawback: all interactive I/O passes through the srun command in the
JOB_STARTER script, so full tty support is unavailable for interactive sessions and no prompt appears
when a shell is launched. The workaround is to set your display so that you can launch an xterm
instead of a shell.
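For example, assuming a desktop X server on a host named mydesk (a hypothetical name) that accepts
connections from the compute nodes, you could start an interactive xterm as follows:
$ export DISPLAY=mydesk:0
$ bsub -Is xterm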
The JOB_STARTER script is located at /opt/hptc/lsf/bin/job_starter.sh, and is preconfigured
for all the queues created during the default LSF-HPC with SLURM installation on the HP XC system. HP
recommends that you configure the JOB_STARTER script for all queues.
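For example, a queue definition in the lsb.queues file that uses the script might look like the
following abbreviated stanza (the queue name and priority shown here are illustrative):
Begin Queue
QUEUE_NAME   = normal
PRIORITY     = 30
JOB_STARTER  = /opt/hptc/lsf/bin/job_starter.sh
DESCRIPTION  = Default queue with the HP XC job starter configured.
End Queue
After editing lsb.queues, run badmin reconfig to make LSF reread the configuration.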
To disable the JOB_STARTER script, remove or comment out the JOB_STARTER line in the lsb.queues
configuration file. For more information on the JOB_STARTER option and configuring queues, see
Administering Platform LSF on the HP XC Documentation CD.
For more information on configuring JOB_STARTER scripts and how they work, see the Standard LSF
documentation.