HP XC System Software Administration Guide Version 3.1

# controllsf start
This command searches the list of nodes that provide the lsf service until it finds a node on which to
start LSF-HPC with SLURM.
Alternatively, you can invoke the following command to start LSF-HPC with SLURM on the current node:
# controllsf start here
15.5.2 Shutting Down LSF-HPC with SLURM
At system shutdown, the /etc/init.d/lsf script ensures an orderly shutdown of LSF-HPC with
SLURM.
You can use the controllsf command, as shown here, to stop LSF-HPC with SLURM regardless of
where it is active in the HP XC system:
# controllsf stop
15.6 Controlling the LSF-HPC with SLURM Service
You can use the service command to start or stop the LSF-HPC with SLURM service on the HP XC
system, or to obtain the system's current status:
service lsf start
This command is primarily of interest for automated startup. If the current node is the primary LSF
execution host, the command sets the state to RUNNING and then starts LSF-HPC with SLURM, unless it
is already running somewhere on the HP XC system. If the node is not the primary LSF execution host,
the command is ignored.
service lsf stop
This command stops the LSF-HPC with SLURM environment if it is running on the current node.
Invoking this command on the LSF execution host or on the head node shuts down the LSF-HPC
with SLURM environment regardless of where it is running on the HP XC system, and sets the state to
SHUT DOWN to prevent any attempt to fail over the LSF-HPC with SLURM service to another node.
service lsf status
This command reports the current state (UP or DOWN) of LSF-HPC with SLURM.
This command has the same function as controllsf status.
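Because the reported states are fixed keywords (UP or DOWN), the status command lends itself to simple scripting. The following sketch wraps the state check in a small helper; the exact wording of the `controllsf status` output is an assumption, so adjust the pattern to match what your system actually prints:

```shell
#!/bin/sh
# check_lsf_state: classify the text reported by `controllsf status`
# as "up" or "down". The UP keyword matches the states described
# above, but the full output format of the command is an assumption.
check_lsf_state() {
    case "$1" in
        *UP*) echo up ;;
        *)    echo down ;;
    esac
}

# In a monitoring script you would feed it the live output, for example:
#   [ "$(check_lsf_state "$(controllsf status)")" = down ] && controllsf start
check_lsf_state "LSF is currently UP"
```

Note that restarting automatically whenever the state is DOWN may be undesirable if an administrator deliberately set the state to SHUT DOWN, so treat the restart line in the comment as illustrative only.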
15.7 Launching Jobs with LSF-HPC with SLURM
You cannot submit LSF-HPC with SLURM jobs as superuser (root). For administrative tasks, such as
testing a new queue configuration, you may find it convenient to run jobs as the local lsfadmin user.
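A quick interactive test of this kind can be wrapped as follows. The helper only prints the su command line so that you can review it before running it for real; the helper name is ours, not part of HP XC:

```shell
#!/bin/sh
# submit_as_lsfadmin: print the su invocation that would submit the
# given bsub command line as the local lsfadmin user instead of root.
# Printing (rather than executing) lets you inspect the command first;
# remove the echo to run it directly. The helper name is hypothetical.
submit_as_lsfadmin() {
    echo "su - lsfadmin -c '$*'"
}

submit_as_lsfadmin bsub -I hostname
# prints: su - lsfadmin -c 'bsub -I hostname'
```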
The LSF-HPC with SLURM daemons run on one node only: the LSF execution host. Therefore, they can
dispatch jobs only on that node. The JOB_STARTER script, described in “Job Starter Scripts” (page 179),
ensures that user jobs execute on their reserved nodes, and that these jobs do not contend for the LSF
execution host.
Consider an HP XC system in which node n120 is the LSF execution host, and nodes n1 through n99 are
compute nodes. The following series of examples shows jobs launched without the JOB_STARTER script,
with varying results.
Example 15-3 illustrates the launching of a job in its most basic form.
Example 15-3 Basic Job Launch Without the JOB_STARTER Script Configured
$ bsub -I hostname
Job <20> is submitted to default queue <normal>.
<<Waiting for dispatch...>>
<<starting on lsfhost.localdomain>>
n120