HP XC System Software Administration Guide Version 2.1
12.3.2 Shutting Down LSF-HPC
At system shutdown, the /etc/init.d/lsf script also ensures an orderly shutdown of
LSF-HPC, if LSF-HPC is not running on the head node.
You can use the controllsf command, as shown here, to stop LSF-HPC regardless of
where it is active in the HP XC system.
# controllsf stop
If LSF-HPC failover is enabled, LSF-HPC is restarted on the head node when the stopsys
command is executed; this allows jobs to continue to be queued. However, this also means
that the head node remains the LSF execution host after the HP XC system is restarted,
even if another node is specified as the primary LSF execution host, unless the head
node is also rebooted.
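The stop sequence can be sketched as follows. This is an illustrative fragment only: controllsf exists only on an HP XC system, so the sketch checks for it first rather than assuming it is installed.

```shell
#!/bin/sh
# Illustrative sketch: stop LSF-HPC system-wide before shutting the system down.
# controllsf is HP XC-specific; skip gracefully where it is absent.
if command -v controllsf >/dev/null 2>&1; then
    controllsf stop                               # stops LSF-HPC wherever it is active
    lsf_state="stop requested"
else
    lsf_state="controllsf unavailable on this host"
fi
echo "$lsf_state"
```

If failover is enabled, remember that a later stopsys restarts LSF-HPC on the head node, so the stop is not permanent across a system shutdown.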
12.4 Controlling the LSF-HPC Service
You can use the service command to start or stop the LSF-HPC service on the HP XC
system, or to obtain the system’s current status:
service lsf start
This command is primarily of interest for automated startup. If the current node is the
primary LSF execution host, it sets the state to "running", then starts LSF-HPC unless
it is already running somewhere on the HP XC system.
service lsf stop
This command stops the LSF-HPC environment if it is running on the current node.
Invoking this command on the head node shuts down the LSF-HPC environment
regardless of where it is running on the HP XC system, and sets the state to "shut down"
to prevent any attempt to fail over the LSF-HPC service to another node.
service lsf status
This command reports the current state (up or down) of LSF.
This command has the same function as controllsf status.
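The three service actions above can be exercised together as in the following sketch. The fragment is illustrative only: the lsf init script is present only on an HP XC system, so the sketch guards the calls on its existence.

```shell
#!/bin/sh
# Illustrative sketch of the three service actions described above.
# /etc/init.d/lsf exists only on an HP XC system, so guard the calls.
if [ -x /etc/init.d/lsf ]; then
    service lsf status    # report whether LSF-HPC is up or down
    service lsf stop      # on the head node, shuts LSF-HPC down system-wide
    service lsf start     # sets state to "running", starts LSF-HPC if not running
    result="service actions issued"
else
    result="no /etc/init.d/lsf on this host"
fi
echo "$result"
```

Note that stop on the head node also sets the "shut down" state, so a later automated failover attempt is suppressed until the service is started again.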
12.5 Load Indexes and Resource Information
LSF-HPC gathers limited resource information and load indexes from the LSF execution host
and from its integration with SLURM. Not all indexes are reported because SLURM does not
provide the same information that LSF-HPC usually reports.
The LSF-HPC lshosts and lsload commands are two common commands for obtaining
resource information from LSF-HPC.
The lshosts command reports the following resource information:
ncpus
The total number of available processors within the SLURM ’lsf’
partition.
maxmem
The minimum value of configured SLURM memory for all nodes.
This value is the maximum memory available on the node with
the least memory; a job that runs on all nodes must account
for this value.
maxtmp
The minimum value of configured SLURM TmpDisk space for
all nodes.
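The maxmem and maxtmp values are both a minimum taken across per-node configured values. The following sketch shows that calculation with hypothetical node memory values; on a live system, per-node memory for the lsf partition could be listed with something like sinfo -p lsf -h -o "%m".

```shell
#!/bin/sh
# Sketch of how maxmem is derived: the smallest configured memory (in MB)
# across all nodes in the SLURM 'lsf' partition.
# The node values below are hypothetical, standing in for sinfo output.
node_mem="4096
8192
2048
8192"
# The minimum across nodes is the reported maxmem.
maxmem=$(printf '%s\n' "$node_mem" | sort -n | head -1)
echo "maxmem = ${maxmem} MB"    # the least-memory node bounds any all-node job
```

maxtmp is derived the same way from each node's configured SLURM TmpDisk value.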
12-6 LSF-HPC Administration