each node. Then, the /etc/profile.d/lsf.sh and /etc/profile.d/lsf.csh files, which reference the
appropriate source file upon login, are created.
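For example, each of these files typically contains a single line that sources the corresponding LSF environment setup script. The installation path shown here is illustrative; the actual path depends on where LSF is installed on your system:

# cat /etc/profile.d/lsf.sh
. /opt/lsf/conf/profile.lsf
# cat /etc/profile.d/lsf.csh
source /opt/lsf/conf/cshrc.lsf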
Finally, LSF is configured to start when the HP XC system boots: a soft link is created from
/etc/init.d/lsf to the lsf_daemons startup script provided by LSF. All of this configuration
tailors the installation of LSF to the HP XC system.
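You can verify the resulting startup configuration after installation. The target path of the soft link in this sketch is hypothetical and depends on where LSF is installed on your system:

# ls -l /etc/init.d/lsf                # the soft link created during configuration
/etc/init.d/lsf -> /opt/lsf/etc/lsf_daemons
# chkconfig --list lsf                 # confirm LSF starts at the appropriate run levels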
The following LSF commands are particularly useful:
• The bhosts command is useful for viewing LSF batch host information.
• The lshosts command provides static resource information.
• The lsload command provides dynamic resource information.
• The bsub command is used to submit jobs to LSF.
• The bjobs command provides information on batch jobs.
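For example, a typical submission and monitoring sequence resembles the following; the job script, options, and job ID are illustrative:

$ bsub -n 4 -o myjob.out ./myjob.sh    # submit a 4-way batch job
Job <101> is submitted to default queue <normal>.
$ bjobs 101                            # display the status of the batch job
$ bhosts                               # view LSF batch host information
$ lshosts                              # view static resource information
$ lsload                               # view dynamic load indexes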
For more information on using Standard LSF on the HP XC system, see the Platform LSF documentation
available on the HP XC documentation disk.
Administering LSF-HPC
The Platform Load Sharing Facility for High Performance Computing (LSF-HPC) product is installed and
configured as an embedded component of the HP XC system during installation. This product has been
integrated with SLURM to provide a comprehensive high-performance workload management solution for
the HP XC system. This section describes the LSF-HPC product, its installation, and its operation on the HP
XC system with SLURM, and explains the subtle differences between this product and the Platform Standard
LSF product. This section addresses the following topics:
• Integration of LSF-HPC with SLURM (page 118)
• Installation of LSF-HPC on SLURM (page 122)
• LSF-HPC Startup and Shutdown (page 123)
• Controlling the LSF-HPC Service (page 123)
• Load Indexes and Resource Information (page 124)
• Launching Jobs with LSF-HPC (page 125)
• Monitoring and Controlling LSF-HPC Jobs (page 126)
• Job Accounting (page 127)
• LSF-HPC Failover (page 127)
• LSF-HPC Monitoring (page 129)
• Enhancing LSF-HPC (page 129)
• Configuring an External Virtual Host Name for LSF-HPC on HP XC Systems (page 135)
• LSF Daemon Log Maintenance (page 136)
See “Troubleshooting” (page 159) for information on LSF-HPC troubleshooting.
See “Installing LSF-HPC for SLURM into an Existing Standard LSF Cluster” (page 171) for information on
extending the LSF-HPC cluster.
Integration of LSF-HPC with SLURM
LSF-HPC acts primarily as the workload scheduler and node allocator running on top of SLURM. SLURM
provides a job execution and monitoring layer for LSF-HPC. LSF-HPC uses SLURM interfaces to perform the
following:
• Query system topology information for scheduling purposes.
• Create allocations for user jobs.
• Dispatch and launch user jobs.
• Monitor user job status.
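As an illustration of this division of labor, a parallel job submitted through LSF-HPC results in a SLURM allocation that can be observed with standard SLURM commands. In the following sketch, the bsub -ext option passes an allocation request through to SLURM; the node counts and job are illustrative:

$ bsub -I -n 4 -ext "SLURM[nodes=2]" srun hostname   # submit an interactive 4-way job through LSF-HPC
$ squeue                                             # from another session, view the SLURM allocation created for the job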