Using PBS Professional on HP XC
Configuring PBS Professional™ under HP XC
Unless specified otherwise, enter all of the following configuration commands on the PBS
server (the front-end node).
Configuring the OpenSSH scp utility
By default, PBS Professional™ uses the rcp utility to copy files between nodes in the cluster. The
default HP XC configuration disables rcp in favor of the more secure scp command provided by
OpenSSH. To use PBS on XC, configure it to default to scp as follows:
1. Using a text editor, open the file /etc/pbs.conf on the server node.
2. Search for the configuration variable PBS_SCP, and assign it the value /usr/bin/scp as
follows:
PBS_SCP=/usr/bin/scp
3. Repeat this edit on each PBS execution node before performing the steps in the section
titled “Replicating execution nodes”.
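The edit in step 2 can be sketched as follows. This example runs against a throwaway copy of the file so it is safe to experiment with; on a real system, edit /etc/pbs.conf directly as root. The PBS_SERVER value shown is a placeholder, not part of the required change.

```shell
# Create a disposable stand-in for /etc/pbs.conf (PBS_SERVER=headnode is
# a hypothetical value for illustration only):
conf=$(mktemp)
printf 'PBS_SERVER=headnode\nPBS_SCP=/usr/sbin/scp\n' > "$conf"

# Point PBS_SCP at the OpenSSH scp binary:
sed -i 's|^PBS_SCP=.*|PBS_SCP=/usr/bin/scp|' "$conf"

# Verify the change:
grep '^PBS_SCP' "$conf"    # prints: PBS_SCP=/usr/bin/scp
```

The same sed command, applied to /etc/pbs.conf, works unchanged on the server node and on each execution node.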
Removing nodes from the SLURM or LSF configuration
Prevent SLURM or LSF from allocating jobs to PBS execution nodes as follows:
1. Remove the PBS execution nodes from all SLURM partitions specified in the file
/hptc_cluster/slurm/etc/slurm.conf. See the HP XC System Software Administration
Guide for details on configuring SLURM partitions.
2. Enter the following reconfiguration commands to implement the changes:
# scontrol reconfig
# badmin reconfig
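As a sketch of step 1, the change to slurm.conf amounts to shrinking the partition's node list. The partition name and node names below are hypothetical; your slurm.conf will differ.

```
# /hptc_cluster/slurm/etc/slurm.conf (excerpt)
# Before: nodes n15 and n16 are about to become PBS execution nodes
PartitionName=lsf Nodes=n[1-16] Default=YES

# After: remove n15 and n16 from the partition
PartitionName=lsf Nodes=n[1-14] Default=YES
```

The scontrol reconfig and badmin reconfig commands then make SLURM and LSF, respectively, reread their configurations without a restart.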
Adding nodes to the PBS Professional™ configuration
1. Create a list of nodes to manage in a file named PBS_HOME/server_priv/nodes, using the
following syntax:
<node_name>[:ts] pcpus=<number_of_cpus>
Where:
a. One node is specified per line.
b. <node_name> - Specifies the node’s name in the cluster, such as n12.
c. [:ts] - Optionally identifies the node as time-shared. Time-shared nodes are not
exclusively allocated to a single job, and might be oversubscribed (number of
jobs > number of CPUs) if the local policy permits.
d. pcpus - Specifies a numerical attribute; <number_of_cpus> is the number of
physical CPUs in the server.
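A small PBS_HOME/server_priv/nodes file following this syntax might look as shown below. The node names and CPU counts are hypothetical examples, not values required by PBS.

```
n12 pcpus=2
n13 pcpus=2
n14:ts pcpus=4
n15:ts pcpus=4
```

Here n12 and n13 are dedicated two-CPU nodes, while n14 and n15 are four-CPU time-shared nodes that local policy may oversubscribe.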