HP SVA V2.0 Software Installation Guide

Note that you can use the Node Configuration tool to change the node type at any time
after installation. See the HP SVA System Administration Guide for more information.
6. This step applies only when you install the SVA Software Kit on an existing HP XC
cluster, and only if the number of SVA nodes exceeds the maximum specified during
the cluster_prep installation task. Run the following command immediately before
the HP XC cluster_prep step:
# /opt/hptc/sbin/reset_db
1.4.5 Additional SVA Configuration
After you have completed the Configuring and Imaging steps as described in Section 1.4.4, follow
these SVA-specific steps:
1. Modify the default SLURM partition configuration. This differs from the step in the HP
XC Installation Guide titled Finalize the Configuration of Compute Resources. Make one of the
following two changes to the /hptc_cluster/slurm/etc/slurm.conf file, depending
on whether you intend to use LSF on the cluster.
If you are using SLURM only and not LSF, change
/hptc_cluster/slurm/etc/slurm.conf as follows:
PartitionName=lsf RootOnly=YES Shared=FORCE Nodes=<nodelist>
To:
PartitionName=lsf Default=yes RootOnly=NO Shared=NO Nodes=<nodelist>
Note: Do not include the head node (the head node is the highest-numbered node) as
one of the nodes in <nodelist>.
You also need to add an entry to the prolog/epilog section of the slurm.conf file:
Epilog=/opt/sva/sbin/sva_epilog.clean
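As a sketch of the SLURM-only case, the following shell fragment applies the partition change and the epilog entry to a copy of the file. The path /tmp/slurm.conf.example and the node list n[1-3] are illustrative assumptions, not values from this guide; edit the real /hptc_cluster/slurm/etc/slurm.conf in the same way.

```shell
#!/bin/sh
# Sketch only: demonstrate the SLURM-only edit on a throwaway copy.
# The file path and Nodes= value below are assumptions for illustration.
CONF=/tmp/slurm.conf.example

# Simulate the shipped default partition line.
cat > "$CONF" <<'EOF'
PartitionName=lsf RootOnly=YES Shared=FORCE Nodes=n[1-3]
EOF

# Rewrite the lsf partition as the default, non-root-only, non-shared partition.
sed -i 's/^PartitionName=lsf RootOnly=YES Shared=FORCE/PartitionName=lsf Default=yes RootOnly=NO Shared=NO/' "$CONF"

# Add the SVA epilog entry to the prolog/epilog section.
echo 'Epilog=/opt/sva/sbin/sva_epilog.clean' >> "$CONF"

# Show the result.
grep '^PartitionName=lsf' "$CONF"
grep '^Epilog=' "$CONF"
```

After editing the real configuration file, running scontrol reconfigure as root makes the SLURM controller reread slurm.conf.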
To use LSF, create two partitions: one for visualization jobs and one for LSF jobs. Each
node in the cluster must appear in only one partition. For example, assume a cluster
with five nodes in which node 5 is the head node, nodes 1 and 2 are visualization
nodes, and nodes 3 and 4 are compute nodes. Change
/hptc_cluster/slurm/etc/slurm.conf as follows:
PartitionName=lsf RootOnly=YES Shared=FORCE Nodes=n[1-5]
To:
PartitionName=lsf RootOnly=YES Shared=FORCE Nodes=n4
PartitionName=vis Default=yes RootOnly=NO Shared=NO Nodes=n[1-3]
Note: Do not include the head node (the head node is the highest-numbered node) as
one of the nodes in either partition (lsf or vis).
In this example, compute nodes assigned to the vis partition are not available for LSF
jobs. The effect is that a single compute node (n3) is in the vis partition and a
single compute node (n4) is in the lsf partition.
You also need to add an entry to the prolog/epilog section of the slurm.conf file:
Epilog=/opt/sva/sbin/sva_epilog.clean
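The LSF case from the five-node example can be sketched the same way. The fragment below splits the single default lsf partition into the lsf and vis partitions on a throwaway copy of the file; the path /tmp/slurm.conf.lsf-example is an assumption for illustration.

```shell
#!/bin/sh
# Sketch only: split the default lsf partition for the five-node example
# (n5 = head node, n1-n2 visualization, n3-n4 compute) on a copy of the file.
CONF=/tmp/slurm.conf.lsf-example

# Simulate the shipped default partition line for the five-node cluster.
cat > "$CONF" <<'EOF'
PartitionName=lsf RootOnly=YES Shared=FORCE Nodes=n[1-5]
EOF

# Restrict the lsf partition to compute node n4 and add a vis partition
# holding n1-n3; the head node n5 appears in neither partition.
sed -i 's/^PartitionName=lsf .*/PartitionName=lsf RootOnly=YES Shared=FORCE Nodes=n4\nPartitionName=vis Default=yes RootOnly=NO Shared=NO Nodes=n[1-3]/' "$CONF"

# Add the SVA epilog entry.
echo 'Epilog=/opt/sva/sbin/sva_epilog.clean' >> "$CONF"

# Show the resulting partition and epilog lines.
cat "$CONF"
```

As in the SLURM-only case, apply the same edits to the real /hptc_cluster/slurm/etc/slurm.conf and have the controller reread it with scontrol reconfigure.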
2. Set the protections on the jobacct.log file using the following command: