
Note: Do not include the head node (the highest-numbered node) as one of the
nodes in either partition (lsf or vis).
Compute nodes that you assign to the vis partition are not available for jobs
that run under LSF. In this example, the effect is a single compute node (3) in the vis partition and
a single compute node (4) in the lsf partition.
You also need to add an entry to the prolog/epilog section of the slurm.conf file:
Epilog=/opt/sva/sbin/sva_epilog.clean
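For reference, a minimal sketch of how these entries might appear in slurm.conf follows. The node names n3 and n4 match the example above, but the remaining partition options (Default, State) are illustrative assumptions, not required values:
PartitionName=vis Nodes=n3 State=UP
PartitionName=lsf Nodes=n4 Default=YES State=UP
Epilog=/opt/sva/sbin/sva_epilog.clean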
2. Set the protections on the jobacct.log file using the following command:
# chmod a+r /hptc_cluster/slurm/job/jobacct.log
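To verify the change, you can list the file permissions; the mode should now include read access for all users (the owner and group shown depend on your site):
# ls -l /hptc_cluster/slurm/job/jobacct.log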
3. You must reboot the head node. You can do so with the following command:
# /sbin/shutdown -r now
Rebooting restarts a variety of services so that they use NIS, which is faster than restarting the
services manually.
For workstation head nodes, the Kudzu Hardware Discovery Utility starts automatically during the next
reboot. When Kudzu prompts you, press any key, and then select the Ignore the device option.
This allows the nVidia driver installed by SVA to be used without further interruption.
Golden Image the Render/Display Nodes
This process propagates the image from the head node to the render and display nodes in the cluster.
Complete the imaging process as documented in the HP XC Installation Guide: Configuring and Imaging
the System. Begin at the Start the System and Propagate the Golden Image section.
This next step applies only if you are installing the SVA Software Kit on an existing HP XC cluster.
In that case, run the following command immediately before you begin the Start the System and
Propagate the Golden Image section:
# setnetboot -node n[1-8]
In this command, n represents the default prefix for the cluster. For example, if you specified viz
for your cluster prefix at an earlier stage of the installation and have eight visualization nodes
to image (not counting the head node), enter the command shown below.
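# setnetboot -node viz[1-8]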
Generating an SVA Site Configuration File
This process is documented in detail in the SVA System Administration Guide.
From the head node, enter the following command (requires root privileges):
# svaconfigure
Verifying the HP XC System
See the HP XC Installation Guide: Verifying the System for the steps to run the HP XC Operation
Verification Procedure (OVP). Use the OVP to verify that HP XC is installed correctly.
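The OVP is started from the head node as root. As a sketch only, the invocation typically looks like the following; the command name ovp is an assumption here, so confirm the exact name and options in that guide:
# ovp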
Main Installation Is Complete
At this point, you have completed the main parts of the HP XC and SVA installation. Some
additional verification and configuration steps are SVA-specific; these are documented in Additional
Configuration Tasks (page 16).
Installing the SVA Software Kit on an Existing HP XC Cluster
This section explains how to install the SVA Software Kit on an existing HP XC Cluster; that is, when SVA
nodes are fully integrated into a compute cluster running HP XC.