HP Scalable Visualization Array Version 1.1 System Administration Guide

DoubleTall,TripleWide
Choose the Display Surfaces to change from the previous list.
Place all names on a single line, separated by spaces or commas.
You may use wildcards to specify more than one surface:
*
Select the node that will be used to replace syntho7:
(press ENTER to list available display nodes): syntho20
syntho7 has been replaced by syntho20 in the following Display Surfaces:
DoubleTall,TripleWide
Creating Aliases for Display Surfaces
You can change the generated Display Surface names and assign site-specific names to the Display Surfaces.
Instead of modifying the Site Configuration File, which changes if another discovery is needed, you can
define alias names for Display Surfaces in the /opt/sva/etc/alias.conf file.
This file contains a generated default entry for each Display Surface in the form name=name, which makes
it easy to locate each Display Surface; the alias goes on the left of the equal sign and the generated Display
Surface name on the right. The file format is as follows:
[SVA_ALIAS]
JacksTest=SVA_1_1
Chris_Debug=SVA_1_2
O&G_Demo=SVA_3_1
Four_Flat_Panels=SVA_4_1
TheaterOne=SVA_9_1
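The mapping above can be exercised with a small illustrative lookup. This sketch is not part of the SVA tools; it writes a local copy of a few example entries and resolves an alias to its generated Display Surface name with standard shell utilities.

```shell
# Illustrative only: look up an alias in an alias.conf-style file.
# Writes a local example copy; the real file is /opt/sva/etc/alias.conf.
cat > alias.conf <<'EOF'
[SVA_ALIAS]
JacksTest=SVA_1_1
TheaterOne=SVA_9_1
EOF

# resolve ALIAS -> generated Display Surface name
resolve() { grep "^$1=" alias.conf | cut -d= -f2; }

resolve TheaterOne   # prints SVA_9_1
```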
Configuration File Checker
The Configuration File Checker checks the syntax of the Site Configuration File. See Chapter 2 for information
on using this tool.
Accessing Data Files
Data set sizes can range from less than 1GB to more than 100GB. For example, seismic data sets can range
from 1GB to 128GB, and medical data sets from 1GB to 50GB.
Applications can access data files using NFS or the HP Scalable File Share (HP SFS) product, which is based
on the Lustre file system. When visualization nodes are integrated into a cluster with an HP SFS, they access
this file system using the System Interconnect (SI). When the HP SFS is in a separate cluster and not accessible
by the SI, visualization nodes access the file system using GigE. Applications can also copy data files to all
the visualization nodes.
Depending on the size of the data sets and the configuration of the SVA and related systems, you have
several options to implement access to the data:
•   Copy the data files to every visualization node. Local disk access is fast, but requires considerable disk
    space on each node. This method is not recommended if the data files change frequently. Another
    disadvantage of this technique is that the data is lost whenever a node is re-imaged, an occasional
    occurrence in an HP XC cluster environment.
    The /tmp directory is a reasonable location for locally stored data.
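The copy step above can be sketched as a simple loop. The node names and data path are examples, not values from your Site Configuration File; the commands are printed rather than executed so they can be reviewed before running them for real.

```shell
# Sketch only: stage one data set in /tmp on every visualization node.
# NODES and DATA are illustrative; substitute your own node list and path.
NODES="syntho1 syntho2 syntho3"
DATA=/data/demo/seismic.vol

for node in $NODES; do
    # Echoed as a dry run; remove 'echo' to perform the copies.
    echo scp "$DATA" "$node:/tmp/"
done
```

On an HP XC cluster, a parallel copy tool such as pdcp (if installed) can replace the loop and copy to all nodes at once.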
•   Copy the data files to the head node, export the file share from the head node, and mount the share
    on all the other nodes using the SI. You can place the directories designated to contain the data under
    the /var directory, which is then served using automount.
    This approach takes advantage of the SI's high bandwidth if the SVA configuration uses an interconnect
    such as InfiniBand® or Myrinet®. However, disk space on the head node is limited. Additionally,
    network traffic congestion can occur on the SI between the file data flow and the I/O traffic generated
    by the visualization application.
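As a hedged sketch of the export-and-mount option, the fragment below shows a generic NFS configuration. The directory, node pattern, and interconnect hostname (head-ic0) are assumptions for illustration, not SVA-defined values; your site's automount maps may handle the client side instead.

```shell
# Hypothetical layout: head node exports /var/sva-data over the SI.
#
# /etc/exports entry on the head node (node name pattern is an example):
#   /var/sva-data  syntho*(ro,async)
#
# Then, on the head node, re-export the file systems:
#   exportfs -ra
#
# On each visualization node, mount over the SI interface:
#   mount -t nfs head-ic0:/var/sva-data /mnt/sva-data
```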