HP Scalable Visualization Array Version 1.
© Copyright 2005, 2006 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Table of Contents

About This Document
    Intended Audience
    Document Organization
    Typographic Conventions
5 Application Examples
    Running an Existing Application on a Single SVA Workstation
        Assumptions and Goal
        HP Remote Graphics Software and Use
        Location for Application Execution and Control
List of Figures
1-1 System View of a Computing Environment with Integrated SVA
1-2 Standalone SVA Data Flow
1-3 Software Support for Application Development and Use
2-1 SVA Data Flow Overview
3-1 Sample SVA Bounded Configuration
5-2 ParaView Flow of Control on the SVA
5-3 Processes Running with Chromium-DMX Script
List of Tables
3-1 Operating System and Driver Components
3-2 HP XC System Components Relevant to SVA Operation
3-3 HP SVA System Software
3-4 Third Party System Software
About This Document

The SVA User's Guide introduces the components of the HP Scalable Visualization Array (SVA). The SVA product has hardware and software components that together make up the HP high-performance visualization cluster. This document provides a high-level understanding of SVA components.
WARNING: A warning calls attention to important information that, if not understood or followed, will result in personal injury or nonrecoverable system problems.

CAUTION: A caution calls attention to important information that, if not understood or followed, will result in data loss, data corruption, or damage to hardware or software.

IMPORTANT: This alert provides essential information to explain a concept or to complete a task.

NOTE: A note contains additional information to emphasize or supplement important points of the main text.
1 Introduction

This chapter gives an overview of the HP Scalable Visualization Array (SVA). It describes how the SVA works within the context of overall HP cluster solutions. It also discusses attributes of the SVA that make it a powerful tool for running data-intensive graphics applications. The SVA is a scalable visualization solution that brings the power of parallel computing to bear on many demanding visualization challenges.
The SVA serves as a key unit in an integrated computing environment that displays the results of generated data in locations where scientists and engineers can most effectively carry out analyses individually or collaboratively.

SVA Clusters

This section gives a high-level description of a standalone SVA, that is, an HP Cluster Platform system built using visualization nodes.
Displays

The SVA supports a wide range of displays and configurations, including single displays, tiled display walls, and immersive CAVE environments. The SVA relies on the display capabilities of the graphics cards in the display nodes. This means that the SVA lets you use whatever display devices are supported by the graphics card. Depending on the demands of the display devices, you can use digital or analog output. The aggregate resolution of these displays can range from tens to hundreds of megapixels.
as model size, and match them to the visualization nodes your application needs to yield the desired performance and resolution.

Application Support

This section introduces software support for application developers. Chapter 3 contains more information on the software tools available for application developers. HP recognizes that a key capability of the SVA is to make it possible for serial applications to run without extensive recoding.
Scenegraph Applications

The SVA lets you take advantage of scenegraph applications available through scenegraph middleware libraries and toolkits. The result is that the application is available on the SVA and can take advantage of its parallel scalability features.
2 SVA Architecture

This chapter gives a detailed look at the architecture of the HP Scalable Visualization Array (SVA). It compares the SVA to other clusters and describes the flow of data within the cluster.

SVA as a Cluster

It is important to understand the cluster characteristics of the SVA. These characteristics have implications for how the SVA functions. They also affect how applications take advantage of cluster features to achieve graphical performance and display goals.
Components of the HP Cluster Platform

Because the SVA is an extension of the HP Cluster Platform, you can begin by understanding its base components without any visualization nodes. The following are the key architectural components of an HP Cluster Platform system without visualization nodes:

Compute Nodes and Administrative/Service Nodes: The compute cluster consists of compute nodes and administrative or service nodes.
(RGS). If you use RGS, a port connected to the external network is recommended.

Components of an SVA

The main tasks described in "Main Visualization Cluster Tasks" (pg. 18) are supported by two types of visualization nodes, which differ in their configuration and in the tasks they carry out. Both node types can carry out multiple tasks. These node types are unique to the SVA configuration and extend HP compute clusters to support visualization functions.
Figure 2-1 SVA Data Flow Overview (the figure shows a user application on the master node sending simulation data and drawing commands over the system interconnect to OpenGL graphics cards in the render nodes and display nodes, which drive a multi-tile display)

A common usage scenario includes a master application node that runs the controlling logic of an application, processes the 3D data, and updates the virtual display.
3 SVA Hardware and Software

This chapter provides information on the hardware and software that make up the SVA. It is a useful reference for anyone involved in managing the SVA. It is also useful for anyone who wants to understand the hardware that makes up the SVA and the software that is installed on it. The SVA combines commodity hardware components with software that includes the following:
• A cluster of Intel EM64T or AMD Opteron HP workstations as visualization nodes.
Figure 3-1 Sample SVA Bounded Configuration (the figure shows the base rack (UVB) with GigE connections to display devices and an external node)

Network Configurations

This section describes the different networks used in the SVA.

System Interconnect (SI)

The SI for visualization nodes can be GigE, InfiniBand, or Myrinet. When the visualization nodes are integrated with compute nodes, the choice of SI is usually determined by the requirements of the compute nodes.
See the SVA System Administration Guide for more information on setting up display nodes and devices.

SVA Software Summary

The SVA combines third party software tools and libraries with custom and enhanced software tools and libraries. SVA software must be installed and run on each visualization node as well as the head node of a valid cluster configuration, such as an HP Cluster Platform 3000 or HP Cluster Platform 4000, properly configured for HP XC System Software with the SVA option.
Table 3-1 Operating System and Driver Components

Component: Base Operating System
Notes: Red Hat Enterprise Linux Advanced Server V4.0 Update 2. HP XC Linux is compatible with this version of Red Hat Enterprise Linux; however, it is built by HP and does not contain all the components distributed by Red Hat. www.redhat.com

Component: HP XC System Software
Notes: Version 3.0. Clustering software. XC web site: http://www.hp.com/techservers/clusters/xc_clusters.html

Component: X.Org Windowing System
Notes: X.Org Foundation: www.x.org
• Main software components provided by HP (Table 3-3).
• Main software components provided by third parties (Table 3-4).
• Application development tools available on the SVA (Table 3-5).
4 Setting Up and Running a Visualization Session

This chapter explains how to run visualization applications on the SVA. A visualization session relies primarily on HP XC utilities to do the underlying work; however, you can avoid manually using the underlying utilities by means of job launch scripts and associated templates provided by the SVA kit. For details on HP XC utilities, see the HP XC system documentation (link available from the SVA Documentation Library).
The kit installation also provides fully functional job launch scripts that you can use as is or customize for your own site. These are typically located in the /opt/sva/bin directory and are configured to be on your PATH. Follow these steps to use a script template:
1. Select a template.
2. Modify a copy of the script template to suit the specific needs of the visualization application.
3. Execute the modified script from the head node to launch your application as part of a visualization session.
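The steps above can be sketched as a small shell helper. The template and script names here (sva_launch_template.sh, my_viz_job.sh) are hypothetical placeholders, not actual file names from the SVA kit; the helper only composes the command line you would run on the head node, so it is side-effect free.

```shell
# Default install location for the SVA job launch scripts, per the text;
# override SVA_BIN if your site differs.
SVA_BIN=${SVA_BIN:-/opt/sva/bin}

# Compose (but do not execute) the copy-and-launch command line for a
# script template: copy the template, then run the copy with the app.
build_launch_cmd() {
    local template="$1" copy="$2" app="$3"
    printf 'cp %s/%s %s && ./%s %s\n' \
        "$SVA_BIN" "$template" "$copy" "$copy" "$app"
}

# Hypothetical example invocation.
build_launch_cmd sva_launch_template.sh my_viz_job.sh ./my_app
```

Running the composed command by hand keeps the template itself unmodified for the next session.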
1. Allocate: Allocates cluster resources for the visualization job. The allocation phase launches an HP XC SLURM job using the srun command. A SLURM Job ID is assigned to the job, which starts a session with the appropriate cluster resources; for example, the Display Surface and the requested number of display and render nodes. The number of resources can be specified using command-line options in the script.
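As a minimal sketch of the allocation phase, the helper below composes the kind of srun invocation a launch script might issue for the requested node counts. The node counts and the trivial `hostname` payload are placeholders; the exact options a site script passes to srun will differ.

```shell
# Compose an srun command that would allocate display + render nodes.
# -N requests a node count; --label prefixes output with the task rank.
compose_alloc_cmd() {
    local display_nodes="$1" render_nodes="$2"
    local total=$(( display_nodes + render_nodes ))
    printf 'srun -N %d --label hostname\n' "$total"
}

# Example: a session needing 2 display nodes and 4 render nodes.
compose_alloc_cmd 2 4
```

The SLURM Job ID printed when the job starts is what later phases (and cleanup) refer to.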
You can start, stop, and restart an application to make it easier to test and debug. You must be able to view the SVA Display Surface because the DMX Console provides limited visual feedback. Creating an interactive session in this way lets you take advantage of your multi-tile display for other applications. Your desktop environment is available to start any application and display it on the multi-tile display; for example, to display high-resolution images or to launch an application like ParaView.
5 Application Examples

This chapter describes the steps to start several representative applications that vary in their structure and requirements:
• A workstation application that is launched remotely to use only a single node in the SVA. See “Running an Existing Application on a Single SVA Workstation” (pg. 31).
• An application that uses render and display capabilities of the SVA (for example, ParaView). See “Running Render and Display Applications Using ParaView” (pg. 35).
The SVA Software Installation Guide has specific RGS installation instructions that you must use to supplement the HP RGS installation instructions.

HP Remote Graphics Software and Use

HP RGS is an advanced utility that makes it possible to remotely access and share 3D graphics workstation desktops. This can be done across Windows and Linux platforms. With RGS, you can:
• Remotely access 3D graphics workstations.
• Access applications running on the SVA from a Linux or Windows desktop.
Data Access

If you use a single SVA display node, place the data files in a convenient location given your site configuration. One location that provides reasonably fast access to the data is on a local disk of the display node, which is the node running your application. Given that the application in this scenario runs on a single node, there is little to be gained by distributing the data. If you choose to store data locally, you can copy the data file to the display node after the application starts.
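Copying the data file onto the display node's local disk can be done with a plain scp, as sketched below. The node name (disp01) and paths are hypothetical placeholders; the helper only composes the command line.

```shell
# Compose the scp command that stages a data file onto a display node's
# local disk (the node that runs the application in this scenario).
stage_data() {
    local file="$1" node="$2" dest="$3"
    printf 'scp %s %s:%s\n' "$file" "$node" "$dest"
}

# Hypothetical example: stage model.dat onto display node disp01.
stage_data ./model.dat disp01 /tmp/model.dat
```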
Alternatively, you can omit the Display Surface option (-d) and accept a render or display node allocated automatically by the script. The allocated node will be one that supports RGS functions. The Site Configuration File (/opt/sva/etc/sva.conf) specifies all the available Display Surfaces. You can also use the Display Surface Configuration Tool to list the Display Surfaces. See the SVA System Administration Guide for more information. • The application name with or without application parameters.
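The two invocation styles described above, pinning a Display Surface with -d versus omitting it for automatic allocation, can be sketched as follows. The script name sva_rgs_launch and the surface name ds1 are hypothetical stand-ins for whatever launch script and Display Surface names your site defines.

```shell
# Compose the launch command, with or without an explicit Display
# Surface.  An empty surface argument means "let the script allocate
# an RGS-capable node automatically".
compose_rgs_cmd() {
    local surface="$1" app="$2"
    if [ -n "$surface" ]; then
        printf 'sva_rgs_launch -d %s %s\n' "$surface" "$app"
    else
        printf 'sva_rgs_launch %s\n' "$app"
    fi
}

compose_rgs_cmd ds1 ./my_app   # pin a specific Display Surface
compose_rgs_cmd ""  ./my_app   # accept an automatically allocated node
```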
4. Enter your Linux user name and password for the cluster in the RGS login window. The desktop environment login window for the cluster appears on your local desktop. 5. Log in to the desktop environment window using your Linux user name and password. The desktop environment appears on your local desktop in the RGS Receiver window. 6. Open a terminal window in the desktop environment and enter the following command: % sva_remote.
• ParaView supports tiled displays through a built-in display manager.
• ParaView handles structured (uniform rectilinear, non-uniform rectilinear, and curvilinear grids), unstructured, polygonal, and image data.
• All processing operations (filters) produce datasets. This enables you to either further process the result of every operation or save it as a data file.
• Contours and isosurfaces can be extracted from all data types using scalars or vector components.
Figure 5-2 ParaView Flow of Control on the SVA (the figure shows ParaView Servers on the render nodes and on Display Nodes 1 and 2, the ParaView Client and its window on the local desktop, X Servers on the display nodes, and the GigE and SI connections to the external network and the display devices; Display Node 1 is the Execution Host, and Display Node 2 drives a two-tile display device)

Follow these steps to run ParaView on the SVA.
5. Configure the ParaView Servers and Client to use the SI. This improves performance. (The ic-name is the HP XC convention used to denote that the SI communication mode is to be used.)
To terminate ParaView, select the File: Exit menu item from the ParaView Console window on your desktop. Then kill the various X Servers on the allocated cluster nodes; you can use the SLURM scancel command. Once you complete these steps, ParaView runs on the cluster while you maintain control of the application from your local desktop.
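The cleanup step above can be sketched with a small helper that composes the scancel command for the session's SLURM job. The job ID would be the one assigned during the allocation phase; the numeric value here is a placeholder.

```shell
# Compose the scancel command that cancels the session's SLURM job,
# releasing the allocated nodes (the X Servers started on them can
# then be cleaned up, as described in the text).
compose_cancel_cmd() {
    case "$1" in
        (''|*[!0-9]*)
            echo 'usage: compose_cancel_cmd <slurm-job-id>' >&2
            return 1 ;;
    esac
    printf 'scancel %s\n' "$1"
}

compose_cancel_cmd 12345   # placeholder SLURM Job ID
```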
Assumptions and Goal

This example assumes you have a visualization application that currently runs on a single workstation. It also assumes that you have not specifically modified it to take advantage of the parallel features of a cluster. This example also assumes that your goal is to run the application on the SVA and to take advantage of the multi-tile capabilities of the cluster.
File to indicate which host to use to run the application for a given Display Surface. See Chapter 4 and the SVA System Administration Guide for details on changing Configuration Data Files and their tag content. The Chromium Mothership and DMX also run on the Execution Host node. See the Chromium documentation for details on the Mothership. You must also be able to provide keyboard and mouse input to the application as it runs.
Figure 5-3 Processes Running with Chromium-DMX Script (the figure shows the external node with the DMX cursor and console window; Display Node 1 running an X Server, Xdmx, and the Chromium application; Display Node 2 running Chromium and its X Server; and the GigE and SI connections to the external network and the display devices, with Display Node 2 driving a two-tile display device)

Data Access

For a serial application that uses Chromium, place the data files in a convenient location for your site configuration.
The primary mechanism that you use to set up displays is the Display Surface. A Display Surface is composed of one or more display nodes and their associated display devices; for example, a simple Display Surface is a specific display node and an attached flat panel display device. Initial configuration of the SVA sets up a series of default named Display Surfaces, one for each display node and its directly cabled display device.
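As a minimal sketch of working with the default one-surface-per-display-node setup described above, the snippet below lists Display Surface names from a configuration file. The line-oriented "surface <name>" format is a made-up stand-in, not the real sva.conf syntax; in practice you would use the Display Surface Configuration Tool instead.

```shell
# List Display Surface names from a config file, assuming a
# hypothetical one-entry-per-line "surface <name>" format.
list_surfaces() {
    awk '$1 == "surface" { print $2 }' "$1"
}

# Build a throwaway sample file so the sketch is self-contained.
tmpconf=$(mktemp)
cat > "$tmpconf" <<'EOF'
surface wall-left
surface wall-right
EOF
list_surfaces "$tmpconf"
rm -f "$tmpconf"
```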
Glossary

Administrative Network
    Connects all nodes in the cluster. In an HP XC compute cluster, this consists of two branches: the Administrative Network and the Console Network. This private local Ethernet network runs TCP/IP. The Administrative Network is Gigabit Ethernet (GigE); the Console Network is 10/100 BaseT.
bounded configuration
Chromium
compute node
Configuration Data Files
display node
display block
Display Surface
Display Surface Configuration Tool
    ...arrangement of the display blocks. Invoked using the svadisplaysurface command. Requires root privileges.
DMX
    Distributed Multi-Head X is a proxy X Server that provides multi-head support for multiple displays attached to different machines (each of which is running a typical X Server).
interactive session
Job Settings File
LSF
Node Configuration Tool
ParaView
Remote Graphics Software (HP)
render node
Site Configuration File
SLURM
svaconfigure Utility
System Interconnect
    ...Myrinet can be used for the System Interconnect to speed the transfer of image data and drawing commands to the visualization nodes.
tile
    The image output from a single port of a graphics card in a display node. Typically, a tile is also considered the image displayed on a single display device such as a flat panel or projector.
UBB
    Utility Building Block (UBB). Base utility unit of a modular expandable SVA system.
UVB
VBB
Index

A
Admin/service node, 18
Administrative network, 18, 21–22
Architecture of SVA, 17

B
Beowulf cluster, 17
Bounded configuration, 21

C
Chromium, 25
Compilers, on kit, 25
Compute cluster, components of, 18
Compute node, 18
Configuration data files
    hierarchy of, 27
    job settings, 27
    overview, 27
    site, 27
    user, 27

D
Data flow, within SVA, 19
Debugger, on kit, 25
Development tools, 25
Diagnostics, 25
Display fl

K
Kit, software installed by, 25

L
Linux clusters
    background on, 17
    Beowulf type, 17
    types of, 17

S
SLURM, use in job launch, 29
SVA
    architecture for, 17
    cluster components, 19
    data flow in, 19
    file access with, 20
    main tasks of, 18
    overview of, 12
    scalability of, 13
    software installed, 25
    usage model for, 11
sva_remote.