HP XC System Software Hardware Preparation Guide Version 3.2

1 Hardware and Network Overview
This chapter addresses the following topics:
“Supported Cluster Platforms” (page 19)
“Server Blade Enclosure Components” (page 21)
“Server Blade Mezzanine Cards” (page 22)
“Server Blade Interconnect Modules” (page 22)
“Supported Console Management Devices” (page 23)
“Administration Network Overview” (page 25)
“Administration Network: Console Branch” (page 25)
“Interconnect Network” (page 25)
“Large-Scale Systems” (page 26)
1.1 Supported Cluster Platforms
An HP XC system is made up of interconnected servers.
A typical HP XC hardware configuration (on systems other than Server Blade c-Class servers)
contains from 5 to 512 nodes. To allow systems of a greater size, an HP XC system can be arranged
into a large-scale configuration with up to 1024 compute nodes (HP might consider larger systems
as special cases).
HP Server Blade c-Class servers (hereafter called server blades) are well suited to forming HP
XC systems. Their physical characteristics make it possible to tightly interconnect many nodes
while reducing cabling requirements. Typically, server blades are used as
compute nodes but they can also function as the head node and service nodes. The hardware
and network configuration on an HP XC system with HP server blades differs from that of a
traditional HP XC system, and those differences are described in this document.
You can install and configure HP XC System Software on the following platforms:
HP Cluster Platform 3000 (CP3000)
HP Cluster Platform 3000BL (CP3000BL) with HP c-Class server blades
HP Cluster Platform 4000 (CP4000)
HP Cluster Platform 4000BL (CP4000BL) with HP c-Class server blades
HP Cluster Platform 6000 (CP6000)
HP Cluster Platform 6000BL (CP6000BL) with HP c-Class server blades
For more information about the cluster platforms, see the documentation that was shipped with
the hardware.
1.1.1 Supported Processor Architectures and Hardware Models
Table 1-1 lists the hardware models that are supported for each HP cluster platform.