
2 Understanding Serviceguard Hardware Configurations
This chapter gives a broad overview of how the Serviceguard hardware components
work. The following topics are presented:
Redundancy of Cluster Components (page 37)
Redundant Network Components (page 38)
Redundant Disk Storage (page 44)
Redundant Power Supplies (page 49)
Larger Clusters (page 50)
Refer to the next chapter for information about Serviceguard software components.
Redundancy of Cluster Components
To provide a high level of availability, a typical cluster uses redundant system
components, for example, two or more SPUs and two or more independent disks. This
redundancy eliminates single points of failure. In general, the more redundancy, the
greater your access to applications, data, and supportive services in the event of a
failure.
In addition to hardware redundancy, you must have the software support that enables
and controls the transfer of your applications to another SPU or network after a failure.
Serviceguard provides this support as follows:
In the case of LAN failure, Serviceguard switches to a standby LAN or moves
affected packages to a standby node.
In the case of SPU failure, your application is transferred from a failed SPU to a
functioning SPU automatically and in a minimal amount of time.
For failure of other monitored resources, such as disk interfaces, a package can be
moved to another node.
For software failures, an application can be restarted on the same node or another
node with minimum disruption.
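For example, you can verify where each package is currently running, and whether
switching is enabled for it, by running the cmviewcl command from any node in the
cluster (the -v option requests verbose output; the exact output depends on your
configuration):
    cmviewcl -v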
Serviceguard also lets you easily transfer control of your application to another SPU
so that you can bring the original SPU down for system administration, maintenance,
or version upgrades.
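As a sketch, assuming a package named pkg1 and adoptive nodes named node1 and
node2 (hypothetical names), such a planned move uses the standard package
administration commands:
    cmhaltpkg pkg1            # halt the package on its current node
    cmrunpkg -n node2 pkg1    # start it on the alternate node
    cmmodpkg -e pkg1          # re-enable automatic package switching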
The current maximum number of nodes supported in a Serviceguard cluster is 16. SCSI
disks or disk arrays can be connected to a maximum of 4 nodes at a time on a shared
(multi-initiator) bus. Disk arrays using Fibre Channel, and those that do not use a
shared bus, such as the HP StorageWorks XP Series and the EMC Symmetrix, can be
simultaneously connected to all 16 nodes.
The guidelines for package failover depend on the type of disk technology in the cluster.
For example, a package that accesses data on a SCSI disk or disk array can fail over to
a maximum of 4 nodes. A package that accesses data from a disk array connected via
Fibre Channel, such as the HP StorageWorks XP Series or the EMC Symmetrix, can be
configured to fail over among all 16 nodes.
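As an illustration, the nodes eligible to run a package are the NODE_NAME entries in
its configuration file. Assuming a legacy-style package configuration file and
hypothetical node names, a package whose data resides on a shared SCSI bus would
list only the (at most 4) nodes physically connected to that bus:
    PACKAGE_NAME    pkg1
    NODE_NAME       node1
    NODE_NAME       node2
    NODE_NAME       node3
    NODE_NAME       node4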