In this example, a 4-node Serviceguard cluster is configured in a single c7000 enclosure, with an
EVA disk array used for shared storage between the cluster nodes. An HP Systems Insight Manager
Central Management Server (CMS) is also shown, which provides overall management of the systems
environment from outside of the Serviceguard cluster. While this example shows four Integrity server
blades used as cluster nodes, it is also possible to use HP Integrity Virtual Machines as Serviceguard
nodes. HP Virtual Connect (not shown in this figure) can be configured to provide a private cluster
heartbeat network within the enclosure for the cluster nodes without requiring any external wiring or
switches. A quorum service, running on a Linux OS in this example, provides quorum for the 4-node
cluster. Note that a cluster lock disk or lock LUN is supported for a 2-, 3-, or 4-node configuration
within a blade enclosure; however, a quorum service is recommended for clusters with 3 or more
nodes.
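As an illustration, both the quorum service and the private heartbeat network are defined in the
Serviceguard cluster configuration (ASCII) file. The following is a minimal sketch of the relevant
steps and parameters; the node names (blade1 through blade4), the quorum server host name
(qs-linux01), the IP addresses, and the timing value are hypothetical placeholders and must be
replaced with site-specific values:

    # Generate a cluster configuration template that uses a quorum server
    cmquerycl -v -C /etc/cmcluster/cluster.ascii \
        -q qs-linux01 -n blade1 -n blade2 -n blade3 -n blade4

    # Excerpt from the resulting cluster.ascii file
    CLUSTER_NAME            blade_cluster
    QS_HOST                 qs-linux01       # quorum server running on Linux outside the cluster
    QS_POLLING_INTERVAL     300000000        # microseconds; illustrative value

    NODE_NAME               blade1
      NETWORK_INTERFACE     lan0
        HEARTBEAT_IP        192.168.10.1     # private heartbeat network through Virtual Connect
      NETWORK_INTERFACE     lan1
        STATIONARY_IP       10.1.1.1         # data/management network
    # (remaining NODE_NAME entries for blade2 through blade4 follow the same pattern)

    # Verify and apply the configuration
    cmcheckconf -v -C /etc/cmcluster/cluster.ascii
    cmapplyconf -v -C /etc/cmcluster/cluster.ascii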
While this configuration is supported, it is not recommended because the blade enclosure is
considered a single point of failure (SPOF) that could potentially fail and bring down the entire
cluster. However, one recommended best practice shown in this diagram is the placement of the CMS
on a system external to the blade enclosure, so that it can continue to manage other systems in the
environment if the blade enclosure becomes unavailable, for example during a firmware update
operation that requires the entire enclosure to be down or during a power failure of the enclosure.
Advantages and Limitations
This configuration has the following advantages and limitations:
Advantages:
• Provides a completely self-contained Serviceguard cluster within a single enclosure
• Internal cluster heartbeat network can be configured using Virtual Connect to eliminate additional
network cabling and switches
• Provides consistent management of server profiles using Virtual Connect, since all cluster nodes are
within the blade enclosure
Limitations:
• The blade enclosure is a single point of failure that can cause the entire cluster to go down
• There are no cluster nodes outside the enclosure to which workloads can fail over in the event of
planned enclosure maintenance (e.g., Virtual Connect and/or Onboard Administrator firmware
upgrades that require all blades in the enclosure to be shut down), so the entire cluster must be
halted for such maintenance (see the sketch after this list)
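Because there is no failover target outside the enclosure, planned enclosure maintenance in this
configuration requires halting the whole cluster and restarting it afterward. A minimal sketch
using standard Serviceguard commands, run as root on one of the cluster nodes:

    # Halt all packages and the cluster before the enclosure is taken down
    cmhaltcl -f -v

    # After the enclosure and blades are back online, restart and verify the cluster
    cmruncl -v
    cmviewcl -v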
Clustering across Multiple Blade Enclosures or non-Blade Servers
One way to improve on a “cluster in a box” configuration is to distribute the Serviceguard cluster
nodes across multiple blade enclosures, or across a blade enclosure and non-blade servers, so that a
single enclosure is not a single point of failure (SPOF). Figure 13 is an example of this
architecture, with a Serviceguard cluster spanning multiple c7000 blade enclosures.
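From the Serviceguard perspective, distributing nodes across enclosures does not change the
structure of the cluster configuration file; blades in the second enclosure simply appear as
additional NODE_NAME entries, with the heartbeat network extended between the enclosures (for
example, through external switches). A minimal sketch with hypothetical node names and addresses:

    # Node residing in enclosure 1
    NODE_NAME               enc1-blade1
      NETWORK_INTERFACE     lan0
        HEARTBEAT_IP        192.168.10.1

    # Node residing in enclosure 2
    NODE_NAME               enc2-blade1
      NETWORK_INTERFACE     lan0
        HEARTBEAT_IP        192.168.10.2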