Best Practices for HP BladeSystem Deployments using HP Serviceguard Solutions for HP-UX 11i (May 2010)
configuration information from the active module. It remains as the standby until either a system
administrator manually promotes it to the active module or the active module fails.
General HP BladeSystem I/O Redundancy Considerations
HP Integrity server blade I/O connectivity provides:
• 2 LAN on Motherboard (LOM) modules (2 ports each @ 1Gb/s; 4 ports total)
• Support for up to 3 Mezzanine cards
With this hardware connectivity, it is possible to use a combination of LOM ports and Mezzanine
cards to provide redundancy for both network and storage connections to the server blade. HP
recommends configuring primary and alternate paths to use different Mezzanine cards and
interconnect modules to eliminate these components as potential SPOFs whenever possible. However,
depending on the Mezzanine card configuration chosen to balance availability requirements against
cost, it is acceptable to have both primary and alternate paths defined through a single multi-port
Ethernet or Fibre Channel Mezzanine card.
Networking Connectivity
With the four internal Ethernet ports provided on Integrity server blades using two LOM NICs that
have independent hardware port controllers, it is possible to create a redundant networking
configuration that eliminates the blade networking ports as a SPOF. Because the two ports on a single
LOM port controller share that controller as a failure point, configure ports 1 and 4 as the
Serviceguard primary and standby connection for one network (e.g., the site data LAN) and ports 2
and 3 as the primary and standby connection for another network (e.g., the Serviceguard heartbeat).
With this configuration, Serviceguard local LAN failover protects against either a
port controller or interconnect module failure. Note that it is also possible to use APA (Auto-Port
Aggregation) LAN_MONITOR mode to provide an active/standby network port configuration.
However, APA trunking and load balancing are not supported with Virtual Connect, because Virtual
Connect does not pass LACP (Link Aggregation Control Protocol) frames through to host systems.
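As a sketch of the LAN_MONITOR alternative, APA failover groups are defined in an ASCII file (typically /etc/lanmon/lanconfig.ascii, created with lanqueryconf and activated with lanapplyconf). The node name, interface names, priorities, and IP address below are illustrative assumptions, not values from this document:

```
NODE_NAME blade1

FAILOVER_GROUP lan900
    STATIONARY_IP  192.10.25.18
    PRIMARY        lan0   5      # LOM port 1: preferred active port (higher priority)
    STANDBY        lan3   3      # LOM port 4: takes over if the active link fails
```

In LAN_MONITOR mode only one port in the failover group carries traffic at a time, which is why it remains compatible with Virtual Connect even though LACP-based trunking is not.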
HP also recommends using an additional 4-port Ethernet Mezzanine card, if required, to provide
additional network connections based on application use requirements (e.g., VM host supporting
multiple VM networks).
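The LOM port pairing described above can be sketched in the Serviceguard cluster configuration ASCII file. The node name, interface names, and addresses here are illustrative assumptions; in this format, a standby interface is listed with no IP address following it:

```
NODE_NAME blade1
    NETWORK_INTERFACE lan0          # LOM port 1: site data LAN (primary)
    STATIONARY_IP     192.10.25.18
    NETWORK_INTERFACE lan3          # LOM port 4: site data LAN (standby)
    NETWORK_INTERFACE lan1          # LOM port 2: Serviceguard heartbeat (primary)
    HEARTBEAT_IP      10.10.0.18
    NETWORK_INTERFACE lan2          # LOM port 3: Serviceguard heartbeat (standby)
```

Pairing lan0 with lan3 and lan1 with lan2 keeps each primary/standby pair on different port controllers, so a single controller failure cannot take down both paths of either network.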
Using redundant HP Virtual Connect (VC) Ethernet modules is another method to improve network
connection availability. VC Ethernet modules, when installed in a side-by-side bay pair configuration
in interconnect bays 1 and 2, run as a high availability pair. Redundancy daemons running on both
modules determine the active VC Manager (usually in bay 1) using internal heartbeats maintained
over multiple paths (signal midplane, Ethernet link, Onboard Administrator) and can automatically
switch to the other VC Ethernet module in the event of a loss of heartbeat. There are no specific
network requirements for using Serviceguard with Virtual Connect other than the recommendation to
eliminate a SPOF by using redundant VC Ethernet modules.
Virtual Connect also facilitates Ethernet link failover by allowing Virtual Connect networks to utilize
ports on multiple Virtual Connect modules in the same VC Domain. VC domains using Virtual Connect
Manager can span up to four enclosures; additional enclosures can be managed using Virtual
Connect Enterprise Manager. Depending on the configuration, a VC network will transparently shift
its upstream communication to a port on the same module or on a different module in the event of a
link failure. HP recommends using fully redundant interconnection of Virtual Connect Ethernet modules
so that, if a stacking cable is lost, Ethernet packets within the VC domain will be automatically
re-routed to the uplink through the redundant path. This connection also preserves network connectivity if
an Ethernet interconnect module fails or is removed. Figure 8 shows an example of stacked Virtual
Connect Ethernet modules.