Chapter 6. IBM Virtual Fabric 10Gb Switch Module implementation 273
6.5.2 Layer 2 Failover
The primary application for Layer 2 Failover is to support Network Adapter Teaming. With
Network Adapter Teaming, all the NICs on each server share an IP address, and are
configured into a team. One NIC is the primary link, and the other is a standby link. For more
details, see the documentation for your Ethernet adapter.
Layer 2 Failover can be enabled on any trunk group in the switch, including LACP trunks.
Trunks can be added to failover trigger groups. If a specified number of monitored links fails,
the switch disables all the control ports in the switch. Disabling the control ports causes the
NIC team on the affected servers to fail over from the primary to the backup NIC. This
process is called a failover event.
When the appropriate number of links in a monitor group return to service, the switch enables
the control ports. This action causes the NIC team on the affected servers to fail back to the
primary switch (unless Auto-Fallback is disabled on the NIC team). The backup switch
processes traffic until the primary switch’s control links come up, which can take up to
5 seconds.
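As a rough illustration of how a failover trigger of this kind might be set up, the following sketch uses IBM Networking OS-style isCLI commands. The trigger number, LACP admin key, and port names are illustrative assumptions, not values from this implementation; check the Application Guide for your firmware level for the exact syntax.

```
! Layer 2 Failover sketch (illustrative values only)
VFSM(config)# failover enable
! Monitor the uplink LACP trunk identified by admin key 60
VFSM(config)# failover trigger 1 mmon monitor admin-key 60
! Disable the internal server-facing ports when the monitored links fail
VFSM(config)# failover trigger 1 mmon control member INT1-INT14
VFSM(config)# failover trigger 1 enable
```

When the monitored uplinks go down, the switch shuts the listed control ports, which is what makes the NIC team on each blade fail over to its backup NIC.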
For more details about the Layer 2 Failover feature, see Chapter 2, “IBM System Networking
Switch 10Gb Ethernet switch features” on page 51.
Because the Virtual Fabric 10Gb Switch Modules are configured in stacking mode, they work
together as a unified system, and both blade server ports are effectively connected to a single
switch. Stacking itself already provides uplink redundancy.
Layer 2 Failover is most relevant when the VFSMs operate in stand-alone mode, with each
blade server connected to both switches. For NIC teaming and Layer 2 Failover with
stand-alone switches, see Chapter 2, “IBM System Networking Switch 10Gb Ethernet switch
features” on page 51.
6.5.3 Trunking
Multiple switch ports can be combined to form robust, high-bandwidth trunks to other
devices. Because a trunk is composed of multiple physical links, the trunk group is inherently
fault tolerant: as long as at least one link in the trunk is available, the trunk remains active.
For detailed information about trunking, see 6.4.2, “Ports and trunking” on page 265.
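As a hedged sketch only, a trunk might be defined in isCLI along the following lines; the port channel number, admin key, and external port names are assumptions for illustration, and 6.4.2 documents the actual configuration used here.

```
! Static trunk sketch: aggregate two external uplinks (illustrative ports)
VFSM(config)# portchannel 1 port EXT1,EXT2
VFSM(config)# portchannel 1 enable
! Alternatively, an LACP trunk: same admin key on both ports, active mode
VFSM(config)# interface port EXT1,EXT2
VFSM(config-if)# lacp key 60
VFSM(config-if)# lacp mode active
```

A static port channel fixes the member ports in the configuration, while LACP negotiates membership with the peer based on the shared admin key.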
6.5.4 Hot Links
For network topologies that require Spanning Tree to be turned off, Hot Links provides basic
link redundancy with fast recovery.
Important: Only two links per server can be used for Layer 2 Trunk Failover (one primary
and one backup). Network Adapter Teaming allows only one backup NIC for each
server blade.
Note: The Hot Links mechanism is not used in the reference architecture implementation.
The following section is just a short outline of the feature. For more information, see the
documentation listed in 6.7, “More information” on page 284.
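For orientation only, a Hot Links trigger might be sketched in isCLI as follows; because the feature is not used in this reference architecture, the trigger number and port names are purely illustrative, and the documentation in 6.7 is authoritative for the exact commands.

```
! Hot Links sketch (illustrative; not part of this implementation)
VFSM(config)# hotlinks trigger 1 master port EXT1
VFSM(config)# hotlinks trigger 1 backup port EXT2
VFSM(config)# hotlinks trigger 1 enable
VFSM(config)# hotlinks enable
```

The master port carries traffic while it is up; if it fails, the backup port takes over without relying on Spanning Tree for loop prevention.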