Cluster Networking
Before you begin to construct the failover cluster that will form the resilient backbone for key virtualized
workloads, it is important to ensure that the networking is optimally configured. At a minimum, a cluster
requires 2 x 1 GbE network adapters; however, for a traditional production Hyper-V failover cluster, it is
recommended that a greater number of adapters be used to provide increased performance, isolation,
and resiliency.
As a baseline for customers using gigabit network adapters, 8 network adapters are recommended.
This provides the following (a minimal teaming sketch follows the list):
2 teamed NICs used for Hyper-V host management and the cluster heartbeat.
2 teamed NICs used by a Hyper-V Extensible Switch to allow virtual machine communication.
2 teamed NICs used for Cluster Shared Volumes (CSV) traffic and communication.
2 teamed NICs used for live migration of virtual machines.
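As one concrete illustration, the teamed NICs listed above could be created with the built-in NIC Teaming (LBFO) cmdlets in Windows Server 2012. This is a minimal sketch only; the team names and physical adapter names (NIC1 through NIC8) are assumptions that will differ in your environment.

# Sketch: create the four 2-NIC teams described above (adapter and team names are examples).
New-NetLbfoTeam -Name "MgmtTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -Confirm:$false
New-NetLbfoTeam -Name "VMTeam"   -TeamMembers "NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort -Confirm:$false
New-NetLbfoTeam -Name "CSVTeam"  -TeamMembers "NIC5","NIC6" -TeamingMode SwitchIndependent -Confirm:$false
New-NetLbfoTeam -Name "LMTeam"   -TeamMembers "NIC7","NIC8" -TeamingMode SwitchIndependent -Confirm:$false

# The virtual machine team is then bound to a Hyper-V Extensible Switch for tenant traffic.
New-VMSwitch -Name "VMSwitch" -NetAdapterName "VMTeam" -AllowManagementOS $false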
Moreover, if a customer is using iSCSI storage, an additional 2 NICs should be included for connectivity
from the host to storage, and MPIO should be used instead of NIC Teaming. Alternatively, if the customer is
using SMB storage for the cluster, the hosts should likewise have 2 additional NICs, but SMB Multichannel (or
SMB Direct, if RDMA-capable adapters are present) should be used to provide enhanced throughput and
resiliency; a brief storage connectivity sketch follows the list below. So, in total, a minimum of 10 NICs is
recommended for a customer who:
Needs a cluster that uses Cluster Shared Volumes.
Wants to use live migration.
Requires resiliency at the NIC level.
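As a minimal sketch of the storage side, assuming the Multipath-IO feature is available and an iSCSI target is already configured, MPIO can be enabled to claim iSCSI devices; for SMB storage, SMB Multichannel is enabled by default and simply needs to be verified:

# iSCSI: install and enable MPIO so the two storage NICs provide multipath resiliency (not NIC Teaming).
Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# SMB: SMB Multichannel is on by default; confirm it is enabled and that the interfaces are RSS- or RDMA-capable.
Get-SmbClientConfiguration | Select-Object EnableMultiChannel
Get-SmbClientNetworkInterface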
These different networks can be combined onto fewer NICs, but isolation is recommended for optimal
performance.
Another option is to use fewer, higher-bandwidth NICs, such as 2 x 10 GbE NICs combined into a
host-level team for an aggregate bandwidth of 20 Gbps. The question remains, however: how do you isolate
the different types of traffic (such as CSV and live migration) on what is essentially a single NIC team
presented at the host level? To solve this problem, you can use a converged approach.
Figure 24 provides a high-level example of this converged approach. Here, the Hyper-V cluster node has 2
x 10 GbE NICs, which are configured in a team. For the isolated networks that are required, you can create
virtual NICs (vNICs) in the host operating system. Each cluster node uses these virtual network
adapters (one for live migration, one for CSV, one for management, and so on) to connect to the single Hyper-V
Extensible Switch, which in turn connects to the physical network. Each tenant virtual machine is also connected
to the same Hyper-V Extensible Switch using a regular virtual network adapter. Windows Server 2012
Hyper-V virtual switch Quality of Service (QoS) is used to ensure that each traffic type (such as live
migration, cluster, management, and tenant) has a predictable amount of bandwidth available. Traffic
isolation is enabled by 802.1Q VLAN tagging so that host traffic is not visible to the tenants. Hyper-V
virtual switch port ACLs can also be used for more granular access control at the network level.
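A minimal sketch of this converged configuration follows, assuming two physical 10 GbE adapters named NIC1 and NIC2; the team, switch, and vNIC names, the bandwidth weights, the VLAN IDs, and the tenant VM name are illustrative assumptions, not prescribed values.

# Team the two 10 GbE adapters and bind a single converged Hyper-V Extensible Switch to the team.
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort -Confirm:$false
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" -MinimumBandwidthMode Weight -AllowManagementOS $false

# Create host (ManagementOS) vNICs for each traffic type.
Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "CSV"           -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"

# Reserve a predictable share of bandwidth for each traffic type via QoS weights (example values);
# the default flow weight covers tenant virtual machine traffic.
Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "CSV"           -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
Set-VMSwitch -Name "ConvergedSwitch" -DefaultFlowMinimumBandwidthWeight 30

# Isolate host traffic from tenant traffic with 802.1Q VLAN tags (VLAN IDs are examples).
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management"    -Access -VlanId 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "CSV"           -Access -VlanId 20
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 30

# Optionally, apply a Hyper-V virtual switch port ACL to a tenant virtual machine for more granular control.
Add-VMNetworkAdapterAcl -VMName "TenantVM01" -RemoteIPAddress "192.168.10.0/24" -Direction Both -Action Deny

In this sketch, the bandwidth weights express relative shares rather than hard limits, so any traffic type can burst above its reservation when the 20 Gbps of aggregate bandwidth is otherwise idle.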