White Papers

vStart 1000m for Enterprise Virtualization using Hyper-V Reference Architecture
Dell Inc.
connectivity. Host-to-array Fibre Channel connectivity runs at 8 Gbps. For Hyper-V, iSCSI-capable storage offers an additional advantage: the same protocol can also be used by Hyper-V guest virtual machines for guest clustering. This requires VM storage traffic and other network traffic to share the same interface; however, that contention is mitigated through the use of VLANs and NPAR on the network adapter.
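The bandwidth-partitioning idea behind NPAR can be sketched as follows. This is a minimal, illustrative Python model only; the partition names, VLAN IDs, and weights below are hypothetical examples, not values prescribed by this reference architecture.

```python
# Illustrative model of NPAR-style bandwidth partitioning on one 10GbE port.
# Partition names, VLAN IDs, and weights are hypothetical, chosen only to
# show how relative weights translate into guaranteed bandwidth.

PORT_SPEED_GBPS = 10.0

# Each partition carries one traffic class on its own VLAN and is assigned
# a relative minimum-bandwidth weight (0-100); weights on a port must not
# exceed 100 in total.
partitions = {
    "iSCSI":         {"vlan": 100, "weight": 60},
    "LiveMigration": {"vlan": 200, "weight": 25},
    "Management":    {"vlan": 300, "weight": 15},
}

def guaranteed_bandwidth(parts, port_speed=PORT_SPEED_GBPS):
    """Return the minimum guaranteed Gbps per partition under full contention."""
    total = sum(p["weight"] for p in parts.values())
    if total > 100:
        raise ValueError(f"weights sum to {total}, must be <= 100")
    return {name: port_speed * p["weight"] / 100 for name, p in parts.items()}

print(guaranteed_bandwidth(partitions))
# Under full contention, the iSCSI partition is guaranteed 6.0 Gbps.
```

Because each partition also maps to a distinct VLAN, storage and general network traffic remain segregated even though they share the physical interface.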
6.6.3 Storage Network
Each PowerEdge M620 blade server is configured with a QLogic QME2572 dual-port 8 Gb FC mezzanine card. Each FC port is wired across the PowerEdge M1000e mid-plane to the Dell 8|4 FC SAN modules in Fabric B. The SAN modules are trunked to Brocade 5100 Fibre Channel switches, and the front-end FC ports of the Compellent Series 40 arrays are connected to the same 5100 FC SAN switches.
For the management servers, each PowerEdge R620 server is configured with a QLogic QLE2562 8 Gb FC I/O card and connected to the Brocade 5100 ToR SAN switches. To further support the Fabric Management guest cluster, each PowerEdge R620 server is also configured with iSCSI connectivity to the Compellent array through its dual-port Broadcom BCM57810 10GbE add-in card. Both ports are configured with NPAR and dedicated to iSCSI traffic, and connectivity to the Compellent iSCSI front end is established through the two Force10 S4810 switches. To provide fully redundant, independent paths for storage I/O, MPIO is enabled by the iSCSI initiator on the host. iSCSI traffic on the PowerEdge R620 servers is segregated through NPAR and VLANs, and QoS is provided by the NPAR bandwidth settings.
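The path behavior that MPIO provides over the two redundant iSCSI paths can be sketched in Python. This is a simplified illustration of round-robin path selection with failover, assuming one path per Force10 S4810 switch; the path labels are illustrative, not configuration values from this design.

```python
from itertools import cycle

# Minimal sketch of MPIO-style round-robin path selection with failover
# across two redundant iSCSI paths (one per Force10 S4810 switch).
# Path names below are illustrative labels only.

class MultiPath:
    def __init__(self, paths):
        self.paths = list(paths)
        self.healthy = set(self.paths)   # paths currently usable
        self._rr = cycle(self.paths)     # round-robin iterator

    def fail(self, path):
        """Mark a path as down (e.g. switch or cable failure)."""
        self.healthy.discard(path)

    def restore(self, path):
        """Return a repaired path to service."""
        self.healthy.add(path)

    def next_path(self):
        """Pick the next healthy path round-robin; raise if none remain."""
        if not self.healthy:
            raise RuntimeError("all storage paths failed")
        while True:
            p = next(self._rr)
            if p in self.healthy:
                return p

mpio = MultiPath(["S4810-A", "S4810-B"])
print([mpio.next_path() for _ in range(4)])  # alternates between both paths
mpio.fail("S4810-A")
print(mpio.next_path())                      # all I/O shifts to the surviving path
```

With both paths healthy, I/O alternates across the two switches; when one path fails, the initiator transparently continues on the remaining path, which is the redundancy property the dual-switch design is built for.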