Guest Storage
In addition to presenting VHD or VHDX files to Exchange 2013 virtual machines, administrators can
choose to connect the guest operating system of an Exchange virtual machine directly to existing storage
investments. Two methods provided in Windows Server 2012 Hyper-V are In-Guest iSCSI and Virtual Fibre
Channel.
In-Guest iSCSI
Deploying an Exchange 2013 virtual machine on iSCSI storage provides a more cost-effective solution for
enterprise-level virtual machine installations than an equivalent Fibre Channel solution. Instead of using
virtual disks (such as the VHD or VHDX files discussed earlier) and placing them on the iSCSI LUNs
presented to the host, the administrator can choose to bypass the host and connect the virtual machines
directly to the iSCSI array itself. The iSCSI target, which is part of the storage array, provides storage to the
Exchange 2013 virtual machine directly over the virtual machine’s network adapters. The Exchange virtual
machine uses the in-box iSCSI initiator inside the Windows Server guest operating system to consume the
storage over a vNIC that has connectivity on the iSCSI storage network. The respective Exchange servers
can therefore store databases, logs, and other critical data directly on iSCSI disk volumes.
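As an illustration, the following PowerShell sketch shows how the in-box iSCSI initiator inside a Windows
Server guest might be connected to such a target. The portal address is a placeholder, and the exact
discovery steps vary by storage vendor:

# Inside the Exchange virtual machine's guest OS: start the in-box iSCSI
# initiator service, then discover and connect to the array's target.
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# Register the array's iSCSI target portal (example address only).
New-IscsiTargetPortal -TargetPortalAddress "192.168.50.10"

# Connect persistently so the session is restored after a reboot.
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true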
To implement this approach, the administrator must create dedicated Hyper-V virtual switches and bind
them to appropriate physical NICs in the hosts. This ensures that the virtual machines can communicate
with the iSCSI storage on the appropriate network/VLAN. After configuration, the administrator must use
the guest operating system's IQN from the iSCSI initiator to present the appropriate LUNs directly to the
virtual machine over the virtual networks. In addition, vNIC features like jumbo frames and certain other
offload capabilities can help to increase performance and throughput over the network. It is important to
note that if you intend to run a virtual machine with In-Guest iSCSI on top of a Hyper-V cluster, the same
iSCSI virtual switches must be created on all cluster nodes, so that connectivity to the underlying storage
is not lost when the virtual machine migrates around the cluster.
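As a rough sketch, the host-side and guest-side configuration might look like the following; the switch,
adapter, virtual machine, and VLAN values are examples only:

# On each Hyper-V cluster node: create an identically named external virtual
# switch bound to the physical NIC on the iSCSI network (names are examples).
New-VMSwitch -Name "iSCSI-Switch" -NetAdapterName "NIC3" -AllowManagementOS $false

# Add a dedicated vNIC to the Exchange virtual machine and place it on the
# iSCSI VLAN (VLAN ID 50 is a placeholder).
Add-VMNetworkAdapter -VMName "EXCH01" -Name "iSCSI-vNIC" -SwitchName "iSCSI-Switch"
Set-VMNetworkAdapterVlan -VMName "EXCH01" -VMNetworkAdapterName "iSCSI-vNIC" -Access -VlanId 50

# Inside the guest: enable jumbo frames on the corresponding adapter. The
# adapter name seen in the guest and the 9014-byte value are environment specific.
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -RegistryKeyword "*JumboPacket" -RegistryValue 9014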
For resiliency, the administrator may want to use multiple vNICs to connect the virtual machine to the
iSCSI SAN. If this is the case, it is important to enable and configure MPIO, as discussed earlier, to ensure
optimal performance and resiliency.
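A minimal sketch of that MPIO configuration inside the guest operating system might look like this
(a restart may be required after enabling the automatic claim):

# Inside the guest: install the Multipath I/O feature, let the Microsoft DSM
# automatically claim iSCSI devices, and set a load-balancing policy.
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR   # round robin across paths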
Virtual Fibre Channel
Similar to In-Guest iSCSI, Virtual Fibre Channel for Hyper-V enables a virtual machine to connect to FC
storage directly, bypassing the host operating system. Virtual FC for Hyper-V provides direct SAN access
from the guest operating system by using standard World Wide Node Names (WWNNs) and World Wide
Port Names (WWPNs) associated with a virtual machine. Virtual FC for Hyper-V also makes it possible to
run the Failover Clustering feature inside the guest operating systems of virtual machines connected to
the underlying, shared FC storage.
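As an illustration, the host-side configuration might resemble the following sketch; the virtual SAN name
and the physical HBA World Wide Names are placeholders for your fabric:

# On the Hyper-V host: define a virtual SAN bound to a physical HBA port,
# identified here by placeholder WWNN/WWPN values.
New-VMSan -Name "Production-SAN" -WorldWideNodeName "C003FF0000FFFF00" -WorldWidePortName "C003FF5778E50002"

# Add a synthetic Fibre Channel adapter to the virtual machine; a WWNN/WWPN
# pair for the VM is generated automatically.
Add-VMFibreChannelHba -VMName "EXCH01" -SanName "Production-SAN"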
For virtualizing Exchange 2013, Virtual FC for Hyper-V allows you to use existing FC investments to
achieve the highest levels of storage performance, while also retaining support for virtual machine live
migration and MPIO.