configuration in the OS. If this scenario involved an iSCSI boot use case, specifying the Native VLAN would be a requirement.
The bandwidth minimum and maximum will both be set to 10 Gb in this case, since this port will not be divided into partitions.
Redundancy is enabled on this Virtual NIC, which creates two iSCSI connections with identical settings: one on fabric A1 and one on fabric A2.
A Virtual Identity Pool is specified from which to consume iSCSI IQN/MAC identities. Finally, the networks that will be applied to this partition must be selected. These are the networks, or VLANs, that will be allowed to ingress the server-facing port of the I/O Module.
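As an informal illustration of these selections, the following Python sketch models the fabric A iSCSI Virtual NIC as plain data. The field names, pool name, and network label are assumptions made for the example and do not correspond to Active System Manager objects or APIs.

    # Illustrative sketch only: the fabric A iSCSI Virtual NIC settings described
    # above, modeled as plain data. Field names are hypothetical.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class VirtualNIC:
        name: str
        connection_type: str        # "iSCSI" or "LAN"
        min_bandwidth_gb: int       # guaranteed bandwidth
        max_bandwidth_gb: int       # bandwidth ceiling
        redundant: bool             # mirror the vNIC on the second I/O Module
        identity_pool: str          # pool supplying IQN/MAC identities
        networks: List[str] = field(default_factory=list)  # VLANs allowed on the server-facing port

    # The fabric A port is not partitioned, so minimum and maximum bandwidth are
    # both the full 10 Gb. Redundancy yields identical connections on A1 and A2.
    iscsi_vnic = VirtualNIC(
        name="iSCSI",
        connection_type="iSCSI",
        min_bandwidth_gb=10,
        max_bandwidth_gb=10,
        redundant=True,
        identity_pool="iSCSI-Identity-Pool",  # assumed pool name
        networks=["iSCSI-VLAN"],              # assumed network label
    )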
A similar process will be followed to create the Virtual NICs for the standard Ethernet connectivity. For the B fabric devices, each port will be divided into four partitions, each carrying different VLANs. A new Virtual NIC will be created for the VMware management traffic. This will be used for communication between the vSphere console and the ESXi hypervisor hosts. The connection type is specified as LAN, and a Native VLAN is not selected here. One Gb of the available 10 Gb of bandwidth is allocated to this partition because its throughput needs are minimal. Since redundancy has been selected, an identical partition will also be configured on the second I/O Module in this fabric. The Global, or default, Virtual Identity Pool is selected, which will assign a Virtual MAC address from the pool to this partition. Finally, the network to associate with this Virtual NIC is selected, which will enable that VLAN traffic on the server-facing port of the I/O Module.
This process is repeated three more times for each of the other Virtual NICs that need to be created on the host. Two Virtual NICs, or partitions, will be created for the Virtual Machine networks on VLANs 20 and 23 and allocated 4 Gb each. Finally, one Virtual NIC will be created on VLAN 22 for the vMotion traffic between the hosts in the cluster.
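To summarize the fabric B layout described above, the following Python sketch lists the four partitions with their VLANs and bandwidth allocations. The partition names are illustrative, the management VLAN is not specified in this walkthrough, and the vMotion allocation is assumed to be the remaining 1 Gb so that the total matches the 10 Gb available on the port.

    # Illustrative sketch: the four fabric B partitions described above.
    # Names are hypothetical; the vMotion bandwidth is an assumption chosen so
    # the allocations sum to the 10 Gb available on the port.
    fabric_b_partitions = [
        {"name": "Hypervisor Management", "vlan": None, "bandwidth_gb": 1},  # VLAN not given in this walkthrough
        {"name": "VM Network 20",         "vlan": 20,   "bandwidth_gb": 4},
        {"name": "VM Network 23",         "vlan": 23,   "bandwidth_gb": 4},
        {"name": "vMotion",               "vlan": 22,   "bandwidth_gb": 1},  # assumed allocation
    ]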
Once the Virtual NIC configuration has been completed, the deployment template will present a summary screen with the configuration choices. Here you can view the bandwidth specified for each NIC. For the fabric A NICs, the full 10 Gb is allocated to a single vNIC, or port, since in this case the CNA is not partitioned. For fabric B, which carries the standard Ethernet traffic, the port has been divided into four partitions, and the total allocated bandwidth adds up to the 10 Gb available on that port. It is important to note that you should never exceed the device's bandwidth limitations, as this could cause network issues.
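The arithmetic behind this guideline can be sketched as a simple check; this is only an illustration of the rule, not a feature of Active System Manager.

    # Illustration of the bandwidth rule above: the bandwidth assigned to the
    # partitions of a port should not exceed the port's capacity.
    PORT_CAPACITY_GB = 10

    def validate_port_allocation(allocations_gb):
        """Return True if the partition allocations fit within the port capacity."""
        return sum(allocations_gb) <= PORT_CAPACITY_GB

    # Fabric A: a single unpartitioned vNIC uses the full 10 Gb.
    assert validate_port_allocation([10])

    # Fabric B: management (1 Gb), two VM networks (4 Gb each), and vMotion
    # (assumed 1 Gb) together consume exactly the 10 Gb available.
    assert validate_port_allocation([1, 4, 4, 1])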
“PCI” was selected as the Virtual NIC Mapping Order. The Virtual NIC Mapping Order determines how virtual NICs are mapped to the partitions on each port of an adapter. Selecting Physical Partition consecutively assigns virtual NICs to partitions (for example, Port1/Partition0, then Port1/Partition1, and so on). Selecting PCI Function assigns virtual NICs by alternating between ports (for example, Port1/Partition0, then Port2/Partition0, and so on). Different operating systems enumerate partitions in different orders; for example, RHEL 6.2 and ESX 5.0 both use PCI Function order to enumerate partitions.
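The difference between the two mapping orders can be illustrated with a short Python sketch that enumerates the same adapter both ways; two ports with four partitions each are assumed for the example.

    # Illustration of the two Virtual NIC Mapping Order options described above.
    # Two ports with four partitions each are assumed for the example.
    ports = ["Port1", "Port2"]
    partitions = ["Partition0", "Partition1", "Partition2", "Partition3"]

    # Physical Partition order: fill one port completely before moving to the next.
    physical_partition_order = [f"{port}/{part}" for port in ports for part in partitions]
    # -> Port1/Partition0, Port1/Partition1, ..., Port2/Partition3

    # PCI Function order: alternate between ports at each partition index.
    pci_function_order = [f"{port}/{part}" for part in partitions for port in ports]
    # -> Port1/Partition0, Port2/Partition0, Port1/Partition1, ...

    print(physical_partition_order)
    print(pci_function_order)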
Below is an example that shows four Active System Manager virtual NICs where “Redundancy” has not been selected in the deployment template. On the left side of this diagram, you can see how the virtual NICs in Active System Manager map to the physical NIC partitions when mapping is based on PCI Function order. Note that the four virtual NICs have been spread evenly across the partitions of both