White Paper
Use Case Study: Using Active System For VMware Cluster Environment Configuration
Fabric A, iSCSI-Only Configuration (No Data Center Bridging)
For this use case, Fabric A will be set up by Active System Manager for an iSCSI-only configuration. An
iSCSI-only network carries iSCSI traffic and no other traffic. In the absence of a technology such as Data
Center Bridging (DCB), which enables converging LAN and SAN (iSCSI) traffic on the same fabric, an
iSCSI-only network configuration is required to ensure reliability for high-priority SAN traffic.
All devices in the storage data path must be enabled for flow control and jumbo frames. It is also
recommended (but not required) that DCB be disabled on all network devices from within Active
System Manager, since the network will carry only one type of traffic. This will ensure that all devices—
including CNAs, iSCSI initiators, I/O aggregators, Top-of-Rack (ToR) switches, and storage arrays—share
the same DCB-disabled configuration. DCB is an all-or-nothing configuration: if you choose to enable
DCB, it must be enabled end-to-end, from your storage to your CNAs. In this case, the DCB settings in
the ASM template and on the ToR switches drive the configuration of your CNAs, which should operate
in "willing" mode and obtain their DCB configuration from the upstream switch.
EqualLogic storage also operates in “willing” mode and will obtain its DCB or non-DCB configuration
from the ToR distribution switches to which it is connected.
In this configuration, the jumbo frame MTU size on the switches is set to 12000, the largest packet size
supported by the S4810 switches. Be sure to set the MTU size in your own environment based on the
packet sizes your devices support. In this example, the ESXi hosts being configured for the cluster
support a maximum MTU of 9000, so the effective MTU for those paths adjusts down to 9000.
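On the ESXi side, the host MTU can be aligned with the fabric using esxcli. The sketch below is illustrative only: the vSwitch name (vSwitch1) and vmkernel interface name (vmk1) are assumptions, and the names in your environment will differ.

```
# Set jumbo frames on the standard vSwitch carrying iSCSI traffic
# (vSwitch1 is an illustrative name)
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# Set the MTU on the iSCSI vmkernel interface (vmk1 is illustrative)
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
```

Both the vSwitch and the vmkernel port must be set; a vmkernel interface cannot use a larger MTU than its vSwitch.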
Active System Manager configures only the hardware within the Active Infrastructure blade chassis
environment, so the distribution layer and above must be configured manually or with other tools.
Active System Manager also does not manage storage devices, so these must be configured by the
system administrator as well. In this example, two Dell Force10 S4810 switches are used for the
distribution layer devices and two Dell EqualLogic PS6010X are used as the storage arrays. The S4810
switches are configured as a set of VLT peers, and the storage arrays are connected directly to the
distribution layer device. These switches connect the downstream Dell Force10 PowerEdge M I/O
aggregator switches in the chassis with the upstream EqualLogic storage arrays.
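A minimal sketch of the VLT peering on one of the S4810 distribution switches is shown below. The domain ID, port-channel number, 40GbE port numbers, and backup destination address are illustrative assumptions, not values from this deployment; consult the FTOS configuration guide for your firmware version.

```
! First S4810 VLT peer (FTOS) -- all identifiers are illustrative
interface Port-channel 128
 description VLT-peer-link
 channel-member fortyGigE 0/56,0/60
 no shutdown
!
vlt domain 1
 peer-link port-channel 128
 back-up destination 172.16.1.2
 primary-priority 1
```

The second peer carries a mirrored configuration with its own backup destination address pointing back at the first switch's out-of-band management IP.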
In an iSCSI-only configuration, the distribution switches have only four types of ports:
• Out-of-band management
• VLT peer ports
• Downlinks to the I/O aggregator in the M1000e chassis
• Connections to the Dell EqualLogic iSCSI storage array
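Per the requirements stated earlier, the storage-facing and downlink ports on the distribution switches must carry jumbo frames and flow control. A hedged FTOS sketch follows; the interface number and description are illustrative assumptions:

```
! Storage-facing port on an S4810 (FTOS) -- interface number is illustrative
interface TenGigabitEthernet 0/4
 description EqualLogic-storage-port
 mtu 12000
 flowcontrol rx on tx off
 switchport
 no shutdown
```

The same MTU and flow-control settings apply to the downlinks toward the I/O aggregators, so that jumbo frames and pause behavior are consistent along the entire storage data path.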
The Virtual Link Trunking (VLT) peer link is configured using 40GbE QSFP ports of the S4810 distribution
layer switches. VLT is a Dell Force10 technology that lets you create a single link aggregation group
(LAG) port channel using ports from two different switch peers, providing load balancing and
redundancy in the event of a switch failure. This configuration also provides a loop-free environment
without the use of spanning tree. Ports are identified in the same manner as on two switches not
connected via VLT (port numbers do not change as they would if the switches were stacked). You can keep one peer
switch up while updating the firmware on the other peer. In contrast to stacking, these switches