Server LAN Configuration:
Each PowerEdge M610 server has an onboard Broadcom 5709 dual-port 1GbE NIC.
Dual PowerConnect M6220 switches were installed in fabric A of the blade chassis. We
connected the onboard NICs to each of the M6220 switches.
The two M6220 switches were inter-connected using a 2 x 10GbE LAG.
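For illustration, a 2 x 10GbE LAG between the two M6220 switches could be configured roughly as follows on each switch. This is a sketch only: the LAG number, the 1/xg1 and 1/xg2 interface names, and the exact command forms are assumptions that should be verified against the PowerConnect CLI reference for the installed firmware.

    console# configure
    console(config)# interface range ethernet 1/xg1-1/xg2
    console(config-if)# channel-group 1 mode auto
    console(config-if)# exit

The same port-channel, with matching member ports, must be defined on both switches so the LAG forms across the inter-connect.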
SAN Configuration:
Each PowerEdge M610 server included two Broadcom NetXtreme II 5709 quad port NIC
mezzanine cards. We assigned one card to fabric B and the other to fabric C.
Note: The tests for this white paper used only 2 NIC ports per card as shown in Figure 3.
We installed dual PowerConnect M6348 switches into fabric B and fabric C on the blade server
chassis. The NIC mezzanine cards connected to these switches via the blade chassis midplane.
Note: PowerConnect M6220 switches can be used instead if only two ports per fabric
are needed per server.
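Before configuring the virtual switches, it can be useful to confirm that the fabric B and fabric C mezzanine ports are visible on each ESX host. The following service console command lists the physical NICs; the vmnic numbering it reports depends on PCI enumeration order:

    # List physical NICs visible to the ESX host; the fabric B and C
    # mezzanine ports should appear alongside the onboard fabric A ports.
    esxcfg-nics -l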
We used four PowerConnect 6248 switches as our external SAN switches. We connected each
of our EqualLogic storage arrays to these switches. Each port on the quad-port EqualLogic
storage controller (PS6000XV or PS6000XVS) was connected to a different 6248 switch.
Figure 3 shows these connection paths.
The PowerConnect 6248 switches were configured with 10GbE SFP+ uplink modules in both
module bays. As shown in Figure 3, one module was used to create a 2 x 10GbE LAG uplink to
the M6348 blade switches. The other module was used to create a ring of 10GbE links
between each 6248. Spanning Tree Protocol (STP) settings were adjusted to block one link in
the ring, creating a logical disconnect in the loop; if any of the other links in the loop fails,
STP re-enables the blocked link.
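One common way to choose where STP opens the ring is to raise the path cost of the selected link so that it loses the tie-break and is blocked. The sketch below assumes an interface-level spanning-tree cost command as found on PowerConnect 62xx firmware; the interface name (1/xg3) and the cost value are placeholders, so verify the syntax against the 6248 CLI guide before use.

    console(config)# interface ethernet 1/xg3
    console(config-if)# spanning-tree cost 200000
    console(config-if)# exit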
Each PowerConnect M6348 switch on the chassis connected to the external SAN switches
using a 2 x 10GbE Link Aggregation Group (LAG). Note: this infrastructure design could scale
out with the addition of a second M1000e chassis. SAN connectivity in this case would be
accomplished by using a single 10GbE LAG inter-connect between the M6348 switch modules
in each blade chassis and the external SAN switches. The switch modules in a second blade
server chassis could then inter-connect to the same external SAN switches by using the
second 10GbE LAG connection ports on each switch.
3.2 ESX Host Network Configuration
We configured two virtual switches, vSwitch0 and vSwitchISCSI, on each ESX host as shown in Figure
4. Virtual switch configuration details:
vSwitch0: Provided connection paths for all Server LAN traffic. We assigned the
physical adapters corresponding to the two onboard NICs (fabric A) to this switch.

vSwitchISCSI: Provided connection paths for all iSCSI SAN traffic. We assigned four
physical adapters to this vSwitch: two connecting to fabric B and two connecting to
fabric C. Four VMkernel ports were created and attached to the ESX iSCSI software
initiator. Each VMkernel port was bound to one of these physical adapters.
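A sketch of this configuration from the ESX 4.1 service console follows. The vmnic and vmk numbers, port group names, IP addresses, and the vmhba number of the software iSCSI adapter are all illustrative assumptions; also note that limiting each VMkernel port group to a single active uplink (required for iSCSI port binding) is set through the vSphere Client rather than the commands shown here.

    # Create the iSCSI vSwitch and attach the four SAN-facing uplinks.
    # vmnic numbering is an assumption; confirm with: esxcfg-nics -l
    esxcfg-vswitch -a vSwitchISCSI
    esxcfg-vswitch -L vmnic2 vSwitchISCSI
    esxcfg-vswitch -L vmnic3 vSwitchISCSI
    esxcfg-vswitch -L vmnic4 vSwitchISCSI
    esxcfg-vswitch -L vmnic5 vSwitchISCSI

    # Create one VMkernel port group per uplink, each with an IP on the SAN subnet.
    esxcfg-vswitch -A iSCSI1 vSwitchISCSI
    esxcfg-vmknic -a -i 10.10.5.11 -n 255.255.255.0 iSCSI1
    # ...repeat for iSCSI2 through iSCSI4 with their own addresses...

    # Bind each VMkernel port to the ESX software iSCSI adapter.
    esxcli swiscsi nic add -n vmk0 -d vmhba33
    esxcli swiscsi nic add -n vmk1 -d vmhba33
    esxcli swiscsi nic add -n vmk2 -d vmhba33
    esxcli swiscsi nic add -n vmk3 -d vmhba33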