The PowerEdge R620 servers are designed for high availability, with redundant fans and redundant power supplies distributed across independent power sources. The servers also use PERC H710 controllers with two hard disks configured in RAID-1, so that a single disk failure does not bring the server down.
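To make the mirroring behavior concrete, the sketch below models a two-disk RAID-1 set in Python: every write is duplicated to both disks, and reads continue from the surviving disk after a single failure. The class is purely illustrative; it is not a PERC H710 interface.

```python
# Minimal illustration of RAID-1 mirroring (hypothetical model, not a PERC H710 API).
class Raid1Mirror:
    def __init__(self):
        # Two disks, each a simple block map; "healthy" tracks disk state.
        self.disks = [{"blocks": {}, "healthy": True},
                      {"blocks": {}, "healthy": True}]

    def write(self, lba, data):
        # RAID-1 duplicates every write to both members of the mirror.
        for disk in self.disks:
            if disk["healthy"]:
                disk["blocks"][lba] = data

    def read(self, lba):
        # A read is served by any healthy member, so a single disk
        # failure does not interrupt access to the data.
        for disk in self.disks:
            if disk["healthy"] and lba in disk["blocks"]:
                return disk["blocks"][lba]
        raise IOError("volume offline: no healthy mirror member holds the block")

    def fail_disk(self, index):
        self.disks[index]["healthy"] = False


mirror = Raid1Mirror()
mirror.write(0, b"boot sector")
mirror.fail_disk(0)                      # single disk failure
assert mirror.read(0) == b"boot sector"  # data still served from the survivor
```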
6.5 Storage Architecture
Dell EqualLogic PS6110 provides capabilities essential to the Active System 800m design, such as 10Gb connectivity, flexible RAID array and volume configuration, thin provisioning, and storage tiering.
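Of these capabilities, thin provisioning is easy to illustrate: the array presents the full volume size to the host while consuming physical capacity only as data is written. The Python sketch below is a hypothetical model of that behavior, not an EqualLogic interface.

```python
# Illustrative sketch of thin provisioning (hypothetical model, not an
# EqualLogic interface): the host sees the full volume size up front,
# but physical capacity is allocated only as blocks are written.
class ThinVolume:
    BLOCK_GB = 1  # assumed allocation granularity for this sketch

    def __init__(self, reported_gb):
        self.reported_gb = reported_gb  # size presented to the host
        self.allocated = set()          # blocks backed by physical storage

    def write(self, block_index):
        self.allocated.add(block_index)  # physical allocation on first write

    @property
    def consumed_gb(self):
        return len(self.allocated) * self.BLOCK_GB


vol = ThinVolume(reported_gb=500)        # host sees 500 GB immediately
vol.write(0)
vol.write(1)
print(vol.reported_gb, vol.consumed_gb)  # -> 500 2
```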
6.5.1 EqualLogic Group and Pool Configuration
Each EqualLogic array (or member) is assigned to a particular group. Groups simplify administration by allowing all members in a group to be managed from a single interface. Each group contains one or more storage pools; each pool must contain one or more members, and each member belongs to exactly one storage pool.
iSCSI volumes are created at the pool level. When multiple members are placed in a single pool, volume data is distributed across the members of the pool. Because the data then spans a larger number of disks, the potential performance of the iSCSI volumes in the pool increases with each member added.
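The relationships above (a group manages its members from one interface, a pool contains one or more members, a member belongs to exactly one pool, and volumes live at the pool level) can be captured in a short illustrative model. The classes below are hypothetical and sketch only the constraints; they are not the EqualLogic Group Manager API.

```python
# Illustrative model of EqualLogic group/pool/member relationships
# (hypothetical classes, not the EqualLogic Group Manager API).
class Member:
    """A single PS6110 array; it may belong to at most one pool."""
    def __init__(self, name, disks):
        self.name, self.disks, self.pool = name, disks, None


class Pool:
    def __init__(self, name):
        self.name, self.members, self.volumes = name, [], []

    def add_member(self, member):
        # Enforce: each member is associated with only one storage pool.
        if member.pool is not None:
            raise ValueError(f"{member.name} already belongs to pool {member.pool.name}")
        member.pool = self
        self.members.append(member)

    def create_volume(self, name, size_gb):
        # Volumes are created at the pool level; their data is distributed
        # across all members, so each added member widens the stripe.
        if not self.members:
            raise ValueError("a pool must contain at least one member")
        self.volumes.append({"name": name, "size_gb": size_gb})
        return name


class Group:
    """Single management interface for all member arrays in the group."""
    def __init__(self, name):
        self.name, self.pools = name, []


group = Group("AS800m-group")
pool = Pool("default")
group.pools.append(pool)
pool.add_member(Member("PS6110-1", disks=24))
pool.add_member(Member("PS6110-2", disks=24))  # widens the stripe for all volumes
pool.create_volume("hyperv-csv-01", size_gb=500)
```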
6.5.1.1 Storage Network
Figure 7 below illustrates the SAN connectivity. Each Dell PowerEdge M620 blade server is configured with a Broadcom BCM 57810 converged network adapter as its Network Daughter Card (NDC). The BCM 57810 enumerates an iSCSI initiator device in addition to the standard LAN device, resulting in a converged network fabric. Each port is wired across the Dell PowerEdge M1000e mid-plane to the Dell PowerEdge M I/O Aggregators in Fabric A. These modules are trunked to the Dell Force10 S4810 top-of-rack (ToR) switches. The 10GbE ports of the Dell EqualLogic PS6110 Series arrays are also connected to the S4810 ToR switches. For the management servers, each PowerEdge R620 server is configured with a BCM 57810 add-in PCIe card and connected to the Force10 S4810 switches.
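As a compact summary of this wiring, the sketch below records the path each node type takes to the SAN, using the components named above; the data structure is illustrative only, not a managed-switch interface.

```python
# Hypothetical sketch of the SAN connectivity paths described above;
# the dictionary is illustrative, not a switch or fabric API.
san_paths = {
    # Blades: BCM 57810 NDC -> M1000e mid-plane -> IOA (Fabric A) -> ToR
    "PowerEdge M620":    ["BCM 57810 NDC", "M1000e mid-plane",
                          "PE M I/O Aggregator (Fabric A)", "Force10 S4810 ToR"],
    # Management servers: BCM 57810 add-in PCIe card -> ToR
    "PowerEdge R620":    ["BCM 57810 PCIe", "Force10 S4810 ToR"],
    # Storage: 10GbE ports connect directly to the ToR switches
    "EqualLogic PS6110": ["10GbE ports", "Force10 S4810 ToR"],
}

# Every path terminates at the S4810 ToR switches, which form the common
# SAN fabric between the initiators (servers) and the targets (arrays).
assert all(path[-1] == "Force10 S4810 ToR" for path in san_paths.values())
```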
Both the blade servers and the rack servers use Data Center Bridging (DCB) to guarantee bandwidth for the traffic flowing from each node to the SAN. The DCB configuration uses DCBX, Priority Flow Control (PFC), and Enhanced Transmission Selection (ETS). The ETS settings default to 50% of the bandwidth for LAN traffic and 50% for SAN traffic, and can be adjusted to customer requirements. PFC is configured to make the iSCSI queue lossless.
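A short calculation shows what the default ETS split means on a 10GbE port: each traffic class is guaranteed half the link under congestion, and PFC makes only the iSCSI queue lossless. The structure below is an illustrative model, not Force10 switch configuration syntax.

```python
# Illustrative model of the default DCB settings on a 10GbE port
# (hypothetical structure, not Force10 switch configuration syntax).
LINK_GBPS = 10

ets_allocation = {"LAN": 0.50, "SAN (iSCSI)": 0.50}  # ETS shares, tunable per customer
pfc_lossless = {"SAN (iSCSI)"}                       # PFC pauses only the iSCSI queue

assert sum(ets_allocation.values()) == 1.0  # ETS shares must cover the full link

for traffic_class, share in ets_allocation.items():
    guaranteed = share * LINK_GBPS  # minimum bandwidth under congestion
    behavior = "lossless (PFC)" if traffic_class in pfc_lossless else "lossy"
    print(f"{traffic_class}: {guaranteed:.1f} Gbps guaranteed, {behavior}")

# Note: ETS guarantees are minimums, not caps; when the SAN queue is
# quiet, LAN traffic can burst above its 5 Gbps share, and vice versa.
```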