7 Converged Network Architecture
One of the key attributes of the Active System 800v is the convergence of SAN and LAN over the same
network infrastructure: LAN and iSCSI SAN traffic share the same physical connections from servers to
storage. The converged network is designed using Data Center Bridging (DCB, a set of IEEE 802.1
standards) and Data Center Bridging Exchange (DCBX, carried over LLDP, IEEE 802.1AB). The converged
network design drastically reduces cost and complexity by cutting down the number of components and
physical connections, along with the effort required to deploy, configure, and manage the infrastructure.
Data Center Bridging is a set of related standards that enhance Ethernet capabilities, especially in
data center environments, through converged network connectivity. The functionality provided by
DCB and DCBX includes:
- Priority Flow Control (PFC): Provides zero packet loss under congestion through a link-level
  flow control mechanism that can be controlled independently for each priority.
- Enhanced Transmission Selection (ETS): Provides a framework and mechanism for bandwidth
  management across different traffic types by assigning bandwidth to different frame
  priorities (illustrated in the sketch after this list).
- Data Center Bridging Exchange (DCBX): Conveys the capabilities and configuration of the
  above features between neighbors to ensure consistent configuration across the network.
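The bandwidth-management behavior of ETS can be illustrated with a short sketch. The following Python example is illustrative only: the traffic-class names, share percentages (60% LAN / 40% iSCSI), and offered loads are assumptions chosen for the example, not values mandated by the Active System 800v design. It shows the key ETS property that a class is guaranteed its configured share but may borrow bandwidth that another class is not currently using.

```python
"""Illustrative sketch of ETS-style bandwidth allocation on one 10GbE link.

The traffic classes, share percentages, and offered loads below are
hypothetical examples, not values taken from the Active System 800v design.
"""

LINK_GBPS = 10.0

# Each traffic class gets a guaranteed minimum share; capacity a class does
# not use may be borrowed by classes that still have demand.
classes = {
    "LAN":   {"share": 0.6, "offered": 3.0},   # Gbps currently offered
    "iSCSI": {"share": 0.4, "offered": 6.0},
}

def ets_allocate(classes, link=LINK_GBPS):
    """Return per-class allocation: guaranteed share first, then
    redistribute leftover capacity to classes with remaining demand."""
    alloc = {}
    leftover = 0.0
    # Pass 1: give each class min(guarantee, offered load).
    for name, c in classes.items():
        guarantee = c["share"] * link
        alloc[name] = min(guarantee, c["offered"])
        leftover += guarantee - alloc[name]
    # Pass 2: share leftover capacity among classes that are still
    # backlogged, in proportion to their configured shares.
    backlogged = {n: c for n, c in classes.items() if c["offered"] > alloc[n]}
    total_share = sum(c["share"] for c in backlogged.values())
    for name, c in backlogged.items():
        extra = leftover * (c["share"] / total_share) if total_share else 0.0
        alloc[name] = min(c["offered"], alloc[name] + extra)
    return alloc

if __name__ == "__main__":
    for name, gbps in ets_allocate(classes).items():
        print(f"{name}: {gbps:.1f} Gbps")
```

With these example numbers, LAN uses only 3 Gbps of its 6 Gbps guarantee, so iSCSI is allowed to grow from its 4 Gbps guarantee to its full 6 Gbps demand.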
Dell Force10 S4810 switches, Dell PowerEdge M I/O Aggregator modules, Broadcom 57810-k Dual port
10GbE KR Blade NDCs, and EqualLogic PS6110 iSCSI SAN arrays enable the Active System 800v to use
these technologies, features, and capabilities to support its converged network architecture.
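DCBX is what keeps the PFC and ETS settings consistent across these devices: the switch advertises its configuration over LLDP, and a "willing" endpoint such as a server NIC adopts it. The sketch below is a simplified, hypothetical model of that behavior; it does not reproduce the actual LLDP TLV formats, and the example values are assumptions for illustration only.

```python
"""Simplified, hypothetical model of a DCBX-style configuration exchange.

Real DCBX carries ETS/PFC parameters in LLDP TLVs; this sketch only shows
the 'willing' behavior conceptually: a willing port adopts the peer's
advertised configuration, otherwise the two configurations must match.
"""

from dataclasses import dataclass, field


@dataclass
class DcbConfig:
    # Bandwidth percentage per priority group and PFC-enabled priorities.
    ets_shares: dict = field(default_factory=dict)   # e.g. {"lan": 60, "san": 40}
    pfc_priorities: frozenset = frozenset()          # e.g. frozenset({4})
    willing: bool = False


def resolve(local: DcbConfig, peer: DcbConfig) -> DcbConfig:
    """Return the operational configuration for the local port."""
    if local.willing and not peer.willing:
        # Adopt the peer's advertised parameters (typical for a server NIC
        # learning its configuration from the upstream switch).
        return DcbConfig(dict(peer.ets_shares), peer.pfc_priorities, willing=True)
    if (local.ets_shares != peer.ets_shares
            or local.pfc_priorities != peer.pfc_priorities):
        raise ValueError("DCB configuration mismatch between neighbors")
    return local


switch = DcbConfig({"lan": 60, "san": 40}, frozenset({4}), willing=False)
nic = DcbConfig(willing=True)
print(resolve(nic, switch))   # the NIC adopts the switch's ETS/PFC settings
```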
7.1 Converged Network Connectivity
The Active System 800v design is based on a converged network: all LAN and iSCSI traffic within the
solution shares the same physical connections. This section describes the converged network
architecture of the Active System 800v.
Connectivity between hypervisor hosts and converged network switches: The compute cluster
hypervisor hosts, PowerEdge M620 blade servers, connect to the Force10 S4810 switches through the
PowerEdge M I/O Aggregator modules in the PowerEdge M1000e blade chassis. The management
cluster hypervisor hosts, PowerEdge R620 rack servers, connect directly to the Force10 S4810 switches.
Connectivity between the Dell PowerEdge M620 blade servers and Dell PowerEdge M I/O
Aggregators: The internal architecture of the PowerEdge M1000e chassis provides connectivity
between the Broadcom 57810-k Dual port 10GbE KR Blade NDC in each PowerEdge M620 blade
server and the internal ports of the PowerEdge M I/O Aggregator. The PowerEdge M I/O
Aggregator has 32 x 10GbE internal ports. With one Broadcom 57810-k Dual port 10GbE KR
Blade NDC in each PowerEdge M620 blade, blade servers 1-16 connect to internal ports 1-16
of each of the two PowerEdge M I/O Aggregators. Internal ports 17-32 of each PowerEdge M I/O
Aggregator are disabled and not used.
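The fixed mapping described above (blade slot n to internal port n on each of the two aggregators, with ports 17-32 unused when each blade carries a single dual-port NDC) can be expressed as a small sketch. The data structure and the aggregator names below are descriptive assumptions for illustration, not output from any Dell tool.

```python
"""Illustration of the blade-to-internal-port mapping described above.

With one dual-port Broadcom 57810-k NDC per M620 blade, blade n uses
internal port n on each of the two M I/O Aggregators; internal ports
17-32 remain disabled. The names used here are descriptive placeholders.
"""

NUM_BLADES = 16
INTERNAL_PORTS = 32
AGGREGATORS = ("IOA-A1", "IOA-A2")   # the two M I/O Aggregator modules

port_map = {
    ioa: {
        port: (f"M620-blade-{port}" if port <= NUM_BLADES else "disabled")
        for port in range(1, INTERNAL_PORTS + 1)
    }
    for ioa in AGGREGATORS
}

# Blade 5's two NDC ports land on internal port 5 of each aggregator.
print(port_map["IOA-A1"][5], port_map["IOA-A2"][5])   # M620-blade-5 M620-blade-5
print(port_map["IOA-A1"][20])                         # disabled
```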