Reference Architecture

The figure shows high-level logical connectivity between various components. Subsequent sections of
this document provide more detailed connectivity information.
6 Dell Blade Network Architecture
In Active System 800v, Fabric A in the PowerEdge M1000e blade chassis contains two Dell PowerEdge M
I/O Aggregator modules, one in I/O module slot A1 and the other in slot A2, and is used for converged
LAN and SAN traffic. Fabric B and Fabric C (I/O module slots B1, B2, C1, and C2) are not used.
The PowerEdge M620 blade servers use the Broadcom 57810-k Dual Port 10GbE KR Blade NDC to
connect to Fabric A. The Dell PowerEdge M I/O Aggregator modules uplink to Dell Force10 S4810
network switches, providing both LAN and SAN connectivity.
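To make the slot-level layout concrete, the following Python sketch models the chassis fabric as a simple mapping from I/O module slots to their role in this solution. The slot labels, module names, and uplink switch come from the text above; the data structure and print logic are illustrative only, not part of any Dell tooling.

# Illustrative sketch of the M1000e fabric layout in Active System 800v.
# Slot labels mirror the chassis; the structure itself is an assumption.
io_modules = {
    "A1": {"module": "PowerEdge M I/O Aggregator", "traffic": "converged LAN/SAN"},
    "A2": {"module": "PowerEdge M I/O Aggregator", "traffic": "converged LAN/SAN"},
    "B1": None, "B2": None,  # Fabric B not used
    "C1": None, "C2": None,  # Fabric C not used
}

# Each I/O Aggregator uplinks to the Force10 S4810 switches for LAN and SAN.
uplinks = {"A1": "Force10 S4810", "A2": "Force10 S4810"}

for slot, info in io_modules.items():
    if info:
        print(f"Slot {slot}: {info['module']} carrying {info['traffic']}, "
              f"uplinked to {uplinks[slot]}")
    else:
        print(f"Slot {slot}: not used")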
Figure 3 below illustrates how the fabrics are populated in the PowerEdge M1000e blade server chassis
and how the I/O modules are utilized.
Figure 3: I/O Connectivity for PowerEdge M620 Blade Server
Network Interface Card Partitioning (NPAR): NPAR allows splitting the 10GbE pipe on the NDC with no
specific configuration requirements in the switches. With NPAR, administrators can split each 10GbE
port of an NDC into four separate partitions, or physical functions, and allocate bandwidth and
resources to each as needed. Each partition is enumerated as a PCI Express function that appears as a
separate physical NIC to the server BIOS, operating system, and hypervisor. The Active System 800v
solution takes advantage of NPAR: partitions are created for the various traffic types, and bandwidth
is allocated to each, as described in the following section.
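As a rough illustration of the partitioning model described above, the Python sketch below splits one 10GbE port into four partitions with relative bandwidth weights. The traffic-type names and weight values are hypothetical placeholders, not the actual Active System 800v allocation, which the following section describes.

# Minimal sketch of NPAR-style partitioning: one 10GbE port split into four
# partitions, each with a relative bandwidth weight. Traffic names and weight
# values are hypothetical placeholders, not the solution's real allocation.
PORT_SPEED_GBPS = 10
MAX_PARTITIONS = 4

partitions = {
    # partition name: relative bandwidth weight (weights total 100)
    "hypervisor_management": 10,
    "vMotion": 20,
    "virtual_machine_lan": 30,
    "iscsi_storage": 40,
}

assert len(partitions) <= MAX_PARTITIONS, "at most four partitions per port"
assert sum(partitions.values()) == 100, "relative weights must sum to 100"

for name, weight in partitions.items():
    # Each partition is enumerated as its own PCIe function, so the OS and
    # hypervisor see it as a separate physical NIC.
    share_gbps = PORT_SPEED_GBPS * weight / 100
    print(f"{name}: weight {weight} -> {share_gbps:.1f} Gbps minimum share")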