
Dell PowerEdge M1000e Technical Guide
9 Input/Output (I/O)
9.1 Overview
The Dell™ PowerEdge™ M-series provides complete, snap-in Flex I/O scalability down to the switch interconnects. Flex I/O technology is the foundation of the M1000e I/O subsystem. Customers may mix and match I/O modules from a wide variety of options, including Cisco®, Dell PowerConnect, Fibre Channel, and InfiniBand options. The I/O modules may be installed singly or in redundant pairs. See the I/O Connectivity section in the About Your System chapter of the Dell PowerEdge Modular Systems Hardware Owner's Manual on Support.Dell.com/Manuals.
I/O modules connect to the blades through three redundant I/O fabrics. The enclosure is designed to support more than five years of I/O bandwidth growth and technology upgrades.
The I/O system offers customers a wide variety of options to meet nearly any network need:
- Complete, on-demand switch design
  - Easily scale to provide additional uplink and stacking functionality
  - No need to waste your current investment with a rip-and-replace upgrade
- Flexibility to scale Ethernet stacking and throughput
- Partnered solutions with Cisco, Emulex, and Brocade
- Quad data rate InfiniBand switch options available for HPCC
  - Up to 8 high-speed ports
- Cisco® virtual blade switch capability
- Ethernet port aggregator
  - Virtualization of Ethernet ports for integration into any Ethernet fabric
- Fibre Channel products from Brocade and Emulex offering powerful connectivity to Dell/EMC SAN fabrics
- High-availability clustering inside a single enclosure or between two enclosures
Each server module connects to traditional network topologies while providing sufficient bandwidth for multi-generational product lifecycle upgrades. I/O fabric integration encompasses networking, storage, and interprocessor communications (IPC).
9.2 Quantities and Priorities
There are three supported high-speed fabrics per M1000e half-height server module: two flexible fabrics using optional plug-in mezzanine cards on the server, and one connected to the LOMs on the server. The ports on the server module connect through the midplane to the associated I/O modules (IOMs) in the back of the enclosure, which then connect to the customer's LAN/SAN/IPC networks.
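The fabric layout above (one LOM fabric plus two mezzanine fabrics, each served by a redundant IOM pair) can be sketched as a simple lookup table. The IOM bay names (A1/A2, B1/B2, C1/C2) follow the M1000e's rear bay labeling; the helper function is purely illustrative, not part of any Dell tooling:

```python
# Sketch of the M1000e fabric-to-IOM mapping described above.
# Fabric A is wired to the server's LOMs; fabrics B and C are wired
# to the two optional mezzanine card slots. Each fabric terminates
# in a redundant pair of IOM bays at the rear of the enclosure.

FABRICS = {
    "A": {"source": "LOM",         "iom_slots": ("A1", "A2")},
    "B": {"source": "mezzanine 1", "iom_slots": ("B1", "B2")},
    "C": {"source": "mezzanine 2", "iom_slots": ("C1", "C2")},
}

def iom_slots_for(fabric: str) -> tuple:
    """Return the redundant IOM bay pair serving a given fabric."""
    return FABRICS[fabric]["iom_slots"]

def source_for(fabric: str) -> str:
    """Return the server-side device that drives a given fabric."""
    return FABRICS[fabric]["source"]
```

For example, `iom_slots_for("A")` returns `("A1", "A2")`, the redundant pair that the LOM ports reach through the midplane.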
The optional mezzanine cards are designed to connect through eight PCIe lanes to the server module's chipset in most cases. Mezzanine cards may have either one dual-port ASIC with a four- or eight-lane PCIe interface, or dual ASICs, each with a four-lane PCIe interface. External fabrics are routed through high-speed, 10-Gigabit-per-second-capable air dielectric connector pins through the planar and midplane. For best signal integrity, transmit and receive signals are isolated to minimize crosstalk: differential pairs are separated by ground pins, and signal connector columns are staggered to minimize signal coupling.
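The mezzanine lane configurations above translate directly into aggregate host bandwidth. The sketch below tallies lanes per card layout and converts them to approximate throughput; the PCIe generation (2.0, roughly 500 MB/s per lane per direction) is an assumption for illustration and is not stated in the text:

```python
# Approximate per-card PCIe bandwidth for the mezzanine layouts
# described above. ASSUMPTION: PCIe 2.0 signaling at ~500 MB/s per
# lane per direction; adjust LANE_MB_S for other generations.

LANE_MB_S = 500  # PCIe 2.0, per lane, per direction (approximate)

# Total lane count for each mezzanine card layout from the text.
MEZZ_CONFIGS = {
    "single dual-port ASIC, x4": 4,
    "single dual-port ASIC, x8": 8,
    "dual ASICs, x4 each": 4 + 4,
}

def card_bandwidth_mb_s(config: str) -> int:
    """Aggregate one-direction bandwidth for a mezzanine layout."""
    return MEZZ_CONFIGS[config] * LANE_MB_S
```

Under this assumption, a dual-ASIC card with two x4 links and a single-ASIC x8 card land at the same aggregate figure; the difference is how the lanes are split between ports.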