DCBx Example
The following figure shows how DCBx is used on an Aggregator installed in a Dell PowerEdge FX2 server chassis alongside the servers.
The Aggregator ports are numbered 1 to 12. Ports 1 to 8 are internal server-facing interfaces; ports 9 to 12 are uplink ports on the base module. The uplink ports are configured as DCBx auto-upstream ports and connect the Aggregator to third-party, top-of-rack (ToR) switches that are part of a Fibre Channel storage network.
The internal ports (ports 1 to 8), which connect to the 10GbE backplane, are configured as auto-downstream ports.
On the Aggregator, PFC and ETS use DCBx to exchange link-level configuration with DCBx peer devices.
Figure 31. DCBx Sample Topology
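On an Aggregator these port roles are applied automatically, but as a rough sketch of the equivalent explicit configuration on a switch running Dell Networking OS 9.x, the roles map to the dcbx port-role command under the interface LLDP context. The interface names used here (tengigabitethernet 0/9 for an uplink, 0/1 for a server-facing port) are illustrative assumptions, not taken from the figure's cabling, and exact syntax can vary by release.

    ! Illustrative sketch only; interface numbering is assumed.
    ! Uplink toward the ToR switch: accept the DCB configuration advertised by the peer.
    interface tengigabitethernet 0/9
     protocol lldp
      dcbx port-role auto-upstream
    !
    ! Internal server-facing port: propagate the learned configuration downstream.
    interface tengigabitethernet 0/1
     protocol lldp
      dcbx port-role auto-downstream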
DCBx Prerequisites and Restrictions
The following prerequisites and restrictions apply when you configure DCBx operation on a port:
● DCBx requires LLDP to be enabled in both send (TX) and receive (RX) mode on the port interface. If multiple DCBx peer ports are detected on a local DCBx interface, LLDP is shut down (a verification sketch follows this list).
● The CIN version of DCBx supports only PFC, ETS, and FCoE; it does not support iSCSI, backward congestion notification (BCN), logical link down (LLD), or network interface virtualization (NIV).
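As a hedged verification sketch, assuming Dell Networking OS 9.x EXEC-mode show commands and the illustrative uplink port 0/9 used above, the LLDP neighbor and the DCBx operational state on a port can be checked as follows; exact command forms and output fields vary by release.

    ! Confirm that an LLDP neighbor is learned on the uplink
    ! (LLDP must be active in both TX and RX for DCBx to run).
    Dell# show lldp neighbors
    ! Display the DCBx port role, operational state, and peer TLV status for the uplink.
    Dell# show interfaces tengigabitethernet 0/9 dcbx detail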