
Strict-priority groups:
If two priority groups have strict-priority scheduling, traffic assigned from the priority group with the higher priority-queue number is scheduled first. However, when three priority groups are used and two groups have strict-priority scheduling (such as groups 1 and 3 in the example), the strict-priority group whose traffic is mapped to one queue takes precedence over the strict-priority group whose traffic is mapped to two queues. Therefore, in this example, scheduling traffic to priority group 1 (mapped to one strict-priority queue) takes precedence over scheduling traffic to priority group 3 (mapped to two strict-priority queues).
DCBx Operation
The data center bridging exchange protocol (DCBx) is used by DCB devices to exchange configuration information with directly connected peers using the link layer discovery protocol (LLDP). DCBx can detect the misconfiguration of a peer DCB device and, optionally, configure peer DCB devices with DCB feature settings to ensure consistent operation in a data center network.
DCBx is a prerequisite for using DCB features, such as priority-based flow control (PFC) and enhanced transmission selection (ETS), to exchange link-level configurations in a converged Ethernet environment. DCBx is also deployed in topologies that support lossless operation for FCoE or iSCSI traffic. In these scenarios, all network devices are DCBx-enabled (DCBx is enabled end-to-end).
The following versions of DCBx are supported on an Aggregator: CIN, CEE, and IEEE2.5.
DCBx requires LLDP to be enabled on all DCB devices.
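
Because DCBx parameters are carried in LLDP frames, each DCB feature's settings travel in an LLDP organizationally specific TLV (type 127), which prepends a 3-byte OUI and a 1-byte subtype to the feature payload. The Python sketch below packs a TLV of that general shape; the header layout follows the LLDP standard, but the OUI, subtype, and payload bytes shown are placeholders and do not reproduce the exact encoding used by any particular DCBx version.

import struct

def lldp_org_tlv(oui: bytes, subtype: int, payload: bytes) -> bytes:
    """Pack an LLDP organizationally specific TLV (type 127).

    The header is 16 bits: a 7-bit TLV type followed by a 9-bit length,
    where the length counts the OUI, subtype, and payload bytes.
    """
    if len(oui) != 3:
        raise ValueError("OUI must be exactly 3 bytes")
    value = oui + bytes([subtype]) + payload
    header = (127 << 9) | len(value)          # type = 127, length = len(value)
    return struct.pack("!H", header) + value

# Illustrative values only: a two-byte payload under an example OUI/subtype.
# Real DCBx TLV contents depend on the DCBx version in use (CIN/CEE/IEEE).
tlv = lldp_org_tlv(oui=b"\x00\x80\xc2", subtype=0x0B, payload=b"\x00\x08")
print(tlv.hex())
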
DCBx Operation
DCBx performs the following operations:
• Discovers DCB configuration (such as PFC and ETS) in a peer device.
• Detects DCB misconfiguration in a peer device; that is, when DCB features are not compatibly configured on a peer device and the local switch. Misconfiguration detection is feature-specific because some DCB features support asymmetric configuration.
• Reconfigures a peer device with the DCB configuration from its configuration source if the peer device is willing to accept configuration.
• Accepts the DCB configuration from a peer if a DCBx port is in “willing” mode to accept a peer’s DCB settings and then internally propagates the received DCB configuration to its peer ports.
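
In the abstract, these operations reduce to a per-feature comparison plus a willing/unwilling decision. The Python sketch below models that flow; the dictionaries, the equality-based compatibility check, and the set of "symmetric" features are simplified assumptions rather than the switch's internal behavior.

def process_peer_config(local: dict, peer: dict, willing: bool,
                        symmetric_features=("pfc",)) -> dict:
    """Illustrative DCBx-style handling of a peer's advertised DCB settings.

    Returns the (possibly updated) local configuration and reports a
    misconfiguration for symmetric features whose values do not match.
    """
    result = dict(local)
    for feature, peer_value in peer.items():
        if willing:
            # Willing port: accept the peer's setting so it can be propagated internally.
            result[feature] = peer_value
        elif feature in symmetric_features and local.get(feature) != peer_value:
            # A symmetric feature must match on both ends; flag the mismatch.
            print(f"misconfiguration detected for {feature}: "
                  f"local={local.get(feature)!r} peer={peer_value!r}")
        # Asymmetric features may legitimately differ, so no action is taken.
    return result

# Example: a willing port adopts the peer's PFC setting.
updated = process_peer_config(local={"pfc": "off"}, peer={"pfc": "on"}, willing=True)
print(updated)   # {'pfc': 'on'}
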
DCBx Port Roles
The following DCBx port roles are auto-configured on an Aggregator to propagate DCB configurations learned from peer DCBx devices internally to other switch ports:
Auto-upstream
The port advertises its own configuration to DCBx peers and receives its configuration from DCBx peers (ToR or FCF device). The port also propagates its configuration to other ports on the switch.
The first auto-upstream port that is capable of receiving a peer configuration is elected as the configuration source. The elected configuration source then internally propagates the configuration to other auto-upstream and auto-downstream ports. A port that receives an internally propagated configuration overwrites its local configuration with the new parameter values.
When an auto-upstream port (besides the configuration source) receives and overwrites its configuration with internally propagated information, one of the following actions is taken:
• If the peer configuration received is compatible with the internally propagated port configuration, the link with the DCBx peer is enabled.
• If the received peer configuration is not compatible with the currently configured port configuration, the link with the DCBx peer port is disabled and a syslog message for an incompatible configuration is generated.
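
To make the auto-upstream behavior concrete, the following Python sketch models the election of a configuration source, the internal propagation of its configuration to the other ports, and the enable/disable decision for each peer link. The Port structure, the port names, and the equality-based compatible() helper are illustrative assumptions, not the Aggregator's actual implementation.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Port:
    """Illustrative model of a DCBx port on the switch."""
    name: str
    role: str                                  # "auto-upstream" or "auto-downstream"
    peer_config: Optional[dict] = None         # config received from the DCBx peer
    local_config: dict = field(default_factory=dict)
    link_enabled: bool = True

def compatible(a: dict, b: dict) -> bool:
    """Placeholder compatibility check between two DCB configurations."""
    return a == b

def elect_and_propagate(ports: list[Port]) -> Optional[Port]:
    """Elect the first auto-upstream port with a peer config as the source,
    then internally propagate that configuration to every other port."""
    source = next((p for p in ports
                   if p.role == "auto-upstream" and p.peer_config), None)
    if source is None:
        return None
    for port in ports:
        if port is source:
            continue
        propagated = dict(source.peer_config)
        if port.peer_config and not compatible(port.peer_config, propagated):
            # Incompatible peer: disable the link (a real switch would also log it).
            port.link_enabled = False
            print(f"{port.name}: incompatible DCBx peer configuration, link disabled")
        else:
            port.link_enabled = True
        # The internally propagated config overwrites the port's local settings.
        port.local_config = propagated
    return source

# Example with hypothetical port names: the first auto-upstream port that has
# learned a peer config becomes the configuration source for the other ports.
ports = [Port("te0/1", "auto-upstream", peer_config={"pfc": "on"}),
         Port("te0/2", "auto-upstream", peer_config={"pfc": "off"}),
         Port("te0/3", "auto-downstream")]
src = elect_and_propagate(ports)
print(src.name, [(p.name, p.link_enabled) for p in ports])
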