
The CIN version supports two types of strict-priority scheduling:
* Group strict priority: Allows a single priority flow in a priority group to increase its bandwidth usage to the bandwidth total of the priority group. A single flow in a group can use all the bandwidth allocated to the group.
* Link strict priority: Allows a flow in any priority group to increase to the maximum link bandwidth.
CIN supports only the default dot1p priority-queue assignment in a priority group.
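To make the distinction concrete, the following minimal Python sketch computes the bandwidth ceiling for a single strict-priority flow under each mode. The link speed, group names, and ETS shares are invented example values, not Aggregator defaults.

LINK_BANDWIDTH_GBPS = 10.0

# Hypothetical ETS allocation: each priority group receives a share of the link.
ets_shares = {"pg0": 0.30, "pg1": 0.50, "pg2": 0.20}

def max_flow_bandwidth(group: str, mode: str) -> float:
    """Return the ceiling for a single strict-priority flow in `group`."""
    if mode == "group":
        # Group strict priority: capped at the group's total allocation.
        return LINK_BANDWIDTH_GBPS * ets_shares[group]
    if mode == "link":
        # Link strict priority: may grow to the full link rate.
        return LINK_BANDWIDTH_GBPS
    raise ValueError(f"unknown strict-priority mode: {mode}")

print(max_flow_bandwidth("pg1", "group"))  # 5.0 Gbps
print(max_flow_bandwidth("pg1", "link"))   # 10.0 Gbps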
DCBx Operation
The data center bridging exchange protocol (DCBx) is used by DCB devices to exchange configuration information with directly connected peers using the link layer discovery protocol (LLDP). DCBx can detect the misconfiguration of a peer DCB device, and optionally, configure peer DCB devices with DCB feature settings to ensure consistent operation in a data center network.
DCBx is a prerequisite for using DCB features, such as priority-based flow control (PFC) and enhanced transmission selection (ETS), to exchange link-level configurations in a converged Ethernet environment. DCBx is also deployed in topologies that support lossless operation for FCoE or iSCSI traffic. In these scenarios, all network devices are DCBx-enabled (DCBx is enabled end-to-end).
The following versions of DCBx are supported on an Aggregator: CIN, CEE, and IEEE2.5.
DCBx requires LLDP to be enabled on all DCB devices.
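Because DCBx exchanges run over LLDP, DCB parameters are carried in organizationally specific LLDP TLVs (type 127). The minimal Python sketch below assembles one such TLV. The IEEE 802.1 OUI and the PFC subtype follow the IEEE DCBx (802.1Qaz) convention as commonly documented, and the payload bytes are placeholders; treat all values as illustrative, not as an encoding defined by this guide.

import struct

def org_specific_tlv(oui: bytes, subtype: int, payload: bytes) -> bytes:
    """Build an LLDP organizationally specific TLV (type 127).

    The 16-bit TLV header packs a 7-bit type and a 9-bit length; the
    length counts the OUI, the subtype byte, and the payload.
    """
    length = len(oui) + 1 + len(payload)
    header = (127 << 9) | length
    return struct.pack("!H", header) + oui + bytes([subtype]) + payload

IEEE_8021_OUI = bytes.fromhex("0080c2")  # IEEE 802.1 OUI
pfc_payload = bytes([0x08, 0x08])        # placeholder PFC configuration bytes
print(org_specific_tlv(IEEE_8021_OUI, 0x0B, pfc_payload).hex())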
DCBx Operation
DCBx performs the following operations (a code sketch follows the list):
* Discovers the DCB configuration (such as PFC and ETS) in a peer device.
* Detects DCB misconfiguration in a peer device; that is, when DCB features are not compatibly configured on a peer device and the local switch. Misconfiguration detection is feature-specific because some DCB features support asymmetric configuration.
* Reconfigures a peer device with the DCB configuration from its configuration source if the peer device is willing to accept configuration.
* Accepts the DCB configuration from a peer if a DCBx port is in “willing” mode to accept a peer’s DCB settings and then internally propagates the received DCB configuration to its peer ports.
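The following minimal Python sketch models these operations at a single port. The class and field names are invented for illustration, and the compatibility test is reduced to plain equality, whereas real misconfiguration detection is feature-specific, as noted above.

from dataclasses import dataclass

@dataclass
class DcbConfig:
    pfc_priorities: frozenset  # priorities with PFC enabled
    ets_shares: tuple          # per-priority-group bandwidth percentages

@dataclass
class DcbxPort:
    name: str
    willing: bool              # accept a peer's DCB settings?
    local: DcbConfig

def handle_peer_advertisement(port: DcbxPort, peer: DcbConfig) -> str:
    """Process a DCB configuration discovered from a directly connected peer."""
    if port.willing:
        # Accept the peer's settings; the switch would then propagate them
        # internally to its other ports (see DCBx Port Roles below).
        port.local = peer
        return "accepted-and-propagated"
    if peer == port.local:  # simplified, feature-agnostic compatibility test
        return "compatible"
    return "misconfiguration-detected"  # flagged for the administrator

port = DcbxPort("te0/1", willing=True,
                local=DcbConfig(frozenset({3}), (50, 50)))
print(handle_peer_advertisement(port, DcbConfig(frozenset({3, 4}), (60, 40))))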
DCBx Port Roles
The following DCBx port roles are auto-configured on an Aggregator to propagate DCB configurations learned from peer DCBx devices internally to other switch ports:
Auto-upstream
The port advertises its own configuration to DCBx peers and receives its configuration from DCBx peers (ToR or FCF device). The port also propagates its configuration to other ports on the switch.
The first auto-upstream port that is capable of receiving a peer configuration is elected as the configuration source. The elected configuration source then internally propagates the configuration to other auto-upstream and auto-downstream ports. A port that receives an internally propagated configuration overwrites its local configuration with the new parameter values.
When an auto-upstream port (other than the configuration source) receives and overwrites its configuration with internally propagated information, one of the following actions is taken (see the sketch after this list):
* If the peer configuration received is compatible with the internally propagated port configuration, the link with the DCBx peer is enabled.
* If the received peer configuration is not compatible with the currently configured port configuration, the link with the DCBx peer port is disabled and a syslog message for an incompatible configuration is generated. The network administrator must then reconfigure the peer device so that it advertises a compatible DCB configuration.
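This election-and-propagation behavior can be sketched in a few lines of Python. DcbxSwitch, Port, and the %DCBX-MISMATCH message below are invented stand-ins for illustration, not Dell OS internals, and peer compatibility is again reduced to simple equality.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Port:
    name: str
    role: str                            # "auto-upstream" or "auto-downstream"
    peer_config: Optional[dict] = None
    link_enabled: bool = True

class DcbxSwitch:
    def __init__(self, ports):
        self.ports = ports
        self.config_source: Optional[Port] = None
        self.propagated: Optional[dict] = None

    def on_peer_config(self, port: Port, config: dict) -> None:
        """Called when `port` learns a DCB configuration from its peer."""
        port.peer_config = config
        if self.config_source is None and port.role == "auto-upstream":
            # The first auto-upstream port to receive a peer configuration
            # is elected as the configuration source.
            self.config_source = port
            self.propagated = config
            for p in self.ports:
                if p is not port:
                    self._apply_propagated(p)
        elif self.propagated is not None:
            self._apply_propagated(port)

    def _apply_propagated(self, port: Port) -> None:
        # The port overwrites its local settings with the propagated values;
        # an auto-upstream port then re-checks its peer for compatibility.
        if port.role != "auto-upstream" or port.peer_config is None:
            return
        if port.peer_config == self.propagated:
            port.link_enabled = True
        else:
            port.link_enabled = False              # disable the peer link
            print(f"%DCBX-MISMATCH: {port.name}")  # stand-in for the syslog message

sw = DcbxSwitch([Port("te0/1", "auto-upstream"),
                 Port("te0/2", "auto-upstream")])
sw.on_peer_config(sw.ports[0], {"pfc": [3]})     # te0/1 elected configuration source
sw.on_peer_config(sw.ports[1], {"pfc": [3, 4]})  # incompatible peer: link disabled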