Strict-priority groups:
If two priority groups have strict-priority scheduling, traffic from the priority group with the higher priority-queue number is scheduled first. However, when three priority groups are used and two of them have strict-priority scheduling (such as groups 1 and 3 in the example), the strict-priority group whose traffic is mapped to one queue takes precedence over the strict-priority group whose traffic is mapped to two queues.
Therefore, in this example, traffic from priority group 1 (mapped to one strict-priority queue) is scheduled before traffic from priority group 3 (mapped to two strict-priority queues).
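The precedence rule can be illustrated with a short Python sketch. The PriorityGroup record and the queue numbers below are invented for the example and are not part of the switch software.

from dataclasses import dataclass

@dataclass
class PriorityGroup:
    number: int    # priority group number (for example, 1 or 3)
    strict: bool   # True if the group uses strict-priority scheduling
    queues: tuple  # priority-queue numbers the group's traffic is mapped to

def strict_schedule_order(groups):
    """Order strict-priority groups for servicing: fewer mapped queues wins;
    with the same queue count, the higher priority-queue number goes first."""
    strict_groups = [g for g in groups if g.strict]
    return sorted(strict_groups, key=lambda g: (len(g.queues), -max(g.queues)))

# Group 1 is mapped to one strict-priority queue, group 3 to two
# (the queue numbers are made up for this example).
groups = [
    PriorityGroup(number=1, strict=True, queues=(7,)),
    PriorityGroup(number=2, strict=False, queues=(0,)),
    PriorityGroup(number=3, strict=True, queues=(5, 6)),
]
print([g.number for g in strict_schedule_order(groups)])  # prints [1, 3]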
DCBx Operation
The data center bridging exchange protocol (DCBx) is used by DCB devices to exchange configuration information with directly connected peers using the link layer discovery protocol (LLDP). DCBx can detect the misconfiguration of a peer DCB device and, optionally, configure peer DCB devices with DCB feature settings to ensure consistent operation in a data center network.
DCBx is a prerequisite for using DCB features, such as priority-based flow control (PFC) and enhanced transmission selection (ETS), to exchange link-level configurations in a converged Ethernet environment. DCBx is also deployed in topologies that support lossless operation for FCoE or iSCSI traffic. In these scenarios, all network devices are DCBx-enabled (DCBx is enabled end-to-end).
The following versions of DCBx are supported on an Aggregator: CIN, CEE, and IEEE2.5.
DCBx requires LLDP to be enabled on all DCB devices.
DCBx Operation
DCBx performs the following operations:
Discovers DCB configuration (such as PFC and ETS) in a peer device.
Detects DCB misconfiguration in a peer device; that is, when DCB features are not compatibly configured on the peer device and the local switch. Misconfiguration detection is feature-specific because some DCB features support asymmetric configuration.
Reconfigures a peer device with the DCB configuration from its configuration source if the peer device is willing to accept
configuration.
Accepts the DCB configuration from a peer if a DCBx port is in willing mode to accept a peer's DCB settings and then internally propagates the received DCB configuration to its peer ports, as sketched below.
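A rough Python sketch of this decision flow is shown below. The DcbConfig record, the compatible() test, and the returned action strings are hypothetical placeholders, not switch APIs.

from dataclasses import dataclass

@dataclass
class DcbConfig:
    pfc: tuple             # for example, the priorities with PFC enabled
    ets: tuple             # for example, per-priority-group bandwidth shares
    willing: bool = False  # True if the owner accepts a peer's configuration

def compatible(local, peer):
    # Placeholder check; real detection is feature-specific because some
    # DCB features allow asymmetric settings between peers.
    return local.pfc == peer.pfc and local.ets == peer.ets

def handle_peer_config(local, peer, source_cfg=None):
    """Return the actions a DCBx port would take for a received peer configuration."""
    actions = []
    if not compatible(local, peer):
        actions.append("flag misconfiguration")                    # detection
    if peer.willing and source_cfg is not None:
        actions.append("advertise configuration-source settings")  # reconfigure willing peer
    if local.willing:
        actions.append("accept peer settings and propagate internally")
    return actions

local = DcbConfig(pfc=(3,), ets=(50, 50), willing=True)
peer = DcbConfig(pfc=(3, 4), ets=(60, 40))
print(handle_peer_config(local, peer))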
DCBx Port Roles
The following DCBx port roles are auto-configured on an Aggregator to propagate DCB configurations learned from peer DCBx
devices internally to other switch ports:
Auto-upstream
The port advertises its own configuration to DCBx peers and receives its configuration from DCBx peers
(ToR or FCF device). The port also propagates its configuration to other ports on the switch.
The first auto-upstream port that is capable of receiving a peer configuration is elected as the configuration source. The elected configuration source then internally propagates the configuration to other auto-upstream and auto-downstream ports. A port that receives an internally propagated configuration overwrites its local configuration with the new parameter values.
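The election and propagation behavior might be modeled roughly as follows; the DcbxSwitch and Port classes and the port names are assumptions made purely for illustration.

from dataclasses import dataclass, field

@dataclass
class Port:
    name: str
    role: str                                      # "auto-upstream" or "auto-downstream"
    local_cfg: dict = field(default_factory=dict)

class DcbxSwitch:
    """Toy model of configuration-source election and internal propagation."""

    def __init__(self, ports):
        self.ports = ports
        self.config_source = None                  # first auto-upstream port to receive a peer config

    def receive_peer_config(self, port, peer_cfg):
        if self.config_source is None and port.role == "auto-upstream":
            self.config_source = port              # elect the configuration source
            port.local_cfg = peer_cfg
            self.propagate(peer_cfg, exclude=port)

    def propagate(self, cfg, exclude):
        # The other auto-upstream and auto-downstream ports overwrite their
        # local configuration with the propagated parameter values.
        for p in self.ports:
            if p is not exclude:
                p.local_cfg = cfg

sw = DcbxSwitch([Port("te0/1", "auto-upstream"), Port("te0/2", "auto-downstream")])
sw.receive_peer_config(sw.ports[0], {"pfc": [3], "ets": [50, 50]})
print(sw.config_source.name, sw.ports[1].local_cfg)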
When an auto-upstream port (besides the configuration source) receives and overwrites its configuration
with internally propagated information, one of the following actions is taken:
If the peer configuration received is compatible with the internally propagated port configuration, the
link with the DCBx peer is enabled.
If the received peer configuration is not compatible with the currently configured port configuration,
the link with the DCBx peer port is disabled and a syslog message for an incompatible configuration
is generated. The network administrator must then reconfigure the peer device so that it advertises a
compatible DCB configuration.
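A minimal sketch of these two outcomes, with an equality check standing in for the real feature-specific compatibility test and made-up port names:

def apply_peer_check(propagated_cfg, peer_cfg, port_name):
    """Illustrative outcome on a non-source auto-upstream port after it has
    taken the internally propagated configuration."""
    if peer_cfg == propagated_cfg:   # stand-in for the per-feature compatibility check
        return f"{port_name}: link with DCBx peer enabled"
    # Incompatible peer settings: the link is disabled and a syslog message is
    # generated so the administrator can correct the peer's advertisement.
    return f"{port_name}: link disabled, syslog: incompatible DCB configuration"

print(apply_peer_check({"pfc": [3]}, {"pfc": [3]}, "te0/3"))
print(apply_peer_check({"pfc": [3]}, {"pfc": [4]}, "te0/4"))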
The configuration received from a DCBx peer or through internal propagation is not stored in the switch's running configuration.