• In a switch stack, configure all stacked ports with the same PFC configuration.
A DCB input policy for PFC applied to an interface may become invalid if you reconfigure the dot1p-queue mapping. This situation occurs when
the new dot1p-queue assignment exceeds the maximum number (2) of lossless queues supported globally on the switch. In this case, all
PFC configurations received from PFC-enabled peers are removed and resynchronized with the peer devices.
Traffic may be interrupted when you reconfigure PFC no-drop priorities in an input policy or reapply the policy to an interface.
How Priority-Based Flow Control is Implemented
Priority-based flow control provides a flow control mechanism based on the 802.1p priorities in converged Ethernet traffic received on an
interface and is enabled by default. As an enhancement to the existing Ethernet pause mechanism, PFC stops traffic transmission for
specified priorities (CoS values) without impacting other priority classes. Different traffic types are assigned to different priority classes.
When traffic congestion occurs, PFC sends a pause frame to a peer device with the CoS priority values of the traffic that needs to be
stopped. DCBx provides the link-level exchange of PFC parameters between peer devices. PFC creates zero-loss links for SAN traffic that
requires no-drop service, while at the same time retaining packet-drop congestion management for LAN traffic.
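For reference, the PFC pause frame defined in IEEE 802.1Qbb is a MAC control frame that carries an eight-bit priority-enable vector and a separate pause timer for each of the eight CoS values; this per-priority timing is what allows traffic in one priority to be paused while traffic in the other priorities continues to flow.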
PFC is implemented on an Aggregator as follows:
• If DCB is enabled, as soon as a dcb-map with PFC is applied on an interface, DCBx starts exchanging information with PFC-enabled
peers (see the example following this list). The IEEE 802.1Qbb, CEE, and CIN versions of the PFC TLV are supported. DCBx also validates PFC configurations received in TLVs
from peer devices.
• To achieve complete lossless handling of traffic, enable PFC operation on ingress port traffic and on all DCB egress port traffic.
• All 802.1p priorities are enabled for PFC. Queues to which PFC priority traffic is mapped are lossless by default. Traffic may be
interrupted due to an interface flap (going down and coming up).
• For PFC to be applied on an Aggregator port, the auto-configured priority traffic must be supported by a PFC peer (as detected by
DCBx).
• A dcb-map for PFC applied to an interface may become invalid if the dot1p-queue mapping is reconfigured. This situation occurs when the
new dot1p-queue assignment exceeds the maximum number (2) of lossless queues supported globally on the switch. In this case, all
PFC configurations received from PFC-enabled peers are removed and resynchronized with the peer devices.
• Dell Networking OS does not support MACsec Bypass Capability (MBC).
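The following is a minimal sketch of a dcb-map that enables PFC on one priority group and applies the map to an interface. The map name (SAN_DCB), the interface, the bandwidth split, and the dot1p-to-priority-group assignments are illustrative only; confirm the exact keywords against the command reference for your Dell Networking OS release.
! Map dot1p 3 (for example, FCoE) to priority group 1 with PFC on;
! all other dot1p values go to priority group 0 with PFC off
Dell(conf)# dcb-map SAN_DCB
Dell(conf-dcbmap-SAN_DCB)# priority-group 0 bandwidth 60 pfc off
Dell(conf-dcbmap-SAN_DCB)# priority-group 1 bandwidth 40 pfc on
Dell(conf-dcbmap-SAN_DCB)# priority-pgid 0 0 0 1 0 0 0 0
Dell(conf-dcbmap-SAN_DCB)# exit
! Apply the dcb-map; DCBx then begins exchanging PFC parameters with the peer
Dell(conf)# interface tengigabitethernet 0/4
Dell(conf-if-te-0/4)# dcb-map SAN_DCB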
Configuring Enhanced Transmission Selection
ETS provides a way to optimize bandwidth allocation to outbound 802.1p classes of converged Ethernet traffic.
Different traffic types have different service needs. Using ETS, you can create groups within an 802.1p priority class to configure different
treatment for traffic with different bandwidth, latency, and best-effort needs.
For example, storage traffic is sensitive to frame loss; interprocess communication (IPC) traffic is latency-sensitive. ETS allows different
traffic types to coexist without interruption in the same converged link by:
• Allocating a guaranteed share of bandwidth to each priority group.
• Allowing each group to exceed its minimum guaranteed bandwidth if another group is not fully using its allotted bandwidth.
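For example, suppose a storage priority group is guaranteed 60 percent of link bandwidth and a LAN group 40 percent (illustrative numbers). If the LAN group is using only 10 percent, ETS allows the storage group to burst to roughly 90 percent; when LAN traffic picks back up, each group again receives no less than its guaranteed share.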
To configure ETS and apply an ETS output policy to an interface, you must (an example follows these steps):
1 Create a Quality of Service (QoS) output policy with ETS scheduling and bandwidth allocation settings.
2 Create a priority group of 802.1p traffic classes.
3 Configure a DCB output policy in which you associate a priority group with a QoS ETS output policy.
4 Apply the DCB output policy to an interface.
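A minimal sketch of these four steps is shown below. The policy and group names (ets_san, pg_san, dcb_out) and all numeric values are illustrative, and the commands are patterned on the legacy FTOS DCB output-policy syntax; exact keywords and prompts vary by Dell Networking OS release, so confirm them against your release's command reference.
! Step 1: QoS output policy with ETS bandwidth allocation
Dell(conf)# qos-policy-output ets_san ets
Dell(conf-qos-policy-out-ets)# bandwidth-percentage 60
! Step 2: priority group of 802.1p traffic classes (dot1p 3 here)
Dell(conf)# priority-group pg_san
Dell(conf-pg)# priority-list 3
Dell(conf-pg)# set-pgid 1
! Step 3: DCB output policy associating the group with the QoS policy
Dell(conf)# dcb-output dcb_out
Dell(conf-dcb-out)# priority-group pg_san qos-policy ets_san
! Step 4: apply the DCB output policy to an interface
Dell(conf)# interface tengigabitethernet 0/4
Dell(conf-if-te-0/4)# dcb-policy output dcb_out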
How Enhanced Transmission Selection is Implemented
Enhanced transmission selection (ETS) provides a way to optimize bandwidth allocation to outbound 802.1p classes of converged Ethernet
traffic. Different traffic types have different service needs. Using ETS, groups within an 802.1p priority class are auto-configured to provide
different treatment for traffic with different bandwidth, latency, and best-effort needs.