Users Guide

In a switch stack, congure all stacked ports with the same PFC conguration.
A DCB input policy for PFC applied to an interface may become invalid if you recongure dot1p-queue mapping. This situation occurs
when the new dot1p-queue assignment exceeds the maximum number (2) of lossless queues supported globally on the switch. In
this case, all PFC congurations received from PFC-enabled peers are removed and resynchronized with the peer devices.
Trac may be interrupted when you recongure PFC no-drop priorities in an input policy or reapply the policy to an interface.
How Priority-Based Flow Control is Implemented
Priority-based ow control provides a ow control mechanism based on the 802.1p priorities in converged Ethernet trac received
on an interface and is enabled by default. As an enhancement to the existing Ethernet pause mechanism, PFC stops trac
transmission for specied priorities (CoS values) without impacting other priority classes. Dierent trac types are assigned to
dierent priority classes.
When trac congestion occurs, PFC sends a pause frame to a peer device with the CoS priority values of the trac that needs to
be stopped. DCBx provides the link-level exchange of PFC parameters between peer devices. PFC creates zero-loss links for SAN
trac that requires no-drop service, while at the same time retaining packet-drop congestion management for LAN trac.
PFC is implemented on an Aggregator as follows:
• If DCB is enabled, as soon as a dcb-map with PFC is applied on an interface, DCBx starts exchanging information with PFC-enabled peers. The IEEE 802.1Qbb, CEE, and CIN versions of the PFC TLV are supported. DCBx also validates PFC configurations received in TLVs from peer devices.
• To achieve complete lossless handling of traffic, enable PFC operation on ingress port traffic and on all DCB egress port traffic.
• All 802.1p priorities are enabled for PFC. Queues to which PFC priority traffic is mapped are lossless by default. Traffic may be interrupted due to an interface flap (going down and coming up).
• For PFC to be applied on an Aggregator port, the auto-configured priority traffic must be supported by a PFC peer (as detected by DCBx).
• A dcb-map for PFC applied to an interface may become invalid if dot1p-queue mapping is reconfigured. This situation occurs when the new dot1p-queue assignment exceeds the maximum number (2) of lossless queues supported globally on the switch. In this case, all PFC configurations received from PFC-enabled peers are removed and re-synchronized with the peer devices.
• Dell Networking OS does not support MACsec Bypass Capability (MBC).
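Because at most two queues can be lossless globally, it can be useful to check the negotiated PFC state after changing dot1p-queue mapping. A hedged sketch of such a check (command names are assumptions, not confirmed by this guide):

```
! Hypothetical verification after a dot1p-queue change:
Dell# show dcb
Dell# show interface tengigabitethernet 0/1 pfc detail
! If the new mapping would make more than 2 queues lossless,
! the applied dcb-map is invalidated and PFC configurations
! received from peers are removed and re-synchronized.
```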
Conguring Enhanced Transmission Selection
ETS provides a way to optimize bandwidth allocation to outbound 802.1p classes of converged Ethernet trac.
Dierent trac types have dierent service needs. Using ETS, you can create groups within an 802.1p priority class to congure
dierent treatment for trac with dierent bandwidth, latency, and best-eort needs.
For example, storage trac is sensitive to frame loss; interprocess communication (IPC) trac is latency-sensitive. ETS allows
dierent trac types to coexist without interruption in the same converged link by:
Allocating a guaranteed share of bandwidth to each priority group.
Allowing each group to exceed its minimum guaranteed bandwidth if another group is not fully using its allotted bandwidth.
To congure ETS and apply an ETS output policy to an interface, you must:
1. Create a Quality of Service (QoS) output policy with ETS scheduling and bandwidth allocation settings.
2. Create a priority group of 802.1p trac classes.
3. Congure a DCB output policy in which you associate a priority group with a QoS ETS output policy.
4. Apply the DCB output policy to an interface.
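The four steps above might look as follows on the CLI. The policy and group names are invented for illustration, and the command syntax is an assumption that may differ by platform and release:

```
! Step 1: QoS output policy with ETS scheduling and bandwidth (names hypothetical)
Dell(conf)# qos-policy-output q-ets ets
Dell(conf-qos-policy-out-ets)# bandwidth-percentage 40
Dell(conf-qos-policy-out-ets)# exit
! Step 2: priority group of 802.1p traffic classes
Dell(conf)# priority-group pg-san
Dell(conf-pg)# priority-list 3
Dell(conf-pg)# exit
! Step 3: DCB output policy associating the group with the QoS ETS policy
Dell(conf)# policy-map-output ets-out ets
Dell(conf-policy-map-out-ets)# priority-group pg-san qos-policy q-ets
Dell(conf-policy-map-out-ets)# exit
! Step 4: apply the DCB output policy to an interface (command name assumed)
Dell(conf)# interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)# dcb-policy output ets-out
```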
How Enhanced Transmission Selection is Implemented
Enhanced transmission selection (ETS) provides a way to optimize bandwidth allocation to outbound 802.1p classes of converged
Ethernet traffic. Different traffic types have different service needs. Using ETS, groups within an 802.1p priority class are auto-
configured to provide different treatment for traffic with different bandwidth, latency, and best-effort needs.