Concept Guide
By default, all 802.1p priorities are grouped in priority group 0, and 100% of the port bandwidth is assigned to priority group 0. The total
bandwidth is divided equally among the priority classes, so that each class receives approximately 12 to 13%.
The maximum number of priority groups supported in ETS output policies on an interface is equal to the number of data queues (4) on
the port. The 802.1p priorities in a priority group can map to multiple queues. The maximum number of priority groups supported is two.
If you configure more than one priority queue as strict priority or more than one priority group as strict priority, the higher-numbered
priority queue is given preference when scheduling data traffic.
If multiple lossy priorities are mapped to a single priority group (PG1) and lossless priorities to another priority group (PG0), the
bandwidth split across the lossy priorities is not even.
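To make the bandwidth allocation described above concrete, the following hedged sketch (the policy names pg0-policy and pg1-policy are hypothetical; it uses only the qos-policy-output and bandwidth-percentage commands shown later in this guide) replaces the default even split with a 60/40 allocation between a lossless group and a lossy group:

CONFIGURATION mode
Dell(conf)#qos-policy-output pg0-policy
Dell(conf-qos-policy-out)#bandwidth-percentage 60
Dell(conf-qos-policy-out)#exit
Dell(conf)#qos-policy-output pg1-policy
Dell(conf-qos-policy-out)#bandwidth-percentage 40
Dell(conf-qos-policy-out)#exit

The percentages across the groups should total 100; any dot1p priorities mapped within a group still share that group's allocation as described above.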
ETS Operation with DCBx
In DCBx negotiation with peer ETS devices, ETS configuration is handled as follows:
• ETS TLVs are supported in DCBx versions CIN, CEE, and IEEE2.5.
• The DCBx port-role configurations determine the ETS operational parameters (refer to Configure a DCBx Operation).
• ETS configurations received in TLVs from a peer are validated.
• If there is a hardware limitation or TLV error:
– DCBx operation on an ETS port goes down.
– New ETS configurations are ignored and existing ETS configurations are reset to the default ETS settings.
• ETS operates with legacy DCBx versions as follows:
– In the CEE version, the priority group/traffic class group (TCG) ID 15 represents a non-ETS priority group. Any priority group
configured with a scheduler type is treated as a strict-priority group and is given the priority-group (TCG) ID 15.
– The CIN version supports two types of strict-priority scheduling:
◦ Group strict priority: allows a single priority flow in a priority group to increase its bandwidth usage to the bandwidth total of
the priority group. A single flow in a group can use all the bandwidth allocated to the group.
◦ Link strict priority: allows a flow in any priority group to increase its bandwidth usage to the maximum link bandwidth.
NOTE: CIN supports only the dot1p priority-queue assignment in a priority group. To configure a dot1p priority flow in a
priority group to operate with link strict priority, you configure both the dot1p priority for strict-priority scheduling (strict-priority
command) and the priority group for strict-priority scheduling (scheduler strict command).
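The link strict priority setup described in the NOTE can be sketched as follows. This is a hedged illustration only: the policy name lsp-policy and the placeholder dot1p arguments are hypothetical, and the exact configuration modes for the strict-priority and scheduler strict commands named above may differ on your platform (consult the command reference):

CONFIGURATION mode
Dell(conf)#qos-policy-output lsp-policy
Dell(conf-qos-policy-out)#scheduler strict
Dell(conf-qos-policy-out)#exit
! Also configure the dot1p priority itself for strict-priority scheduling
Dell(conf)#strict-priority <dot1p-arguments>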
Configuring Bandwidth Allocation for DCBx CIN
After you apply an ETS output policy to an interface, if the DCBx version used in your data center network is CIN, you may need to
configure a QoS output policy to override the default CIN bandwidth allocation.
This default setting divides the bandwidth allocated to each port queue equally between the dot1p priority traffic assigned to the queue.
To create a QoS output policy that allocates different amounts of bandwidth to the different traffic types/dot1p priorities assigned to a
queue, and to apply the output policy to the interface, follow these steps.
1 Create a QoS output policy.
CONFIGURATION mode
Dell(conf)#qos-policy-output test12
The policy name can be a maximum of 32 alphanumeric characters.
2 Configure the percentage of bandwidth to allocate to the dot1p priority/queue traffic in the associated L2 class map.
QoS OUTPUT POLICY mode
Dell(conf-qos-policy-out)#bandwidth-percentage 100
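Putting steps 1 and 2 together, a hedged end-to-end sketch (the policy names q0-policy and q1-policy are hypothetical; only the qos-policy-output and bandwidth-percentage commands from the steps above are used) allocates the queue bandwidth unevenly across two traffic types instead of the default even CIN split:

CONFIGURATION mode
Dell(conf)#qos-policy-output q0-policy
Dell(conf-qos-policy-out)#bandwidth-percentage 70
Dell(conf-qos-policy-out)#exit
Dell(conf)#qos-policy-output q1-policy
Dell(conf-qos-policy-out)#bandwidth-percentage 30
Dell(conf-qos-policy-out)#exit

Each policy's percentage applies to the dot1p priority traffic in its associated L2 class map, as described in step 2.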