Install Guide
Data Center Bridging in a Traffic Flow
The following figure shows how DCB handles a traffic flow on an interface.
Figure 31. DCB PFC and ETS Traffic Handling
Buffer Organization
This section describes the buffer organization on the platform.
A single-chip architecture can allocate or share all of its resources across all ports. However, this system uses a different 2x2 chip design.
In this design, all ports are assigned to four port-sets. These sets are built as follows:
A or B for ingress, referred to as layers
R or S for egress, referred to as slices
There are four XPEs, each containing 5.5 MB of buffer space, for a total of 22 MB of available buffer. One of these four XPEs serves a particular traffic flow, based on the flow's ingress and egress port pipes. For example, if a traffic flow enters through a port in ingress pipe 0 and exits through a port in egress pipe 0, XPE A from MMU Slice R is used. If a traffic flow enters through pipe 1 and exits through pipe 3, XPE B from MMU Slice S is used.
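The XPE selection described above can be sketched as a simple lookup. The exact pipe-to-layer and pipe-to-slice assignments below are assumptions chosen only to match the two examples in this section; the real hardware mapping may differ:

```python
# Hypothetical sketch of the 2x2 XPE selection described above.
# Assumption: ingress pipes 0 and 2 map to layer A, pipes 1 and 3 to layer B;
# egress pipes 0 and 1 map to slice R, pipes 2 and 3 to slice S.

def select_xpe(ingress_pipe: int, egress_pipe: int) -> str:
    layer = "A" if ingress_pipe in (0, 2) else "B"   # ingress layer (A or B)
    slice_ = "R" if egress_pipe in (0, 1) else "S"   # egress MMU slice (R or S)
    return f"XPE {layer} / MMU Slice {slice_}"

# The two examples from the text:
print(select_xpe(0, 0))  # XPE A / MMU Slice R
print(select_xpe(1, 3))  # XPE B / MMU Slice S
```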
The following example shows default DCB buffer values:
DellEMC#show dcb
DCB Status: Enabled, PFC Queue Count: 2
Total Buffer: Total available buffer excluding the buffer pre-allocated
for guaranteed services like global headroom, queue's min
guaranteed buffer and CPU queues.
PFC Total Buffer: Maximum buffer available for lossless queues.
PFC Shared Buffer: Buffer used by ingress priority groups for shared usage.
PFC Headroom Buffer: Buffer used by ingress priority group for shared headroom usage.
PFC Available Buffer: Current buffer available for new lossless queues to be
                      provisioned.
stack-unit  Total Buffer  PFC Total Buffer  PFC Shared Buffer  PFC Headroom Buffer  PFC Available Buffer
PP          (KB)          (KB)              (KB)               (KB)                 (KB)
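The buffer sizing above follows from simple arithmetic, and the show dcb fields suggest a relationship between the PFC pools. The accounting formula below is an assumption for illustration (the available lossless buffer taken as what remains of the PFC total after the shared and headroom pools are carved out), not the documented behavior:

```python
# Sketch of the buffer arithmetic described in this section.
NUM_XPES = 4
XPE_BUFFER_MB = 5.5
total_mb = NUM_XPES * XPE_BUFFER_MB   # 4 x 5.5 MB = 22 MB total buffer space

# Hypothetical relationship between the show dcb fields (assumption:
# actual platform accounting may differ):
def pfc_available_kb(pfc_total_kb: int, pfc_shared_kb: int,
                     pfc_headroom_kb: int) -> int:
    """Buffer left for provisioning new lossless queues."""
    return pfc_total_kb - pfc_shared_kb - pfc_headroom_kb
```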