4. The DSCP can be set for the frame using a DSCP default value, typically assigned through an Access Control List (ACL) entry.
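The following is a minimal sketch of this step using the Modular QoS CLI (MQC) in Cisco IOS; the ACL number, class map and policy map names, the interface, and the DSCP value of 46 (EF) are illustrative only, and support for the set action depends on the PFC version and software release in use.

Router(config)# access-list 101 permit udp any any range 16384 32767
Router(config)# class-map match-all VOIP-BEARER
Router(config-cmap)# match access-group 101
Router(config-cmap)# exit
Router(config)# policy-map MARK-INGRESS
Router(config-pmap)# class VOIP-BEARER
Router(config-pmap-c)# set ip dscp 46
Router(config-pmap-c)# exit
Router(config)# interface GigabitEthernet1/1
Router(config-if)# service-policy input MARK-INGRESS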
After a DSCP value is assigned to the frame, policing (rate limiting) is applied should a policing configuration exist. Policing limits the flow of data through the PFC by dropping or marking down traffic that is out of profile. Out of profile is a term used to indicate that traffic has exceeded a limit, defined by the administrator, on the number of bits per second the PFC will send. Out of profile traffic can either be dropped, or transmitted with its CoS value marked down. The PFC1 and PFC2 support input policing (rate limiting) only. The PFC3 supports both input and output policing. The output policing feature of the PFC3 applies to routed (Layer 3) ports and VLAN interfaces (switched virtual interfaces, or SVIs); this is discussed in more detail later in the paper.
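As an illustration (not a definitive configuration), the following sketch shows an input policer built with the MQC; the rate of 10 Mbps, the burst of 31250 bytes, and the class map, policy map, interface, and ACL numbers are all illustrative. The exceed action policed-dscp-transmit marks down out of profile traffic according to the policed DSCP map, whereas an exceed action of drop would discard it instead.

Router(config)# class-map match-all BULK-DATA
Router(config-cmap)# match access-group 120
Router(config-cmap)# exit
Router(config)# policy-map LIMIT-BULK
Router(config-pmap)# class BULK-DATA
Router(config-pmap-c)# police 10000000 31250 conform-action transmit exceed-action policed-dscp-transmit
Router(config-pmap-c)# exit
Router(config)# interface GigabitEthernet1/1
Router(config-if)# service-policy input LIMIT-BULK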
The PFC then passes the frame to the egress port for processing. At this point, a rewrite process is invoked to modify the CoS value in the frame and the ToS value in the IPv4 header. Prior to passing the frame to the port ASIC, the PFC derives the CoS value from the internal DSCP. The port ASIC then uses the CoS value passed to it to place the frame into the appropriate queue. While the frame is in the queue, the port ASIC monitors the buffers and implements WRED to prevent the buffers from overflowing. A Round Robin scheduling algorithm is then used to schedule and transmit frames from the egress port.
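As a sketch of this step, the derivation of CoS from the internal DSCP can be influenced with the DSCP-to-CoS map; by default the CoS is taken from the three most significant bits of the DSCP, and the values shown below are illustrative only.

Router(config)# mls qos map dscp-cos 46 to 5
Router(config)# mls qos map dscp-cos 26 to 3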
Each of the sections below explores this flow in more detail, giving configuration examples for each of the steps described above.
6. Queues, Buffers, Thresholds and Mappings
Before QoS configuration is described in detail, certain terms must be explained further to ensure the reader fully
understands the QoS configuration capabilities of the switch.
6.1 Queues
Each port in the switch has a series of input and output queues that are used as temporary storage areas for data. Catalyst 6500 line cards implement different numbers of queues for each port. The queues are usually implemented in hardware ASICs for each port. This differs from routers, where the queues are implemented in software. On the first generation Catalyst 6500 line cards, the typical queue configuration included one input queue and two output queues. Later line cards use enhanced ASICs that incorporate additional queues. One innovation was support for a special strict priority (SP) queue, which is ideal for latency-sensitive traffic such as Voice over IP (VoIP). Data in this queue is serviced in a strict priority fashion. That is, if a frame arrives in the SP queue, scheduling and transmission of frames from the lower queues ceases so that the frame in the strict priority queue can be processed. Only when the SP queue is empty does scheduling of packets from the lower queue(s) recommence.
When a frame arrives at an ingress port and congestion is present, it is placed into a queue. The queue into which the frame is placed is determined by the CoS value in the Ethernet header of the incoming frame.
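A minimal sketch of this ingress mapping, assuming a line card with more than one receive queue (for example, a 2q8t ingress queue structure) and illustrative CoS assignments, is shown below.

Router(config)# interface GigabitEthernet1/1
Router(config-if)# rcv-queue cos-map 1 1 0 1
Router(config-if)# rcv-queue cos-map 2 1 2 3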
On egress, a scheduling algorithm is employed to empty the transmit (output) queues. There are a number of Round Robin techniques available (depending on the hardware and software in use) to perform this function. These Round Robin techniques include Weighted Round Robin (WRR), Deficit Weighted Round Robin (DWRR), and Shaped Round Robin (SRR). While each of these is explored later, in the case of WRR and DWRR each queue uses a weighting to dictate how much data is emptied from the queue before moving on to the next queue. For SRR, a rate (or limit) is applied on a per-queue basis, and this dictates the upper limit of bandwidth that can be used by that queue.
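A minimal sketch of setting WRR weights, assuming a line card with three WRR transmit queues and a switch running Cisco IOS, is shown below; the weights are relative values, so queue 3 here is serviced twice as heavily as queue 1, and the interface and values are illustrative.

Router(config)# interface GigabitEthernet1/1
Router(config-if)# wrr-queue bandwidth 100 150 200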