White Paper

Understanding Quality of Service on the Catalyst 6500 Switch

Carl Solder, CCIE #2416, Technical Marketing Engineer, Internetworking Systems Business Unit
Patrick Warichet, CCIE #14218, Technical Marketing Engineer, Internetworking Systems Business Unit

February 2009, Version 4.07

© 2009 Cisco Systems, Inc. All rights reserved. This document is Cisco Public Information.
THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS, INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS.
Table of Contents

Terminology
Abstract
1. What is Layer 2 and Layer 3 QoS
2. Why the Need for QoS in a Switch
7.12 Egress ACL Support for Remarked DSCP
7.13 Egress DSCP Mutation Mapping
7.14 DSCP to CoS Mapping
7.15 Adjusting the Transmit Queue Size Ratio
Terminology

Aggregate: A type of policer (see Policing) applied to ALL traffic matching a particular traffic class.
ASIC: Application Specific Integrated Circuit, a purpose-built chipset designed to perform a specific function in hardware.
CatOS: Catalyst Operating System, one of two operating system options available for the Catalyst 6500 platform; originally derived from the Catalyst 5000 code base.
Cisco IOS: The second of the two operating systems available for the Catalyst 6500.
PFC: Policy Feature Card, a daughter card that sits on the Supervisor providing hardware-accelerated L3/L4 switching, QoS, and security.
PFC1: Policy Feature Card supported on a Supervisor 1 or Supervisor 1A.
PFC2: Policy Feature Card supported on a Supervisor 2.
PFC3A: Policy Feature Card supported on a Supervisor 720.
PFC3B: Policy Feature Card supported on a Supervisor 720-3B.
PFC3BXL: Policy Feature Card supported on a Supervisor 720-3BXL.
PFC3C: Policy Feature Card supported on a Supervis
1. What is Layer 2 and Layer 3 QoS

While some may think QoS in a switch is only about prioritising Ethernet frames, it is in fact much more. Layer 2 and Layer 3 QoS in the Catalyst 6500 entails the following:

1. Input Trust and Queue Scheduling: When a frame enters the switch port, it can be assigned to an ingress port-based queue prior to being switched to an egress port.
White Paper Figure 1. Congestion causes in switches A switch may be the fastest and largest non-blocking switch in the world, but if you have either of the two scenarios shown in the above figure (as in just about every network in the world), then that switch can experience congestion. At times of congestion, packets will be dropped if the congestion management features are not up to par. When packets are dropped, retransmissions occur. When retransmissions occur, the network load can increase.
● Ability to trust incoming Class of Service and Type of Service priority bits
● The ability to push down local QoS policies to a Distributed Forwarding Card (DFC)
● A dedicated QoS TCAM (separate from the TCAM used to store security ACLs)
● Egress marking and egress classification
● User Based Rate Limiting (UBRL)
● DSCP Transparency

Starting with the PFC3B, support for the following features was added:
● L2 ACLs for IP traffic on VLAN interfaces
● Match on CoS per VLAN

Figure 2
The PFC3C and PFC3CXL add the following features over the PFC3A/B/BXL:
● Ingress and egress policers can operate independently of each other (in serial mode)
● Ingress IP DSCP and MPLS EXP marking at the IP-to-MPLS edge
● Concurrent CoS and DSCP transparency for Layer 2 VPNs

All versions of the PFC support the processing of QoS in hardware. This means that forwarding rates are not impacted when QoS is enabled in the switch.
White Paper cannot directly configure the DFC3x; rather, the master MSFC/PFC on the active supervisor controls and programs the DFC. The primary MSFC3 will calculate, then push down a FIB table (Forwarding Information Base) giving the DFC3x its layer 3 forwarding tables. The MSFC will also push down a copy of the QoS policies so that they are also local to the line card.
3.3.1. 10/100 Line Cards (WS-X6148 Series)

These are classic linecards (they support a connection to the 32-Gb bus only). Linecards included in this group include the following:
● WS-X6148 group: WS-X6148-RJ45, WS-X6148-45AF, WS-X6148-RJ21, WS-X6148-21AF

Details for these linecards are listed in the following table. Table 3.
Table 4. QoS on 10/100 Line Cards (WS-X6148A Series)

                                 WS-X6148A-RJ45   WS-X6148-FE-SFP
Number of 10/100 Ports           48               -
Number of 100FX Ports            -                48
# Port ASICs on the linecard     6                6
# Physical Ports per Port ASIC   8                8
Per Port Buffering               Yes              Yes
Buffer on Receive Side           64K              64K
Buffer on Transmit Side          5.2MB            5.
Figure 7. WS-X6196-RJ21 Linecards

3.3.4. 10/100 Line Cards (WS-X6524-100FX-MM)

This 10/100 Ethernet linecard utilizes a different port ASIC implementation from the one used in the WS-X6148 series linecards mentioned above. Both the queue structure and port buffering have been enhanced significantly, with the queue structures 1P1Q4T and 1P3Q1T in use. More importantly, a strict priority queue now exists on both the ingress and egress sides. Table 6.
White Paper 3.3.5. Gigabit Ethernet Line Cards (WS-X6408A, WS-X6516A) Linecards in this group include the following ● WS-X6408A-GBIC ● WS-X6516A-GBIC Table 7.
                                      WS-X6148   WS-X6548
Receive Queue Structure per port      1Q2T       1Q2T
Receive (RX) Strict Priority Queue    No         No
Transmit (TX) Strict Priority Queue   Yes        Yes

Figure 10. WS-X6148-GETX and WS-X6548-GETX architecture

Buffering for these linecards is implemented on a shared basis. There is 1.2Mb of buffering available for each 10/100/1000 ASIC. There are six 10/100/1000 ASICs on the linecard, and each of these ASICs supports 8 physical 10/100/1000 ports.
Table 9. QoS on Gigabit Ethernet Line Cards (WS-X6724, WS-X6748)

                                 WS-X6724   WS-X6748
Number of GE Ports               24         48
# Port ASICs on the linecard     2          4
# Physical Ports per Port ASIC   24         24
Per Port Buffering               Yes        Yes
Buffer on Receive Side           166KB      166KB
Buffer on Transmit Side          1.2MB      1.
This linecard requires a Supervisor 720 to be installed. It supports the 1Q8T and 1P7Q8T queue structures. A strict priority queue is supported on the transmit side only. When a DFC3 is present, the input queue structure changes to 8Q8T. The WS-X6704-10GE module is pictured below:

Figure 12. WS-X6704-10GE Linecard

More details on queue structures and buffer assignments for each linecard type are given in Appendix One. 3.3.9.
Figure 13. WS-X6716-10GE Linecard

3.4 Catalyst 6500 QoS Hardware Summary

The hardware components that perform the above QoS functions in the Catalyst 6500 are detailed in the table below: Table 12.
White Paper 4.1 Priority Mechanisms in IP and Ethernet For any QoS services to be applied to data, there must be a way to “tag” (or prioritize) an IP Packet or an Ethernet frame. The Type of Service (ToS) in the IPv4 header and the Class of Service (CoS) fields in the Ethernet header are used to achieve this. These are described in more detail below. 4.1.1. Type of Service (ToS) Type of Service (ToS) is a one-byte field that exists in an IPv4 header.
White Paper assigned to the IPv4 Packet. DSCP (also referred to as DiffServ) is defined in RFC 2474 (Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers). The Catalyst 6500 can manipulate (modify) the ToS priority bits and this can be achieved using both the PFC and/or the MSFC. When a data frame comes into the switch, it will be assigned a DSCP value, which is derived from a predefined default or from existing priority settings.
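The relationship between the ToS byte, IP Precedence, and DSCP described above can be expressed with two bit-shifts. This is a minimal sketch (the function names are illustrative); per RFC 2474, the DSCP occupies the upper six bits of the old ToS byte, while the original IP Precedence field was only the upper three bits.

```python
def dscp_from_tos(tos_byte: int) -> int:
    """Extract the 6-bit DSCP from the IPv4 ToS byte (RFC 2474).
    The DSCP sits in bits 7-2; the low two bits are used by ECN."""
    return (tos_byte >> 2) & 0x3F

def precedence_from_tos(tos_byte: int) -> int:
    """Extract the legacy 3-bit IP Precedence (bits 7-5 of the ToS byte)."""
    return (tos_byte >> 5) & 0x07
```

For example, a ToS byte of 0xB8 carries DSCP 46 (EF) and IP Precedence 5, which is why the two marking schemes remain backward compatible.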
5. QoS Flow in the Catalyst 6500

QoS in the Catalyst 6500 is the most comprehensive implementation of QoS found in any of the current Cisco Catalyst switches. The following sections describe how the various QoS processes are applied to a frame as it transits the switch. Before continuing further, the flow of QoS through the Catalyst 6500 switch will be reviewed.
4. The DSCP can be set for the frame using a DSCP default value, typically assigned through an Access Control List (ACL) entry. After a DSCP value is assigned to the frame, policing (rate limiting) is applied should a policing configuration exist. Policing will limit the flow of data through the PFC by dropping or marking down traffic that is out of profile.
White Paper 6.2 Buffers Each queue is assigned a certain amount of buffer space to store transit data. Resident on the port ASIC is buffer memory, which is split up and allocated on a per port basis. Per port buffering for each of the linecards is detailed in Appendix One. 6.3 Thresholds One aspect of normal data transmission is that if a packet is dropped, it will (if we are talking TCP flows) result in that packet being retransmitted by the TCP endpoint.
White Paper Figure 18. Mapping Priority to Internal DSCP This example map is an ingress map used to take either the CoS or IP Precedence value of an incoming frame and map it to an internal DSCP value.
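As an illustration of the ingress map above, the default-style maps on Catalyst platforms derive the internal DSCP by multiplying the trusted CoS or IP Precedence value by eight (so CoS 5 maps to DSCP 40). The sketch below assumes those defaults; the exact maps for a given release should be confirmed in its QoS configuration guide.

```python
# Default-style ingress maps: priority value n -> internal DSCP n * 8
# (an assumption based on the common Catalyst defaults).
DEFAULT_COS_TO_DSCP = {cos: cos * 8 for cos in range(8)}
DEFAULT_PREC_TO_DSCP = {prec: prec * 8 for prec in range(8)}

def internal_dscp(trust: str, cos: int = 0, prec: int = 0) -> int:
    """Derive the internal DSCP for an ingress frame from the port's
    trust state, using the default-style maps above."""
    if trust == "trust-cos":
        return DEFAULT_COS_TO_DSCP[cos]
    if trust == "trust-ipprec":
        return DEFAULT_PREC_TO_DSCP[prec]
    return 0  # untrusted port: internal DSCP defaults to 0
```

The internal DSCP then drives the rest of the QoS pipeline (policing, queueing, and egress rewrite), regardless of whether the frame arrived with CoS or ToS markings.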
With the introduction of the WS-X6708 and WS-X6716, the placement of a packet within a queue can be determined directly by the DSCP value of the IP packet. This removes the need for DSCP to CoS mapping and simplifies the queue/threshold assignment process. 6.
Figure 19. WRED high and low thresholds

WRED also supports high and low settings for a given threshold. In the diagram above, you can see there are two thresholds (1 and 2). Threshold #1 is a level under which no traffic mapped to that threshold is dropped. When that threshold is exceeded, traffic mapped to that threshold (CoS 0 and 1) is eligible to be dropped. The more the threshold is exceeded, the greater the rate at which those packets are dropped.
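The low/high behavior just described can be sketched as a drop-probability function. This is an illustrative model only, not the hardware implementation: the linear ramp and the `max_drop_prob` parameter are assumptions.

```python
import random

def wred_drop(queue_fill: float, low: float, high: float,
              max_drop_prob: float = 1.0, rng=random.random) -> bool:
    """Illustrative WRED decision for one threshold.
    queue_fill, low, and high are fractions of the queue depth (0.0-1.0).
    Below low: never drop. At or above high: always drop.
    In between: drop probability rises linearly toward max_drop_prob."""
    if queue_fill < low:
        return False
    if queue_fill >= high:
        return True
    prob = max_drop_prob * (queue_fill - low) / (high - low)
    return rng() < prob
```

With a low threshold of 40% and a high threshold of 80%, a queue that is 60% full drops eligible packets roughly half the time, which is the "greater rate as the threshold is exceeded" behavior described above.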
2 70[1] 100[2]

Should WRED not be available on a port, the port will use a Tail Drop method of buffer management. Tail Drop, as its name implies, simply drops incoming frames once the buffers have been fully utilized.

6.5.2. WRR

WRR is used to schedule egress traffic from transmit queues. A normal round robin algorithm will alternate between transmit queues, sending an equal number of packets from each queue before moving to the next queue.
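Because WRR weights are relative rather than absolute percentages, a queue's share of the scheduled bandwidth is its weight divided by the sum of all weights. A minimal sketch (the helper name is illustrative):

```python
def wrr_shares(weights):
    """Convert relative WRR queue weights into bandwidth fractions.
    A queue's share is weight / sum(all weights)."""
    total = sum(weights)
    return [w / total for w in weights]
```

With the default-style weights 100, 150, and 200 seen in the appendix outputs, the three WRR queues receive roughly 22%, 33%, and 44% of the non-priority bandwidth.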
White Paper Configuration for WRR and the other aspects of what have been described above are explained in the following section. 6.5.3. Deficit Weighted Round Robin DWRR is a feature that is used on egress (transmit) queues. Explaining DWRR is best served by using an example. Let’s assume a switch port has 3 queues and we are using the normal WRR algorithm. Queue 1 has been given access to 50% of the bandwidth, Queue 2 has 30% and Queue 3 has 20%.
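The deficit mechanism that DWRR adds on top of the example above can be sketched as follows. This is an illustrative model with byte quanta; the function and variable names are hypothetical, not the switch's internal implementation. Each queue carries unused credit forward, so a queue that could not send a large packet this round is compensated in the next.

```python
from collections import deque

def dwrr_round(queues, quanta, deficits):
    """Run one DWRR scheduling round.
    queues: list of deques of packet sizes in bytes.
    quanta: per-queue byte allowance added each round.
    deficits: per-queue carried credit (mutated in place).
    Returns the (queue_index, packet_size) pairs sent this round."""
    sent = []
    for i, q in enumerate(queues):
        deficits[i] += quanta[i]            # add this round's quantum
        while q and q[0] <= deficits[i]:    # send while credit covers the head packet
            pkt = q.popleft()
            deficits[i] -= pkt
            sent.append((i, pkt))
        if not q:
            deficits[i] = 0                 # an empty queue forfeits leftover credit
    return sent
```

A 1500-byte packet facing a 1000-byte quantum waits one round, then goes out once the accumulated deficit reaches 2000, which is exactly the unfairness WRR's packet counting cannot correct.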
White Paper Figure 22. SRR compared to WRR The shaper is implemented on a per queue basis and has the effect of smoothing transient bursts of data that pass through that port. SRR will limit traffic output for that queue to the stated rate. Even if bandwidth is available, the rate will never exceed what is configured. SRR also modifies the way in which it schedules data when compared to the WRR algorithm.
White Paper 7. Configuring (Port ASIC based) QoS on the Catalyst 6500 QoS configuration instructs either the port ASIC or the PFC to perform a QoS action. The following sections will look at QoS configuration for both these processes. On the port ASIC, QoS configuration affects both inbound and outbound traffic flows. Figure 24. Ingress QoS Processing Flow From the above diagram it can be seen that the following QoS configuration processes apply 1. Trust states of ports 2. Applying port based CoS 3.
The above diagram shows QoS processing performed by the port ASIC for outbound traffic. Some of the processes invoked on outbound QoS processing include:

1. Transmit Tail Drop and WRED threshold assignments
2. CoS to Transmit Tail Drop and WRED maps
3. DSCP Rewrite
4. CoS Rewrite for ISL/802.
Total packets:                                 2928324
IP shortcut packets:                           1293840
Packets dropped by policing:                   0
IP packets with TOS changed by policing:       423137
IP packets with COS changed by policing:       2
Non-IP packets with COS changed by policing:   0
MPLS packets with EXP changed by policing:     0

When QoS is enabled in the Catalyst 6500, the switch will set a series of QoS defaults for the switch.
White Paper QoS Feature Default setting Policy Maps None Enabled Protocol Independent MAC Filtering Disabled VLAN Based MAC ACL QoS filtering Disabled There are also a number of default settings that are applied to receive and transmit queues and these include the following: Table 14.
White Paper Additional defaults can be found in the QoS configuration guides on CCO for a given software release. 7.2 Trusted and Un-trusted Ports Any given port on the Catalyst 6500 can be configured as trusted or un-trusted. The trust state of the port dictates how it marks, classifies and schedules the frame as it transits the switch. By default, all ports are in the un-trusted state. This means that any packet entering that port will have its ToS and CoS rewritten with a zero value.
White Paper 7.3 Preserving the Received ToS Byte (DSCP Transparency) When a packet enters the switch, the switch will derive an internal DSCP value from the incoming priority bits (based on the trust setting). This internal DSCP is used to write the ToS byte when the packet egresses the switch. This action can thus change the ingress DSCP setting. Those customers who would like to preserve the integrity of their DSCP can use this feature to avoid the PFC rewriting the DSCP on egress.
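Assuming Cisco IOS, DSCP transparency is enabled globally with the "no" form of the DSCP rewrite command, as sketched below; the exact syntax for a given software release should be confirmed in its QoS configuration guide.

```
Cat6500(config)# no mls qos rewrite ip dscp
```

With this configured, the internal DSCP is still used for queueing and policing decisions inside the switch, but the packet leaves with the same ToS byte it arrived with.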
Table 15. Example CoS Mutation Mapping

Original CoS Value   Mutated CoS Value for Threshold Management
0                    3
1                    0
2                    2
3                    4
4                    1
5                    5
6                    7
7                    6

In the example map above, an ingress frame with a CoS value of 4 will take on the mutated value of 1 when placed in the queue. Should the threshold to which CoS value 1 is mapped be exceeded, that frame (with its original CoS value of 4) will be dropped. This feature can only be applied to an 802.
White Paper Figure 26. Receive Drop Thresholds—Example uses 1P1Q4T threshold (on GE ports) As shown in the above diagram, frames arrive and are placed in the queue. As the queue starts to fill, the thresholds are monitored by the port ASIC. When a threshold is breached, frames with CoS values identified by the administrator are dropped randomly from the queue. The default threshold mappings for the different queue structures are shown below. Table 17.
White Paper Feature Default Value Strict Priority Queue 1P1Q0T Receive Queue Standard Queue Strict Priority Queue 1P1Q8T Receive Queue Threshold 1 Threshold 2 Threshold 3 Threshold 4 Threshold 5 Threshold 6 Threshold 7 Strict Priority Queue 1Q8T Receive Queue Threshold 1 Threshold 2 Threshold 3 Threshold 4 Threshold 5 © 2009 Cisco Systems, Inc. All rights reserved. This document is Cisco Public Information.
White Paper Feature Default Value Threshold 6 Threshold 7 Threshold 8 2Q8T Receive Queue Queue 1 Threshold 1 Queue 1 Threshold 2 Queue 1 Threshold 3 Queue 1 Threshold 4 Queue 1 Threshold 5-8 Queue 2 Threshold 1 Queue 2 Threshold 2-8 8Q8T Receive Queue Queue 1 Threshold 1 Queue 1 Threshold 2 Queue 1 Threshold 3 Queue 1 Threshold 4 Queue 1 Threshold 5-8 © 2009 Cisco Systems, Inc. All rights reserved. This document is Cisco Public Information.
Feature Default Value Queue 2-7 Threshold 1-8 Queue 8 Threshold 1 Queue 8 Threshold 2-8 CoS None Tail Drop Disabled WRED Disabled CoS 5 Tail Drop Disabled WRED 100% Low; 100% High CoS None Tail Drop Disabled WRED 100% Low; 100% High

These drop thresholds can be changed by the administrator. The default CoS values that are mapped to each threshold can also be changed. Different line cards implement different receive queue structures.
Cat6500(config-if)# wrr-queue random-detect max-threshold 1 40 100

This sets the WRED maximum drop thresholds on queue 1 of a port with a 1p2q2t queue structure to 40% for threshold 1 (Tx) and 100% for threshold 2 (Tx). WRED can also be disabled if required in Cisco IOS by using the "no" form of the command. An example of disabling WRED is shown as follows:

Cat6500(config-if)# no wrr-queue random-detect queue_id

7.
White Paper Figure 29. Weighted Round Robin (WRR) On the WS-X6248, 6148 and WS-X6348 line cards (with 2q2t queue structures), two transmit queues are used by the WRR mechanism for scheduling. On the WS-X6548 line cards (with a 1p3q1t queue structure) there are 4 Tx queues. Of these 4 Tx queues, 3 Tx queues are serviced by the WRR algorithm (the last Tx queue is a strict priority queue).
7.13 Egress DSCP Mutation Mapping

The switch will derive an internal DSCP value from the incoming packet based on the trust setting of the port. Assuming ToS byte preservation is not configured, this internal DSCP value is used to write the ToS value on the egress packet. Egress DSCP Mutation is a feature that allows this internal DSCP value to be changed, with the changed value used as the priority setting for the egress packet.
This command example sets the transmit queue size ratio on a 1P3Q8T port, giving queue #1 and queue #2 30% each of the allocated buffer space and queue #3 20%. The most obvious question that springs to mind is why the percentages in this example do not add up to 100%. They do not because of the presence of the strict priority queue.
White Paper The differences between the policing capabilities of the different Supervisors (Policy Feature Card’s) are summarised in the following table. Table 18.
White Paper 8.3 Rate and Burst Two key parameters used implicitly in the configuration of policing are the Rate and Burst. The rate (also referred to as the Committed Information Rate—or CIR) is defined as the maximum amount of data that can be forwarded in a given interval (normally referred to in Kbps or Mbps). The burst can be thought of as the total amount of data that can be received in a given interval.
8.6 Aggregates and Microflows

Aggregates and Microflows are terms used to define the scope of policing that the PFC performs. A Microflow defines the policing of a single flow. The flow mask installed in the system defines how the Microflow policer views a flow. Typically, the default flow mask for Microflow policing defines a session by a unique SA/DA MAC address, SA/DA IP address, and TCP/UDP port numbers, or by a source IP address.
Figure 34. Aggregate Policer

There are two forms of aggregates that can be defined in Cisco IOS:
1. Per-interface aggregate policers
2. Named (shared) aggregate policers

Per-interface aggregates are applied to an individual interface using the police command within a policy map class. These policy map classes can be applied to multiple interfaces, but the policer polices each interface separately.
White Paper on the number of bits in the packet (i.e. 512 bytes x 8 bits = 4096). If only 4095 tokens are in the bucket then the packet cannot be sent (it is dropped) and the tokens remain in the bucket. When a packet is sent, the tokens are removed from the bucket. The depth of the bucket is another important factor to consider. The Burst should be set so that it can hold the largest sized packet in the data flow being policed. In IOS, the minimum defined burst size is 1000.
White Paper Figure 35. Step 1 and 2 of the Token Bucket Process Step 3 and 4: The PFC will inspect the number of tokens that are available in the token bucket. If the number of tokens is greater than or equal to the number of bits in the current packet being inspected, then the packet can be forwarded. If there are not enough tokens, then the packet will either be marked down (and forwarded) or dropped, depending on the policer’s configuration. Figure 36.
White Paper Figure 37. Steps 5 and 6 of the Token Bucket Process Step 7: At the end of the time interval, the token bucket is primed with a new complement of tokens. The number of tokens that are added to the bucket is calculated as the Rate divided by 4000. Step 8: So begins the next time interval and the same process continues. For each packet that arrives in that given interval, if enough tokens exist, then the packet will be sent and the corresponding amount of tokens removed from the bucket.
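Steps 1 through 8 above can be condensed into a short simulation. This is a sketch under the document's stated assumptions: tokens are bits, the bucket starts full at the burst value, and it is refilled with Rate/4000 tokens at the end of each interval (capped at the burst); the function name is illustrative.

```python
def police(intervals, rate_bps, burst_tokens, intervals_per_sec=4000):
    """Token-bucket policer sketch.
    intervals: list of intervals, each a list of packet sizes in bits.
    Returns a 'conform'/'exceed' verdict per packet, in arrival order."""
    refill = rate_bps // intervals_per_sec   # tokens added per interval (Rate / 4000)
    tokens = burst_tokens                    # bucket starts full at the burst
    verdicts = []
    for interval in intervals:
        for bits in interval:
            if tokens >= bits:
                tokens -= bits               # conform: forward and consume tokens
                verdicts.append("conform")
            else:
                verdicts.append("exceed")    # not enough tokens: drop or mark down
        tokens = min(burst_tokens, tokens + refill)  # end-of-interval refill (step 7)
    return verdicts
```

With the parameters from the worked example that follows (10 Mbps rate, burst 5,000 tokens, 64-byte packets of 512 bits, replenishment of 2,500 tokens per interval), ten packets arriving in one interval yield nine conforming packets before the bucket runs dry.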
8.9 A Simple Policing Example

Let's start with a simple example and assume a policing policy has been defined. For a given Fast Ethernet interface (100 Mb), the traffic is to be limited to 10 Mbps. The definition of this policy implies the use of an Aggregate Policer. The following example steps through the process of building a policing policy on a Catalyst 6500 running Native IOS. The first step is to define a class map.
● A constant stream of 64-byte packets arrives at the interface.
● A PFC3B is used, so the L2 header plus data is counted.
● Burst is set to 5,000.
● Conforming traffic will be transmitted as per the configuration statement.
● Traffic exceeding the rate will be dropped as per the configuration statement.
● Token replenishment for each interval is calculated as 10 Mb / 4000 = 10,000,000 / 4000 = 2500.

The following table provides an insight into how this policer works. Table 19.
8.10 User Based Rate Limiting (UBRL)

When Microflow policing is enabled, interface-full flow masks are used. This means that a Microflow policer will apply to each flow with a unique source/destination IP address, a unique source/destination port number, and a unique interface. To explain this further, let's assume a Microflow policer with a rate of 1 Mbps is applied to a switchport.
Cat6500(config-pmap-c)# police flow mask src-only 20000000 13000 conform-action transmit exceed-action drop
Cat6500(config-pmap)# class return_traffic
Cat6500(config-pmap-c)# police flow mask dest-only 30000000 13000 conform-action transmit exceed-action drop

These statements create a rate limit for outbound traffic of 20 Mbps with a burst of 52 Mb (13,000 × 4,000 = 52,000,000), and a rate limit for return traffic of 30 Mbps with the same burst.
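The effect of the src-only flow mask above can be sketched per source: each distinct source IP gets its own token bucket rather than sharing one aggregate bucket. This is an illustrative model of a single refill interval (the function name is hypothetical; real hardware also replenishes tokens every interval).

```python
def ubrl_src_only(packets, burst_tokens):
    """Illustrative UBRL with a source-only flow mask.
    packets: list of (src_ip, size_bits) pairs within one interval.
    Each new source IP starts with its own full bucket of burst_tokens."""
    buckets = {}   # src_ip -> remaining tokens
    verdicts = []
    for src, bits in packets:
        tokens = buckets.setdefault(src, burst_tokens)
        if tokens >= bits:
            buckets[src] = tokens - bits       # conform: charge this source's bucket
            verdicts.append((src, "conform"))
        else:
            verdicts.append((src, "exceed"))   # this source alone is out of profile
    return verdicts
```

One heavy source exhausting its bucket does not affect other sources, which is precisely the per-user behavior UBRL provides.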
The above command sets the following map:

CoS    0   1   2   3   4   5   6   7
DSCP   20  30  1   43  63  12  13  8

While it is very unlikely that the above map would be used in a real-life network, it serves to give an idea of what can be achieved using this command.

8.11.2. IP Precedence to DSCP Mapping

Like the CoS to DSCP map, a frame can have a DSCP value determined from the incoming packet's IP Precedence setting.
The NO TRUST form of the keyword is used when a frame arrives from an un-trusted port. This allows the frame to have a DSCP value assigned during the process of policing. Let's look at an example of how a new priority (DSCP) can be assigned to different flows coming into the PFC using the following policy definition.
White Paper 9.
Module              Port Type             RX Queue  TX Queue  Total Buffer  RX Buffer  TX Buffer
WS-X6248-TEL        48 x 10/100 (RJ21)    1Q4T      2Q2T      64KB          8KB        56KB
WS-X6248A-TEL       48 x 10/100 (Telco)   1Q4T      2Q2T      128KB         16KB       112KB
WS-X6316-GE-TX      16 x 10/100/1000      1P1Q4T    1P2Q2T    512KB         73KB       439KB
WS-X6324-100FX-MM   24 x 100FX            1Q4T      2Q2T      128KB         16KB       112KB
WS-X6324-100FX-SM   24 x 100FX            1Q4T      2Q2T      128KB         16KB       112KB
WS-X6348-RJ-45      48 x 10/100           1Q4T      2Q2T      128KB         16KB       112KB
WS-X6348-RJ-45V     48 x 10/100 with Ci
11. Appendix Three—Comparison of QoS Features Between PFCs

Over the years, a number of different PFC versions have been released. The table below provides a high-level overview of the major differences between the QoS capabilities of each PFC.
White Paper 1 2 2 3 2 1 4 5 2 2 6 7 Receive queues [type = 1q4t]: Queue Id Scheduling Num of thresholds ----------------------------------------1 Standard 4 queue tail-drop-thresholds -------------------------1 100[1] 100[2] 100[3] 100[4] queue thresh cos-map --------------------------------------1 1 0 1 1 2 2 3 1 3 4 5 1 4 6 7 Packets dropped on Transmit: BPDU packets: 0 queue thresh dropped [cos-map] --------------------------------------------------1 1 0 [0 1 ] 1 2
White Paper 12.
White Paper -------------------------1 100[1] 100[2] 100[3] 100[4] queue thresh cos-map --------------------------------------1 1 0 1 1 2 2 3 1 3 4 6 1 4 7 2 1 5 Packets dropped on Transmit: BPDU packets: 0 queue thresh dropped [cos-map] --------------------------------------------------1 1 0 [0 1 ] 1 2 0 [2 3 ] 2 1 0 [4 6 ] 2 2 0* [7 ] 3 1 0* [5 ] * - shared transmit counter Packets dropped on Receive: BPDU packets: queue thresh 0 dropped [cos-map] --------------
White Paper Queueing Mode In Tx direction: mode-cos Transmit queues [type = 1p2q2t]: Queue Id Scheduling Num of thresholds ----------------------------------------1 WRR low 2 2 WRR high 2 3 Priority 1 WRR bandwidth ratios: 255[queue 1] 1[queue 2] queue-limit ratios: 100[queue 1] 0[queue 2] queue random-detect-min-thresholds ---------------------------------1 100[1] 100[2] 2 100[1] 100[2] queue random-detect-max-thresholds ---------------------------------1 100[1] 100[2] 2 100[1] 10
White Paper 1 1 1 2 1 3 1 4 2 1 0 1 2 3 4 5 6 7 Packets dropped on Transmit: BPDU packets: 0 queue thresh dropped [cos-map] --------------------------------------------------1 1 0 [0 1 2 3 4 5 6 7 ] 1 2 0 [] 2 1 0 [] 2 2 0* [] 3 1 0* [] * - shared transmit counter Packets dropped on Receive: BPDU packets: queue thresh 0 dropped [cos-map] --------------------------------------------------1 1 0 [0 1 2 3 4 5 6 7 ] 1 2 0 [] 1 3 0 [] 1 4 0* [] 2 1 0* [] * -
White Paper Transmit queues [type = 1p3q1t]: Queue Id Scheduling Num of thresholds ----------------------------------------1 WRR 1 2 WRR 1 3 WRR 1 4 Priority 1 WRR bandwidth ratios: 100[queue 1] 150[queue 2] 200[queue 3] queue random-detect-min-thresholds ---------------------------------1 70[1] 2 70[1] 3 70[1] queue random-detect-max-thresholds ---------------------------------1 100[1] 2 100[1] 3 100[1] WRED disabled queues: queue thresh cos-map -------------------------------
White Paper Packets dropped on Transmit: BPDU packets: 0 queue thresh dropped [cos-map] --------------------------------------------------1 1 0 [0 1 ] 2 1 0 [2 3 4 ] 3 1 0 [6 7 ] 4 1 0 [5 ] Packets dropped on Receive: BPDU packets: queue thresh 0 dropped [cos-map] --------------------------------------------------1 1 0 [0 1 2 3 4 6 7 ] 2 1 0 [5 ] 12.
White Paper queue tail-drop-thresholds -------------------------1 100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8] 2 100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8] 3 100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8] 4 100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8] 5 100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8] 6 100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8] 7 100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8] queue random-detect-min-thr
White Paper 1 7 1 8 2 1 2 2 2 3 2 4 2 5 2 6 2 7 2 8 3 1 3 2 3 3 3 4 3 5 3 6 3 7 3 8 4 1 4 2 4 3 4 4 4 5 4 6 4 7 4 8 5 1 5 2 5 3 5 4 5 5 5 6 5 7 5 8 6 1 6 2 6 3 © 2009 Cisco Systems, Inc. All rights reserved. This document is Cisco Public Information.
White Paper 6 4 6 5 6 6 6 7 6 8 7 1 7 2 7 3 7 4 7 5 7 6 7 7 7 8 8 1 Queueing Mode In Rx direction: mode-cos Receive queues [type = 1q8t]: Queue Id Scheduling Num of thresholds ----------------------------------------01 WRR 08 WRR bandwidth ratios: 100[queue 1] queue-limit ratios: 100[queue 1] queue tail-drop-thresholds -------------------------1 100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8] queue thresh cos-map --------------------------------------1 1 1
White Paper Packets dropped on Transmit: queue dropped [cos-map] --------------------------------------------1 0 [0 1 2 3 4 5 6 7 ] 2 0 [] 3 0 [] 4 0 [] 5 0 [] 6 0 [] 7 0 [] 8 0 [] Packets dropped on Receive: queue dropped [cos-map] --------------------------------------------1 0 [0 1 2 3 4 5 6 7 ] 12.
White Paper 4] 0[queue 5] 0[queue 6] 0[queue 7] 0[Pri Queue] queue tail-drop-thresholds -------------------------1 100[1] 100[2] 100[3] 100[4] 2 100[1] 100[2] 100[3] 100[4] 3 100[1] 100[2] 100[3] 100[4] 4 100[1] 100[2] 100[3] 100[4] 5 100[1] 100[2] 100[3] 100[4] 6 100[1] 100[2] 100[3] 100[4] 7 100[1] 100[2] 100[3] 100[4] queue random-detect-min-thresholds ---------------------------------1 100[1] 100[2] 100[3] 100[4] 2 100[1] 100[2] 100[3] 100[4] 3 100[1] 100[2] 100[3] 100[4] 4 10
White Paper 2 3 2 4 3 1 3 2 3 3 3 4 4 1 4 2 4 3 4 4 5 1 5 2 5 3 5 4 6 1 6 2 6 3 6 4 7 1 7 2 7 3 7 4 8 1 queue thresh dscp-map --------------------------------------1 1 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 1 2 1 3 1 4 2 1 2 2 2 3 2 4 3 1 © 2009 Cisco Systems, Inc. All rights reserved.
White Paper 3 2 3 3 3 4 4 1 4 2 4 3 4 4 5 1 5 2 5 3 5 4 6 1 6 2 6 3 6 4 7 1 7 2 7 3 7 4 8 1 Queueing Mode In Rx direction: mode-cos Receive queues [type = 8q4t]: Queue Id Scheduling Num of thresholds ----------------------------------------01 WRR 04 02 WRR 04 03 WRR 04 04 WRR 04 05 WRR 04 06 WRR 04 07 WRR 04 08 WRR 04 WRR bandwidth ratios: 4] 0[queue 5] 0[queue 6] queue-limit ratios: 4] 0[queue 5] 100[queue 1] 0[queue 7] 100[queue 1]
White Paper queue tail-drop-thresholds -------------------------1 100[1] 100[2] 100[3] 100[4] 2 100[1] 100[2] 100[3] 100[4] 3 100[1] 100[2] 100[3] 100[4] 4 100[1] 100[2] 100[3] 100[4] 5 100[1] 100[2] 100[3] 100[4] 6 100[1] 100[2] 100[3] 100[4] 7 100[1] 100[2] 100[3] 100[4] 8 100[1] 100[2] 100[3] 100[4] queue random-detect-min-thresholds ---------------------------------1 100[1] 100[2] 100[3] 100[4] 2 100[1] 100[2] 100[3] 100[4] 3 100[1] 100[2] 100[3] 100[4] 4 100[1] 100[2] 100[3] 100
White Paper 2 1 2 2 2 3 2 4 3 1 3 2 3 3 3 4 4 1 4 2 4 3 4 4 5 1 5 2 5 3 5 4 6 1 6 2 6 3 6 4 7 1 7 2 7 3 7 4 8 1 8 2 8 3 8 4 queue thresh dscp-map --------------------------------------1 1 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 1 2 1 3 1 4 © 2009 Cisco Systems, Inc. All rights reserved.
White Paper 2 1 2 2 2 3 2 4 3 1 3 2 3 3 3 4 4 1 4 2 4 3 4 4 5 1 5 2 5 3 5 4 6 1 6 2 6 3 6 4 7 1 7 2 7 3 7 4 8 1 8 2 8 3 8 4 Packets dropped on Transmit: BPDU packets: queue 0 dropped [cos-map] --------------------------------------------1 0 [0 1 2 3 4 5 6 7 ] 2 0 [] 3 0 [] 4 0 [] © 2009 Cisco Systems, Inc. All rights reserved. This document is Cisco Public Information.
White Paper 5 0 [] 6 0 [] 7 0 [] 8 0 [] Packets dropped on Receive: BPDU packets: 0 queue dropped [cos-map] --------------------------------------------1 0 [0 1 2 3 4 5 6 7 ] 2 0 [] 3 0 [] 4 0 [] 5 0 [] 6 0 [] 7 0 [] 8 0 [] 12.
White Paper WRR bandwidth ratios: 0[queue 5] 0[queue 6] queue-limit ratios: 0[queue 5] 0[queue 6] 100[queue 1] 150[queue 2] 200[queue 3] 0[queue 4] 0[queue 7] 50[queue 1] 0[queue 7] 20[queue 2] 15[queue 3] 0[queue 4] 15[Pri Queue] queue tail-drop-thresholds -------------------------1 70[1] 100[2] 100[3] 100[4] 2 70[1] 100[2] 100[3] 100[4] 3 100[1] 100[2] 100[3] 100[4] 4 100[1] 100[2] 100[3] 100[4] 5 100[1] 100[2] 100[3] 100[4] 6 100[1] 100[2] 100[3] 100[4] 7 100[1] 100[2] 100[3] 100
White Paper 1 4 2 1 2 2 2 3 4 2 3 2 4 3 1 3 2 3 3 3 4 4 1 4 2 4 3 4 4 5 1 5 2 5 3 5 4 6 1 6 2 6 3 6 4 7 1 7 2 7 3 7 4 8 1 6 7 5 queue thresh dscp-map --------------------------------------1 1 0 1 2 3 4 5 6 7 8 9 11 13 15 16 17 19 21 23 25 27 29 31 33 39 41 42 43 44 45 47 1 2 1 3 1 4 2 1 14 2 2 12 2 3 10 2 4 © 2009 Cisco Systems, Inc. All rights reserved. This document is Cisco Public Information.
White Paper 3 1 22 3 2 20 3 3 18 3 4 4 1 24 30 4 2 28 4 3 26 4 4 5 1 5 2 5 3 5 4 6 1 6 2 6 3 6 4 7 1 7 2 7 3 7 4 8 1 32 34 35 36 37 38 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 40 46 Queueing Mode In Rx direction: mode-cos Receive queues [type = 1p7q2t]: Queue Id Scheduling Num of thresholds ----------------------------------------01 WRR 02 02 WRR 02 03 WRR 02 04 WRR 02 05 WRR 02 06 WRR 02 07 WRR 02 08 Priority 01 WRR bandw
White Paper queue tail-drop-thresholds -------------------------1 100[1] 100[2] 2 100[1] 100[2] 3 100[1] 100[2] 4 100[1] 100[2] 5 100[1] 100[2] 6 100[1] 100[2] 7 100[1] 100[2] queue thresh cos-map --------------------------------------1 1 1 2 2 1 2 2 3 1 3 2 4 1 4 2 5 1 5 2 6 1 6 2 7 1 7 2 8 1 0 1 2 3 4 5 6 7 queue thresh dscp-map --------------------------------------1 1 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32
White Paper 4 2 5 1 5 2 6 1 6 2 7 1 7 2 8 1 Packets dropped on Transmit: BPDU packets: 0 queue dropped [cos-map] --------------------------------------------1 0 [0 1 ] 2 0 [2 3 4 ] 3 0 [6 7 ] 8 0 [5 ] Packets dropped on Receive: BPDU packets: 0 queue dropped [cos-map] --------------------------------------------1 0 [0 1 2 3 4 5 6 7 ] Cat6500# 12.
White Paper 04 Priority WRR bandwidth ratios: queue-limit ratios: 01 100[queue 1] 150[queue 2] 200[queue 3] 50[queue 1] 20[queue 2] 15[queue 3] queue tail-drop-thresholds -------------------------1 70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8] 2 70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8] 3 100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8] queue random-detect-min-thresholds ---------------------------------1 40[1] 70[2] 70[3] 70[4] 70[5] 70[6] 70[7] 70[8] 2 40[1] 70
White Paper 3 1 3 2 3 3 3 4 3 5 3 6 3 7 3 8 4 1 6 7 5 Queueing Mode In Rx direction: mode-cos Receive queues [type = 2q8t]: Queue Id Scheduling Num of thresholds ----------------------------------------01 WRR 08 02 WRR 08 WRR bandwidth ratios: 100[queue 1] 0[queue 2] queue-limit ratios: 100[queue 1] 0[queue 2] queue tail-drop-thresholds -------------------------1 100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8] 2 100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7]
White Paper 2 7 2 8 12.
White Paper 1 Standard 4 2 Priority 1 queue tail-drop-thresholds -------------------------1 100[1] 100[2] 100[3] 100[4] queue thresh cos-map --------------------------------------1 1 1 2 1 3 1 4 2 1 0 1 2 3 4 5 6 7 Packets dropped on Transmit: BPDU packets: 0 queue thresh dropped [cos-map] --------------------------------------------------1 1 0 [0 1 2 3 4 5 6 7 ] 1 2 0 [] 2 1 0 [] 2 2 0* [] 3 1 0* [] * - shared transmit counter Packets dropped on Receive: BPDU pack
Printed in USA