Table of Contents
- Cisco ONS 15310-CL and Cisco ONS 15310-MA Ethernet Card Software Feature and Configuration Guide
- Contents
- Preface
- Overview of the ML-Series Card
- CTC Operations on the ML-Series Card
- Initial Configuration of the ML-Series Card
- Configuring Interfaces on the ML-Series Card
- Configuring POS on the ML-Series Card
- Configuring STP and RSTP on the ML-Series Card
- STP Features
- STP Overview
- Supported STP Instances
- Bridge Protocol Data Units
- Election of the Root Switch
- Bridge ID, Switch Priority, and Extended System ID
- Spanning-Tree Timers
- Creating the Spanning-Tree Topology
- Spanning-Tree Interface States
- Spanning-Tree Address Management
- STP and IEEE 802.1Q Trunks
- Spanning Tree and Redundant Connectivity
- Accelerated Aging to Retain Connectivity
- RSTP Features
- Interoperability with IEEE 802.1D STP
- Configuring STP and RSTP Features
- Default STP and RSTP Configuration
- Disabling STP and RSTP
- Configuring the Root Switch
- Configuring the Port Priority
- Configuring the Path Cost
- Configuring the Switch Priority of a Bridge Group
- Configuring the Hello Time
- Configuring the Forwarding-Delay Time for a Bridge Group
- Configuring the Maximum-Aging Time for a Bridge Group
- Verifying and Monitoring STP and RSTP Status
- Configuring VLANs on the ML-Series Card
- Configuring IEEE 802.1Q Tunneling and Layer 2 Protocol Tunneling on the ML-Series Card
- Configuring Link Aggregation on the ML-Series Card
- Configuring IRB on the ML-Series Card
- Configuring Quality of Service on the ML-Series Card
- Understanding QoS
- ML-Series QoS
- QoS on RPR
- Configuring QoS
- Monitoring and Verifying QoS Configuration
- QoS Configuration Examples
- Understanding Multicast QoS and Multicast Priority Queuing
- Configuring Multicast Priority Queuing QoS
- QoS not Configured on Egress
- ML-Series Egress Bandwidth Example
- Understanding CoS-Based Packet Statistics
- Configuring CoS-Based Packet Statistics
- Understanding IP SLA
- Configuring the Switching Database Manager on the ML-Series Card
- Configuring Access Control Lists on the ML-Series Card
- Configuring Resilient Packet Ring on the ML-Series Card
- Understanding RPR
- Configuring RPR
- Connecting the ML-Series Cards with Point-to-Point STS Circuits
- Configuring CTC Circuits for RPR
- Configuring RPR Characteristics and the SPR Interface on the ML-Series Card
- Assigning the ML-Series Card POS Ports to the SPR Interface
- Creating the Bridge Group and Assigning the Ethernet and SPR Interfaces
- RPR Cisco IOS Configuration Example
- Verifying Ethernet Connectivity Between RPR Ethernet Access Ports
- CRC Threshold Configuration and Detection
- Monitoring and Verifying RPR
- Add an ML-Series Card into an RPR
- Delete an ML-Series Card from an RPR
- Cisco Proprietary RPR KeepAlive
- Cisco Proprietary RPR Shortest Path
- Redundant Interconnect
- Configuring Security for the ML-Series Card
- Understanding Security
- Disabling the Console Port on the ML-Series Card
- Secure Login on the ML-Series Card
- Secure Shell on the ML-Series Card
- RADIUS on the ML-Series Card
- RADIUS Relay Mode
- RADIUS Stand Alone Mode
- Understanding RADIUS
- Configuring RADIUS
- Default RADIUS Configuration
- Identifying the RADIUS Server Host
- Configuring AAA Login Authentication
- Defining AAA Server Groups
- Configuring RADIUS Authorization for User Privileged Access and Network Services
- Starting RADIUS Accounting
- Configuring a nas-ip-address in the RADIUS Packet
- Configuring Settings for All RADIUS Servers
- Configuring the ML-Series Card to Use Vendor-Specific RADIUS Attributes
- Configuring the ML-Series Card for Vendor-Proprietary RADIUS Server Communication
- Displaying the RADIUS Configuration
- Configuring Bridging on the ML-Series Card
- CE-100T-8 Ethernet Operation
- Command Reference for the ML-Series Card
- [no] bridge bridge-group-number protocol {drpri-rstp | ieee | rstp}
- clear counters
- [no] clock auto
- interface spr 1
- [no] pos mode gfp [fcs-disabled]
- [no] pos pdi holdoff time
- [no] pos report alarm
- [no] pos trigger defects condition
- [no] pos trigger delay time
- [no] pos vcat defect {immediate | delayed}
- show controller pos interface-number [details]
- show interface pos interface-number
- show ons alarm
- show ons alarm defect {[eqpt | port [port-number] | sts [sts-number] | vcg [vcg-number] | vt]}
- show ons alarm failure {[eqpt | port [port-number] | sts [sts-number] | vcg [vcg-number] | vt]}
- spr-intf-id shared-packet-ring-number
- [no] spr load-balance {auto | port-based}
- spr station-id station-id-number
- spr wrap {immediate | delayed}
- Unsupported CLI Commands for the ML-Series Card
- Using Technical Support
- Index

ML-Series QoS
In some cases, it might be desirable to discard all traffic of a specific ingress class. This can be
accomplished by using a police command of the following form with the class: police 96000
conform-action drop exceed-action drop.
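The following sketch shows one way to apply such a policer to an entire class; the class-map name, match criterion, and interface are hypothetical examples only:
class-map match-any drop-all-class
! Example match criterion; use whatever identifies the traffic to discard
 match cos 1
!
policy-map drop-all-policy
 class drop-all-class
! Both conforming and exceeding traffic are dropped, so all traffic
! in the class is discarded regardless of the configured rate
  police 96000 conform-action drop exceed-action drop
!
interface FastEthernet0
 service-policy input drop-all-policy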
If a marked packet has a provider-supplied Q-tag inserted before transmission, the marking affects only the provider Q-tag. If a Q-tag is received, it is re-marked. If a marked packet is transported over the RPR ring, the marking also affects the RPR-CoS bit. If a Q-tag is inserted (QinQ), the marking affects the added Q-tag. If the ingress packet contains a Q-tag and is transparently switched, the existing Q-tag is marked. For a packet without any Q-tag, the marking has no effect.
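As a related illustration, the following sketch re-marks the CoS of conforming traffic while dropping the excess. The class name, rate, and CoS value are hypothetical, and the policer action keywords should be confirmed against the command reference for this release:
class-map match-any customer-data
 match any
!
policy-map remark-customer
 class customer-data
! Conforming traffic is re-marked to CoS 2; traffic above 5 Mbps is dropped
  police 5000000 conform-action set-cos-transmit 2 exceed-action drop
!
interface FastEthernet1
 service-policy input remark-customer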
The local scheduler treats all nonconforming packets as discard eligible regardless of their CoS setting or the global cos commit definition. For the RPR implementation, discard eligible (DE) packets are marked using the DE bit in the RPR header. The discard eligibility based on the CoS commit or the policing action is local to the ML-Series card scheduler, but it is global for the RPR ring.
Queuing
ML-Series card queuing uses a shared buffer pool to allocate memory dynamically to different traffic
queues. The ML-100T-8 has 1.5 MB of packet buffer memory.
Each queue has an upper limit on the allocated number of buffers based on the class bandwidth
assignment of the queue and the number of queues configured. This upper limit is typically 30 percent
to 50 percent of the shared buffer capacity. Dynamic buffer allocation to each queue can be reduced
based on the number of queues needing extra buffering. The dynamic allocation mechanism provides
fairness in proportion to service commitments as well as optimization of system throughput over a range
of system traffic loads.
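On the ML-100T-8, for example, a 30 to 50 percent upper limit corresponds to roughly 450 KB to 750 KB of the 1.5 MB shared buffer pool for any single queue.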
The Low Latency Queue (LLQ) is defined by setting the weight to infinity or by committing 100 percent of the bandwidth. When an LLQ is defined, a policer should also be defined on the ingress for that specific class to limit the maximum bandwidth consumed by the LLQ; otherwise, the LLQ can occupy the whole bandwidth and starve the other unicast queues.
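The following sketch pairs an egress LLQ with an ingress policer for the same class. The class and policy names, rates, and the use of the priority command to commit the class to the LLQ are illustrative assumptions:
class-map match-any voice
 match cos 5
!
policy-map ingress-cap
 class voice
! Cap the class at 10 Mbps on ingress so the LLQ cannot starve other queues
  police 10000000 conform-action transmit exceed-action drop
!
policy-map egress-llq
 class voice
! Place the class in the low latency queue on egress
  priority
!
interface FastEthernet0
 service-policy input ingress-cap
!
interface POS0
 service-policy output egress-llq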
The ML-Series card supports 400 user-definable queues, which are assigned according to the classification and bandwidth allocation definitions. The classification used for scheduling is applied to frames and packets after the policing action, so if the policer is used to mark or change the CoS bits of ingress traffic, the new values are used when classifying traffic for queuing and scheduling. The ML-Series card provides buffering for 4000 packets.
Scheduling
Scheduling is provided by a series of schedulers that apply weighted deficit round robin (WDRR) as well as priority scheduling mechanisms to the queued traffic associated with each egress port.
Although ordinary round-robin servicing of queues can be done in constant time, unfairness occurs when different queues use different packet sizes. Deficit Round Robin (DRR) scheduling solves this problem. If a queue was not able to send a packet in its previous round because the packet was larger than the credits (quantum) the queue had available, the unused credits carry over and are added to the quantum for the next round.
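For example, if a queue receives a quantum of 1500 bytes per round and holds a 2000-byte packet, the packet cannot be sent in the first round; the unused 1500 bytes carry over, so in the second round the queue has 3000 bytes of credit, sends the packet, and carries the remaining 1000 bytes forward.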
WDRR extends the quantum idea from DRR to provide weighted throughput for each queue. Different queues have different weights, and the quantum assigned to each queue in its round is proportional to the relative weight of the queue among all the queues serviced by that scheduler.
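For example, if three queues serviced by one scheduler are assigned weights of 1, 2, and 5, the quanta granted each round follow the same 1:2:5 ratio, so over time the queues receive approximately 12.5, 25, and 62.5 percent of the available bandwidth, respectively.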