Design Reference
Table Of Contents
- Chapter 1: Introduction
- Chapter 2: New in this release
- Chapter 3: Network design fundamentals
- Chapter 4: Hardware fundamentals and guidelines
- Chapter 5: Optical routing design
- Chapter 6: Platform redundancy
- Chapter 7: Link redundancy
- Chapter 8: Layer 2 loop prevention
- Chapter 9: Spanning tree
- Chapter 10: Layer 3 network design
- Chapter 11: SPBM design guidelines
- Chapter 12: IP multicast network design
- Multicast and VRF-Lite
- Multicast and MultiLink Trunking considerations
- Multicast scalability design rules
- IP multicast address range restrictions
- Multicast MAC address mapping considerations
- Dynamic multicast configuration changes
- IGMPv3 backward compatibility
- IGMP Layer 2 Querier
- TTL in IP multicast packets
- Multicast MAC filtering
- Guidelines for multicast access policies
- Multicast for multimedia
- Chapter 13: System and network stability and security
- Chapter 14: QoS design guidelines
- Chapter 15: Layer 1, 2, and 3 design examples
- Chapter 16: Software scaling capabilities
- Chapter 17: Supported standards, RFCs, and MIBs
- Glossary
At a high level, three main types or stages of congestion exist:
1. No congestion
2. Bursty congestion
3. Severe congestion
In a noncongested network, QoS actions ensure that delay-sensitive applications, such as real-time
voice and video traffic, are sent before lower-priority traffic. The prioritization of delay-sensitive traffic
is essential to minimize delay and reduce or eliminate jitter, which has a detrimental impact on these
applications.
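The prioritization described above can be illustrated with a small strict-priority scheduler sketch. This is a hedged, generic example, not the switch's actual queueing implementation; the class and priority numbering are assumptions for illustration only (lower number = higher priority).

```python
import heapq

class StrictPriorityScheduler:
    """Illustrative strict-priority scheduler (not the VSP 4000 implementation).

    Packets at a lower priority number are always dequeued before packets
    at a higher number, which is how delay-sensitive voice and video are
    sent ahead of lower-priority traffic.
    """
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker: preserves FIFO order within one priority level

    def enqueue(self, priority, packet):
        heapq.heappush(self._heap, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[2]

sched = StrictPriorityScheduler()
sched.enqueue(2, "bulk-data")
sched.enqueue(0, "voice")
sched.enqueue(1, "video")
assert sched.dequeue() == "voice"   # delay-sensitive traffic leaves first
assert sched.dequeue() == "video"
```

Strict priority minimizes delay and jitter for the highest class, at the cost of potentially starving lower classes under sustained load, which is why the later sections pair it with drop-precedence handling.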
A network can experience momentary bursts of congestion for various reasons, such as network
failures, rerouting, and broadcast storms. Avaya Virtual Services Platform 4000 Series has sufficient
capacity to handle bursts of congestion in a seamless and transparent manner. If the burst is not
sustained, the traffic management and buffering process on the switch allows all the traffic to pass
without loss.
Severe congestion is defined as a condition where the network or certain elements of the network
experience a prolonged period of sustained congestion. Under such congestion conditions,
congestion thresholds are reached, buffers overflow, and a substantial amount of traffic is lost.
After the switch detects severe congestion, it discards traffic based on drop-precedence values. This mode of operation ensures that high-priority traffic is not discarded before lower-priority traffic.
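The precedence-based discard behavior described above can be sketched as a simple admission check. This is an illustrative simplification under assumed parameters (`capacity`, `congested_threshold`, and the numeric drop-precedence scale are hypothetical), not the switch's actual congestion-management algorithm.

```python
def admit(queue, packet, capacity, congested_threshold):
    """Sketch of drop-precedence-aware discard (illustrative only).

    packet is a (name, drop_precedence) tuple; a higher drop_precedence
    means the packet is discarded sooner under congestion.
    """
    depth = len(queue)
    if depth >= capacity:
        # Buffer overflow: nothing more can be admitted.
        return False
    if depth >= congested_threshold and packet[1] >= 1:
        # Congestion threshold reached: shed higher-drop-precedence
        # (lower-priority) traffic first.
        return False
    queue.append(packet)
    return True

q = []
assert admit(q, ("voice", 0), capacity=4, congested_threshold=2)
assert admit(q, ("data", 2), capacity=4, congested_threshold=2)
assert not admit(q, ("data", 2), capacity=4, congested_threshold=2)  # congested: dropped
assert admit(q, ("voice", 0), capacity=4, congested_threshold=2)     # still admitted
```

The key property, matching the text, is that once the congestion threshold is reached, low-priority traffic is discarded while high-priority traffic continues to be buffered until the queue is actually full.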
When you perform traffic engineering and link capacity analysis for a network, the standard design rule is to design network links and trunks for a maximum average-peak utilization of no more than 80%. This means that traffic can momentarily peak at up to 100% of capacity, but the average-peak utilization must not exceed 80%. The network is expected to handle momentary offered-load peaks above 100% of capacity through buffering.
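The 80% design rule above reduces to simple arithmetic; the following sketch applies it to a link (the function name and parameters are illustrative, not from any Avaya tool).

```python
def max_average_peak_mbps(link_capacity_mbps, utilization_limit=0.8):
    """Apply the 80% average-peak utilization design rule to a link."""
    return link_capacity_mbps * utilization_limit

# A 1 Gbps trunk should be engineered for no more than 800 Mbps average-peak load.
assert max_average_peak_mbps(1000) == 800.0
# A 10 Gbps trunk: no more than 8 Gbps average-peak load.
assert max_average_peak_mbps(10000) == 8000.0
```

If measured average-peak utilization exceeds this value, the design rule calls for adding link capacity rather than relying on buffering, since sustained load above the limit leads toward the severe-congestion condition described earlier.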
QoS examples and recommendations
The sections that follow present QoS network scenarios for bridged and routed traffic over the core
network.
Bridged traffic
If you bridge traffic over the core network, you keep customer VLANs separate (similar to a Virtual
Private Network). Normally, a service provider implements VLAN bridging (Layer 2) and no routing.
In this case, the 802.1p-bit marking determines the QoS level assigned to each packet. If DiffServ is
active on core ports, the level of service received is based on the highest of the DiffServ or 802.1p
settings.
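The "highest of the DiffServ or 802.1p settings" selection can be sketched as follows. The DSCP-to-level mapping shown here is an assumption for illustration; the switch's actual ingress mapping tables may differ.

```python
# Illustrative DSCP-to-internal-QoS-level mapping (assumed values,
# not the switch's actual table): EF, AF31, and best effort.
DSCP_TO_LEVEL = {46: 6, 26: 3, 0: 0}

def effective_qos_level(dot1p, dscp, diffserv_enabled):
    """When DiffServ is active on a core port, the service level is the
    higher of the DiffServ-derived level and the 802.1p priority."""
    if not diffserv_enabled:
        return dot1p
    return max(dot1p, DSCP_TO_LEVEL.get(dscp, 0))

# EF-marked packet with 802.1p priority 5: DiffServ wins (level 6).
assert effective_qos_level(dot1p=5, dscp=46, diffserv_enabled=True) == 6
# DiffServ inactive on the port: only the 802.1p marking applies.
assert effective_qos_level(dot1p=5, dscp=46, diffserv_enabled=False) == 5
```

In a bridged (Layer 2) service-provider scenario, the 802.1p bits are typically the only trusted marking, so enabling DiffServ on core ports only raises, never lowers, the level a packet receives under this rule.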
The following cases provide sample QoS design guidelines you can use to provide and maintain
high service quality in a network.
If you configure a core port, you assume that the QoS value of all incoming traffic is properly marked. All core switch ports simply read and forward packets; packets are not re-marked or reclassified. All initial QoS markings are performed at the customer device or on the edge devices.
January 2015 Network Design Reference for Avaya VSP 4000 Series 133