Installation guide

Network Infrastructure for EtherNet/IP™
Designing the Infrastructure
leave message. Immediate-leave processing ensures optimal bandwidth management for all hosts on a
switched network, even when multiple multicast groups are in use simultaneously.
4.9 Quality of Service in a Switched Network
Phrases like “gigabits of back-plane capacity,” “millions of switched packets per second,” and “non-blocking switch fabrics” reflect the high performance of today’s Ethernet switches, and they typically prompt a simple question: why the need for Quality of Service (QoS)? The answer is congestion.
A switch may be the fastest switch in the world, but if either of the two scenarios in Figure 4-9 is present,
the switch will experience congestion.
Figure 4-9 Inputs Higher in Speed or Quantity than Outputs Cause Congestion.
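The arithmetic behind both scenarios is simple: a port is congested whenever the aggregate input rate destined for it exceeds its output rate. A minimal sketch of that rule (the link speeds below are illustrative, not taken from the figure):

```python
def is_congested(input_rates_mbps, output_rate_mbps):
    """A port is congested when traffic destined for it arrives
    faster than the port can transmit it."""
    return sum(input_rates_mbps) > output_rate_mbps

# Scenario 1: a single input faster than the output (100 Mb/s into 10 Mb/s)
print(is_congested([100], 10))             # True

# Scenario 2: many inputs whose sum exceeds one output (3 x 100 Mb/s into 100 Mb/s)
print(is_congested([100, 100, 100], 100))  # True

# No congestion: the aggregate input fits the output link
print(is_congested([10, 10], 100))         # False
```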
If the congestion management features on the switch are not up to par during these congested periods,
performance will suffer and packets will be dropped. In a TCP/IP network, packet drops trigger
retransmissions, which in turn increase network load. On a network that is already congested, this
additional load only exacerbates the existing performance problems.
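This feedback loop can be illustrated with a toy model (the one-round retransmission behavior and unit-less rates are simplifying assumptions, not measurements): traffic dropped in one round is re-offered in the next, so the load the network sees grows beyond the original demand.

```python
def offered_load(demand, capacity, rounds=5):
    """Toy model of drop-driven load growth: packets dropped in one
    round are retransmitted in the next, on top of the original demand."""
    load = demand
    for _ in range(rounds):
        dropped = max(0, load - capacity)  # excess traffic is dropped
        load = demand + dropped            # drops come back as retransmissions
    return load

# Demand of 120 units on a 100-unit link: offered load climbs past the demand
print(offered_load(120, 100, rounds=1))  # 140
print(offered_load(120, 100, rounds=3))  # 180

# An uncongested link sees no retransmissions at all
print(offered_load(80, 100))             # 80
```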
Latency-sensitive traffic, such as motion control messages, can be severely affected if transmission delays
occur. Adding more buffers to a switch will not necessarily alleviate congestion problems, since
latency-sensitive traffic needs to be switched as quickly as possible.
To address network congestion, QoS needs to be implemented in stages:
First, identify the different traffic types in the network using classification techniques.
Next, implement advanced buffer management techniques to prevent high-priority traffic from being
dropped during congestion.
Finally, incorporate scheduling techniques to transmit high-priority traffic from its queues as quickly
as possible.
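The three stages can be sketched as a single egress port model. The priority threshold, queue depths, and packet field names below are illustrative assumptions; a real switch classifies on 802.1p or DSCP markings in hardware, and strict priority is only one of several scheduling disciplines.

```python
from collections import deque

HIGH, LOW = 0, 1                    # queue indices: 0 = high priority, 1 = low
QUEUE_DEPTH = {HIGH: 64, LOW: 16}   # deeper buffer reserved for high priority

def classify(packet):
    """Stage 1 -- classification: map a packet to a queue. Here we key on
    a hypothetical 'priority' field standing in for an 802.1p/DSCP value."""
    return HIGH if packet.get("priority", 0) >= 5 else LOW

class PriorityPort:
    def __init__(self):
        self.queues = {HIGH: deque(), LOW: deque()}

    def enqueue(self, packet):
        """Stage 2 -- buffer management: per-queue tail drop, so a flood of
        low-priority traffic cannot consume the high-priority buffer."""
        q = classify(packet)
        if len(self.queues[q]) >= QUEUE_DEPTH[q]:
            return False            # queue full during congestion: drop
        self.queues[q].append(packet)
        return True

    def dequeue(self):
        """Stage 3 -- scheduling: strict priority, so high-priority packets
        always leave the port first."""
        for q in (HIGH, LOW):
            if self.queues[q]:
                return self.queues[q].popleft()
        return None

port = PriorityPort()
port.enqueue({"priority": 1, "id": "low"})
port.enqueue({"priority": 7, "id": "high"})
print(port.dequeue()["id"])  # high  (scheduled first despite arriving second)
print(port.dequeue()["id"])  # low
```

Strict priority keeps latency-sensitive traffic moving even under load, at the cost of possibly starving the low-priority queue; weighted round-robin is a common compromise when starvation is a concern.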