Deployment Guide

Input packets: 0 0 pps 0
Output packets: 0 0 pps 0
64B packets: 0 0 pps 0
Over 64B packets: 0 0 pps 0
Over 127B packets: 0 0 pps 0
Over 255B packets: 0 0 pps 0
Over 511B packets: 0 0 pps 0
Over 1023B packets: 0 0 pps 0
Error statistics:
Input underruns: 0 0 pps 0
Input giants: 0 0 pps 0
Input throttles: 0 0 pps 0
Input CRC: 0 0 pps 0
Input IP checksum: 0 0 pps 0
Input overrun: 0 0 pps 0
Output underruns: 0 0 pps 0
Output throttles: 0 0 pps 0
m - Change mode c - Clear screen
l - Page up a - Page down
T - Increase refresh interval t - Decrease refresh interval
q - Quit
q
Dell#
Maintenance Using TDR
The time domain reflectometer (TDR) is supported on all Dell Networking switch/routers.
TDR is an assistance tool for resolving link issues; it helps detect obvious open or short conditions on any of the four
copper pairs. TDR sends a signal down the physical cable and examines the reflection that returns. From the reflection,
TDR can indicate whether there is a cable fault (for example, the cable is broken, the cable is unterminated, or a
transceiver is unplugged).
TDR is useful for troubleshooting an interface that is not establishing a link; that is, when the link is flapping or not coming up.
TDR is not intended to be used on an interface that is passing traffic. When you run a TDR test on a physical cable, shut
down the port on the far end of the cable; otherwise, the test may return incorrect results.
NOTE: TDR is an intrusive test. Do not run TDR on a link that is up and passing traffic.
To test and display TDR results, use the following commands.
1. Test for cable faults on the TenGigabitEthernet cable.
EXEC Privilege mode
tdr-cable-test tengigabitethernet slot/port
Between two ports, do not start the test on both ends of the cable.
Enable the interface before starting the test; if the port is not enabled, the test prints an error message.
2. Display the TDR test results (see the example after this procedure).
EXEC Privilege mode
show tdr tengigabitethernet slot/port
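The following is a minimal example of the sequence, assuming the cable under test is connected to tengigabitethernet 1/1 (a placeholder slot/port; substitute your own) and the port on the far end of the cable has already been shut down:
Dell#tdr-cable-test tengigabitethernet 1/1
Dell#show tdr tengigabitethernet 1/1
If you enter show tdr before the test completes, the results are not yet available; re-enter the command after the test finishes.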
Displaying Traffic Statistics on HiGig Ports
You can verify the buffer usage and queue counters for high-Gigabit Ethernet (HiGig) ports and link bundles (port channels).
The buffer counters supported for front-end ports are extended to HiGig backplane ports.
You can display the queue statistics and buffer counters for backplane line-card (leaf) and switch fabric module (SFM - spine)
NPU port queues on a switch using the show commands described in this section. Transmit, receive, and drop counters are
displayed. Buffer counters include the total number of cells currently used by all queues on all ports in a port pipe.
The f10-bp-stats.mib is used to gather statistics about backplane HiGig ports. For line-card NPUs, the value is 0; for SFM
NPUs, the value ranges from 0 to 1.