Troubleshooting Guide
Table of Contents
- Chapter 1: Introduction
- Chapter 2: Safety messages
- Chapter 3: New in this document
- Chapter 4: Data collection required for Technical Support cases
- Chapter 5: Troubleshooting planning fundamentals
- Chapter 6: Troubleshooting fundamentals
- Chapter 7: Troubleshooting tool fundamentals
- Chapter 8: Log and trap fundamentals
- Chapter 9: Log configuration using ACLI
  - Configuring a UNIX system log and syslog host
  - Configuring secure forwarding
  - Installing root certificate for syslog client
  - Configuring logging
  - Configuring the remote host address for log transfer
  - Configuring system logging to external storage
  - Configuring system message control
  - Extending system message control
  - Viewing logs
  - Configuring ACLI logging
- Chapter 10: Log configuration using EDM
- Chapter 11: SNMP trap configuration using ACLI
- Chapter 12: SNMP trap configuration using EDM
- Chapter 13: Traps reference
- Chapter 14: Hardware troubleshooting
- Chapter 15: Software troubleshooting
- Chapter 16: Software troubleshooting tool configuration using ACLI
  - Using ACLI for troubleshooting
  - Using software record dumps
  - Using trace to diagnose problems
  - Using trace to diagnose IPv6 problems
  - Viewing and deleting debug files
  - Configuring port mirroring
  - Configuring global mirroring actions with an ACL
  - Configuring ACE actions to mirror
  - Clearing ARP information for an interface
  - Flushing routing, MAC, and ARP tables for an interface
  - Pinging an IP device
  - Running a traceroute test
  - Showing SNMP logs
  - Using trace to examine IS-IS control packets
  - Viewing the metric type of IS-IS route in TLVs – detailed
  - Viewing the metric type of IS-IS route in TLVs – summarized
- Chapter 17: Software troubleshooting tool configuration using EDM
- Chapter 18: Layer 1 troubleshooting
- Chapter 19: Operations and Management
  - CFM fundamentals
  - CFM configuration using ACLI
  - Autogenerated CFM
  - Configuring explicit mode CFM
  - Displaying SPBM nodal configuration
  - Configuring simplified CFM SPBM
  - Triggering a loopback test (LBM)
  - Triggering linktrace (LTM)
  - Triggering a Layer 2 ping
  - Triggering a Layer 2 traceroute
  - Triggering a Layer 2 tracetree
  - Triggering a Layer 2 tracemroute
  - Using trace CFM to diagnose problems
  - Using trace SPBM to diagnose problems
  - CFM configuration using EDM
  - Autogenerated CFM
  - Configuring explicit CFM
  - Configuring Layer 2 ping
  - Initiating a Layer 2 traceroute
  - Viewing Layer 2 traceroute results
  - Configuring Layer 2 IP ping
  - Viewing Layer 2 IP ping results
  - Configuring Layer 2 IP traceroute
  - Viewing Layer 2 IP traceroute results
  - Triggering a loopback test
  - Triggering linktrace
  - Viewing linktrace results
  - Configuring Layer 2 tracetree
  - Viewing Layer 2 tracetree results
  - Configuring Layer 2 trace multicast route on a VLAN
  - Configuring Layer 2 tracemroute on a VRF
  - Viewing Layer 2 trace multicast route results
  - CFM configuration example
- Chapter 20: Upper layer troubleshooting
  - Troubleshooting SNMP
  - Troubleshooting DHCP
  - Troubleshooting DHCP Relay
  - Troubleshooting client connection to the DHCP server
  - Troubleshooting IPv6 DHCP Relay
  - IPv6 DHCP Relay switch side troubleshooting
  - IPv6 DHCP Relay server side troubleshooting
  - IPv6 DHCP Relay client side troubleshooting
  - Enabling trace messages for IPv6 DHCP Relay
  - Troubleshooting IPv6 VRRP
  - VRRP transitions
  - Enabling trace messages for IPv6 VRRP troubleshooting
  - Risks associated with enabling trace messages
  - VRRP with higher priority running as backup
  - Downgrading or upgrading from releases that support different key sizes
  - Troubleshooting IPv6 connectivity loss
  - Troubleshooting TACACS+
  - Troubleshooting RSMLT
- Chapter 21: Unicast routing troubleshooting
- Chapter 22: Multicast troubleshooting
- Chapter 23: Multicast routing troubleshooting using ACLI
  - Viewing IGMP interface information
  - Viewing multicast group trace information for IGMP snoop
  - Viewing IGMP group information
  - Showing the hardware resource usage
  - Using PIM debugging commands
  - Determining the protocol configured on the added VLAN
  - Determining the data stream learned with IP Multicast over Fabric Connect on the VLAN
  - Displaying the SPBM multicast database
  - Troubleshooting IP Multicast over Fabric Connect for Layer 2 VSNs
  - Troubleshooting IP Multicast over Fabric Connect for Layer 3 VSNs
  - Troubleshooting IP Multicast over Fabric Connect for IP Shortcuts
  - Defining the IS-IS trace flag for IP multicast
- Chapter 24: Multicast routing troubleshooting using EDM
  - Viewing IGMP interface information
  - Viewing IGMP snoop trace information
  - Viewing IGMP group information
  - Viewing multicast group sources
  - Viewing multicast routes by egress VLAN
  - Enabling multicast routing process statistics
  - Determining the data stream learned when IP Multicast over Fabric Connect is configured on the VLAN
  - Showing the SPBM multicast database
- Chapter 25: Transparent Port UNI feature troubleshooting using ACLI
- Chapter 26: Troubleshooting MACsec
- Chapter 27: Troubleshooting MACsec using EDM
- Chapter 28: Troubleshooting Fabric Attach
  - Troubleshooting Fabric Attach using the ACLI
  - Troubleshooting Fabric Attach using the EDM
  - Fabric Attach troubleshooting example
Layer 2 tracemroute
The l2tracemroute command is a proprietary command that traces the multicast tree for a
particular multicast flow. The user specifies the source, group, and service context (either VLAN
or VRF) of the multicast flow to trace.
CFM sends a multicast LTM, using an internal calculation to map the source, group, and context to
the corresponding target address. LTRs are returned by all leaves of the multicast tree for that flow,
as well as by transit nodes. The target MAC used in the LTM is a combination of the data I-SID and
the nickname, and the packet is sent on the appropriate SPBM B-VLAN. The user can see the
generated multicast tree for that flow, which includes the data I-SID and nickname.
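The combination of nickname and data I-SID into a target MAC can be illustrated with a short sketch. This is not the switch's actual implementation; it assumes the IEEE 802.1aq multicast DA layout, in which the first octet carries the multicast and locally administered bits (0x3 in its low nibble) together with the top nibble of the 20-bit nickname, the next two octets carry the remaining nickname bits, and the low three octets carry the 24-bit data I-SID.

```python
def spbm_multicast_da(nickname: int, data_isid: int) -> str:
    """Build an SPBM multicast DA from a 20-bit nickname and a 24-bit I-SID.

    Assumed layout (IEEE 802.1aq): octet 0 = top nibble of the nickname
    plus 0x3 (multicast + locally administered bits); octets 1-2 = the
    remaining 16 nickname bits; octets 3-5 = the data I-SID.
    """
    if not 0 <= nickname < 1 << 20:
        raise ValueError("nickname must fit in 20 bits")
    if not 0 <= data_isid < 1 << 24:
        raise ValueError("I-SID must fit in 24 bits")
    octets = [
        ((nickname >> 16) << 4) | 0x3,  # nickname high nibble + M/L bits
        (nickname >> 8) & 0xFF,         # nickname middle octet
        nickname & 0xFF,                # nickname low octet
        (data_isid >> 16) & 0xFF,       # I-SID high octet
        (data_isid >> 8) & 0xFF,        # I-SID middle octet
        data_isid & 0xFF,               # I-SID low octet
    ]
    return ":".join(f"{o:02x}" for o in octets)

# Example: nickname 0.00.10 (0x00010) with data I-SID 200
print(spbm_multicast_da(0x00010, 200))  # -> 03:00:10:00:00:c8
```

Because the nickname and I-SID occupy fixed, non-overlapping bit fields, the generated multicast tree shown by l2tracemroute can be read back directly from such an address.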
Nodal MPs
Nodal MPs provide both MEP and MIP functionality for SPBM deployments. Nodal MPs are
associated with a B-VLAN, and their CFM messages are VLAN encapsulated. The Nodal MEP provides
traceability and troubleshooting at the system level for a given B-VLAN. Each node (chassis) has a
given MAC address and communicates with other nodes. The SPBM instance MAC address is used
as the MAC address of the Nodal MP. The Nodal B-VLAN MPs support eight levels of CFM, and
you configure the Nodal B-VLAN MPs on a per B-VLAN basis. Virtual SMLT MAC addresses are
also able to respond to LTM and LBM.
Nodal B-VLAN MEPs
The Nodal B-VLAN MEPs are created on the CP and function as if they are connected to the virtual
interface of the given B-VLAN. Because of this, they are supported for both port-based and MLT-based B-VLANs.
To support this behavior, a MAC entry is added to the FDB, and a new CFM data-path table
containing the B-VLAN and MP level is added to direct CFM frames to the CP as required.
Nodal B-VLAN MIPs
The Nodal MIP is associated with a B-VLAN; the VLAN and level are sufficient to specify the Nodal MIP
entity. The Nodal MIP MAC address is the SPBM system ID of the node on which it resides. If the
fastpath sends a message to the CP, the MIP responds if this node is not the target, and the MEP responds
if it is the target.
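The response rule above can be sketched as a small dispatch function. This is an illustrative model only, not the switch's implementation, and the function and parameter names are hypothetical.

```python
def cfm_responder(frame_target_mac: str, node_system_id: str) -> str:
    """Decide which nodal MP answers a CFM message punted to the CP.

    Illustrative model of the rule described above: the MEP responds
    when this node is the target of the message; otherwise the MIP
    responds (for example, with an LTR while the LTM continues on).
    """
    if frame_target_mac.lower() == node_system_id.lower():
        return "MEP"  # this node is the target
    return "MIP"      # not the target, so the MIP answers

# A transit node (system ID differs from the target) answers via its MIP.
print(cfm_responder("00:bb:00:00:81:21", "00:bb:00:00:81:33"))  # -> MIP
```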
Nodal B-VLAN MIPs with SMLT
When Nodal MEPs or MIPs are on SPBM B-VLANs, the LTM code uses a unicast MAC DA. The
LTM DA is the same as the target MAC address, which is the SPBM MAC address or the SMLT
MAC address of the target node.
Avaya Virtual Services Platform 8000 Series supports SMLT interaction with SPBM. This is
accomplished by using two B-VIDs into the core from each pair of SMLT terminating nodes. Both
nodes advertise the Nodal B-MAC into the core on both B-VIDs. In addition, each node advertises
the SMLT virtual B-MAC on one of the two B-VLANs.
Operations and Management
January 2017 Troubleshooting 152
Comments on this document? infodev@avaya.com