Dell PowerEdge Configuration Guide for the M I/O Aggregator 9.7(0.0)
Notes, Cautions, and Warnings
NOTE: A NOTE indicates important information that helps you make better use of your computer.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
Copyright © 2015 Dell Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws.
Contents
1 About this Guide
  Audience
  Conventions
Configuring Enhanced Transmission Selection
  Configuring DCB Maps and its Attributes
    DCB Map: Configuration Procedure
  Important Points to Remember
Ensuring Robustness in a Converged Ethernet Network
  FIP Snooping on Ethernet Bridges
  How FIP Snooping is Implemented
  FIP Snooping on VLANs
Adding an Interface to an Untagged VLAN
  VLAN Configuration on Physical Ports and Port-Channels
Port Channel Interfaces
Configuration Tasks for Port Channel Interfaces
  Creating a Port Channel
  Adding a Physical Interface to a Port Channel
Viewing the LLDP Configuration
Viewing Information Advertised by Adjacent LLDP Agents
Configuring LLDPDU Intervals
Reading Managed Object Values
Displaying the Ports in a VLAN using SNMP
Fetching Dynamic MAC Entries using SNMP
Deriving Interface Indices
18 Broadcast Storm Control
  Supported Modes
  Disabling Broadcast Storm Control
  Displaying Broadcast-Storm Control Status
Guidelines for Working with FC Flex IO Modules
  Processing of Data Traffic
  Installing and Configuring the Switch
26 Debugging and Diagnostics
  Supported Modes
  Debugging Aggregator Operation
1 About this Guide This guide describes the supported protocols and software features, and provides configuration instructions and examples, for the Dell Networking M I/O Aggregator running Dell Networking OS version 9.7(0.0). The M I/O Aggregator is installed in a Dell PowerEdge M1000e Enclosure. For information about how to install and perform the initial switch configuration, refer to the Getting Started Guides on the Dell Support website at http://www.dell.
Related Documents For more information about the Dell PowerEdge M I/O Aggregator MXL 10/40GbE Switch IO Module, refer to the following documents: • Dell Networking OS Command Line Reference Guide for the M I/O Aggregator • Dell Networking OS Getting Started Guide for the M I/O Aggregator • Release Notes for the M I/O Aggregator
2 Before You Start To install the Aggregator in a Dell PowerEdge M1000e Enclosure, use the instructions in the Dell PowerEdge M I/O Aggregator Getting Started Guide that is shipped with the product. The I/O Aggregator (also known as Aggregator) installs with zero-touch configuration. After you power it on, an Aggregator boots up with default settings and auto-configures with software features enabled.
Stacking mode
stack-unit unit iom-mode stack
CONFIGURATION mode
Dell(conf)#stack-unit 0 iom-mode stack
Select this mode to configure Stacking mode CLI commands. For more information on the Stacking mode, refer to Stacking.
• Internet small computer system interface (iSCSI) optimization. • Internet group management protocol (IGMP) snooping. • Jumbo frames: Ports are set to a maximum MTU of 12,000 bytes by default. • Link tracking: Uplink-state group 1 is automatically configured. In uplink state-group 1, server-facing ports auto-configure as downstream interfaces; the uplink port-channel (LAG 128) auto-configures as an upstream interface.
Link Tracking By default, all server-facing ports are tracked by the operational status of the uplink LAG. If the uplink LAG goes down, the Aggregator loses its connectivity and is no longer operational; all server-facing ports are brought down after the specified defer-timer interval, which is 10 seconds by default. If you have configured a VLAN, you can reduce the defer time by changing the defer-timer value or remove it by using the no defer-timer command.
For detailed information about how to reconfigure specific software settings, refer to the appropriate chapter.
3 Configuration Fundamentals The Dell Networking Operating System (OS) command line interface (CLI) is a text-based interface you can use to configure interfaces and protocols. The CLI is structured in modes for security and management purposes. Different sets of commands are available in each mode, and you can limit user access to modes using privilege levels. In Dell Networking OS, after you enable a command, it is entered into the running configuration file.
• LINE submode is the mode in which you configure the console and virtual terminal lines. NOTE: At any time, entering a question mark (?) displays the available command options. For example, when you are in CONFIGURATION mode, entering the question mark first lists all available commands, including the possible submodes.
CLI Command Mode: VIRTUAL TERMINAL (LINE Modes); Prompt: Dell(config-line-vty)#; Access Command: line
The following example shows how to change the command mode from CONFIGURATION mode to INTERFACE configuration mode.
Example of Changing Command Modes
Dell(conf)#interface tengigabitethernet 0/2
Dell(conf-if-te-0/2)#
The do Command
You can enter an EXEC mode command from any CONFIGURATION mode (CONFIGURATION, INTERFACE, and so on).
Obtaining Help Obtain a list of keywords and a brief functional description of those keywords at any CLI mode using the ? or help command: • To list the keywords available in the current mode, enter ? at the prompt or after a keyword. • Entering ? after a prompt lists all of the available keywords. The output of this command is the same for the help command.
Short-Cut Key Combination: Action
CNTL-N: Return to more recent commands in the history buffer after recalling commands with CNTL-P or the UP arrow key.
CNTL-P: Recalls commands, beginning with the last command.
CNTL-U: Deletes the line.
CNTL-W: Deletes the previous word.
CNTL-X: Deletes the line.
CNTL-Z: Ends continuous scrolling of command outputs.
Esc B: Moves the cursor back one word.
Esc F: Moves the cursor forward one word.
Esc D: Deletes all characters from the cursor to the end of the word.
The except keyword displays text that does not match the specified text. The following example shows this command used in combination with the show stack-unit all stack-ports all pfc details command.
Example of the except Keyword
Dell(conf)#do show stack-unit all stack-ports all pfc details | except 0
Admin mode is On
Admin is enabled
Local is enabled
Link Delay 65535 pause quantum
Dell(conf)#
The find keyword displays the output of the show command beginning from the first occurrence of specified text.
4 Data Center Bridging (DCB) On an I/O Aggregator, data center bridging (DCB) features are auto-configured in standalone mode. You can display information on DCB operation by using show commands. NOTE: DCB features are not supported on an Aggregator in stacking mode.
Priority-Based Flow Control In a data center network, priority-based flow control (PFC) manages large bursts of one traffic type in multiprotocol links so that it does not affect other traffic types and no frames are lost due to congestion. When PFC detects congestion on a queue for a specified priority, it sends a pause frame for the 802.1p priority traffic to the transmitting device.
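As background, the pause mechanism that PFC inherits from IEEE 802.1Qbb expresses pause time in quanta, where one quantum equals 512 bit times on the link. The short sketch below (illustrative Python, not Dell OS code) converts the 65535 pause-quantum value that appears in this chapter's show output into wall-clock time:

```python
# Illustrative sketch: a PFC pause frame carries a per-priority pause time
# expressed in quanta; one quantum equals 512 bit times on the link.
def pfc_pause_duration_us(quanta: int, link_speed_bps: int) -> float:
    """Return how long a priority stays paused, in microseconds."""
    bit_time_s = 1.0 / link_speed_bps          # duration of one bit on the wire
    return quanta * 512 * bit_time_s * 1e6     # 512 bit times per quantum

# The show output elsewhere in this chapter reports "65535 pause quantum",
# the maximum pause a peer can request in a single frame.
max_pause_10g = pfc_pause_duration_us(65535, 10_000_000_000)
print(round(max_pause_10g, 2))  # 3355.39 (microseconds on a 10GbE link)
```

This is why PFC can stop a congested priority for only a few milliseconds at a time; the transmitter resumes unless the receiver refreshes the pause.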
• PFC uses the DCB MIB (IEEE 802.1az draft 2.5) and the PFC MIB (IEEE 802.1bb draft 2.2). If DCBx negotiation is not successful (for example, due to a version or TLV mismatch), DCBx is disabled and you cannot enable PFC or ETS. Configuring Priority-Based Flow Control PFC provides a flow control mechanism based on the 802.1p priorities in converged Ethernet traffic received on an interface and is enabled by default when you enable DCB.
5. (Optional) Enter a text description of the input policy. DCB INPUT POLICY mode description text The maximum is 32 characters. 6. Exit DCB input policy configuration mode. DCB INPUT POLICY mode exit 7. Enter interface configuration mode. CONFIGURATION mode interface type slot/port 8. Apply the input policy with the PFC configuration to an ingress interface. INTERFACE mode dcb-policy input policy-name 9. Repeat Steps 1 to 8 on all PFC-enabled peer interfaces to ensure lossless traffic service.
A DCB input policy for PFC applied to an interface may become invalid if you reconfigure dot1p-queue mapping. This situation occurs when the new dot1p-queue assignment exceeds the maximum number (2) of lossless queues supported globally on the switch. In this case, all PFC configurations received from PFC-enabled peers are removed and resynchronized with the peer devices. Traffic may be interrupted when you reconfigure PFC no-drop priorities in an input policy or reapply the policy to an interface.
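The validity rule described above can be sketched as a small check. The helper name and dot1p-to-queue mapping below are hypothetical, but the rule itself (at most two lossless queues supported globally) comes from this chapter:

```python
# Hypothetical helper illustrating the check described above: a PFC
# configuration stays valid only while the dot1p-to-queue mapping needs
# at most two lossless queues.
MAX_LOSSLESS_QUEUES = 2  # global limit stated in this chapter

def pfc_config_valid(dot1p_to_queue: dict, pfc_priorities: set) -> bool:
    """True if the queues carrying PFC-enabled priorities fit the lossless limit."""
    lossless_queues = {dot1p_to_queue[p] for p in pfc_priorities}
    return len(lossless_queues) <= MAX_LOSSLESS_QUEUES

# Illustrative mapping: PFC on priorities 3 and 4 lands on queues 1 and 2,
# which fits; adding priority 5 would need a third lossless queue.
mapping = {0: 0, 1: 0, 2: 0, 3: 1, 4: 2, 5: 3, 6: 3, 7: 3}
print(pfc_config_valid(mapping, {3, 4}))      # True
print(pfc_config_valid(mapping, {3, 4, 5}))   # False
```

A reconfigured dot1p-queue assignment that makes this check fail is exactly the situation in which the switch invalidates the input policy and resynchronizes with its peers.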
• ETS supports groups of 802.1p priorities that have: – PFC enabled or disabled – No bandwidth limit or no ETS processing • Bandwidth allocated by the ETS algorithm is made available after strict-priority groups are serviced. If a priority group does not use its allocated bandwidth, the unused bandwidth is made available to other priority groups so that the sum of the bandwidth use is 100%.
Step Task Command Command Mode port queue lossless. The sum of all allocated bandwidth percentages in all groups in the DCB map must be 100%. Strict-priority traffic is serviced first. Afterwards, bandwidth allocated to other priority groups is made available and allocated according to the specified percentages. If a priority group does not use its allocated bandwidth, the unused bandwidth is made available to other priority groups.
Step Task Command Command Mode fortygigabitEthernet slot/ port} Apply the DCB map on the Ethernet port to configure it with the PFC and ETS settings in the map; for example: 2 INTERFACE dcb-map name Dell# interface tengigabitEthernet 0/0 Dell(config-if-te-0/0)# dcb-map SAN_A_dcb_map1 Repeat Steps 1 and 2 to apply a DCB map to more than one port.
• A limit of two lossless queues is supported on a port. If the number of lossless queues configured exceeds the maximum supported limit per port (two), an error message is displayed. You must re-configure the value to a smaller number of queues. • If you configure lossless queues on an interface that already has a DCB map with PFC enabled (pfc on), an error message is displayed. Step Task Command Command Mode 1 Enter INTERFACE Configuration mode.
Data Center Bridging in a Traffic Flow The following figure shows how DCB handles a traffic flow on an interface. Figure 3. DCB PFC and ETS Traffic Handling Enabling Data Center Bridging DCB is automatically configured when you configure FCoE or iSCSI optimization. Data center bridging supports converged enhanced Ethernet (CEE) in a data center network. DCB is disabled by default. It must be enabled to support CEE.
dcb stack-unit all pfc-buffering pfc-ports 64 pfc-queues 2 NOTE: To save the pfc buffering configuration changes, save the configuration and reboot the system. NOTE: Dell Networking OS Behavior: DCB is not supported if you enable link-level flow control on one or more interfaces. For more information, refer to Flow Control Using Ethernet Pause Frames.
switchport auto vlan ! protocol lldp advertise management-tlv management-address system-name dcbx port-role auto-downstream no shutdown Dell# When no DCBx TLVs are received on a DCB-enabled interface for 180 seconds, DCB is automatically disabled and flow control is reenabled. Lossless Traffic Handling In auto-DCB-enable mode, Aggregator ports operate with the auto-detection of DCBx traffic. At any moment, some ports may operate with link-level flow control while others operate with DCB-based PFC enabled.
NOTE: Dell Networking does not recommend mapping all ingress traffic to a single queue when using PFC and ETS. However, Dell Networking does recommend using Ingress traffic classification using the service-class dynamic dot1p command (honor dot1p) on all DCB-enabled interfaces. If you use L2 class maps to map dot1p priority traffic to egress queues, take into account the default dot1p-queue assignments in the following table and the maximum number of two lossless queues supported on a port.
How Enhanced Transmission Selection is Implemented Enhanced transmission selection (ETS) provides a way to optimize bandwidth allocation to outbound 802.1p classes of converged Ethernet traffic. Different traffic types have different service needs. Using ETS, groups within an 802.1p priority class are autoconfigured to provide different treatment for traffic with different bandwidth, latency, and best-effort needs.
– The CIN version supports two types of strict-priority scheduling: * Group strict priority: Allows a single priority flow in a priority group to increase its bandwidth usage to the bandwidth total of the priority group. A single flow in a group can use all the bandwidth allocated to the group. * Link strict priority: Allows a flow in any priority group to increase to the maximum link bandwidth. CIN supports only the default dot1p priority-queue assignment in a priority group.
The configuration received from a DCBx peer or from an internally propagated configuration is not stored in the switch’s running configuration. On a DCBx port in an auto-upstream role, the PFC and application priority TLVs are enabled. ETS recommend TLVs are disabled and ETS configuration TLVs are enabled. Auto-downstream The port advertises its own configuration to DCBx peers but is not willing to receive remote peer configuration.
Asymmetric DCB parameters are exchanged between a DCBx-enabled port and a peer port without requiring that a peer port and the local port use the same configured values for the configurations to be compatible. For example, ETS uses an asymmetric exchange of parameters between DCBx peers. Symmetric DCB parameters are exchanged between a DCBx-enabled port and a peer port but requires that each configured parameter value be the same for the configurations in order to be compatible.
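The symmetric/asymmetric distinction can be sketched as a simple compatibility check. This is illustrative only; real DCBx implementations negotiate per-TLV rather than through a single function like this:

```python
# Sketch of the distinction described above (names are illustrative,
# not Dell OS APIs).
def compatible(local_cfg, peer_cfg, symmetric: bool) -> bool:
    """Symmetric parameters (e.g. PFC priorities) must match exactly on
    both peers; asymmetric parameters (e.g. ETS bandwidth shares) may
    differ and still interoperate."""
    if symmetric:
        return local_cfg == peer_cfg
    return True  # asymmetric: each side may run its own values

# PFC is symmetric: a mismatched priority set is incompatible.
print(compatible({3, 4}, {3}, symmetric=True))                  # False
# ETS is asymmetric: differing bandwidth shares are still compatible.
print(compatible({"pg0": 60}, {"pg0": 40}, symmetric=False))    # True
```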
• The peer times out. • Multiple peers are detected on the link. DCBx operations on a port are performed according to the auto-configured DCBx version, including fast and slow transmit timers and message formats. If a DCBx frame with a different version is received, a syslog message is generated and the peer version is recorded in the peer status table. If the frame cannot be processed, it is discarded and the discard counter is incremented.
DCBx Prerequisites and Restrictions The following prerequisites and restrictions apply when you configure DCBx operation on a port: • DCBx requires LLDP in both send (TX) and receive (RX) modes to be enabled on a port interface. If multiple DCBx peer ports are detected on a local DCBx interface, LLDP is shut down. • The CIN version of DCBx supports only PFC, ETS, and FCoE; it does not support iSCSI, backward congestion notification (BCN), logical link down (LLD), and network interface virtualization (NIV).
Verifying the DCB Configuration To display DCB configurations, use the following show commands. Table 3. Displaying DCB Configurations Command Output show qos dot1p-queue mapping Displays the current 802.1p priority-queue mapping. show qos dcb-map map-name Displays the DCB parameters configured in a specified DCB map. show dcb [stack-unit unit-number] Displays the data center bridging status, number of PFC-enabled ports, and number of PFC-enabled queues.
Example of the show interface pfc statistics Command Dell#show interfaces tengigabitethernet 0/3 pfc statistics Interface TenGigabitEthernet 0/3 Priority Rx XOFF Frames Rx Total Frames Tx Total Frames ------------------------------------------------------0 0 0 0 1 0 0 0 2 0 0 0 3 0 0 0 4 0 0 0 5 0 0 0 6 0 0 0 7 0 0 0 Example of the show interfaces pfc summary Command Dell# show interfaces tengigabitethernet 0/4 pfc summary Interface TenGigabitEthernet 0/4 Admin mode is on Admin is enabled Remote is enabled,
Table 4. show interface pfc summary Command Description Fields Description Interface Interface type with stack-unit and port number. Admin mode is on; Admin is enabled PFC Admin mode is on or off with a list of the configured PFC priorities . When PFC admin mode is on, PFC advertisements are enabled to be sent and received from peers; received PFC configuration takes effect. The admin operational status for a DCBx exchange of PFC configuration is enabled or disabled.
Fields Description PFC TLV Statistics: Pause Tx pkts Number of PFC pause frames transmitted. PFC TLV Statistics: Pause Rx pkts Number of PFC pause frames received. Input Appln Priority TLV pkts Number of Application Priority TLVs received. Output Appln Priority TLV pkts Number of Application Priority TLVs transmitted. Error Appln Priority TLV pkts Number of Application Priority error packets received.
Oper status is init
Conf TLV Tx Status is disabled
Traffic Class TLV Tx Status is disabled
Example of the show interface ets detail Command
Dell# show interfaces tengigabitethernet 0/4 ets detail
Interface TenGigabitEthernet 0/4
Max Supported TC Groups is 4
Number of Traffic Classes is 8
Admin mode is on
Admin Parameters :
------------------
Admin is enabled
TC-grp  Priority#        Bandwidth  TSA
0       0,1,2,3,4,5,6,7  100%       ETS
1                        0%         ETS
2                        0%         ETS
3                        0%         ETS
4                        0%         ETS
5                        0%         ETS
6                        0%         ETS
7                        0%         ETS
Remote Parameters:
------------------
Field Description Admin Parameters ETS configuration on local port, including priority groups, assigned dot1p priorities, and bandwidth allocation. Remote Parameters ETS configuration on remote peer port, including Admin mode (enabled if a valid TLV was received or disabled), priority groups, assigned dot1p priorities, and bandwidth allocation. If the ETS Admin mode is enabled on the remote port for DCBx exchange, the Willing bit received in ETS TLVs from the remote peer is included.
Link Delay 45556 pause quantum
0 Pause Tx pkts, 0 Pause Rx pkts
Example of the show stack-unit all stack-ports all ets details Command
Dell# show stack-unit all stack-ports all ets details
Stack unit 0 stack port all
Max Supported TC Groups is 4
Number of Traffic Classes is 1
Admin mode is on
Admin Parameters:
--------------------
Admin is enabled
TC-grp  Priority#        Bandwidth  TSA
------------------------------------------------
0       0,1,2,3,4,5,6,7  100%       ETS
1
2
3
4
5
6
7
8
Stack unit 1 stack port all
Max Supported T
Local DCBX Status ----------------DCBX Operational Version is 0 DCBX Max Version Supported is 0 Sequence Number: 2 Acknowledgment Number: 2 Protocol State: In-Sync Peer DCBX Status: ---------------DCBX Operational Version is 0 DCBX Max Version Supported is 255 Sequence Number: 2 Acknowledgment Number: 2 2 Input PFC TLV pkts, 3 Output PFC TLV pkts, 0 Error PFC pkts, 0 PFC Pause Tx pkts, 0 Pause Rx pkts 2 Input PG TLV Pkts, 3 Output PG TLV Pkts, 0 Error PG TLV Pkts 2 Input Appln Priority TLV pkts, 0 Output Ap
Field Description Peer DCBx Status: DCBx Operational Version DCBx version advertised in Control TLVs received from peer device. Peer DCBx Status: DCBx Max Version Supported Highest DCBx version supported in Control TLVs received from peer device. Peer DCBx Status: Sequence Number Sequence number transmitted in Control TLVs received from peer device. Peer DCBx Status: Acknowledgment Number Acknowledgement number transmitted in Control TLVs received from peer device.
• Strict-priority groups: If priority group 1 or 2 has free bandwidth, (20 + 30)% of the free bandwidth is distributed to priority group 3. Priority groups 1 and 2 retain whatever free bandwidth remains up to the (20+ 30)%. If two priority groups have strict-priority scheduling, traffic assigned from the priority group with the higher priority-queue number is scheduled first.
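The redistribution rule in the first bullet can be modeled in a few lines. The function and group numbers below are illustrative, not part of Dell Networking OS; it is a simplified model of the scheduler's steady-state result, not of its per-packet behavior:

```python
# Simplified model of the behavior described above: bandwidth that a
# priority group leaves unused is shared among groups that can use more,
# in proportion to their configured percentages.
def effective_shares(allocated: dict, demand: dict) -> dict:
    """allocated/demand map group -> percent of link. Returns percent served."""
    served = {g: min(allocated[g], demand[g]) for g in allocated}
    spare = 100 - sum(served.values())
    # Groups still wanting more than they were served.
    hungry = {g: demand[g] - served[g] for g in allocated if demand[g] > served[g]}
    weight = sum(allocated[g] for g in hungry)
    for g in hungry:
        extra = spare * allocated[g] / weight if weight else 0
        served[g] += min(extra, hungry[g])
    return served

# The (20 + 30)% example from the text: groups 1 and 2 are idle, so
# oversubscribed group 3 absorbs their free bandwidth.
print(effective_shares({1: 20, 2: 30, 3: 50}, {1: 0, 2: 0, 3: 100}))
# {1: 0, 2: 0, 3: 100.0}
```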
Reason Description PFC is down (show interfaces pfc output) One of the following PFC-specific errors has occurred: ETS is down (show interfaces ets output) • No MBC support. • Configured PFC priorities exceed maximum PFC capability limit. • New dot1p-to-queue mapping violates the allowed system limit for PFC Enable status per priority One of the following ETS-specific errors occurred in ETS validation: • Unsupported PGID • A priority group exceeds the maximum number of supported priorities.
5 Dynamic Host Configuration Protocol (DHCP) The Aggregator is auto-configured to operate as a dynamic host configuration protocol (DHCP) client. The DHCP server, DHCP relay agent, and secure DHCP features are not supported. DHCP is an application layer protocol that dynamically assigns IP addresses and other configuration parameters to network end-stations (hosts) based on configuration policies determined by network administrators.
DHCPINFORM A client uses this message to request configuration parameters when it is assigned an IP address manually rather than with DHCP. The server responds by unicast. DHCPNAK A server sends this message to the client if it is not able to fulfill a DHCPREQUEST; for example, if the requested address is already in use. In this case, the client starts the configuration process over by sending a DHCPDISCOVER.
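The DISCOVER/OFFER/REQUEST/ACK exchange described above can be sketched as a small state table. This is a simplified model; a real client also handles timers and the RENEWING and REBINDING states visible in this chapter's debug output:

```python
# Minimal sketch of the client side of the DHCP message exchange.
# (current state, event) -> (next state, message the client sends or None)
TRANSITIONS = {
    ("INIT", "start"): ("SELECTING", "DHCPDISCOVER"),        # broadcast discover
    ("SELECTING", "DHCPOFFER"): ("REQUESTING", "DHCPREQUEST"),
    ("REQUESTING", "DHCPACK"): ("BOUND", None),              # lease granted
    ("REQUESTING", "DHCPNAK"): ("INIT", None),               # start over
}

def step(state: str, event: str):
    return TRANSITIONS[(state, event)]

state, sent = "INIT", []
for event in ("start", "DHCPOFFER", "DHCPACK"):
    state, out = step(state, event)
    if out:
        sent.append(out)
print(state, sent)  # BOUND ['DHCPDISCOVER', 'DHCPREQUEST']
```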
EXEC Privilege [no] debug ip dhcp client events [interface type slot/port] The following example shows the packet- and event-level debug messages displayed for the packet transmissions and state transitions on a DHCP client interface.
Dell# renew dhcp int Ma 0/0 Dell#1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :DHCP RENEW CMD Received in state STOPPED 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :Transitioned to state SELECTING 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_PKT: DHCP DISCOVER sent in Interface Ma 0/0 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_PKT: Received DHCPOFFER packet in Interface Ma 0/0 with Lease-Ip
You can also manually configure an IP address for the VLAN 1 default management interface using the CLI. If no user-configured IP address exists for the default VLAN management interface and the default VLAN IP address is not in the startup configuration, the Aggregator automatically obtains it using DHCP. • The default VLAN 1 with all ports configured as members is the only L3 interface on the Aggregator.
NOTE: Management routes added by the DHCP client include the specific routes to reach a DHCP server in a different subnet and the management route. DHCP Client on a VLAN The following conditions apply on a VLAN that operates as a DHCP client: • The default VLAN 1 with all ports auto-configured as members is the only L3 interface on the Aggregator.
Option, Number and Description:
Lease Time, Option 51: Specifies the amount of time that the client is allowed to use an assigned IP address.
DHCP Message Type, Option 53: 1: DHCPDISCOVER; 2: DHCPOFFER; 3: DHCPREQUEST; 4: DHCPDECLINE; 5: DHCPACK; 6: DHCPNAK; 7: DHCPRELEASE; 8: DHCPINFORM
Parameter Request List, Option 55: Clients use this option to tell the server which parameters they require. It is a series of octets where each octet is a DHCP option code.
Renewal Time, Option 58: Specifies the amount of time after address assignment until the client transitions to the RENEWING state.
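Each of the options above travels in the DHCP packet as a type-length-value triplet. A minimal encoder, using option codes from RFC 2132 (the variable names are illustrative):

```python
# DHCP options are carried as type-length-value (TLV) triplets:
# one byte of option code, one byte of payload length, then the payload.
DHCPDISCOVER = 1  # message-type value from the Option 53 list above

def encode_option(code: int, payload: bytes) -> bytes:
    return bytes([code, len(payload)]) + payload

# Option 53 carrying DHCPDISCOVER, and an Option 55 parameter request list
# asking for subnet mask (1), router (3), lease time (51), renewal time (58).
msg_type = encode_option(53, bytes([DHCPDISCOVER]))
param_list = encode_option(55, bytes([1, 3, 51, 58]))
print(msg_type.hex(), param_list.hex())  # 350101 37040103333a
```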
To insert Option 82 into DHCP packets, follow this step.
• Insert Option 82 into DHCP packets.
CONFIGURATION mode
int ma 0/0
ip add dhcp relay information-option remote-id
For routers between the relay agent and the DHCP server, enter the trust-downstream option.
Releasing and Renewing DHCP-based IP Addresses
On an Aggregator configured as a DHCP client, you can release a dynamically-assigned IP address without removing the DHCP client operation on the interface.
DHCPINFORM Dell# 0 Example of the show ip dhcp lease Command Dell# show ip dhcp Interface Lease-IP Def-Router ServerId State Lease Obtnd At Lease Expires At ========= ======== ========= ======== ===== ============== ================ Ma 0/0 0.0.0.0/0 0.0.0.0 0.0.0.0 INIT -----NA--------NA---Vl 1 10.1.1.254/24 0.0.0.0 Renew Time ========== ----NA---08-26-2011 16:21:50 64 10.1.1.
6 FIP Snooping This chapter describes FIP snooping concepts and configuration procedures. Supported Modes Standalone, PMUX, VLT Fibre Channel over Ethernet Fibre Channel over Ethernet (FCoE) provides a converged Ethernet network that allows the combination of storage-area network (SAN) and LAN traffic on a Layer 2 link by encapsulating Fibre Channel data into Ethernet frames.
• FIP discovery: FCoE end-devices and FCFs are automatically discovered. • Initialization: FCoE devices perform fabric login (FLOGI) and fabric discovery (FDISC) to create a virtual link with an FCoE switch. • Maintenance: A valid virtual link between an FCoE device and an FCoE switch is maintained and the link termination logout (LOGO) functions properly. Figure 7.
• Global ACLs are applied on server-facing ENode ports. • Port-based ACLs are applied on ports directly connected to an FCF and on server-facing ENode ports. • Port-based ACLs take precedence over global ACLs. • FCoE-generated ACLs take precedence over user-configured ACLs. A user-configured ACL entry cannot deny FCoE and FIP snooping frames. The below illustration depicts an Aggregator used as a FIP snooping bridge in a converged Ethernet network. The ToR switch operates as an FCF for FCoE traffic.
The following sections describe how to configure the FIP snooping feature on a switch that functions as a FIP snooping bridge so that it can perform the following functions: • Perform FIP snooping (allowing and parsing FIP frames) globally on all VLANs or on a per-VLAN basis. • Set the FCoE MAC address prefix (FC-MAP) value used by an FCF to assign a MAC address to an FCoE end-device (server ENode or storage device) after a server successfully logs in.
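The FC-MAP mechanism mentioned above works by concatenation: the fabric-provided MAC address (FPMA) that an FCF assigns to an end-device is the 24-bit FC-MAP prefix followed by the 24-bit FC-ID from the fabric login. A sketch with illustrative values (the FC-ID here is made up; the FC-MAP is the common 0x0EFC00 default):

```python
# Sketch of fabric-provided MAC address (FPMA) construction: the FCF
# prefixes the 24-bit FC-ID it assigned at login with the 24-bit FC-MAP.
def fpma(fc_map: int, fc_id: int) -> str:
    mac = (fc_map << 24) | fc_id           # 3-byte prefix + 3-byte session ID
    return ":".join(f"{(mac >> s) & 0xff:02x}" for s in range(40, -1, -8))

# Default-style FC-MAP 0x0EFC00 with a hypothetical FC-ID 0x010101.
print(fpma(0x0EFC00, 0x010101))  # 0e:fc:00:01:01:01
```

This is why a FIP snooping bridge can use the FC-MAP prefix in its ACLs: any legitimate post-login FCoE source MAC on the VLAN must begin with it.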
FIP Snooping Prerequisites On an Aggregator, FIP snooping requires the following conditions: • A FIP snooping bridge requires DCBX and PFC to be enabled on the switch for lossless Ethernet connections (refer to Data Center Bridging (DCB)). Dell recommends that you also enable ETS, although it is not required. • DCBX and PFC mode are auto-configured on Aggregator ports and FIP snooping is operational on the port.
4. Enter interface configuration mode to configure the port for FIP snooping links. CONFIGURATION mode interface port-type slot/port By default, a port is configured for bridge-to-ENode links. 5. Configure the port for bridge-to-FCF links. INTERFACE or CONFIGURATION mode fip-snooping port-mode fcf NOTE: All these configurations are available only in PMUX mode.
Display information on the FCoE VLANs on which FIP snooping is enabled.
show fip-snooping enode Command Description Field Description ENode MAC MAC address of the ENode. ENode Interface Slot/ port number of the interface connected to the ENode. FCF MAC MAC address of the FCF. VLAN VLAN ID number used by the session. FC-ID Fibre Channel session ID assigned by the FCF.
Number of FDISC Rejects Number of FLOGO Accepts Number of FLOGO Rejects Number of CVL Number of FCF Discovery Timeouts Number of VN Port Session Timeouts Number of Session failures due to Hardware Config Dell(conf)# :0 :0 :0 :0 :0 :0 :0 Dell# show fip-snooping statistics int tengigabitethernet 0/11 Number of Vlan Requests :1 Number of Vlan Notifications :0 Number of Multicast Discovery Solicits :1 Number of Unicast Discovery Solicits :0 Number of FLOGI :1 Number of FDISC :16 Number of FLOGO :0 Number of E
Number of Multicast Discovery Solicits Number of FIP-snooped multicast discovery solicit frames received on the interface. Number of Unicast Discovery Solicits Number of FIP-snooped unicast discovery solicit frames received on the interface. Number of FLOGI Number of FIP-snooped FLOGI request frames received on the interface. Number of FDISC Number of FIP-snooped FDISC request frames received on the interface. Number of FLOGO Number of FIP-snooped FLOGO frames received on the interface.
*1 100 0X0EFC00 1 2 17 NOTE: NPIV sessions are included in the number of FIP-snooped sessions displayed. FIP Snooping Example The below illustration shows an Aggregator used as a FIP snooping bridge for FCoE traffic between an ENode (server blade) and an FCF (ToR switch). The ToR switch operates as an FCF and FCoE gateway. Figure 9. FIP Snooping on an Aggregator In the above figure, DCBX and PFC are enabled on the Aggregator (FIP snooping bridge) and on the FCF ToR switch.
After FIP packets are exchanged between the ENode and the switch, a FIP snooping session is established. ACLs are dynamically generated for FIP snooping on the FIP snooping bridge/switch. Debugging FIP Snooping To enable debug messages for FIP snooping events, enter the debug fip-snooping command.
7 Internet Group Management Protocol (IGMP)
On an Aggregator, IGMP snooping is auto-configured. You can display information on IGMP by using the show ip igmp command. Multicast is based on identifying many hosts by a single destination IP address. Hosts represented by the same IP address are a multicast group. The Internet Group Management Protocol (IGMP) is a Layer 3 multicast protocol that hosts use to join or leave a multicast group.
– One router on a subnet is elected as the querier. The querier periodically multicasts (to all-multicast-systems address 224.0.0.1) a general query to all hosts on the subnet. – A host that wants to join a multicast group responds with an IGMP membership report that contains the multicast address of the group it wants to join (the packet is addressed to the same group).
Figure 12. IGMP version 3 Membership Report Packet Format Joining and Filtering Groups and Sources The below illustration shows how multicast routers maintain the group and source information from unsolicited reports. • The first unsolicited report from the host indicates that it wants to receive traffic for group 224.1.1.1. • The host’s second report indicates that it is only interested in traffic from group 224.1.1.1, source 10.11.1.1.
Leaving and Staying in Groups
The below illustration shows how multicast routers track and refresh the state change in response to group-specific and general queries. • Host 1 sends a message indicating it is leaving group 224.1.1.1 and that the included filters for 10.11.1.1 and 10.11.1.2 are no longer necessary.
Disabling Multicast Flooding If the switch receives a multicast packet that has an IP address of a group it has not learned (unregistered frame), the switch floods that packet out of all ports on the VLAN. To disable multicast flooding on all VLAN ports, enter the no ip igmp snooping flood command in global configuration mode. When multicast flooding is disabled, unregistered multicast data traffic is forwarded to only multicast router ports on all VLANs.
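As a brief sketch of the procedure above (the command sequence follows the syntax just described; verify it on your release):

```
Dell#configure
Dell(conf)#no ip igmp snooping flood
Dell(conf)#end
```

After this, unregistered multicast data traffic is forwarded only to multicast router ports on all VLANs.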
Last report received Group source list Source address 1.1.1.
8 Interfaces This chapter describes 100/1000/10000 Mbps Ethernet, 10 Gigabit Ethernet, and 40 Gigabit Ethernet interface types, both physical and logical, and how to configure them with the Dell Networking Operating Software (OS).
• All interfaces are auto-configured as members of all (4094) VLANs and untagged VLAN 1.
• All VLANs are up and can send or receive Layer 2 traffic. You can use the Command Line Interface (CLI) or CMC interface to configure only the required VLANs on a port interface.
Aggregator ports are numbered 1 to 56. Ports 1 to 32 are internal server-facing interfaces. Ports 33 to 56 are external ports numbered from the bottom to the top of the Aggregator.
0 CRC, 0 overrun, 0 discarded Output Statistics: 14856 packets, 2349010 bytes, 0 underruns 0 64-byte pkts, 4357 over 64-byte pkts, 8323 over 127-byte pkts 2176 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 12551 Multicasts, 2305 Broadcasts, 0 Unicasts 0 throttles, 0 discarded, 0 collisions, 0 wreddrops Rate info (interval 299 seconds): Input 00.00 Mbits/sec, 0 packets/sec, 0.00% of line-rate Output 00.00 Mbits/sec, 0 packets/sec, 0.
To leave INTERFACE mode, use the exit command or end command. You cannot delete a physical interface. The management IP address on the D-fabric provides a dedicated management access to the system. The switch interfaces support Layer 2 traffic over the 10-Gigabit Ethernet interfaces. These interfaces can also become part of virtual interfaces such as VLANs or port channels. For more information about VLANs, refer to VLANs and Port Tagging.
Configuring a Management Interface
On the Aggregator, the dedicated management interface provides management access to the system. You can configure this interface with Dell Networking OS, but the configuration options on this interface are limited. You cannot configure gateway addresses and IP addresses if they appear in the main routing table of Dell Networking OS. In addition, the proxy address resolution protocol (ARP) is not supported on this interface.
To view the configured static routes for the management port, use the show ip management-route command in EXEC privilege mode.
Dell#show ip management-route all
Destination        Gateway
-----------        -------
1.1.1.0/24         172.31.1.250
172.16.1.0/24      172.31.1.
172.31.1.0/24
VLANs and Port Tagging
To add an interface to a VLAN, it must be in Layer 2 mode. After you place an interface in Layer 2 mode, it is automatically placed in the default VLAN. Dell Networking OS supports IEEE 802.1Q tagging at the interface level to filter traffic. When you enable tagging, a tag header is added to the frame after the destination and source MAC addresses. That information is preserved as the frame moves through the network.
Command Syntax: vlan untagged {vlan-id | vlan-range}
Command Mode: INTERFACE
Purpose: Add the interface as an untagged member of one or more VLANs, where:
• vlan-id specifies an untagged VLAN number. Range: 2-4094.
• vlan-range specifies a range of untagged VLANs. Separate VLAN IDs with a comma; specify a VLAN range with a dash; for example, vlan untagged 3,5-7.
I - Isolated
Q: U - Untagged, T - Tagged
   x - Dot1x untagged, X - Dot1x tagged
   G - GVRP tagged, M - Vlan-stack, H - VSN tagged
   i - Internal untagged, I - Internal tagged, v - VLT untagged, V - VLT tagged, C - CMC tagged

    NUM  Status  Description  Q Ports
    2    Active               U Po1(Te 0/7,18)
                              T Po128(Te 0/50-51)
                              T Te 1/7
Dell(conf-if-te-1/7)

Except for hybrid ports, only a tagged interface can be a member of multiple VLANs.
Dell(conf-if-te-0/1)#portmode hybrid Dell(conf-if-te-0/1)#switchport 2. Configure the tagged VLANs 10 through 15 and untagged VLAN 20 on this port. Dell(conf-if-te-0/1)#vlan tagged 10-15 Dell(conf-if-te-0/1)#vlan untagged 20 Dell(conf-if-te-0/1)# 3. Show the running configurations on this port. Dell(conf-if-te-0/1)#show config ! interface TenGigabitEthernet 0/1 portmode hybrid switchport vlan tagged 10-15 vlan untagged 20 no shutdown Dell(conf-if-te-0/1)#end Dell# 4.
    20   Active   U Po128(Te 0/4-5)
                  U Te 0/1
Dell#
You can remove the inactive VLANs that have no member ports using the following command:
Dell#configure
Dell(conf)#no interface vlan vlan-id
vlan-id — Inactive VLAN with no member ports
You can remove the tagged VLANs using the no vlan tagged vlan-range command. You can remove the untagged VLANs using the no vlan untagged command in the physical port/port-channel.
Member ports of a LAG are added and programmed into hardware in a predictable order based on the port ID, instead of in the order in which the ports come up. With this implementation, load balancing yields predictable results across switch resets and chassis reloads. A physical interface can belong to only one port channel at a time. Each port channel must contain interfaces of the same interface type/speed. Port channels can contain a mix of 1000 or 10000 Mbps Ethernet interfaces.
To display detailed information on a port channel, enter the show interfaces port-channel command in EXEC Privilege mode. The below example shows the port channel’s mode (L2 for Layer 2, L3 for Layer 3, and L2L3 for a Layer 2 port channel assigned to a routed VLAN), the status, and the number of interfaces belonging to the port channel. In this example, the Port-channel 1 is a dynamically created port channel based on the NIC teaming configuration in connected servers learned via LACP.
Output 34.00 Mbits/sec, 12314 packets/sec, 0.36% of line-rate Time since last interface status change: 00:13:57

Interface Range
An interface range is a set of interfaces to which other commands may be applied, and may be created if there is at least one valid interface within the range. Bulk configuration excludes non-existing interfaces from an interface range. A default VLAN may be configured only if the interface range being configured consists of only VLAN ports.
Overlap Port Ranges If overlapping port ranges are specified, the port range is extended to the smallest start port number and largest end port number.
Over 127B packets:     0          0 pps          0
Over 255B packets:     0          0 pps          0
Over 511B packets:     0          0 pps          0
Over 1023B packets:    0          0 pps          0
Error statistics:
Input underruns:       0          0 pps          0
Input giants:          0          0 pps          0
Input throttles:       0          0 pps          0
Input CRC:             0          0 pps          0
Input IP checksum:     0          0 pps          0
Input overrun:         0          0 pps          0
Output underruns:      0          0 pps          0
Output throttles:      0          0 pps          0
m - Change mode                  c - Clear screen
l - Page up                      a - Page down
T - Increase refresh interval    t - Decrease refresh interval
q - Quit

Maintenance Using TDR
The time dom
arise where a sending device may transmit data faster than a destination device can accept it. The destination sends a pause frame back to the source, stopping the sender’s transmission for a period of time. The globally assigned 48-bit Multicast address 01-80-C2-00-00-01 is used to send and receive pause frames. To allow full duplex flow control, stations implementing the pause operation instruct the MAC to enable reception of frames with a destination address equal to this multicast address.
The MTU range is 592-12000, with a default of 1554. The table below lists the various Layer 2 overheads found in Dell Networking OS and the number of bytes.

Difference between Link MTU and IP MTU:
• Ethernet (untagged): 18 bytes
• VLAN Tag: 22 bytes
• Untagged Packet with VLAN-Stack Header: 22 bytes
• Tagged Packet with VLAN-Stack Header: 26 bytes

Link MTU and IP MTU considerations for port channels and VLANs are as follows.
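As a worked example of the overhead figures (interface name hypothetical; a minimal sketch, not a complete configuration): a VLAN-tagged port carrying 1500-byte IP packets needs a link MTU of at least 1500 + 22 = 1522 bytes.

```
Dell(conf)#interface tengigabitethernet 0/10
Dell(conf-if-te-0/10)#mtu 1522
```

If the port also carries a VLAN-stack header, the same arithmetic with the 26-byte overhead gives a minimum link MTU of 1526 bytes.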
1. Determine the local interface status. Command: show interfaces [interface] status. Mode: EXEC Privilege.
2. Determine the remote interface status. Use the command on the remote system that is equivalent to the above command. Mode: EXEC or EXEC Privilege.
3. Access CONFIGURATION mode. Command: config. Mode: EXEC Privilege.
4. Access the port. Command: interface interface slot/port. Mode: CONFIGURATION.
5. Set the local port speed. Command: speed {100 | 1000 | 10000 | auto}. Mode: INTERFACE.
6.
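A minimal walk-through of the steps above (the port number and speed value are examples only):

```
Dell#show interfaces tengigabitethernet 0/1 status
Dell#config
Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#speed 1000
```

Set the same speed on the remote end of the link, or use speed auto on both ends, so the ports negotiate a common rate.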
Setting Auto-Negotiation Options
The negotiation auto command provides a mode option for configuring an individual port to forced master/forced slave after you enable auto-negotiation.
CAUTION: Ensure that only one end of the node is configured as forced-master and the other is configured as forced-slave. If both are configured the same (that is, both as forced-master or both as forced-slave), the show interface command flaps between an auto-neg-error and forced-master/slave states.
exit Exit from autoneg configuration mode
mode Specify autoneg mode
no Negate a command or set its defaults
show Show autoneg configuration information
Dell(conf-if-autoneg)#mode ?
forced-master Force port to master mode
forced-slave Force port to slave mode
Dell(conf-if-autoneg)#

Viewing Interface Information
Displaying Non-Default Configurations
The show [ip | running-config] interfaces configured command displays only interfaces that have non-default configurations.
Command Syntax Command Mode Purpose clear counters [interface] EXEC Privilege Clear the counters used in the show interface commands for all VRRP groups, VLANs, and physical interfaces or selected ones. Without an interface specified, the command clears all interface counters. • (OPTIONAL) Enter the following interface keywords and slot/port or number information: • For a Loopback interface, enter the keyword loopback followed by a number from 0 to 16383.
9 iSCSI Optimization
An Aggregator enables internet small computer system interface (iSCSI) optimization with default iSCSI parameter settings (Default iSCSI Optimization Values) and is auto-provisioned to support:
iSCSI Optimization: Operation
To display information on iSCSI configuration and sessions, use show commands. iSCSI optimization enables quality-of-service (QoS) treatment for iSCSI traffic.
The following figure shows iSCSI optimization between servers in an M1000e enclosure and a storage array in which an Aggregator connects installed servers (iSCSI initiators) to a storage array (iSCSI targets) in a SAN network. iSCSI optimization running on the Aggregator is configured to use dot1p priority-queue assignments to ensure that iSCSI traffic in these sessions receives priority treatment when forwarded on Aggregator hardware. Figure 16.
• Target’s TCP Port If no iSCSI traffic is detected for a session during a user-configurable aging period, the session data clears. Detection and Auto configuration for Dell EqualLogic Arrays The iSCSI optimization feature includes auto-provisioning support with the ability to detect directly connected Dell EqualLogic storage arrays and automatically reconfigure the switch to enhance storage traffic flows.
• show iscsi sessions: Displays information on active iSCSI sessions on the switch that have been established since the last reload.
• show iscsi sessions detailed [session isid]: Displays detailed information on active iSCSI sessions on the switch. To display detailed information on a specified iSCSI session, enter the session’s iSCSI ID.
• show run iscsi: Displays all globally-configured non-default iSCSI settings in the current Dell Networking OS session.
10 Isolated Networks for Aggregators
An Isolated Network is an environment in which servers can only communicate with the uplink interfaces and not with each other, even though they are part of the same VLAN. If the servers in the same chassis need to communicate with each other, they require non-isolated network connectivity between them or their traffic needs to be routed in the ToR. Isolated Networks can be enabled on a per-VLAN basis.
11 Link Aggregation Unlike IOA Automated modes (Standalone and VLT modes), the IOA Programmable MUX (PMUX) can support multiple uplink LAGs. You can provision multiple uplink LAGs. The I/O Aggregator auto-configures with link aggregation groups (LAGs) as follows: • All uplink ports are automatically configured in a single port channel (LAG 128).
Uplink LAG When the Aggregator power is on, all uplink ports are configured in a single LAG (LAG 128). Server-Facing LAGs Server-facing ports are configured as individual ports by default. If you configure a server NIC in standalone, stacking, or VLT mode for LACP-based NIC teaming, server-facing ports are automatically configured as part of dynamic LAGs. The LAG range 1 to 127 is reserved for server-facing LAGs.
LACP Example
The below illustration shows how LACP operates in an Aggregator stack by auto-configuring the uplink LAG 128 for the connection to a top of rack (ToR) switch and a server-facing LAG for the connection to an installed server that you configured for LACP-based NIC teaming. Figure 17.
Configuration Tasks for Port Channel Interfaces To configure a port channel (LAG), use the commands similar to those found in physical interfaces. By default, no port channels are configured in the startup configuration. In VLT mode, port channel configurations are allowed in the startup configuration.
To add a physical interface to a port channel, use the following commands. 1. Add the interface to a port channel. INTERFACE PORT-CHANNEL mode channel-member interface This command is applicable only in PMUX mode. The interface variable is the physical interface type and slot/port information. 2. Double-check that the interface was added to the port channel.
When more than one interface is added to a Layer 2-port channel, Dell Networking OS selects one of the active interfaces in the port channel to be the primary port. The primary port replies to flooding and sends protocol data units (PDUs). An asterisk in the show interfaces port-channel brief command indicates the primary port. As soon as a physical interface is added to a port channel, the properties of the port channel determine the properties of the physical interface.
channel-member TenGigabitEthernet 0/8 shutdown Dell(conf-if-po-3)# Configuring the Minimum Oper Up Links in a Port Channel You can configure the minimum links in a port channel (LAG) that must be in “oper up” status to consider the port channel to be in “oper up” status. To set the “oper up” status of your links, use the following command. • Enter the number of links in a LAG that must be in “oper up” status. INTERFACE mode minimum-links number The default is 1.
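For example (the channel number and link count are placeholders), requiring at least two member links to be up before port channel 128 is reported as oper up:

```
Dell(conf)#interface port-channel 128
Dell(conf-if-po-128)#minimum-links 2
```

With this setting, the port channel transitions to oper down whenever fewer than two member interfaces are up.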
Deleting or Disabling a Port Channel To delete or disable a port channel, use the following commands. • Delete a port channel. CONFIGURATION mode no interface portchannel channel-number • Disable a port channel. shutdown When you disable a port channel, all interfaces within the port channel are operationally down also. Configuring Auto LAG You can enable or disable auto LAG on the server-facing interfaces. By default, auto LAG is enabled.
For the interface level auto LAG configurations, use the show interface command.
Configuring the Minimum Number of Links to be Up for Uplink LAGs to be Active You can activate the LAG bundle for uplink interfaces or ports (the uplink port-channel is LAG 128) on the I/O Aggregator only when a minimum number of member interfaces of the LAG bundle is up. For example, based on your network deployment, you may want the uplink LAG bundle to be activated only if a certain number of member interface links is also in the up state.
Optimizing Traffic Disruption Over LAG Interfaces On IOA Switches in VLT Mode When you use the write memory command while an Aggregator operates in VLT mode, the VLT LAG configurations are saved in nonvolatile storage (NVS). By restoring the settings saved in NVS, the VLT ports come up quicker on the primary VLT node and traffic disruption is reduced. The delay in restoring the VLT LAG parameters is reduced (90 seconds by default) on the secondary VLT peer node before it becomes operational.
Table 8.
92 64-byte pkts, 0 over 64-byte pkts, 90 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 182 Multicasts, 0 Broadcasts 0 runts, 0 giants, 0 throttles 0 CRC, 0 overrun, 0 discarded Output Statistics: 2999 packets, 383916 bytes, 0 underruns 5 64-byte pkts, 214 over 64-byte pkts, 2727 over 127-byte pkts 53 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 2904 Multicasts, 95 Broadcasts, 0 Unicasts 0 throttles, 0 discarded, 0 collisions, 0 wreddrops Rate info (i
Actor Admin: State ADEHJLMP Key 128 Priority 32768 Oper: State ADEHJLMP Key 128 Priority 32768 Partner is not present Port Te 0/47 is disabled, LACP is disabled and mode is lacp Port State: Bundle Actor Admin: State ADEHJLMP Key 128 Priority 32768 Oper: State ADEHJLMP Key 128 Priority 32768 Partner is not present Port Te 0/48 is disabled, LACP is disabled and mode is lacp Port State: Bundle Actor Admin: State ADEHJLMP Key 128 Priority 32768 Oper: State ADEHJLMP Key 128 Priority 32768 Partner is not presen
Hardware address is 00:01:e8:e1:e1:c1, Current address is 00:01:e8:e1:e1:c1 Interface index is 1107755009 Minimum number of links to bring Port-channel up is 1 Internet address is not set Mode of IP Address Assignment : NONE DHCP Client-ID :lag10001e8e1e1c1 MTU 12000 bytes, IP MTU 11982 bytes LineSpeed 10000 Mbit Members in this channel: Te 0/12(U) ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show interface" counters 00:12:41 Queueing strategy: fifo Input Statistics: 112 packets, 18161 bytes 0 64-
Multiple Uplink LAGs with 10G Member Ports The following sample commands configure multiple dynamic uplink LAGs with 10G member ports based on LACP. 1. Bring up all the ports. Dell#configure Dell(conf)#int range tengigabitethernet 0/1 - 56 Dell(conf-if-range-te-0/1-56)#no shutdown 2. Associate the member ports into LAG-10 and 11.
*   1     Active   U Po10(Te 0/4-5)
                   U Po11(Te 0/6)
    1000  Active   T Po10(Te 0/4-5)
    1001  Active   T Po11(Te 0/6)
Dell#
5. Show LAG member ports utilization.
Dell(conf-if-range-fo-0/33,fo-0/37)#no shut Dell(conf-if-range-fo-0/33,fo-0/37)#port-channel-protocol lacp Dell(conf-if-range-fo-0/33,fo-0/37-lacp)#port-channel 20 mode active Dell(conf)# Dell(conf)#int fortygige 0/49 Dell(conf-if-fo-0/49)#port-channel-protocol lacp Dell(conf-if-fo-0/49-lacp)#port-channel 21 mode active Dell(conf-if-fo-0/49-lacp)# Dell(conf-if-fo-0/49)#no shut 4. Configure the port mode, VLAN, and so forth on the port-channel.
12 Layer 2 The Aggregator supports CLI commands to manage the MAC address table: • Clearing the MAC Address Entries • Displaying the MAC Address Table The Aggregator auto-configures with support for Network Interface Controller (NIC) Teaming. NOTE: On an Aggregator, all ports are configured by default as members of all (4094) VLANs, including the default VLAN. All VLANs operate in Layer 2 mode.
Displaying the MAC Address Table To display the MAC address table, use the following command. • Display the contents of the MAC address table. EXEC Privilege mode NOTE: This command is available only in PMUX mode. show mac-address-table [address | aging-time [vlan vlan-id]| count | dynamic | interface | static | vlan] – address: displays the specified entry. – aging-time: displays the configured aging-time.
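A couple of usage sketches built from the keyword list above (PMUX mode assumed):

```
Dell#show mac-address-table count
Dell#show mac-address-table dynamic
```

The first reports the number of learned entries; the second lists only dynamically learned addresses.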
Figure 18. Redundant NOCs with NIC Teaming MAC Address Station Move When you use NIC teaming, consider that the server MAC address is originally learned on Port 0/1 of the switch (see figure below). If the NIC fails, the same MAC address is learned on Port 0/5 of the switch. The MAC address is disassociated with one port and reassociated with another in the ARP table; in other words, the ARP entry is “moved”. The Aggregator is auto-configured to support MAC Address station moves. Figure 19.
MAC Move Optimization Station-move detection takes 5000ms because this is the interval at which the detection algorithm runs.
13 Link Layer Discovery Protocol (LLDP)
Link layer discovery protocol (LLDP) advertises connectivity and management from the local station to the adjacent stations on an IEEE 802 LAN. LLDP facilitates multi-vendor interoperability by using standard management tools to discover and make available a physical topology for network management. The Dell Networking operating software implementation of LLDP is based on the IEEE 802.1AB standard.
There are five types of TLVs (as shown in the below table). All types are mandatory in the construction of an LLDPDU except Optional TLVs. You can configure the inclusion of individual Optional TLVs. Type, Length, Value (TLV) Types Type TLV Description 0 End of LLDPDU Marks the end of an LLDPDU. 1 Chassis ID The Chassis ID TLV is a mandatory TLV that identifies the chassis containing the IEEE 802 LAN station associated with the transmitting LLDP agent.
CONFIGURATION versus INTERFACE Configurations All LLDP configuration commands are available in PROTOCOL LLDP mode, which is a sub-mode of the CONFIGURATION mode and INTERFACE mode. • Configurations made at the CONFIGURATION level are global; that is, they affect all interfaces on the system. • Configurations made at the INTERFACE level affect only the specific interface; they override CONFIGURATION level configurations.
To undo an LLDP configuration, precede the relevant command with the keyword no. Advertising TLVs You can configure the system to advertise TLVs out of all interfaces or out of specific interfaces. • If you configure the system globally, all interfaces send LLDPDUs with the specified TLVs. • If you configure an interface, only the interface sends LLDPDUs with the specified TLVs.
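For example, using advertise commands of the kind shown in the configurations later in this chapter (global scope shown here; the same commands also work in INTERFACE mode):

```
Dell(conf)#protocol lldp
Dell(conf-lldp)#advertise management-tlv system-capabilities system-description
Dell(conf-lldp)#advertise dot3-tlv max-frame-size
Dell(conf-lldp)#advertise dot1-tlv port-protocol-vlan-id port-vlan-id
```

To stop advertising a TLV, precede the corresponding advertise command with the keyword no.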
Figure 22. Configuring LLDP Optional TLVs
The Dell Networking Operating System (OS) supports the following optional TLVs: Management TLVs, IEEE 802.1 and 802.3 organizationally specific TLVs, and TIA-1057 organizationally specific TLVs. Management TLVs A management TLV is an optional TLV sub-type. This kind of TLV contains essential management information about the sender. Organizationally Specific TLVs A professional organization or a vendor can define organizationally specific TLVs.
Type TLV Description 5 System name A user-defined alphanumeric string that identifies the system. 6 System description A user-defined alphanumeric string that identifies the system. 7 System capabilities Identifies the chassis as one or more of the following: repeater, bridge, WLAN Access Point, Router, Telephone, DOCSIS cable device, end station only, or other. 8 Management address Indicates the network address of the management interface.
LLDP-MED Capabilities TLV The LLDP-MED capabilities TLV communicates the types of TLVs that the endpoint device and the network connectivity device support. LLDP-MED network connectivity devices must transmit the Network Policies TLV. • The value of the LLDP-MED capabilities field in the TLV is a 2–octet bitmap, each bit represents an LLDP-MED capability (as shown in the following table). • The possible values of the LLDP-MED device type are shown in the following.
• DSCP value An integer represents the application type (the Type integer shown in the following table), which indicates a device function for which a unique network policy is defined. An individual LLDP-MED network policy TLV is generated for each application type that you specify with the CLI (XXAdvertising TLVs).
• Power Source — there are two possible power sources: primary and backup. The Dell Networking system is a primary power source, which corresponds to a value of 1, based on the TIA-1057 specification. • Power Priority — there are three possible priorities: Low, High, and Critical. On Dell Networking systems, the default power priority is High, which corresponds to a value of 2 based on the TIA-1057 specification. You can configure a different power priority through the CLI.
switchport no shutdown R1(conf-if-te-0/3)#protocol lldp R1(conf-if-te-0/3-lldp)#show config ! protocol lldp R1(conf-if-te-0/3-lldp)# Viewing Information Advertised by Adjacent LLDP Agents To view brief information about adjacent devices or to view all the information that neighbors are advertising, use the following commands. • • Display brief information about adjacent devices. show lldp neighbors Display all of the information that neighbors are advertising.
Total Unrecognized TLVs: 0 Total TLVs Discarded: 0 Next packet will be sent after 4 seconds The neighbors are given below: ----------------------------------------------------------------------Remote Chassis ID Subtype: Mac address (4) Remote Chassis ID: 00:00:c9:ad:f6:12 Remote Port Subtype: Mac address (3) Remote Port ID: 00:00:c9:ad:f6:12 Local Port ID: TenGigabitEthernet 0/3 Configuring LLDPDU Intervals LLDPDUs are transmitted periodically; the default interval is 30 seconds.
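A hedged sketch of changing the transmit interval (the hello keyword and the 10-second value are assumptions; check the options available in PROTOCOL LLDP mode on your release):

```
R1(conf)#protocol lldp
R1(conf-lldp)#hello 10
```

The interval interacts with the multiplier setting shown below: neighbor information ages out after interval x multiplier seconds without a received LLDPDU.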
protocol lldp advertise dot1-tlv port-protocol-vlan-id port-vlan-id advertise dot3-tlv max-frame-size advertise management-tlv system-capabilities system-description no disable R1(conf-lldp)#multiplier ? <2-10> Multiplier (default=4) R1(conf-lldp)#multiplier 5 R1(conf-lldp)#show config ! protocol lldp advertise dot1-tlv port-protocol-vlan-id port-vlan-id advertise dot3-tlv max-frame-size advertise management-tlv system-capabilities system-description multiplier 5 no disable R1(conf-lldp)#no multiplier R1(co
Figure 27. The debug lldp detail Command — LLDPDU Packet Dissection Relevant Management Objects Dell Networking OS supports all IEEE 802.1AB MIB objects. The following tables list the objects associated with: • received and transmitted TLVs • the LLDP configuration on the local agent • IEEE 802.1AB Organizationally Specific TLVs • received and transmitted LLDP-MED TLVs Table 13.
MIB Object Category LLDP Statistics LLDP Variable LLDP MIB Object Description mibMgmtAddrInstanceTxEnable lldpManAddrPortsTxEnable The management addresses defined for the system and the ports through which they are enabled for transmission. statsAgeoutsTotal lldpStatsRxPortAgeoutsTotal Total number of times that a neighbor’s information is deleted on the local system due to an rxInfoTTL timer expiration.
TLV Type TLV Name TLV Variable management address length management address subtype management address interface numbering subtype interface number OID System LLDP MIB Object Remote lldpRemSysCapEnabled Local lldpLocManAddrLen Remote lldpRemManAddrLen Local lldpLocManAddrSubtype Remote lldpRemManAddrSubtype Local lldpLocManAddr Remote lldpRemManAddr Local lldpLocManAddrIfSubtype Remote lldpRemManAddrIfSubtyp e Local lldpLocManAddrIfId Remote lldpRemManAddrIfId Local lldpLoc
Table 16.
TLV Sub-Type TLV Name TLV Variable System LLDP-MED MIB Object 4 Extended Power via MDI Power Device Type Local lldpXMedLocXPoEDevice Type Remote lldpXMedRemXPoEDevice Type Local lldpXMedLocXPoEPSEPo werSource Power Source lldpXMedLocXPoEPDPow erSource Remote lldpXMedRemXPoEPSEP owerSource lldpXMedRemXPoEPDPo werSource Power Priority Local lldpXMedLocXPoEPDPow erPriority lldpXMedLocXPoEPSEPor tPDPriority Remote lldpXMedRemXPoEPSEP owerPriority lldpXMedRemXPoEPDPo werPriority Power Value
14 Port Monitoring The Aggregator supports user-configured port monitoring. See Configuring Port Monitoring for the configuration commands to use. Port monitoring copies all incoming or outgoing packets on one port and forwards (mirrors) them to another port. The source port is the monitored port (MD) and the destination port is the monitoring port (MG). Supported Modes Standalone, PMUX, VLT, Stacking Configuring Port Monitoring To configure port monitoring, use the following commands. 1.
In the following example, the host and server are exchanging traffic which passes through the uplink interface 1/1. Port 1/1 is the monitored port and port 1/42 is the destination port, which is configured to only monitor traffic received on tengigabitethernet 1/1 (host-originated traffic). Figure 28. Port Monitoring Example Important Points to Remember • Port monitoring is supported on physical ports only; virtual local area network (VLAN) and port-channel interfaces do not support port monitoring.
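The scenario above could be configured along these lines (a sketch; the session number is arbitrary, and direction rx matches the goal of mirroring only traffic received on the monitored port):

```
Dell(conf)#monitor session 0
Dell(conf-mon-sess-0)#source tengigabitethernet 1/1 destination tengigabitethernet 1/42 direction rx
```

To mirror traffic in both directions instead, direction both would be used in place of direction rx.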
NOTE: There is no limit to the number of monitoring sessions per system, provided that there are only four destination ports per port-pipe. If each monitoring session has a unique destination port, the maximum number of sessions is four per port-pipe. Port Monitoring The Aggregator supports multiple source-destination statements in a monitor session, but there may only be one destination port in a monitoring session.
In the example below, 0/25 and 0/26 belong to Port-pipe 1. This port-pipe has the same restriction of only four destination ports, new or used.
15 Security The Aggregator provides many security features. This chapter describes several ways to provide access security to the Dell Networking system. For details about all the commands described in this chapter, refer to the Security chapter in the Dell PowerEdge Command Line Reference Guide for the M I/O Aggregator . Supported Modes Standalone, PMUX, VLT, Stacking Understanding Banner Settings This functionality is supported on the Aggregator.
Dell Networking uses local usernames/passwords (stored on the Dell Networking system) or AAA for login authentication. With AAA, you can specify the security protocol or mechanism for different login methods and different users. In Dell Networking OS, AAA uses a list of authentication methods, called method lists, to define the types of authentication and the sequence in which they are applied. You can define a method list or use the default method list.
login authentication {method-list-name | default} To view the configuration, use the show config command in LINE mode or the show running-config in EXEC Privilege mode. NOTE: Dell Networking recommends using the none method only as a backup. This method does not authenticate users. The none and enable methods do not work with secure shell (SSH). You can create multiple method lists and assign them to different terminal lines.
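For instance (the method list name and method order are examples only), a list that tries TACACS+ first and falls back to the local user database, applied to the VTY lines:

```
Dell(conf)#aaa authentication login mymethodlist tacacs+ local
Dell(conf)#line vty 0 9
Dell(config-line-vty)#login authentication mymethodlist
```

Listing local after tacacs+ means users can still log in with locally stored credentials if the TACACS+ server is unreachable.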
Example of Enabling Local Authentication for the Console and Remote Authentication for VTY Lines Dell(config)# aaa authentication enable mymethodlist radius tacacs Dell(config)# line vty 0 9 Dell(config-line-vty)# enable authentication mymethodlist Server-Side Configuration • TACACS+ — When using TACACS+, Dell Networking OS sends an initial packet with service type SVC_ENABLE, and then sends a second packet with just the password. The TACACS server must have an entry for username $enable$.
• Configuring the Enable Password Command (mandatory) • Configuring Custom Privilege Levels (mandatory) • Specifying LINE Mode Password and Privilege (optional) • Enabling and Disabling Privilege Levels (optional) For a complete listing of all commands related to privilege levels and passwords, refer to the Security chapter in the Dell Networking OS Command Reference Guide.
Configuring Custom Privilege Levels In addition to assigning privilege levels to the user, you can configure the privilege levels of commands so that they are visible in different privilege levels. Within the Dell Networking OS, commands have certain privilege levels. With the privilege command, you can change the default level or you can reset their privilege level back to the default. Assign the launch keyword (for example, configure) for the keyword’s command mode.
Line 2: All other users are assigned a password to access privilege level 8. Line 3: The configure command is assigned to privilege level 8 because it needs to reach CONFIGURATION mode where the snmp-server commands are located. Line 4: The snmp-server commands, in CONFIGURATION mode, are assigned to privilege level 8.
privilege level level • – level level: The range is from 0 to 15. Levels 0, 1, and 15 are pre-configured. Levels 2 to 14 are available for custom configuration. Specify either a plain text or encrypted password. LINE mode password [encryption-type] password Configure the following optional and required parameters: – encryption-type: Enter 0 for plain text or 7 for encrypted text. – password: Enter a text string up to 25 characters long.
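A minimal sketch tying the syntax above to the level-8 example described earlier (the username and passwords are placeholders):

```
Dell(conf)#username user8 password 0 secret8 privilege 8
Dell(conf)#privilege exec level 8 configure
Dell(conf)#privilege configure level 8 snmp-server
```

Here the configure command is made visible at privilege level 8 so that a level-8 user can reach CONFIGURATION mode, where the snmp-server commands (also assigned to level 8) are located.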
Idle Time Every session line has its own idle-time. If the idle-time value is not changed, the default value of 30 minutes is used. RADIUS specifies the idle-time allowed for a user during a session before timeout. When a user logs in, the lower of the two idle-time values (configured or default) is used. The idle-time value is updated if both of the following happen: • The administrator changes the idle-time of the line on which the user has logged in.
• Enter LINE mode. CONFIGURATION mode line {aux 0 | console 0 | vty number [end-number]} • Enable AAA login authentication for the specified RADIUS method list. LINE mode login authentication {method-list-name | default} • This procedure is mandatory if you are not using the default method list.
CONFIGURATION mode radius-server deadtime seconds • – seconds: the range is from 0 to 2147483647. The default is 0 seconds. Configure a key for all RADIUS communications between the system and RADIUS server hosts. CONFIGURATION mode radius-server key [encryption-type] key – encryption-type: enter 7 to encrypt the password. Enter 0 to keep the password as plain text. • – key: enter a string. The key can be up to 42 characters long. You cannot use spaces in the key.
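A sketch combining the RADIUS commands above (the server address, key, and deadtime value are illustrative):

```
Dell(conf)#radius-server host 10.10.10.1
Dell(conf)#radius-server key 0 myradiuskey
Dell(conf)#radius-server deadtime 60
```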
To select TACACS+ as the login authentication method, use the following commands. 1. Configure a TACACS+ server host. CONFIGURATION mode tacacs-server host {ip-address | host} Enter the IP address or host name of the TACACS+ server. Use this command multiple times to configure multiple TACACS+ server hosts. 2. Enter a text string (up to 16 characters long) as the name of the method list you wish to use with the TACACS+ authentication method.
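A sketch of the TACACS+ selection steps above (the server address and method-list name are illustrative):

```
Dell(conf)#tacacs-server host 192.168.10.10
Dell(conf)#aaa authentication login tacacsmethod tacacs+
Dell(conf)#line vty 0 9
Dell(config-line-vty)#login authentication tacacsmethod
```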
authentication success on vty0 ( 10.11.9.209 ) %RPM0-P:CP %SEC-5-LOGOUT: Exec session is terminated for user admin on line vty0 (10.11.9.209) Dell(conf)#username angeline password angeline Dell(conf)#%RPM0-P:CP %SEC-5-LOGIN_SUCCESS: Login successful for user angeline on vty0 (10.11.9.209) %RPM0-P:CP %SEC-3-AUTHENTICATION_ENABLE_SUCCESS: Enable password authentication success on vty0 ( 10.11.9.209 ) Monitoring TACACS+ To view information on TACACS+ transactions, use the following command.
Login: admin Password: Dell# Dell# Enabling SCP and SSH Secure shell (SSH) is a protocol for secure remote login and other secure network services over an insecure network. Dell Networking OS is compatible with SSH versions 1.5 and 2, in both the client and server modes. SSH sessions are encrypted and use authentication. SSH is enabled by default. For details about the command syntax, refer to the Security chapter in the Dell Networking OS Command Line Interface Reference Guide.
Using SCP with SSH to Copy a Software Image To use secure copy (SCP) to copy a software image through an SSH connection from one switch to another, use the following commands. On the chassis, invoke SCP. CONFIGURATION mode copy scp: flash: Example of Using SCP to Copy from an SSH Server on Another Switch The following example shows the use of SCP and SSH to copy a software image from one switch running an SSH server on TCP port 99 to the local switch.
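A hedged sketch of such an SCP session (the prompts, remote address, and file name shown are illustrative, not captured from a switch):

```
Dell#copy scp: flash:
Address or name of remote host []: 10.11.9.209
Port number of the server [22]: 99
Source file name []: system-image.bin
User name to login remote host: admin
Password to login remote host:
```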
Using RSA Authentication of SSH The following procedure authenticates an SSH client based on an RSA key using RSA authentication. This method uses SSH version 2. 1. On the SSH client (Unix machine), generate an RSA key, as shown in the following example. 2. Copy the public key id_rsa.pub to the Dell Networking system. 3. Disable password authentication if enabled. CONFIGURATION mode no ip ssh password-authentication enable 4. Bind the public keys to RSA authentication.
ip ssh pub-key-file flash://filename or ip ssh rhostsfile flash://filename Examples of Creating shosts and rhosts The following example shows creating shosts. admin@Unix_client# cd /etc/ssh admin@Unix_client# ls moduli sshd_config ssh_host_dsa_key.pub ssh_host_key.pub ssh_host_rsa_key.pub ssh_config ssh_host_dsa_key ssh_host_key ssh_host_rsa_key admin@Unix_client# cat ssh_host_rsa_key.
If the IP address in the RSA key does not match the IP address from which you attempt to log in, the following message appears. In this case, verify that the name and IP address of the client is contained in the file /etc/hosts: RSA Authentication Error. Telnet To use Telnet with SSH, first enable SSH, as previously described. By default, the Telnet daemon is enabled. If you want to disable the Telnet daemon, use the following command, or disable Telnet in the startup config.
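The Telnet-disable command referenced above is not shown in this excerpt; assuming the standard Dell Networking OS syntax, it is likely:

```
Dell(conf)#no ip telnet server enable
```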
The following example shows how to allow or deny a Telnet connection to a user. Users see a login prompt even if they cannot log in. No access class is configured for the VTY line, so authentication defaults to the local database.
16 Simple Network Management Protocol (SNMP) Network management stations use SNMP to retrieve or alter management data from network elements. A datum of management information is called a managed object; the value of a managed object can be static or variable. Network elements store managed objects in a database called a management information base (MIB).
agents and managers that are allowed to interact. Communities are necessary to secure communication between SNMP managers and agents; SNMP agents do not respond to requests from management stations that are not part of the community. The Dell Networking OS enables SNMP automatically when you create an SNMP community and displays the following message. You must specify whether members of the community may retrieve values in Read-Only mode. Read-write access is not supported.
Operating System Version: 1.0 Application Software Version: E8-3-17-46 Series: I/O-Aggregator Copyright (c) 1999-2012 by Dell Inc. All Rights Reserved. Build Time: Sat Jul 28 03:20:24 PDT 2012 SNMPv2-MIB::sysObjectID.0 = OID: SNMPv2-SMI::enterprises.6027.1.4.2 DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (77916) 0:12:59.16 SNMPv2-MIB::sysContact.0 = STRING: SNMPv2-MIB::sysName.0 = STRING: FTOS SNMPv2-MIB::sysLocation.0 = STRING: SNMPv2-MIB::sysServices.
Codes: * - Default VLAN, G - GVRP VLANs
Q: U - Untagged, T - Tagged
   x - Dot1x untagged, X - Dot1x tagged
   G - GVRP tagged, M - Vlan-stack
NUM    Status      Description    Q Ports
10     Inactive                   U Tengig 0/2
[Unix system output] > snmpget -v2c -c mycommunity 10.11.131.185 .1.3.6.1.2.1.17.7.1.4.3.1.2.1107787786
SNMPv2-SMI::mib-2.17.7.1.4.3.1.2.
Example of Fetching Dynamic MAC Addresses on the Default VLAN -----------------------------MAC Addresses on Dell Networking System------------------------------Dell#show mac-address-table VlanId Mac Address Type Interface State 1 00:01:e8:06:95:ac Dynamic Tengig 0/7 Active ----------------Query from Management Station--------------------->snmpwalk -v 2c -c techpubs 10.11.131.162 .1.3.6.1.2.1.17.4.3.1 SNMPv2-SMI::mib-2.17.4.3.1.1.0.1.232.6.149.
represented by a 0 bit, and the unused bit is always 0. These two bits are not given because they are the most significant bits, and leading zeros are often omitted. For interface indexing, slot and port numbering begins with binary one. If the Dell Networking system begins slot and port numbering from 0, binary 1 represents slot and port 0. In S4810, the first interface is 0/0, but in the Aggregator the first interface is 0/1.
For L3 LAG, you do not have this support. SNMPv2-MIB::sysUpTime.0 = Timeticks: (8500842) 23:36:48.42 SNMPv2-MIB::snmpTrapOID.0 = OID: IF-MIB::linkDown IF-MIB::ifIndex.33865785 = INTEGER: 33865785 SNMPv2-SMI::enterprises.6027.3.1.1.4.1.2 = STRING: "OSTATE_DN: Changed interface state down: Tengig 0/1" 2010-02-10 14:22:39 10.16.130.4 [10.16.130.4]: SNMPv2-MIB::sysUpTime.0 = Timeticks: (8500842) 23:36:48.42 SNMPv2-MIB::snmpTrapOID.0 = OID: IF-MIB::linkDown IF-MIB::ifIndex.
SNMPv2-SMI::mib-2.47.1.1.1.1.2.16 = STRING: "Unit: 0 Port 13 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.17 = STRING: "Unit: 0 Port 14 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.18 = STRING: "Unit: 0 Port 15 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.19 = STRING: "Unit: 0 Port 16 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.20 = STRING: "Unit: 0 Port 17 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.21 = STRING: "Unit: 0 Port 18 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.
SNMPv2-SMI::mib-2.47.1.1.1.1.2.110 SNMPv2-SMI::mib-2.47.1.1.1.1.2.111 SNMPv2-SMI::mib-2.47.1.1.1.1.2.112 SNMPv2-SMI::mib-2.47.1.1.1.1.2.113 SNMPv2-SMI::mib-2.47.1.1.1.1.2.114 SNMPv2-SMI::mib-2.47.1.1.1.1.2.115 SNMPv2-SMI::mib-2.47.1.1.1.1.2.
to Te 0/8 server ports; as the 6th and 7th bits correspond to removed switch ports, the respective bits are set to 0. In binary, the value is 11111001 and the corresponding hexadecimal value is F9. In standalone mode, there are 4000 VLANs by default. The SNMP output displays all 4000 VLANs. To view a particular VLAN, issue the snmp query with the VLAN interface ID. Dell#show interface vlan 1010 | grep “Interface index” Interface index is 1107526642 Use the output of the above command in the snmp query.
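The bitmask arithmetic above can be sketched in Python (this helper is illustrative, not part of the switch software). Clearing the bits for ports 6 and 7 in a fully populated byte yields 11111001 binary, which is F9 hexadecimal:

```python
def vlan_port_bitmask(member_ports, total_ports=8):
    # Build one byte of the SNMP PortList for a VLAN: the most
    # significant bit of the byte represents the lowest-numbered port.
    value = 0
    for port in range(1, total_ports + 1):
        value = (value << 1) | (1 if port in member_ports else 0)
    return value

# Ports 6 and 7 removed from ports 1-8:
mask = vlan_port_bitmask({1, 2, 3, 4, 5, 8})
print(format(mask, "08b"), format(mask, "02X"))  # prints: 11111001 F9
```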
MIB Object                    OID                                Description
chSysCoresInstance            1.3.6.1.4.1.6027.3.19.1.2.9.1.1    Stores the indexed information about the available software core files.
chSysCoresFileName            1.3.6.1.4.1.6027.3.19.1.2.9.1.2    Contains the core file names and the file paths.
chSysCoresTimeCreated         1.3.6.1.4.1.6027.3.19.1.2.9.1.3    Contains the time at which core files are created.
chSysCoresStackUnitNumber     1.3.6.1.4.1.6027.3.19.1.2.9.1.
17 Stacking An Aggregator auto-configures to operate in standalone mode. To use an Aggregator in a stack, you must manually configure it using the CLI to operate in stacking mode. Stacking is supported only on the 40GbE ports on the base module. Stacking is limited to six Aggregators in the same or different M1000e chassis. To configure a stack, you must use the CLI. Stacking provides a single point of management for high availability and higher throughput.
Figure 29. A Two-Aggregator Stack Stack Management Roles The stack elects the management units for the stack management. • Stack master — primary management unit • Standby — secondary management unit The master holds the control plane and the other units maintain a local copy of the forwarding databases.
• Switch removal If the master switch goes offline, the standby replaces it as the new master. NOTE: For the Aggregator, the entire stack has only one management IP address. Stack Master Election The stack elects a master and standby unit at bootup time based on MAC address. The unit with the higher MAC value becomes master. To view which switch is the stack master, enter the show system command. The following example shows sample output from an established stack.
Stacking LAG When you use multiple links between stack units, Dell Networking Operating System automatically bundles them in a stacking link aggregation group (LAG) to provide aggregated throughput and redundancy. The stacking LAG is established automatically and transparently by the operating system (without user configuration) after peering is detected and behaves as follows: • The stacking LAG dynamically aggregates; it can lose link members or gain new links.
Stacking Port Numbers By default, each Aggregator in Standalone mode is numbered stack-unit 0. Stack-unit numbers are assigned to member switches when the stack comes up. The following example shows the numbers of the 40GbE stacking ports on an Aggregator. Figure 30.
Stacking in PMUX Mode PMUX stacking allows the stacking of two or more IOA units. This allows grouping of multiple units for high availability. IOA supports a maximum of six stacking units. NOTE: Prior to configuring the stack-group, ensure the stacking ports are connected and in 40G native mode. 1. Configure stack groups on all stack units.
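As a sketch of step 1 above (the stack-group numbers shown are illustrative; repeat the command on each stack unit):

```
Dell(conf)#stack-unit 0 stack-group 0
Dell(conf)#stack-unit 0 stack-group 1
```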
• Stacking is supported only with other Aggregators. A maximum of six Aggregators are supported in a single stack. You cannot stack the Aggregator with MXL 10/40GbE Switches or another type of switch. • A maximum of four stack groups (40GbE ports) is supported on a stacked Aggregator. • Interconnect the stack units by following the instructions in Cabling Stacked Switches. • You cannot stack a Standalone IOA and a PMUX.
NOTE: Stack-group is supported only in PMUX mode. 3. Continue to run the stack-unit 0 stack-group <0-3> command to add additional stack ports to the switch, using the stack-group mapping. Cabling Stacked Switches Before you configure Aggregators in a stack, connect the 40G direct attach cables or QSFP transceivers between the 40GbE ports on two Aggregators in the same or different chassis.
Dell# configure 3. Configure the Aggregator to operate in stacking mode. CONFIGURATION mode stack-unit 0 iom-mode stack 4. Repeat Steps 1 to 3 on the second Aggregator in the stack. 5. Log on to the CLI and reboot each switch, one after another, in as short a time as possible. EXEC PRIVILEGE mode reload NOTE: If the stacked switches all reboot at approximately the same time, the switch with the highest MAC address is automatically elected as the master switch.
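The steps above can be sketched as a single session per Aggregator:

```
Dell#configure
Dell(conf)#stack-unit 0 iom-mode stack
Dell(conf)#end
Dell#reload
```

Run the same sequence on each Aggregator in the stack, rebooting the switches one after another in as short a time as possible.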
• If the stack has been provisioned for the stack number that is assigned to the new unit, the pre-configured provisioning must match the switch type. If there is a conflict between the provisioned switch type and the new unit, a mismatch error message is displayed. Resetting a Unit on a Stack Use the following reset commands to reload any of the member units or the standby in a stack.
in stacking mode. This is because CMC is not involved with the configuration of parameters when the Aggregator operates in either of these modes, with the uplink interfaces being part of different LAG bundles. After you restart the Aggregator, the 4-Port 10 GbE Ethernet modules or the 40GbE QSFP+ port that is split into four 10GbE SFP+ ports cannot be configured to be part of the same uplink LAG bundle that is set up with the uplink speed of 40 GbE.
Merging Two Operational Stacks The recommended procedure for merging two operational stacks is as follows: 1. Always power off all units in one stack before connecting to another stack. 2. Add the units as a group by unplugging one stacking cable in the operational stack and physically connecting all unpowered units. 3. Completely cable the stacking connections, making sure the redundant link is also in place.
4      Member    not present
5      Member    not present
Example of the show system Command
Dell# show system
Stack MAC : 00:1e:c9:f1:00:9b
Reload Type : normal-reload [Next boot : normal-reload]
-- Unit 0 --
Unit Type : Management Unit
Status : online
Next Boot : online
Required Type : I/O-Aggregator - 34-port GE/TE (XL)
Current Type : I/O-Aggregator - 34-port GE/TE (XL)
Master priority : 0
Hardware Rev :
Num Ports : 56
Up Time : 2 hr, 41 min
FTOS Version : 8-3-17-46
Jumbo Capable : yes
POE Capable : no
Burned In MAC :
Example of the show system stack-unit stack-group Command
Dell#show system stack-unit 1 stack-group
Stack group    Ports
------------------------------
 0             0/33
 1             0/37
 2             0/41
 3             0/45
 4             0/49
 5             0/53
Dell#
Example of the show system stack-ports (ring) Command
Dell# show system stack-ports
Topology: Ring
Interface    Connection    Link Speed    Admin Status    Link Status    Trunk Group
                           (Gb/s)
 0/33         1/33          40            up              up
 0/37         1/37          40            up              up
 1/33         0/33          40            up              up
 1/37         0/37          40            up              up
Example of the show system stack-ports (daisy chain) Command
Dell# show system stack-ports
Topology: Daisy chain
Interface    Connection    Link Speed    Admin Status    Link Status    Trunk Group
                           (Gb/s)
 0/33         1/33          40            up              up
 0/37         1/37          40            up              up
 1/33         0/33          40            up              up
 1/37         0/37          40            up              up
Example of the show redundancy Command
Dell#show redundancy
-- Stack-unit Status --------------------------------------------------------
Mgmt ID:                       0
Stack-unit ID:                 0
Stack-unit Redundancy Role:    Primary
Stack-unit State:              Active (Indicates Master Unit.)
Stack-unit SW Version:         E8-3-16-46
Link to Peer:                  Up
-- PEER Stack-unit Status --------------------------------------------------------
Standby (Indicates Standby Unit.
Rate info (interval 45 seconds): Input 00.00 Mbits/sec, 0 packets/sec, 0.00% of line-rate Output 00.00 Mbits/sec, 0 packets/sec, 0.00% of line-rate Failure Scenarios The following sections describe some of the common fault conditions that can happen in a switch stack and how they are resolved. Stack Member Fails • Problem: A unit that is not the stack master fails in an operational stack. • Resolution: If a stack member fails in a daisy chain topology, a split stack occurs.
---------------------------------------STANDBY UNIT-----------------------------------------10:55:18: %STKUNIT1-M:CP %KERN-2-INT: Error: Stack Port 50 has flapped 5 times within 10 seconds. Shutting down this stack port now. 10:55:18: %STKUNIT1-M:CP %KERN-2-INT: Error: Please check the stack cable/module and power-cycle the stack. Master Switch Recovers from Failure • Problem: The master switch recovers from a failure after a reboot and rejoins the stack as the standby unit or member unit.
upgrade system { flash: | ftp: | scp: | tftp: | usbflash: } partition Specify the system partition on the master switch into which you want to copy the Dell Networking OS image. The system then prompts you to upgrade all member units with the new Dell Networking OS version. The valid values are a: and b:. 3. Reboot all stack units to load the Dell Networking OS image from the same partition on all switches in the stack. CONFIGURATION mode boot system stack-unit all primary system partition 4.
upgrade system stack-unit unit-number partition 2. Reboot the stack unit from the master switch to load the Dell Networking OS image from the same partition. CONFIGURATION mode boot system stack-unit unit-number primary system partition 3. Save the configuration. EXEC Privilege mode write memory 4. Reset the stack unit to activate the new Dell Networking OS version.
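A sketch of the single-unit upgrade steps above (the unit number and partition are illustrative; the final reset command is assumed from standard Dell Networking OS syntax):

```
Dell#upgrade system stack-unit 2 a:
Dell#configure
Dell(conf)#boot system stack-unit 2 primary system a:
Dell(conf)#end
Dell#write memory
Dell#reset stack-unit 2
```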
18 Broadcast Storm Control On the Aggregator, the broadcast storm control feature is enabled by default on all ports, and disabled on a port when an iSCSI storage device is detected. Broadcast storm control is re-enabled as soon as the connection with an iSCSI device ends. Broadcast traffic on Layer 2 interfaces is limited or suppressed during a broadcast storm. You can view the status of a broadcast storm-control operation by using the show io-aggregator broadcast storm-control status command.
19 System Time and Date The Aggregator auto-configures the hardware and software clocks with the current time and date. If necessary, you can manually set and maintain the system time and date using the CLI commands described in this chapter.
– timezone-name: Enter the name of the timezone. Do not use spaces. – offset: Enter one of the following: * a number from 1 to 23 as the number of hours in addition to UTC for the timezone. * a minus sign (-) then a number from 1 to 23 as the number of hours. Example of the clock timezone Command Dell#conf Dell(conf)#clock timezone Pacific -8 Dell# Setting Daylight Savings Time Dell Networking OS supports setting the system to daylight savings time once or on a recurring basis every year.
clock summer-time time-zone recurring start-week start-day start-month start-time endweek end-day end-month end-time [offset] – time-zone: Enter the three-letter name for the time zone. This name displays in the show clock output. – start-week: (OPTIONAL) Enter one of the following as the week that daylight saving begins and then enter values for start-day through end-time: * week-number: Enter a number from 1 to 4 as the number of the week in the month to start daylight saving time.
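A sketch of the recurring daylight saving configuration above (the zone name and dates are illustrative, showing U.S.-style rules that start on the second Sunday of March and end on the first Sunday of November):

```
Dell(conf)#clock summer-time PST recurring 2 Sun Mar 2:00 1 Sun Nov 2:00
```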
20 Uplink Failure Detection (UFD) Supported Modes Standalone, PMUX, VLT, Stacking Feature Description UFD provides detection of the loss of upstream connectivity and, if used with network interface controller (NIC) teaming, automatic recovery from a failed link. A switch provides upstream connectivity for devices, such as servers. If a switch loses its upstream connectivity, downstream devices also lose their connectivity.
Figure 31. Uplink Failure Detection How Uplink Failure Detection Works UFD creates an association between upstream and downstream interfaces. The association of uplink and downlink interfaces is called an uplink-state group. An interface in an uplink-state group can be a physical interface or a port-channel (LAG) aggregation of physical interfaces. An enabled uplink-state group tracks the state of all assigned upstream interfaces.
Figure 32. Uplink Failure Detection Example If only one of the upstream interfaces in an uplink-state group goes down, a specified number of downstream ports associated with the upstream interface are put into a Link-Down state. You can configure this number; it is calculated from the ratio of the upstream port bandwidth to the downstream port bandwidth in the same uplink-state group.
For example, as shown previously, the switch/ router with UFD detects the uplink failure and automatically disables the associated downstream link port to the server. To continue to transmit traffic upstream, the server with NIC teaming detects the disabled link and automatically switches over to the backup link in order to continue to transmit traffic upstream. Important Points to Remember When you configure UFD, the following conditions apply. • You can configure up to 16 uplink-state groups.
enable Dell(conf)#uplink-state-group 1 Dell(conf-uplink-state-group-1)#enable To disable the uplink group tracking, use the no enable command. 3. Change the default timer.
• all: brings down all downstream links in the group. By default, no downstream links are disabled when an upstream link goes down. To revert to the default setting, use the no downstream disable links command. 4. (Optional) Enable auto-recovery so that UFD-disabled downstream ports in the uplink-state group come up when a disabled upstream port in the group comes back up. UPLINK-STATE-GROUP mode downstream auto-recover By default, auto-recovery of UFD-disabled downstream ports is enabled.
* Where port-range and port-channel-range specify a range of ports separated by a dash (-) and/or individual ports/port channels in any order; for example: tengigabitethernet 1/1-2,5,9,11-12 port-channel 1-3,5 * A comma is required to separate each port and port-range entry. clear ufd-disable {interface interface | uplink-state-group group-id}: re-enables all UFD-disabled downstream interfaces in the group. The range is from 1 to 16.
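For example, to re-enable all UFD-disabled downstream interfaces in uplink-state group 3 (the group number is illustrative):

```
Dell#clear ufd-disable uplink-state-group 3
```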
– 10 Gigabit Ethernet: enter tengigabitethernet slot/port. – 40 Gigabit Ethernet: enter fortygigabitethernet slot/port. – Port channel: enter port-channel {1-512}. • If a downstream interface in an uplink-state group is disabled (Oper Down state) by uplink-state tracking because an upstream port is down, the message error-disabled[UFD] displays in the output. Display the current configuration of all uplink-state groups or a specified group.
MTU 1554 bytes, IP MTU 1500 bytes LineSpeed 1000 Mbit, Mode auto Flowcontrol rx off tx off ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show interface" counters 00:25:46 Queueing strategy: fifo Input Statistics: 0 packets, 0 bytes 0 64-byte pkts, 0 over 64-byte pkts, 0 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 0 Multicasts, 0 Broadcasts 0 runts, 0 giants, 0 throttles 0 CRC, 0 overrun, 0 discarded Output Statistics: 0 packets, 0 bytes, 0 underruns 0 64-byt
Dell(conf-uplink-state-group-3)#show config ! uplink-state-group 3 description Testing UFD feature downstream disable links 2 downstream TenGigabitEthernet 0/1-2,5,9,11-12 upstream TenGigabitEthernet 0/3-4 Dell#show running-config uplink-state-group ! uplink-state-group 3 description Testing UFD feature downstream disable links 2 downstream TenGigabitEthernet 0/1-2,5,9,11-12 upstream TenGigabitEthernet 0/3-4 Dell#show uplink-state-group 3 Uplink State Group: 3 Status: Enabled, Up Dell#show uplink-state-grou
21 PMUX Mode of the IO Aggregator This chapter provides an overview of the PMUX mode. I/O Aggregator (IOA) Programmable MUX (PMUX) Mode IOA PMUX is a mode that provides flexibility of operation with added configurability. This involves creating multiple LAGs, configuring VLANs on uplinks and the server side, configuring data center bridging (DCB) parameters, and so forth. By default, IOA starts up in IOA Standalone mode.
Configuring the Commands without a Separate User Account Starting with Dell Networking OS version 9.3(0.0), you can configure the PMUX mode CLI commands without having to configure a new, separate user profile. The user profile you defined to access and log in to the switch is sufficient to configure the PMUX mode commands. The IOA PMUX Mode CLI Commands section lists the PMUX mode CLI commands that you can now configure without a separate user account.
• Assures high availability. CAUTION: Dell Networking does not recommend enabling Stacking and VLT simultaneously. If you enable both features at the same time, unexpected behavior occurs. As shown in the following example, VLT presents a single logical Layer 2 domain from the perspective of attached devices that have a virtual link trunk terminating on separate chassis in the VLT domain. However, the two VLT chassis are independent Layer2/Layer3 (L2/L3) switches for devices in the upstream network.
EXEC mode Dell# show interfaces port brief Codes: L - LACP Port-channel O - OpenFlow Controller Port-channel LAG L 127 Mode L2 Status up Uptime 00:18:22 128 L2 up 00:00:00 Ports Fo 0/33 Fo 0/37 Fo 0/41 (Up)<<<<<<<
Configured System MAC address: 00:01:09:06:06:06 Remote system version: 6(2) Delay-Restore timer: 90 seconds Peer-Routing : Disabled Peer-Routing-Timeout timer: 0 seconds Multicast peer-routing timeout: 150 seconds Dell# 5. Configure the secondary VLT. NOTE: Repeat steps 1 through 4 on the secondary VLT, ensuring you use a different backup destination and unit-id.
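A sketch of the peer configuration implied by the steps above, reusing the system MAC address shown in the output (the backup destination address and unit-id are illustrative and must differ between the two peers):

```
Dell(conf)#vlt domain 1
Dell(conf-vlt-domain)#back-up destination 10.11.206.23
Dell(conf-vlt-domain)#system-mac mac-address 00:01:09:06:06:06
Dell(conf-vlt-domain)#unit-id 1
```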
12 Active 13 Active 14 Active 15 Active 20 Active Dell T T T T T T T T T U U Te 0/1 Po128(Te Te 0/1 Po128(Te Te 0/1 Po128(Te Te 0/1 Po128(Te Te 0/1 Po128(Te Te 0/1 0/41-42) 0/41-42) 0/41-42) 0/41-42) 0/41-42) You can remove the inactive VLANs that have no member ports using the following command: Dell#configure Dell(conf)#no interface vlan ->vlan-id - Inactive VLAN with no member ports You can remove the tagged VLANs using the no vlan tagged command.
– A VLT domain supports two chassis members, which appear as a single logical device to network access devices connected to VLT ports through a port channel. – A VLT domain consists of the two core chassis, the interconnect trunk, backup link, and the LAG members connected to attached devices. – Each VLT domain has a unique MAC address that you create or VLT creates automatically. – ARP tables are synchronized between the VLT peer nodes.
– If the link between VLT peer switches is established, any change to the VLT system MAC address or unit-id fails if the changes made create a mismatch by causing the VLT unit-ID to be the same on both peers and/or the VLT system MAC address does not match on both peers. – If you replace a VLT peer node, preconfigure the switch with the VLT system MAC address, unit-id, and other VLT parameters before connecting it to the existing VLT peer switch using the VLTi connection.
secondary switch disables its VLT port channels. If keepalive messages from the peer are not being received, the peer continues to forward traffic, assuming that it is the last device available in the network. In either case, after recovery of the peer link or reestablishment of message forwarding across the interconnect trunk, the two VLT peers resynchronize any MAC addresses learned while communication was interrupted and the VLT system continues normal data forwarding.
If you enable IGMP snooping, IGMP queries are also sent out on the VLT ports at this time allowing any receivers to respond to the queries and update the multicast table on the new node. This delay in bringing up the VLT ports also applies when the VLTi link recovers from a failure that caused the VLT ports on the secondary VLT peer node to be disabled. VLT Routing VLT routing is supported on the Aggregator. Layer 2 protocols from the ToR to the server are intra-rack and inter-rack.
EXEC mode show vlt statistics • Display the current status of a port or port-channel interface used in the VLT domain. EXEC mode show interfaces interface – interface: specify one of the following interface types: * Fast Ethernet: enter fastethernet slot/port. * 1-Gigabit Ethernet: enter gigabitethernet slot/port. * 10-Gigabit Ethernet: enter tengigabitethernet slot/port. * Port channel: enter port-channel {1-128}.
Role:                          Primary
Role Priority:                 32768
ICL Link Status:               Up
HeartBeat Status:              Up
VLT Peer Status:               Up
Local Unit Id:                 1
Version:                       5(1)
Local System MAC address:      00:01:e8:8a:e7:e7
Remote System MAC address:     00:01:e8:8a:e9:70
Configured System MAC address: 00:0a:0a:01:01:0a
Remote system version:         5(1)
Delay-Restore timer:           90 seconds
Example of the show vlt detail Command
Dell_VLTpeer1# show vlt detail
Local LAG Id    Peer LAG Id    Local Status    Peer Status    Active VLANs
------------    -----------    ------------    -----------    --
100             100
127             2
Example of the show vlt statistics Command
Dell_VLTpeer1# show vlt statistics
VLT Statistics
---------------
HeartBeat Messages Sent:     987
HeartBeat Messages Received: 986
ICL Hello's Sent:            148
ICL Hello's Received:        98
Dell_VLTpeer2# show vlt statistics
VLT Statistics
---------------
HeartBeat Messages Sent:     994
HeartBeat Messages Received: 978
ICL Hello's Sent:            89
ICL Hello's Received:        89
Additional VLT Sample Configurations To configure VLT, configure a backup link and interconnect trunk, create a VLT do
Configure the port channel to an attached device. Verify that the port channels used in the VLT domain are assigned to the same VLAN. Verifying a Port-Channel Connection to a VLT Domain (From an Attached Access Switch) On an access device, verify the port-channel connection to a VLT domain. Troubleshooting VLT To help troubleshoot different VLT issues that may occur, use the following information. NOTE: For information on VLT Failure mode timing and its impact, contact your Dell Networking representative.
information, refer to the Release Notes for this release.
Description: VLT LAG ID is not configured on one VLT peer
Behavior at Peer Up: A syslog error message is generated. The peer with the VLT configured remains active.
Behavior During Run Time: A syslog error message is generated. The peer with the VLT configured remains active.
Action to Take: Verify the VLT LAG ID is configured correctly on both VLT peers.
Description: VLT LAG ID mismatch
Behavior at Peer Up: The VLT port channel is brought down.
22 FC Flex IO Modules This part provides a generic, broad-level description of the operations, capabilities, and configuration commands of the Fibre Channel (FC) Flex IO module.
bandwidth of up to 64 Gbps. It is possible to connect some of the ports to a different FC SAN fabric to provide access to multiple fabric devices. In a typical Fibre Channel storage network topology, separate network interface cards (NICs) and host bus adapters (HBAs) on each server (two each for redundancy purposes) are connected to LAN and SAN networks respectively. These deployments typically include a ToR SAN switch in addition to a ToR LAN switch.
• Multiple domains are supported in an NPIV proxy gateway (NPG). • You cannot configure the MXL or Aggregator switches in Stacking mode if the switches contain the FC Flex IO module. Similarly, FC Flex IO modules do not function when you insert them into a stack of MXL or Aggregator switches. • If the switch contains FC Flex IO modules, you cannot create a stack; a log message states that stacking is not supported when the switches contain FC Flex IO modules.
• Fc-map: 0x0efc00 • Fcf-priority: 128 • Fka-adv-period: 8000mSec • Keepalive: enable • Vlan priority: 3 • On an IOA, the FCoE virtual local area network (VLAN) is automatically configured. • With FC Flex IO modules on an IOA, the following DCB maps are applied on all of the ENode facing ports.
Processing of Data Traffic The Dell Networking OS determines the module type that is plugged into the slot and, based on the module type, performs the appropriate tasks. The FC Flex IO module encapsulates and decapsulates the FCoE frames. The module switches any non-FCoE or non-FIP traffic directly; only FCoE frames are processed and transmitted out of the Ethernet network.
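The traffic split described above keys on the Ethernet frame's EtherType field. The following Python sketch is illustrative only; the EtherType constants come from the FCoE (FC-BB-5) and FIP standards, not from this guide:

```python
ETH_P_FCOE = 0x8906  # FCoE-encapsulated Fibre Channel frames (per FC-BB-5)
ETH_P_FIP = 0x8914   # FCoE Initialization Protocol frames

def needs_fc_processing(ethertype: int) -> bool:
    """Only FCoE and FIP frames are handed to the FC Flex IO module for
    encapsulation/decapsulation; all other traffic is switched directly
    as ordinary Ethernet."""
    return ethertype in (ETH_P_FCOE, ETH_P_FIP)

print(needs_fc_processing(0x8906))  # True  (FCoE)
print(needs_fc_processing(0x0800))  # False (IPv4, switched directly)
```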
Installing and Configuring Flowchart for FC Flex IO Modules
To see if a switch is running the latest Dell Networking OS version, use the show version command. To download a Dell Networking OS version, go to http://support.dell.com. Installation Site Preparation Before installing the switch or switches, make sure that the chosen installation location meets the following site requirements: • Clearance — There is adequate front and rear clearance for operator access. Allow clearance for cabling, power connections, and ventilation.
• Configure the NPIV-related commands on MXL or I/O Aggregator. After you perform the preceding procedure, the following operations take place: • A physical link is established between the FC Flex I/O module and the Cisco MDS switch. • The FC Flex I/O module sends a proxy FLOGI request to the upstream F_Port of the FC switch or the MDS switch. The F_port accepts the proxy FLOGI request for the FC Flex IO virtual N_Port.
Figure 35. Case 2: Deployment Scenario of Configuring FC Flex IO Modules Fibre Channel over Ethernet for FC Flex IO Modules FCoE provides a converged Ethernet network that allows the combination of storage-area network (SAN) and LAN traffic on a Layer 2 link by encapsulating Fibre Channel data into Ethernet frames. The Fibre Channel (FC) Flex IO module is supported on Dell Networking Operating System (OS) MXL 10/40GbE Switch and M I/O Aggregator (IOA).
23 FC FLEXIO FPORT
FC FlexIO FPort is now supported on the MXL switch platform.
The M I/O Aggregator blade switch is a Trident+ based switch that plugs into the Dell M1000 blade server chassis. The blade module contains two slots for pluggable flexible modules. The goal is to provide support for direct connectivity to FC equipment through Fibre Channel ports on the optional FC Flex IO module.
Name Server Each participant in the FC environment has a unique ID, which is called the World Wide Name (WWN). This WWN is a 64-bit address. A Fibre Channel fabric uses another addressing scheme to address the ports in the switched fabric. Each port in the switched fabric is assigned a 24-bit address by the FC switch.
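The 24-bit fabric address mentioned above is conventionally read as three one-byte fields: Domain_ID, Area_ID, and Port_ID. That split is the standard Fibre Channel interpretation, not something this guide spells out; the Python fragment below is an illustrative sketch:

```python
def split_fc_id(fc_id: int) -> tuple:
    """Split a 24-bit Fibre Channel port address into its Domain_ID,
    Area_ID, and Port_ID bytes (most to least significant)."""
    return (fc_id >> 16) & 0xFF, (fc_id >> 8) & 0xFF, fc_id & 0xFF

# 0x020202 is the member port ID used in the zoning example later in
# this chapter: domain 2, area 2, port 2.
print(split_fc_id(0x020202))  # (2, 2, 2)
```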
After you apply an FCoE map on an FC port, when you enable the port (using the no shutdown command), the NPG starts sending FIP multicast advertisements on behalf of the FC port to downstream servers to advertise the availability of a new FCF port on the FCoE VLAN. The FIP advertisement also contains a keepalive message to maintain connectivity between a SAN fabric and downstream servers. Creating an FCoE Map An FCoE map consists of the following elements.
The range is from 1 to 255. The default is 128.
6. Enable monitoring of FIP keep-alive messages (if it is disabled) to detect whether other FCoE devices are reachable.
FCoE MAP mode
keepalive
By default, FIP keep-alive monitoring is enabled.
7. Configure the time interval (in seconds) used to transmit FIP keepalive advertisements.
FCoE MAP mode
fka-adv-period seconds
The range is from 8 to 90 seconds. The default is 8 seconds.
Dell(conf-fc-zone-z1)#member 020202
Dell(conf-fc-zone-z1)#exit
Creating Zone Alias and Adding Members
To create a zone alias and add devices to the alias, follow these steps.
1. Create a zone alias name.
CONFIGURATION mode
fc alias ZoneAliasName
2. Add devices to an alias.
ALIAS CONFIGURATION mode
member word
The member can be WWPN (00:00:00:00:00:00:00:00), port ID (000000), or alias name (word).
Example:
Dell(conf)#fcoe-map map
Dell(conf-fcoe-map)#fc-fabric
Dell(conf-fmap-map-fcfabric)#active-zoneset set
Dell(conf-fmap-map-fcfabric)#no active-zoneset?
active-zoneset
Dell(conf-fmap-map-fcfabric)#no active-zoneset ?
Dell(conf-fmap-map-fcfabric)#no active-zoneset
2. View the active zoneset.
show fc zoneset active
Displaying the Fabric Parameters
To display information on switch-wide and interface-specific fabric parameters, use the show commands in the following table.
Oper-State UP
=======================================================
Switch Config Parameters
=======================================================
DomainID 2
=======================================================
Switch Zoning Parameters
=======================================================
Default Zone Mode: Deny
Active Zoneset: set
=======================================================
Members
Fc 0/41
Te 0/29
=======================================================
=================================
Example of the show fc zone Command
Dell#show fc zone
ZoneName   ZoneMember
==============================
brcd_sanb  brcd_cna1_wwpn1
           sanb_p2tgt1_wwpn
Dell#
Example of the show fc alias Command
Dell(conf)#do show fc alias
ZoneAliasName  ZoneMember
=======================================================
test           20:02:d4:ae:52:44:38:4f
               20:34:78:2b:cb:6f:65:57
Example of the show fc switch Command
Dell(conf)#do show fc switch
Switch Mode : FPORT
Switch WWN  : 10:00:aa:00:00:00:00:ac
Dell(conf)#
24 NPIV Proxy Gateway The N-port identifier virtualization (NPIV) Proxy Gateway (NPG) feature provides FCoE-FC bridging capability on the Aggregator, allowing server CNAs to communicate with SAN fabrics over the Aggregator.
Servers use CNA ports to connect over FCoE to an Ethernet port in ENode mode on the NPIV proxy gateway. FCoE transit with FIP snooping is automatically enabled and configured on the FX2 gateway to prevent unauthorized access and data transmission to the SAN network. FIP is used by server CNAs to discover an FCoE switch operating as an FCoE forwarder (FCF).
Term: N port
Description: Port mode of an Aggregator FC port that connects to an F port on an FC switch in a SAN fabric. On an Aggregator with the NPIV proxy gateway, an N port also functions as a proxy for multiple server CNA-port connections.
Term: ENode port
Description: Port mode of a server-facing Aggregator Ethernet port that provides access to FCF functionality on a fabric.
Term: CNA port
Description: N-port functionality on an FCoE-enabled server port.
An FCoE map applies the following parameters on server-facing Ethernet and fabric-facing FC ports on the Aggregator: • The dedicated FCoE VLAN used to transport FCoE storage traffic. • The FC-MAP value used to generate a fabric-provided MAC address. • The association between the FCoE VLAN ID and FC fabric ID where the desired storage arrays are installed. Each Fibre Channel fabric serves as an isolated SAN topology within the same physical network.
--------------------
PG:0 TSA:ETS BW:30 PFC:OFF
Priorities:0 1 2 5 6 7
PG:1 TSA:ETS BW:30 PFC:OFF
Priorities:4
PG:2 TSA:ETS BW:40 PFC:ON
Priorities:3

Default FCoE map
Dell(conf)#do show fcoe-map
Fabric Name     SAN_FABRIC
Fabric Id       1002
Vlan Id         1002
Vlan priority   3
FC-MAP          0efc00
FKA-ADV-Period  8
Fcf Priority    128
Config-State    ACTIVE
Oper-State      UP
Members
Fc 0/9
Te 0/4

DCB_MAP_PFC_OFF
Dell(conf)#do show qos dcb-map DCB_MAP_PFC_OFF
----------------------
State :In-Progress
PfcMode:OFF
--------------------
Dell(
Step Task Command Command Mode
be handled with strict-priority scheduling. The sum of all allocated bandwidth percentages must be 100 percent.
percentage | strict-priority} pfc {on | off}
Strict-priority traffic is serviced first. Afterward, bandwidth allocated to other priority groups is made available and allocated according to the specified percentages. If a priority group does not use its allocated bandwidth, the unused bandwidth is made available to other priority groups.
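The 100-percent constraint on ETS bandwidth shares lends itself to a quick pre-check before entering a DCB map. The Python fragment below is an illustrative sketch (it is not part of the switch CLI; strict-priority groups, which carry no percentage, are simply omitted from the list):

```python
def ets_shares_valid(shares) -> bool:
    """Check that the ETS bandwidth percentages allocated to
    non-strict-priority groups sum to exactly 100, as the dcb-map
    configuration requires."""
    return sum(shares) == 100

# Shares from the sample DCB map in this chapter: PG0=30, PG1=30, PG2=40.
print(ets_shares_valid([30, 30, 40]))  # True
print(ets_shares_valid([30, 30, 30]))  # False
```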
Step Task Command Command Mode
2 Apply the DCB map on an Ethernet port or port channel. The port is configured with the PFC and ETS settings in the DCB map.
dcb-map name
INTERFACE
You cannot apply a DCB map on a port channel. However, you can apply a DCB map on the ports that are members of the port channel.
For example:
Dell# interface tengigabitEthernet 0/0
Dell(config-if-te-0/0)# dcb-map SAN_DCB1
Repeat this step to apply a DCB map to more than one port or port channel.
Step Task Command Command Mode
3 Add a text description of the settings in the FCoE map. Maximum: 32 characters.
description text
FCoE MAP
4 Specify the FC-MAP value used to generate a fabric-provided MAC address, which is required to send FCoE traffic from a server on the FCoE VLAN to the FC fabric specified in Step 2. Enter a unique MAC address prefix as the FC-MAP value for each fabric. Range: 0EFC00–0EFCFF. Default: None.
fc-map fc-map-value
FCoE MAP
Step Task Command Command Mode
3 Enable the port for FCoE transmission using the map settings.
no shutdown
INTERFACE
Applying an FCoE Map on Fabric-facing FC Ports
The FC ports on an Aggregator are configured by default to operate in N port mode to connect to an F port on an FC switch in a fabric. You can apply only one FCoE map on an FC port.
Dell(conf-dcbx-name)# priority-pgid 0 0 0 1 2 4 4 4 2. Apply the DCB map on a downstream (server-facing) Ethernet port: Dell(config)# interface tengigabitethernet 1/0 Dell(config-if-te-0/0)#dcb-map SAN_DCB_MAP 3. Create the dedicated VLAN to be used for FCoE traffic: Dell(conf)#interface vlan 1002 4.
Command Description Enter the name of an FCoE map to display the FC and FCoE parameters configured in the map to be applied on the Aggregator with the FC ports. show qos dcb-map map-name Displays configuration parameters in a specified DCB map. show npiv devices [brief] Displays information on FCoE and FC devices currently logged in to the NPG. show fc switch Displays the FC mode of operation and worldwide node (WWN) name of an Aggregator.
fid_1004 1004 1004 0efc04 128 ACTIVE DOWN
Dell# show fcoe-map fid_1003
Fabric Name     fid_1003
Fabric Id       1003
Vlan Id         1003
Vlan priority   3
FC-MAP          0efc03
FKA-ADV-Period  8
Fcf Priority    128
Config-State    ACTIVE
Oper-State      UP
Members
Fc 0/9
Te 0/11
Te 0/12
Table 26. show fcoe-map Field Descriptions
Field: Description
Fabric-Name: Name of a SAN fabric.
Fabric ID: The ID number of the SAN fabric to which FC traffic is forwarded.
Table 27. show qos dcb-map Field Descriptions
Field: Description
State: Complete (all mandatory DCB parameters are correctly configured) or In progress (the DCB map configuration is not complete; some mandatory parameters are not configured).
PFC Mode: PFC configuration in the DCB map: On (enabled) or Off.
PG: Priority group configured in the DCB map.
TSA: Transmission scheduling algorithm used in the DCB map: Enhanced Transmission Selection (ETS).
Field Description Status Operational status of the link between a server CNA port and a SAN fabric: Logged In Server has logged in to the fabric and is able to transmit FCoE traffic.
Field Description Enode WWNN Worldwide node name of the server CNA. FCoE MAC Fabric-provided MAC address (FPMA). The FPMA consists of the FC-MAP value in the FCoE map and the FC-ID provided by the fabric after a successful FLOGI. In the FPMA, the most significant bytes are the FC-MAP; the least significant bytes are the FC-ID. FC-ID FC port ID provided by the fabric. LoginMethod Method used by the server CNA to log in to the fabric; for example, FLOGI or FDISC.
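As the FCoE MAC field description notes, the FPMA is simply the 24-bit FC-MAP concatenated with the 24-bit FC-ID. The Python fragment below sketches that construction; the FC-MAP is the chapter's default (0efc00), while the FC-ID value is a hypothetical example:

```python
def fpma(fc_map: int, fc_id: int) -> str:
    """Build a fabric-provided MAC address: the 24-bit FC-MAP occupies
    the three most significant bytes, the fabric-assigned 24-bit FC-ID
    the three least significant bytes."""
    value = (fc_map << 24) | fc_id
    return ":".join(f"{(value >> s) & 0xFF:02x}" for s in range(40, -8, -8))

# Default FC-MAP from this chapter with a hypothetical FC-ID of 0x010203.
print(fpma(0x0EFC00, 0x010203))  # 0e:fc:00:01:02:03
```

Because the FC-MAP prefix is unique per fabric, two servers on different fabrics can never end up with the same FPMA even if the fabrics hand out the same FC-ID.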
25 Upgrade Procedures To find the upgrade procedures, go to the Dell Networking OS Release Notes for your system type to see all the requirements needed to upgrade to the desired Dell Networking OS version. To upgrade your system type, follow the procedures in the Dell Networking OS Release Notes. Get Help with Upgrades Direct any questions or concerns about the Dell Networking OS upgrade procedures to the Dell Technical Support Center. You can reach Technical Support: • On the web: http://support.dell.
26 Debugging and Diagnostics
This chapter contains the following sections:
• Debugging Aggregator Operation
• Software Show Commands
• Offline Diagnostics
• Trace Logs
• Show Hardware Commands
Supported Modes
Standalone, PMUX, VLT
Debugging Aggregator Operation
This section describes common troubleshooting procedures to use for error conditions that may arise during Aggregator operation.
Te 0/11(Dwn) Te 0/12(Dwn) Te 0/13(Up) Te 0/14(Dwn) Te 0/15(Up) Te 0/16(Up) Te 0/17(Dwn) Te 0/18(Dwn) Te 0/19(Dwn) Te 0/20(Dwn) Te 0/21(Dwn) Te 0/22(Dwn) Te 0/23(Dwn) Te 0/24(Dwn) Te 0/25(Dwn) Te 0/26(Dwn) Te 0/27(Dwn) Te 0/28(Dwn) Te 0/29(Dwn) Te 0/30(Dwn) Te 0/31(Dwn) Te 0/32(Dwn)
2. Verify that the downstream port channel in the top-of-rack switch that connects to the Aggregator is configured correctly.
2. Assign the port to a specified group of VLANs (vlan tagged command) and re-display the port mode status.
Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#vlan tagged 2-5,100,4010
Dell(conf-if-te-0/1)#
Dell#show interfaces switchport tengigabitethernet 0/1
Codes: U - Untagged, T - Tagged
       x - Dot1x untagged, X - Dot1x tagged
       G - GVRP tagged, M - Trunk, H - VSN tagged
       i - Internal untagged, I - Internal tagged, v - VLT untagged, V - VLT tagged
Name: TenGigabitEthernet 0/1
802.
Hardware Rev : Num Ports : 56 Up Time : 17 hr, 8 min FTOS Version : 8-3-17-15 Jumbo Capable : yes POE Capable : no Boot Flash : A: 4.0.1.0 [booted] B: 4.0.1.0bt Boot Selector : 4.0.0.
Running Offline Diagnostics To run offline diagnostics, use the following commands. For more information, refer to the examples following the steps. 1. Place the unit in the offline state. EXEC Privilege mode offline stack-unit You cannot enter this command on a MASTER or Standby stack unit. NOTE: The system reboots when the offline diagnostics complete. This is an automatic process.
Auto Save on Crash or Rollover
Exception information for MASTER or standby units is stored in the flash:/TRACE_LOG_DIR directory. This directory contains files that save trace information when there has been a task crash or timeout.
• On a MASTER unit, you can reach the TRACE_LOG_DIR files by FTP or by using the show file command from the flash://TRACE_LOG_DIR directory.
• View input and output statistics on the party bus, which carries inter-process communication traffic between CPUs.
EXEC Privilege mode
show hardware stack-unit {0-5} cpu party-bus statistics
• View the ingress and egress internal packet-drop counters, MAC counters drop, and FP packet drops for the stack unit on a per-port basis.
EXEC Privilege mode
show hardware stack-unit {0-5} drops unit {0-0} port {33–56}
This view helps identify the stack unit/port pipe/port that may experience internal drops.
SFP 49 Serial Base ID fields
SFP 49 Id = 0x03
SFP 49 Ext Id = 0x04
SFP 49 Connector = 0x07
SFP 49 Transceiver Code = 0x00 0x00 0x00 0x01 0x20 0x40 0x0c 0x01
SFP 49 Encoding = 0x01
SFP 49 BR Nominal = 0x0c
SFP 49 Length(9um) Km = 0x00
SFP 49 Length(9um) 100m = 0x00
SFP 49 Length(50um) 10m = 0x37
SFP 49 Length(62.
When the system detects a genuine over-temperature condition, it powers off the card. To recognize this condition, look for the following system messages: CHMGR-2-MAJOR_TEMP: Major alarm: chassis temperature high (temperature reaches or exceeds threshold of [value]C) CHMGR-2-TEMP_SHUTDOWN_WARN: WARNING! temperature is [value]C; approaching shutdown threshold of [value]C To view the programmed alarm thresholds levels, including the shutdown value, use the show alarms threshold command.
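The shutdown decision described above reduces to a threshold comparison. The Python fragment below reproduces the documented message formats as an illustrative sketch; the threshold and warning-margin values are placeholders of my choosing, not Dell defaults (read the real thresholds with the show alarms threshold command):

```python
def temp_messages(temp_c: int, major_c: int, shutdown_c: int, margin_c: int = 5):
    """Return the syslog-style messages described above for a given
    temperature reading.  Threshold and margin values are illustrative
    placeholders, not Dell-specified defaults."""
    msgs = []
    if temp_c >= major_c:
        msgs.append("CHMGR-2-MAJOR_TEMP: Major alarm: chassis temperature high "
                    f"(temperature reaches or exceeds threshold of {major_c}C)")
    if temp_c >= shutdown_c - margin_c:  # warn when approaching shutdown
        msgs.append(f"CHMGR-2-TEMP_SHUTDOWN_WARN: WARNING! temperature is {temp_c}C; "
                    f"approaching shutdown threshold of {shutdown_c}C")
    return msgs

print(len(temp_messages(70, 75, 85)))  # 0 (below both thresholds)
print(len(temp_messages(82, 75, 85)))  # 2 (major alarm and shutdown warning)
```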
Troubleshoot an Under-Voltage Condition To troubleshoot an under-voltage condition, check that the correct number of power supplies are installed and their Status light emitting diodes (LEDs) are lit. The following table lists information for SNMP traps and OIDs, which provide information about environmental monitoring hardware and hardware components. Table 32. SNMP Traps and OIDs OID String OID Name Description chSysPortXfpRecvPower OID displays the receiving power of the connected optics.
Physical memory is organized into cells of 128 bytes. The cells are organized into two buffer pools — the dedicated buffer and the dynamic buffer. • • Dedicated buffer — this pool is reserved memory that other interfaces cannot use on the same ASIC or by other queues on the same interface. This buffer is always allocated, and no dynamic re-carving takes place based on changes in interface status. Dedicated buffers introduce a trade-off.
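Because physical memory is carved into 128-byte cells, any buffer size expressed in kilobytes maps to a whole number of cells. The Python fragment below is an illustrative sketch; the 2.50 KB and 9.38 KB figures are the per-queue dedicated-buffer sizes from the default profile shown later in this section:

```python
CELL_BYTES = 128  # cell size stated above

def kb_to_cells(kilobytes: float) -> int:
    """Convert a buffer allocation in kilobytes into 128-byte cells,
    rounding to the nearest whole cell."""
    return round(kilobytes * 1024 / CELL_BYTES)

print(kb_to_cells(2.50))  # 20
print(kb_to_cells(9.38))  # 75
```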
• Reduce the dedicated buffer on all queues/interfaces.
• Increase the dynamic buffer on all interfaces.
• Increase the cell pointers on a queue that you expect will receive the largest number of packets.
To define, change, and apply buffers, use the following commands.
• Define a buffer profile for the FP queues.
CONFIGURATION mode
buffer-profile fp fsqueue
• Define a buffer profile for the CSF queues.
To display the default buffer profile, use the show buffer-profile {summary | detail} command from EXEC Privilege mode.
Example of Viewing the Default Buffer Profile
Dell#show buffer-profile detail interface tengigabitethernet 0/1
Interface tengig 0/1
Buffer-profile
Dynamic buffer 194.88 (Kilobytes)
Queue#  Dedicated Buffer (Kilobytes)  Buffer Packets
0       2.50                          256
1       2.50                          256
2       2.50                          256
3       2.50                          256
4       9.38                          256
5       9.38                          256
6       9.38                          256
7       9.
Using a Pre-Defined Buffer Profile
The Dell Networking OS provides two pre-defined buffer profiles, one for single-queue (for example, non-quality-of-service [QoS]) applications, and one for four-queue (for example, QoS) applications. You must reload the system for the global buffer profile to take effect; a message similar to the following displays:
% Info: For the global pre-defined buffer profile to take effect, please save the config and reload the system.
Troubleshooting Packet Loss The show hardware stack-unit command is intended primarily to troubleshoot packet loss. To troubleshoot packet loss, use the following commands.
PortSTPnotFwd Drops IPv4 L3 Discards Policy Discards Packets dropped by FP (L2+L3) Drops Port bitmap zero Drops Rx VLAN Drops : : : : : : : 0 0 0 14 0 16 0 --- Ingress MAC counters--Ingress FCSDrops : 0 Ingress MTUExceeds : 0 --- MMU Drops --HOL DROPS TxPurge CellErr Aged Drops : 0 : 0 : 0 --- Egress MAC counters--Egress FCS Drops : 0 --- Egress FORWARD PROCESSOR Drops --IPv4 L3UC Aged & Drops : 0 TTL Threshold Drops : 0 INVALID VLAN CNTR Drops : 0 L2MC Drops : 0 PKT Drops of ANY Conditions : 0 Hg Ma
txError :0
txReqTooLarge :0
txInternalError :0
txDatapathErr :0
txPkt(COS0) :0
txPkt(COS1) :0
txPkt(COS2) :0
txPkt(COS3) :0
txPkt(COS4) :0
txPkt(COS5) :0
txPkt(COS6) :0
txPkt(COS7) :0
txPkt(UNIT0) :0
The show hardware stack-unit cpu party-bus statistics command displays input and output statistics on the party bus, which carries inter-process communication traffic between CPUs.
Example of Viewing Party Bus Statistics
Dell#show hardware stack-unit 2 cpu party-bus statistics
Input Statistics:
27550 packets,
Important Points to Remember • When you restore all the units in a stack, all units in the stack are placed into stand-alone mode. • When you restore a single unit in a stack, only that unit is placed in stand-alone mode. No other units in the stack are affected. • When you restore the units in stand-alone mode, the units remain in stand-alone mode after the restoration. • After the restore is complete, the units power cycle immediately.
27 Standards Compliance This chapter describes standards compliance for Dell Networking products. NOTE: Unless noted, when a standard cited here is listed as supported by the Dell Networking Operating System (OS), the system also supports predecessor standards. One way to search for predecessor standards is to use the http://tools.ietf.org/ website. Click “Browse and search IETF documents,” enter an RFC number, and inspect the top of the resulting document for obsolescence citations to related RFCs.
General Internet Protocols The following table lists the Dell Networking OS support per platform for general internet protocols. Table 33.
RFC# Full Name 2131 Dynamic Host Configuration Protocol 2338 Virtual Router Redundancy Protocol (VRRP) 3021 Using 31-Bit Prefixes on IPv4 Point-to-Point Links 3046 DHCP Relay Agent Information Option 3069 VLAN Aggregation for Efficient IP Address Allocation 3128 Protection Against a Variant of the Tiny Fragment Attack Network Management The following table lists the Dell Networking OS support per platform for network management protocol. Table 35.
RFC# Full Name 2576 Coexistence Between Version 1, Version 2, and Version 3 of the Internetstandard Network Management Framework 2578 Structure of Management Information Version 2 (SMIv2) 2579 Textual Conventions for SMIv2 2580 Conformance Statements for SMIv2 2618 RADIUS Authentication Client MIB, except the following four counters: radiusAuthClientInvalidServerAddresses radiusAuthClientMalformedAccessResponses radiusAuthClientUnknownTypes radiusAuthClientPacketsDropped 3635 Definitions of Man
RFC# Full Name sFlow.org sFlow Version 5 sFlow.