Dell PowerEdge Configuration Guide for the M I/O Aggregator 9.6(0.0)
Notes, Cautions, and Warnings NOTE: A NOTE indicates important information that helps you make better use of your computer. CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem. WARNING: A WARNING indicates a potential for property damage, personal injury, or death. Copyright © 2014 Dell Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws.
Contents 1 About this Guide..................................................................................................13 Audience.............................................................................................................................................. 13 Conventions.........................................................................................................................................13 Related Documents..............................................................
Configuring Priority-Based Flow Control.................................................................................... 29 Enhanced Transmission Selection...................................................................................................... 31 Configuring Enhanced Transmission Selection........................................................................... 33 Configuring DCB Maps and its Attributes.......................................................................................
Viewing DHCP Statistics and Lease Information............................................................................... 69 6 FIP Snooping........................................................................................................ 71 Fibre Channel over Ethernet............................................................................................................... 71 Ensuring Robustness in a Converged Ethernet Network...................................................................
Configuring a Static Route for a Management Interface.............................................................97 VLAN Membership.............................................................................................................................. 98 Default VLAN ................................................................................................................................ 98 Port-Based VLANs.........................................................................................
11 Link Aggregation.............................................................................................122 How the LACP is Implemented on an Aggregator...........................................................................122 Uplink LAG................................................................................................................................... 122 Server-Facing LAGs.............................................................................................................
Relevant Management Objects........................................................................................................ 149 14 Port Monitoring.............................................................................................. 155 Configuring Port Monitoring.............................................................................................................155 Important Points to Remember..........................................................................................
Entity MIBS........................................................................................................................................ 190 Example of Sample Entity MIBS outputs.................................................................................... 190 Standard VLAN MIB........................................................................................................................... 192 Enhancements.............................................................................
Configuring Storm Control............................................................................................................... 217 19 System Time and Date................................................................................... 218 Setting the Time for the Software Clock..........................................................................................218 Setting the Timezone..........................................................................................................
Creating an FCoE Map .....................................................................................................................244 Applying a DCB Map on Server-Facing Ethernet Ports...................................................................245 Applying an FCoE Map on Fabric-Facing FC Ports......................................................................... 245 Sample Configuration.........................................................................................................
24 Debugging and Diagnostics.........................................................................296 Debugging Aggregator Operation................................................................................................... 296 All interfaces on the Aggregator are operationally down......................................................... 296 Broadcast, unknown multicast, and DLF packets switched at a very low rate.........................297 Flooded packets on all VLANs are received on a server.
About this Guide 1 This guide describes the supported protocols and software features, and provides configuration instructions and examples, for the Dell Networking M I/O Aggregator running Dell Networking OS version 9.4(0.0). The M I/O Aggregator is installed in a Dell PowerEdge M1000e Enclosure. For information about how to install and perform the initial switch configuration, refer to the Getting Started Guides on the Dell Support website at http://www.dell.
Related Documents For more information about the Dell PowerEdge M I/O Aggregator MXL 10/40GbE Switch IO Module, refer to the following documents: • Dell Networking OS Command Line Reference Guide for the M I/O Aggregator • Dell Networking OS Getting Started Guide for the M I/O Aggregator • Release Notes for the M I/O Aggregator 14 About this Guide
Before You Start 2 To install the Aggregator in a Dell PowerEdge M1000e Enclosure, use the instructions in the Dell PowerEdge M I/O Aggregator Getting Started Guide that is shipped with the product. The I/O Aggregator (also known as the Aggregator) installs with zero-touch configuration. After you power it on, an Aggregator boots up with default settings and auto-configures with software features enabled.
Default Settings The I/O Aggregator provides zero-touch configuration with the following default configuration settings: • default user name (root) • password (calvin) • VLAN (vlan1) and IP address for in-band management (DHCP) • IP address for out-of-band (OOB) management (DHCP) • read-only SNMP community name (public) • broadcast storm control (enabled in Standalone and stacking modes and disabled in VLT mode) • IGMP multicast flooding (enabled) • VLAN configuration (in Standalone mode, all port
• Internet small computer system interface (iSCSI) optimization. • Internet group management protocol (IGMP) snooping. • Jumbo frames: Ports are set to a maximum MTU of 12,000 bytes by default. • Link tracking: Uplink-state group 1 is automatically configured. In uplink state-group 1, server-facing ports auto-configure as downstream interfaces; the uplink port-channel (LAG 128) auto-configures as an upstream interface.
An Aggregator also detects iSCSI storage devices on all interfaces and auto-configures to optimize performance. Performance optimization operations are applied automatically, such as jumbo frame support on all interfaces, disabling storm control, and enabling spanning-tree PortFast on interfaces connected to an iSCSI EqualLogic (EQL) storage device. Link Aggregation All uplink ports are configured in a single LAG (LAG 128).
Server-Facing LAGs The tagged VLAN membership of a server-facing LAG is automatically configured based on the server-facing ports that are members of the LAG. The untagged VLAN of a server-facing LAG is configured based on the untagged VLAN to which the lowest numbered server-facing port in the LAG belongs. NOTE: Dell Networking recommends configuring the same VLAN membership on all LAG member ports. Where to Go From Here You can customize the Aggregator for use in your data center network as necessary.
3 Configuration Fundamentals The Dell Networking Operating System (OS) command line interface (CLI) is a text-based interface you can use to configure interfaces and protocols. The CLI is structured in modes for security and management purposes. Different sets of commands are available in each mode, and you can limit user access to modes using privilege levels. In Dell Networking OS, after you enable a command, it is entered into the running configuration file.
• EXEC Privilege mode has commands to view configurations, clear counters, manage configuration files, run diagnostics, and enable or disable debug operations. The privilege level is 15, which is unrestricted. You can configure a password for this mode. • CONFIGURATION mode allows you to configure security features, time settings, set logging and SNMP functions, configure static ARP and MAC addresses, and set line cards on the system.
CLI Command Mode Prompt Access Command CONFIGURATION Dell(conf)# • • From EXEC privilege mode, enter the configure command. From every mode except EXEC and EXEC Privilege, enter the exit command. NOTE: Access all of the following modes from CONFIGURATION mode.
Dell(conf)# Undoing Commands When you enter a command, the command line is added to the running configuration file (running-config). To disable a command and remove it from the running-config, enter the no command, then the original command. For example, to delete an IP address configured on an interface, use the no ip address ip-address command. NOTE: Use the help or ? command as described in Obtaining Help.
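As an illustration, the following sequence configures an IP address on an interface and then removes it with the no form of the command (the interface and address shown are hypothetical):

```
Dell(conf)# interface tengigabitethernet 0/5
Dell(conf-if-te-0/5)# ip address 10.10.10.1/24
Dell(conf-if-te-0/5)# no ip address 10.10.10.1/24
```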
• Enter [space]? after a keyword lists all of the keywords that can follow the specified keyword. Dell(conf)#clock ? summer-time Configure summer (daylight savings) time timezone Configure time zone Dell(conf)#clock Entering and Editing Commands Notes for entering commands. • The CLI is not case-sensitive. • You can enter partial CLI keywords. – Enter the minimum number of letters to uniquely identify a command.
Short-Cut Key Combination Action Esc F Moves the cursor forward one word. Esc D Deletes all characters from the cursor to the end of the word. Command History Dell Networking OS maintains a history of previously-entered commands for each mode. For example: • When you are in EXEC mode, the UP and DOWN arrow keys display the previously-entered EXEC mode commands. • When you are in CONFIGURATION mode, the UP and DOWN arrow keys recall the previously-entered CONFIGURATION mode commands.
Admin mode is On Admin is enabled Local is enabled Link Delay 65535 pause quantum Dell(conf)# The find keyword displays the output of the show command beginning from the first occurrence of specified text. The following example shows this command used in combination with the show linecard all command.
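A minimal sketch of the find filter follows; the filter argument here is illustrative, and the resulting output begins at the first line containing the specified text:

```
Dell#show linecard all | find 2
```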
Data Center Bridging (DCB) 4 On an I/O Aggregator, data center bridging (DCB) features are auto-configured in standalone mode. You can display information on DCB operation by using show commands. NOTE: DCB features are not supported on an Aggregator in stacking mode.
• Data Center Bridging Exchange (DCBx) protocol NOTE: In Dell Networking OS version 9.4.0.x, only the PFC, ETS, and DCBx features are supported in data center bridging. Priority-Based Flow Control In a data center network, priority-based flow control (PFC) manages large bursts of one traffic type in multiprotocol links so that it does not affect other traffic types and no frames are lost due to congestion. When PFC detects congestion on a queue for a specified priority, it sends a pause frame for the 802.
• By default, PFC is enabled on an interface with no dot1p priorities configured. You can configure the PFC priorities if the switch negotiates with a remote peer using DCBX. During DCBX negotiation with a remote peer: – DCBx communicates with the remote peer by link layer discovery protocol (LLDP) type, length, value (TLV) to determine current policies, such as PFC support and enhanced transmission selection (ETS) BW allocation.
The range (in quanta) is from 712 to 65535. The default is 45556 quantum in link delay. 3. Configure the CoS traffic to be stopped for the specified delay. DCB INPUT POLICY mode pfc priority priority-range Enter the 802.1p values of the frames to be paused. The range is from 0 to 7. The default is none. Maximum number of loss less queues supported on the switch: 2. Separate priority values with a comma. Specify a priority range with a dash, for example: pfc priority 1,3,5-7. 4.
By applying a DCB input policy with PFC enabled, you enable PFC operation on ingress port traffic. To achieve complete lossless handling of traffic, also enable PFC on all DCB egress ports or configure the dot1p priority-queue assignment of PFC priorities to lossless queues. To remove a DCB input policy, including the PFC configuration it contains, use the no dcb-input policy-name command in INTERFACE Configuration mode.
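Putting the preceding steps together, the following is a hedged sketch of a DCB input policy that enables PFC on two priorities and applies it to a port; the policy name, priority values, port, and CLI prompts are illustrative:

```
Dell(conf)# dcb-input pfc-policy-1
Dell(conf-dcb-in)# pfc link-delay 45556
Dell(conf-dcb-in)# pfc priority 4,5
Dell(conf-dcb-in)# exit
Dell(conf)# interface tengigabitethernet 0/3
Dell(conf-if-te-0/3)# dcb-input pfc-policy-1
```

Note that only two lossless queues are supported on the switch, so at most two PFC priorities map to lossless queues.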
Although you can configure strict-priority queue scheduling for a priority group, ETS introduces flexibility that allows the bandwidth allocated to each priority group to be dynamically managed according to the amount of LAN, storage, and server traffic in a flow. Unused bandwidth is dynamically allocated to prioritized priority groups. Traffic is queued according to its 802.1p priority assignment, while flexible bandwidth allocation and the configured queue-scheduling for a priority group is supported.
other priority groups so that the sum of the bandwidth use is 100%. If priority group bandwidth use exceeds 100%, all configured priority group bandwidth is decremented based on the configured percentage ratio until all priority group bandwidth use is 100%. If priority group bandwidth usage is less than or equal to 100% and any default priority groups exist, a minimum of 1% bandwidth use is assigned by decreasing 1% of bandwidth from the other priority groups until priority group bandwidth use is 100%.
Step Task Command Command Mode 1 Enter global configuration mode to create a DCB map or edit PFC and ETS settings. dcb-map name CONFIGURATION 2 Configure the PFC setting (on or off) and the ETS bandwidth percentage allocated to traffic in each priority group, or whether the priority group traffic should be handled with strict priority scheduling. You can enable PFC on a maximum of two priority queues on an interface. Enabling PFC for dot1p priorities makes the corresponding port queue lossless.
ETS settings, and apply the new map to the interfaces to override the previous DCB map settings. Then, delete the original dot1p priority-priority group mapping. If you delete the dot1p priority-priority group mapping (no priority pgid command) before you apply the new DCB map, the default PFC and ETS parameters are applied on the interfaces. This change may create a DCB mismatch with peer DCB devices and interrupt network operation.
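A DCB map built from the steps above might look like the following sketch, which places FCoE traffic (dot1p priority 3) in a lossless priority group; the map name, bandwidth split, and priority-group syntax details are illustrative and should be verified against the Command Line Reference Guide:

```
Dell(conf)# dcb-map SAN_DCB_MAP
Dell(conf-dcbmap-SAN_DCB_MAP)# priority-group 0 bandwidth 60 pfc off
Dell(conf-dcbmap-SAN_DCB_MAP)# priority-group 1 bandwidth 40 pfc on
Dell(conf-dcbmap-SAN_DCB_MAP)# priority-pgid 0 0 0 1 0 0 0 0
```

The priority-pgid entry assigns dot1p priorities 0 through 7 (left to right) to priority-group IDs, so only dot1p 3 is mapped to the lossless group 1 here.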
Step Task Command Command Mode fortygigabitEthernet slot/port} 2 Enable PFC on specified priorities. Range: 0-7. Default: None. INTERFACE pfc priority priority-range Maximum number of lossless queues supported on an Ethernet port: 2. Separate priority values with a comma. Specify a priority range with a dash, for example: pfc priority 3,5-7 1.
Step Task Command Command Mode 5 Apply the DCB map, created to disable the PFC operation, on the interface dcb-map {name | default} INTERFACE 6 Configure the port queues that still function as no-drop queues for lossless traffic. pfc no-drop queues queue-range INTERFACE The maximum number of lossless queues globally supported on a port is 2.
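For example, steps 5 and 6 above might be applied to a server-facing port as in the following sketch; the map name, port, and queue IDs are illustrative:

```
Dell(conf)# interface tengigabitethernet 0/7
Dell(conf-if-te-0/7)# dcb-map PFC_OFF
Dell(conf-if-te-0/7)# pfc no-drop queues 1,2
```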
Data Center Bridging in a Traffic Flow The following figure shows how DCB handles a traffic flow on an interface. Figure 3. DCB PFC and ETS Traffic Handling Enabling Data Center Bridging DCB is automatically configured when you configure FCoE or iSCSI optimization. Data center bridging supports converged enhanced Ethernet (CEE) in a data center network. DCB is disabled by default. It must be enabled to support CEE.
To enable DCB with PFC buffers on a switch, enter the following commands, save the configuration, and reboot the system to allow the changes to take effect. 1. Enable DCB. CONFIGURATION mode dcb enable 2. Set PFC buffering on the DCB stack unit. CONFIGURATION mode dcb stack-unit all pfc-buffering pfc-ports 64 pfc-queues 2 NOTE: To save the pfc buffering configuration changes, save the configuration and reboot the system.
interface TenGigabitEthernet 0/4 mtu 12000 portmode hybrid switchport auto vlan flowcontrol rx on tx off dcb-map DCB_MAP_PFC_OFF no keepalive ! protocol lldp advertise management-tlv management-address system-name dcbx port-role auto-downstream no shutdown Dell# When DCB is Enabled When an interface receives a DCBx protocol packet, it automatically enables DCB and disables link-level flow control. The dcb-map and flow control configurations are removed as shown in the following example.
To reconfigure the Aggregator so that all interfaces come up with DCB disabled and link-level flow control enabled, use the no dcb enable on-next-reload command. PFC buffer memory is automatically freed. Enabling Auto-DCB-Enable Mode on Next Reload To configure the Aggregator so that all interfaces come up in auto-DCB-enable mode with DCB disabled and flow control enabled, use the dcb enable auto-detect on-next-reload command.
dot1p Value in the Incoming Frame    Egress Queue Assignment
2                                    0
3                                    1
4                                    2
5                                    3
6                                    3
7                                    3
How Priority-Based Flow Control is Implemented Priority-based flow control provides a flow control mechanism based on the 802.1p priorities in converged Ethernet traffic received on an interface and is enabled by default. As an enhancement to the existing Ethernet pause mechanism, PFC stops traffic transmission for specified priorities (CoS values) without impacting other priority classes.
ETS is implemented on an Aggregator as follows: • Traffic in priority groups is assigned to strict-queue or WERR scheduling in an ETS output policy and is managed using the ETS bandwidth-assignment algorithm. Dell Networking OS de-queues all frames of strict-priority traffic before servicing any other queues. A queue with strict-priority traffic can starve other queues in the same port. • ETS-assigned bandwidth allocation and scheduling apply only to data queues, not to control queues.
– In the CEE version, the priority group/traffic class group (TCG) ID 15 represents a non-ETS priority group. Any priority group configured with a scheduler type is treated as a strict-priority group and is given the priority-group (TCG) ID 15. – The CIN version supports two types of strict-priority scheduling: * Group strict priority: Allows a single priority flow in a priority group to increase its bandwidth usage to the bandwidth total of the priority group.
DCBx Port Roles The following DCBx port roles are auto-configured on an Aggregator to propagate DCB configurations learned from peer DCBx devices internally to other switch ports: Auto-upstream The port advertises its own configuration to DCBx peers and receives its configuration from DCBx peers (ToR or FCF device). The port also propagates its configuration to other ports on the switch. The first auto-upstream port that is capable of receiving a peer configuration is elected as the configuration source.
Default DCBX port role: Uplink ports are auto-configured in an auto-upstream role. Server-facing ports are auto-configured in an auto-downstream role. NOTE: On a DCBx port, application priority TLV advertisements are handled as follows: • The application priority TLV is transmitted only if the priorities in the advertisement match the configured PFC priorities on the port.
– No other port is the configuration source. – The port role is auto-upstream. – The port is enabled with link up and DCBx enabled. – The port has performed a DCBx exchange with a DCBx peer. – The switch is capable of supporting the received DCB configuration values through either a symmetric or asymmetric parameter exchange. A newly elected configuration source propagates configuration changes received from a peer to the other auto-configuration ports.
DCBX Example The following figure shows how DCBX is used on an Aggregator installed in a Dell PowerEdge M I/O Aggregator chassis in which servers are also installed. The external 40GbE ports on the base module (ports 33 and 37) of two switches are used for uplinks configured as DCBx auto-upstream ports. The Aggregator is connected to third-party, top-of-rack (ToR) switches through 40GbE uplinks. The ToR switches are part of a Fibre Channel storage network.
DCBX Prerequisites and Restrictions The following prerequisites and restrictions apply when you configure DCBx operation on a port: • DCBX requires LLDP in both send (TX) and receive (RX) mode to be enabled on a port interface. If multiple DCBX peer ports are detected on a local DCBX interface, LLDP is shut down. • The CIN version of DCBx supports only PFC, ETS, and FCOE; it does not support iSCSI, backward congestion management (BCN), logical link down (LLD), and network interface virtualization (NIV).
– tlv: enables traces for DCBx TLVs. Verifying the DCB Configuration To display DCB configurations, use the following show commands. Table 3. Displaying DCB Configurations Command Output show qos dot1p-queue mapping Displays the current 802.1p priority-queue mapping. show qos dcb-map map-name Displays the DCB parameters configured in a specified DCB map. show dcb [stack-unit unit-number] Displays the data center bridging status, number of PFC-enabled ports, and number of PFC-enabled queues.
Example of the show dcb Command Dell# show dcb stack-unit 0 port-set 0 DCB Status PFC Queue Count Total Buffer[lossy + lossless] (in KB) PFC Total Buffer (in KB) PFC Shared Buffer (in KB) PFC Available Buffer (in KB) : : : : : : Enabled 2 3822 1912 832 1080 Example of the show interface pfc statistics Command Dell#show interfaces tengigabitethernet 0/3 pfc statistics Interface TenGigabitEthernet 0/3 Priority Rx XOFF Frames Rx Total Frames Tx Total Frames --------------------------------------------------
PFC Link Delay 45556 pause quanta Application Priority TLV Parameters : -------------------------------------FCOE TLV Tx Status is disabled ISCSI TLV Tx Status is disabled Local FCOE PriorityMap is 0x8 Local ISCSI PriorityMap is 0x10 Remote FCOE PriorityMap is 0x8 Remote ISCSI PriorityMap is 0x8 0 Input TLV pkts, 1 Output TLV pkts, 0 Error pkts, 0 Pause Tx pkts, 0 Pause Rx pkts 2 Input Appln Priority TLV pkts, 0 Output Appln Priority TLV pkts, 0 Error Appln Priority TLV Pkts The following table describes th
Fields Description • Symmetric: for an IEEE version TLV Tx Status Status of PFC TLV advertisements: enabled or disabled. PFC Link Delay Link delay (in quanta) used to pause specified priority traffic. Application Priority TLV: FCOE TLV Tx Status Status of FCoE advertisements in application priority TLVs from local DCBx port: enabled or disabled. Application Priority TLV: ISCSI TLV Tx Status Status of ISCSI advertisements in application priority TLVs from local DCBx port: enabled or disabled.
TC-grp 0 1 2 3 4 5 6 7 Priority# 0,1,2,3,4,5,6,7 Priority# Bandwidth TSA 0 1 2 3 4 5 6 7 Remote Parameters: ------------------Remote is disabled Local Parameters : -----------------Local is enabled TC-grp Priority# 0 0,1,2,3,4,5,6,7 1 2 3 4 5 6 7 Bandwidth 100% 0% 0% 0% 0% 0% 0% 0% TSA ETS ETS ETS ETS ETS ETS ETS ETS 13% 13% 13% 13% 12% 12% 12% 12% ETS ETS ETS ETS ETS ETS ETS ETS Bandwidth 100% 0% 0% 0% 0% 0% 0% 0% TSA ETS ETS ETS ETS ETS ETS ETS ETS Priority# Bandwidth 0 13% 1 13% 2 13% 3 13% 4 12
1 2 3 4 5 6 7 0% 0% 0% 0% 0% 0% 0% ETS ETS ETS ETS ETS ETS ETS Bandwidth 100% 0% 0% 0% 0% 0% 0% 0% TSA ETS ETS ETS ETS ETS ETS ETS ETS Remote Parameters: ------------------Remote is disabled Local Parameters : -----------------Local is enabled PG-grp Priority# 0 0,1,2,3,4,5,6,7 1 2 3 4 5 6 7 Oper status is init ETS DCBX Oper status is Down State Machine Type is Asymmetric Conf TLV Tx Status is enabled Reco TLV Tx Status is enabled 0 Input Conf TLV Pkts, 0 Output Conf TLV Pkts, 0 Error Conf TLV Pkts 0
Field Description Admin mode is enabled on the remote port for DCBx exchange, the Willing bit received in ETS TLVs from the remote peer is included. Local Parameters ETS configuration on local port, including Admin mode (enabled when a valid TLV is received from a peer), priority groups, assigned dot1p priorities, and bandwidth allocation. Operational status (local port) Port state for current operational ETS configuration: • • • Init: Local ETS configuration parameters were exchanged with peer.
Admin mode is On Admin is enabled, Priority list is 4-5 Local is enabled, Priority list is 4-5 Link Delay 45556 pause quantum 0 Pause Tx pkts, 0 Pause Rx pkts Example of the show stack-unit all stack-ports all ets details Command Dell# show stack-unit all stack-ports all ets details Stack unit 0 stack port all Max Supported TC Groups is 4 Number of Traffic Classes is 1 Admin mode is on Admin Parameters: -------------------Admin is enabled TC-grp Priority# Bandwidth TSA --------------------------------------
-Interface TenGigabitEthernet 0/4 Remote Mac Address 00:00:00:00:00:11 Port Role is Auto-Upstream DCBX Operational Status is Enabled Is Configuration Source? TRUE Local DCBX Compatibility mode is CEE Local DCBX Configured mode is CEE Peer Operating version is CEE Local DCBX TLVs Transmitted: ErPfi Local DCBX Status ----------------DCBX Operational Version is 0 DCBX Max Version Supported is 0 Sequence Number: 2 Acknowledgment Number: 2 Protocol State: In-Sync Peer DCBX Status: ---------------DCBX Operational
Field Description Local DCBx Compatibility mode DCBx version accepted in a DCB configuration as compatible. In auto-upstream mode, a port can only receive a DCBx version supported on the remote peer. Local DCBx Configured mode DCBx version configured on the port: CEE, CIN, IEEE v2.5, or Auto (port auto-configures to use the DCBx version received from a peer). Peer Operating version DCBx version that the peer uses to exchange DCB parameters.
Field Description PG TLV Statistics: Output PG TLV Pkts Number of PG TLVs transmitted. PG TLV Statistics: Error PG TLV Pkts Number of PG error packets received. Application Priority TLV Statistics: Input Appln Priority TLV pkts Number of Application TLVs received. Application Priority TLV Statistics: Output Appln Priority TLV pkts Number of Application TLVs transmitted. Application Priority TLV Statistics: Error Appln Priority TLV Pkts Number of Application TLV error packets received.
Dynamic Host Configuration Protocol (DHCP) 5 The Aggregator is auto-configured to operate as a DHCP client. The DHCP server, DHCP relay agent, and secure DHCP features are not supported. The dynamic host configuration protocol (DHCP) is an application layer protocol that dynamically assigns IP addresses and other configuration parameters to network end-stations (hosts) based on configuration policies determined by network administrators.
DHCPDECLINE A client sends this message to the server in response to a DHCPACK if the configuration parameters are unacceptable; for example, if the offered address is already in use. In this case, the client starts the configuration process over by sending a DHCPDISCOVER. DHCPINFORM A client uses this message to request configuration parameters when it has been assigned an IP address manually rather than with DHCP. The server responds by unicast.
Dell Networking OS Behavior: DHCP is implemented in Dell Networking OS based on RFC 2131 and 3046. Debugging DHCP Client Operation To enable debug messages for DHCP client operation, enter the following debug commands: • Enable the display of log messages for all DHCP packets sent and received on DHCP client interfaces.
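The debug commands take a form like the following sketch; the exact keyword spelling is an assumption and should be verified against the Command Line Reference Guide:

```
Dell# debug ip dhcp client events
Dell# debug ip dhcp client packets
```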
1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :Transitioned to state START 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :DHCP DISABLED CMD sent to FTOS in state START Dell# release dhcp int Ma 0/0 Dell#1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :DHCP RELEASE CMD Received in state BOUND 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_PKT: DHCP RELEASE sent in Interface Ma 0
Dell# renew dhcp interface tengigabitethernet 0/1 Dell#May 27 15:55:28: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 : DHCP RENEW CMD Received in state STOPPED May 27 15:55:31: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 : Transitioned to state SELECTING May 27 15:55:31: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_PKT: DHCP DISCOVER sent in Interface Ma 0/0 May 27 15:55:31: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_PKT: Rec
address remains in the running configuration for the interface. To acquire a new IP address, enter either the renew dhcp command at the EXEC privilege level or the ip address dhcp command at the interface configuration level. If you enter renew dhcp command on an interface already configured with a dynamic IP address, the lease time of the dynamically acquired IP address is renewed. Important: To verify the currently configured dynamic IP address on an interface, enter the show ip dhcp lease command.
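For example, to acquire a dynamic address on the out-of-band management interface and then inspect the lease, a session might look like the following sketch (interface numbering is illustrative):

```
Dell(conf)# interface managementethernet 0/0
Dell(conf-if-ma-0/0)# ip address dhcp
Dell(conf-if-ma-0/0)# end
Dell# show ip dhcp lease
```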
DHCP Packet Format and Options DHCP uses the user datagram protocol (UDP) as its transport protocol. The server listens on port 67 and transmits to port 68; the client listens on port 68 and transmits to port 67. The configuration parameters are carried as options in the DHCP packet in Type, Length, Value (TLV) format; many options are specified in RFC 2132.
Option Number and Description • 5: DHCPACK • 6: DHCPNACK • 7: DHCPRELEASE • 8: DHCPINFORM Parameter Request Option 55 List Clients use this option to tell the server which parameters it requires. It is a series of octets where each octet is DHCP option code. Renewal Time Option 58 Specifies the amount of time after the IP address is granted that the client attempts to renew its lease with the original server.
• Insert Option 82 into DHCP packets. CONFIGURATION mode int ma 0/0 ip add dhcp relay information-option remote-id For routers between the relay agent and the DHCP server, enter the trust-downstream option. Releasing and Renewing DHCP-based IP Addresses On an Aggregator configured as a DHCP client, you can release a dynamically-assigned IP address without removing the DHCP client operation on the interface. To manually acquire a new IP address from the DHCP server, use the following command.
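Consistent with the debugging examples earlier in this chapter, the release and renew operations are entered from EXEC Privilege mode; the interface shown is illustrative:

```
Dell# release dhcp interface managementethernet 0/0
Dell# renew dhcp interface managementethernet 0/0
```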
DHCPREQUEST DHCPDECLINE DHCPRELEASE DHCPREBIND DHCPRENEW DHCPINFORM 0 0 0 0 0 0 Example of the show ip dhcp lease Command Dell# show ip dhcp Interface Lease-IP Def-Router ServerId State Lease Obtnd At Lease Expires At ========= ======== ========= ======== ===== ============== ================ Ma 0/0 0.0.0.0/0 0.0.0.0 0.0.0.0 INIT -----NA--------NA---Vl 1 10.1.1.254/24 0.0.0.0 08-27-2011 04:33:39 Renew Time ========== ----NA---08-26-2011 16:21:50 70 10.1.1.
FIP Snooping 6 FIP snooping is auto-configured on an Aggregator in standalone mode. You can display information on FIP snooping operation and statistics by entering show commands. This chapter describes about the FIP snooping concepts and configuration procedures.
FIP provides functionality for discovering and logging in to an FCF. After discovering and logging in, FIP allows FCoE traffic to be sent and received between FCoE end-devices (ENodes) and the FCF. FIP uses its own EtherType and frame format. The following illustration of FIP discovery depicts the communication that occurs between an ENode server and an FCoE switch (FCF).
transmitted between an FCoE end-device and an FCF. An Ethernet bridge that provides these functions is called a FIP snooping bridge (FSB). On a FIP snooping bridge, ACLs are created dynamically as FIP login frames are processed. The ACLs are installed on switch ports configured for the following port modes: • ENode mode for server-facing ports • FCF mode for a trusted port directly connected to an FCF You must enable FIP snooping on an Aggregator and configure the FIP snooping parameters.
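As noted above, a FIP snooping bridge must treat FIP control frames and FCoE data frames differently: FIP frames are always parsed, while FCoE frames are permitted only after a successful login. The two are distinguished by EtherType (0x8914 for FIP, 0x8906 for FCoE, per FC-BB-5). The sketch below is illustrative only, not Dell OS code.

```python
# EtherType values from the FC-BB-5 standard: FCoE = 0x8906, FIP = 0x8914
ETH_FCOE, ETH_FIP = 0x8906, 0x8914

def classify(frame: bytes) -> str:
    """Classify a frame by the EtherType at offset 12 (after dst/src MAC)."""
    ethertype = int.from_bytes(frame[12:14], "big")
    if ethertype == ETH_FIP:
        return "FIP"     # control: discovery, FLOGI/FDISC, keep-alive, CVL
    if ethertype == ETH_FCOE:
        return "FCoE"    # data: encapsulated Fibre Channel frames
    return "other"

fip_frame = bytes(12) + (0x8914).to_bytes(2, "big")
print(classify(fip_frame))   # FIP
```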
Figure 8. FIP Snooping on an Aggregator The following sections describe how to configure the FIP snooping feature on a switch that functions as a FIP snooping bridge so that it can perform the following functions: • Perform FIP snooping (allowing and parsing FIP frames) globally on all VLANs or on a per-VLAN basis.
• Process FIP VLAN discovery requests and responses, advertisements, solicitations, FLOGI/FDISC requests and responses, FLOGO requests and responses, keep-alive packets, and clear virtual-link messages. FIP Snooping in a Switch Stack FIP snooping supports switch stacking as follows: • A switch stack configuration is synchronized with the standby stack unit. • Dynamic population of the FCoE database (ENode, Session, and FCF tables) is synchronized with the standby stack unit.
Bridge-to-FCF Links A port directly connected to an FCF is auto-configured in FCF mode. Initially, all FCoE traffic is blocked; only FIP frames are allowed to pass. FCoE traffic is allowed on the port only after a successful FLOGI request/response and confirmed use of the configured FC-MAP value for the VLAN.
Displaying FIP Snooping Information Use the show commands in the following table to display information on FIP snooping. Command Output show fip-snooping sessions [interface vlan vlan-id] Displays information on FIP-snooped sessions on all VLANs or a specified VLAN, including the ENode interface and MAC address, the FCF interface and MAC address, VLAN ID, FCoE MAC address and FCoE session ID number (FC-ID), worldwide node name (WWNN) and the worldwide port name (WWPN).
aa:bb:cc:00:00:00 aa:bb:cc:00:00:00 aa:bb:cc:00:00:00 aa:bb:cc:00:00:00 Te Te Te Te 0/42 0/42 0/42 0/42 FCoE MAC 0e:fc:00:01:00:01 0e:fc:00:01:00:02 0e:fc:00:01:00:03 0e:fc:00:01:00:04 0e:fc:00:01:00:05 FC-ID 01:00:01 01:00:02 01:00:03 01:00:04 01:00:05 aa:bb:cd:00:00:00 aa:bb:cd:00:00:00 aa:bb:cd:00:00:00 aa:bb:cd:00:00:00 Port WWPN 31:00:0e:fc:00:00:00:00 41:00:0e:fc:00:00:00:00 41:00:0e:fc:00:00:00:01 41:00:0e:fc:00:00:00:02 41:00:0e:fc:00:00:00:03 Te Te Te Te 0/43 0/43 0/43 0/43 100 100 100 100
Field Description ENode MAC MAC address of the ENode. ENode Interface Slot/ port number of the interface connected to the ENode. FCF MAC MAC address of the FCF. VLAN VLAN ID number used by the session. FC-ID Fibre Channel session ID assigned by the FCF. show fip-snooping fcf Command Example Dell# show fip-snooping fcf FCF MAC FCF Interface No.
Number of VN Port Keep Alive Number of Multicast Discovery Advertisement Number of Unicast Discovery Advertisement Number of FLOGI Accepts Number of FLOGI Rejects Number of FDISC Accepts Number of FDISC Rejects Number of FLOGO Accepts Number of FLOGO Rejects Number of CVL Number of FCF Discovery Timeouts Number of VN Port Session Timeouts Number of Session failures due to Hardware Config Dell(conf)# :3349 :4437 :2 :2 :0 :16 :0 :0 :0 :0 :0 :0 :0 Dell# show fip-snooping statistics int tengigabitethernet 0/1
show fip-snooping statistics Command Description Field Description Number of Vlan Requests Number of FIP-snooped VLAN request frames received on the interface. Number of VLAN Notifications Number of FIP-snooped VLAN notification frames received on the interface. Number of Multicast Discovery Solicits Number of FIP-snooped multicast discovery solicit frames received on the interface.
Number of VN Port Session Timeouts Number of VN port session timeouts that occurred on the interface. Number of Session failures due to Hardware Config Number of session failures due to hardware configuration that occurred on the interface. show fip-snooping system Command Example Dell# show fip-snooping system Global Mode FCOE VLAN List (Operational) FCFs Enodes Sessions : : : : : Enabled 1, 100 1 2 17 NOTE: NPIV sessions are included in the number of FIP-snooped sessions displayed.
FIP Snooping Example The following illustration shows an Aggregator used as a FIP snooping bridge for FCoE traffic between an ENode (server blade) and an FCF (ToR switch). The ToR switch operates as an FCF and FCoE gateway. Figure 9. FIP Snooping on an Aggregator In the above figure, DCBX and PFC are enabled on the Aggregator (FIP snooping bridge) and on the FCF ToR switch. On the FIP snooping bridge, DCBX is configured as follows: • A server-facing port is configured for DCBX in an auto-downstream role.
The DCBX configuration on the FCF-facing port is detected by the server-facing port and the DCB PFC configuration on both ports is synchronized. For more information about how to configure DCBX and PFC on a port, refer to FIP Snooping. After FIP packets are exchanged between the ENode and the switch, a FIP snooping session is established. ACLs are dynamically generated for FIP snooping on the FIP snooping bridge/switch.
Internet Group Management Protocol (IGMP) 7 On an Aggregator, IGMP snooping is auto-configured. You can display information on IGMP by using the show ip igmp command. Multicast is based on identifying many hosts by a single destination IP address. Hosts represented by the same IP address are a multicast group. The internet group management protocol (IGMP) is a Layer 3 multicast protocol that hosts use to join or leave a multicast group.
Figure 10. IGMP Version 2 Packet Format Joining a Multicast Group There are two ways that a host may join a multicast group: it may respond to a general query from its querier, or it may send an unsolicited report to its querier. • Responding to an IGMP Query. – One router on a subnet is elected as the querier. The querier periodically multicasts (to the all-systems multicast address 224.0.0.1) a general query to all hosts on the subnet.
• To enable filtering, routers must keep track of more state information, that is, the list of sources that must be filtered. An additional query type, the group-and-source-specific query, keeps track of state changes, while the group-specific and general queries still refresh existing state. • Reporting is more efficient and robust.
• The host’s third message indicates that it is only interested in traffic from sources 10.11.1.1 and 10.11.1.2. Because this request again prevents all other sources from reaching the subnet, the router sends another group-and-source query so that it can satisfy all other hosts. There are no other interested hosts, so the request is recorded. Figure 13.
Figure 14. IGMP Membership Queries: Leaving and Staying in Groups IGMP Snooping IGMP snooping is auto-configured on an Aggregator. Multicast packets are addressed with multicast MAC addresses, which represent a group of devices rather than one unique device. Switches forward multicast frames out of all ports in a VLAN by default, even if there are only a small number of interested hosts, resulting in a waste of bandwidth.
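The multicast MAC addresses mentioned above are derived from the group IP address: the low 23 bits of the IPv4 address are appended to the fixed prefix 01:00:5e. A sketch of the mapping (illustrative only, not Dell OS code):

```python
import ipaddress

def multicast_mac(group_ip: str) -> str:
    """Map an IPv4 multicast group address to its Ethernet MAC address
    (fixed prefix 01:00:5e plus the low 23 bits of the group IP)."""
    addr = int(ipaddress.IPv4Address(group_ip))
    low23 = addr & 0x7FFFFF
    return "01:00:5e:%02x:%02x:%02x" % (
        (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF)

print(multicast_mac("226.0.0.1"))   # 01:00:5e:00:00:01
print(multicast_mac("224.0.0.1"))   # 01:00:5e:00:00:01
```

Because only 23 of the 28 significant IP bits survive the mapping, 32 different group addresses share each multicast MAC, which is one reason snooping tracks IP-level group membership rather than relying on MAC addresses alone.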
Disabling Multicast Flooding If the switch receives a multicast packet that has an IP address of a group it has not learned (unregistered frame), the switch floods that packet out of all ports on the VLAN. To disable multicast flooding on all VLAN ports, enter the no ip igmp snooping flood command in global configuration mode. When multicast flooding is disabled, unregistered multicast data traffic is forwarded to only multicast router ports on all VLANs.
Source address 1.1.1.2 Member Ports: Po 1 Interface Group Uptime Expires Router mode Last reporter Last reporter mode Last report received Group source list Source address 1.1.1.2 Member Ports: Po 1 Dell# Uptime 00:00:21 Expires 00:01:48 Uptime 00:00:04 Expires 00:02:05 Vlan 1600 226.0.0.1 00:00:04 Never INCLUDE 1.1.1.
8 Interfaces This chapter describes 100/1000/10000 Mbps Ethernet, 10 Gigabit Ethernet, and 40 Gigabit Ethernet interface types, both physical and logical, and how to configure them with the Dell Networking Operating System (OS).
• • – The tagged Virtual Local Area Network (VLAN) membership of the uplink LAG is automatically configured based on the VLAN configuration of all server-facing ports (ports 1 to 32). The untagged VLAN used for the uplink LAG is always the default VLAN 1. – The tagged VLAN membership of a server-facing LAG is automatically configured based on the server-facing ports that are members of the LAG.
Internet address is not set Mode of IP Address Assignment : NONE DHCP Client-ID :tenG2730001e800ab01 MTU 12000 bytes, IP MTU 11982 bytes LineSpeed 1000 Mbit Flowcontrol rx off tx off ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show interface" counters 11:04:02 Queueing strategy: fifo Input Statistics: 0 packets, 0 bytes 0 64-byte pkts, 0 over 64-byte pkts, 0 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 0 Multicasts, 0 Broadcasts 0 runts, 0 giants, 0 throttl
Disabling and Re-enabling a Physical Interface By default, all port interfaces on an Aggregator are operationally enabled (no shutdown) to send and receive Layer 2 traffic. You can reconfigure a physical interface to shut it down by entering the shutdown command. To re-enable the interface, enter the no shutdown command. Step Command Syntax Command Mode Purpose 1. interface interface CONFIGURATION Enter the keyword interface followed by the type of interface and slot/port information: 2.
advertise management-tlv system-name dcbx port-role auto-downstream no shutdown Dell(conf-if-te-0/1)# To view the interfaces in Layer 2 mode, use the show interfaces switchport command in EXEC mode. Management Interfaces An Aggregator auto-configures with a DHCP-based IP address for in-band management on VLAN 1 and remote out-of-band (OOB) management. The IOM management interface has both a public IP and private IP address on the internal Fabric D interface.
Slot range: 0-0 To configure an IP address on a management interface, use either of the following commands in MANAGEMENT INTERFACE mode: Command Syntax Command Mode Purpose ip address ip-address mask INTERFACE Configure an IP address and mask on the interface. • ip address dhcp INTERFACE ip-address mask: enter an address in dotted-decimal format (A.B.C.D), the mask must be in /prefix format (/x) Acquire an IP address from the DHCP server.
VLAN Membership A virtual LAN (VLAN) is a logical broadcast domain or logical grouping of interfaces in a LAN in which all data received is kept locally and broadcast to all members of the group. In Layer 2 mode, VLANs move traffic at wire speed and can span multiple devices. Dell Networking OS supports up to 4093 port-based VLANs and one default VLAN, as specified in IEEE 802.1Q. VLANs provide the following benefits: • Improved security because you can isolate groups of users into different VLANs.
VLANs and Port Tagging To add an interface to a VLAN, it must be in Layer 2 mode. After you place an interface in Layer 2 mode, it is automatically placed in the default VLAN. Dell Networking OS supports IEEE 802.1Q tagging at the interface level to filter traffic. When you enable tagging, a tag header is added to the frame after the destination and source MAC addresses. The tag contains information that is preserved as the frame moves through the network.
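The tag insertion described above can be sketched as follows: a 4-byte 802.1Q header (TPID 0x8100 plus a tag control field carrying the priority and VLAN ID) is placed immediately after the 12 bytes of destination and source MAC addresses. This is an illustrative sketch, not Dell OS code.

```python
import struct

def add_vlan_tag(frame: bytes, vlan_id: int, pcp: int = 0) -> bytes:
    """Insert an 802.1Q tag after the destination and source MAC addresses."""
    tci = (pcp << 13) | (vlan_id & 0x0FFF)   # priority (3b), DEI (1b)=0, VID (12b)
    tag = struct.pack("!HH", 0x8100, tci)    # TPID 0x8100 + tag control information
    return frame[:12] + tag + frame[12:]

untagged = bytes(12) + b"\x08\x00" + b"payload"   # dst+src MAC, EtherType, data
tagged = add_vlan_tag(untagged, vlan_id=2)
print(len(tagged) - len(untagged))   # 4 (the tag adds four bytes)
print(tagged[12:14].hex())           # 8100
```

The 4-byte overhead added here is exactly why a tagged member needs a link MTU 4 bytes larger than an untagged member for the same IP MTU, as discussed in the MTU Size section later in this chapter.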
vlan-id specifies a tagged VLAN number. Range: 2-4094 To reconfigure an interface as a member of only specified untagged VLANs, enter the vlan untagged command in INTERFACE mode: Command Syntax Command Mode Purpose vlan untagged {vlan-id} INTERFACE Add the interface as an untagged member of one or more VLANs, where: vlan-id specifies an untagged VLAN number.
Adding an Interface to a Tagged VLAN The following example shows you how to add a tagged interface (Te 1/7) to a VLAN (VLAN 2). Enter the vlan tagged command to add interface Te 1/7 to VLAN 2, as shown below. Enter the show vlan command to verify that interface Te 1/7 is a tagged member of VLAN 2.
T Te 0/1-15,17-32 Dell# Port Channel Interfaces On an Aggregator, port channels are auto-configured as follows: • All 10GbE uplink interfaces (ports 33 to 56) are auto-configured to belong to the same 10GbE port channel (LAG 128). • Server-facing interfaces (ports 1 to 32) auto-configure in LAGs (1 to 127) according to the NIC teaming configuration on the connected servers. Port channel interfaces support link aggregation, as described in IEEE Standard 802.3ad.
Member ports of a LAG are added and programmed into hardware in a predictable order based on the port ID, instead of in the order in which the ports come up. With this implementation, load balancing yields predictable results across switch resets and chassis reloads. A physical interface can belong to only one port channel at a time. Each port channel must contain interfaces of the same interface type/speed. Port channels can contain a mix of 1000 or 10000 Mbps Ethernet interfaces.
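The predictable load balancing described above depends on ordering LAG members consistently before hashing a flow onto one of them. The sketch below is illustrative only; the hash function and flow key are assumptions, not the switch's actual algorithm. Because members are sorted by port ID first, the same flow maps to the same member regardless of the order in which ports came up.

```python
import zlib

def pick_member(flow_key: bytes, member_ports: list) -> str:
    """Hash a flow onto one LAG member. Sorting members first makes the
    choice independent of the order in which ports were added."""
    ordered = sorted(member_ports)
    return ordered[zlib.crc32(flow_key) % len(ordered)]

members = ["Te 0/44", "Te 0/41", "Te 0/43", "Te 0/42"]
flow = b"src=10.1.1.2,dst=10.1.1.254"
same = pick_member(flow, members) == pick_member(flow, list(reversed(members)))
print(same)   # True
```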
Displaying Port Channel Information To view the port channel’s status and channel members in a tabular format, use the show interfaces port-channel brief command in EXEC Privilege mode. Dell#show int port brief Codes: L - LACP Port-channel LAG 1 Dell# Mode Status L2 down Uptime 00:00:00 Ports Te 0/16 (Down) To display detailed information on a port channel, enter the show interfaces port-channel command in EXEC Privilege mode.
DHCP Client-ID :lag128001ec9f10358 MTU 12000 bytes, IP MTU 11982 bytes LineSpeed 10000 Mbit Members in this channel: Te 1/49(U) ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show interface" counters 00:14:06 Queueing strategy: fifo Input Statistics: 476 packets, 33180 bytes 414 64-byte pkts, 33 over 64-byte pkts, 29 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 476 Multicasts, 0 Broadcasts 0 runts, 0 giants, 0 throttles, 0 CRC, 0 overrun, 0 discarded Output St
Create a Single-Range Creating a Single-Range Bulk Configuration Dell(conf)# interface range tengigabitethernet 0/1 - 23 Dell(conf-if-range-te-0/1-23)# no shutdown Dell(conf-if-range-te-0/1-23)# Create a Multiple-Range Creating a Multiple-Range Prompt Dell(conf)#interface range tengigabitethernet 0/5 - 10 , tengigabitethernet 0/1 , vlan 1 Dell(conf-if-range-te-0/5-10,te-0/1,vl-1)# Exclude a Smaller Port Range If the interface range has multiple port ranges, the smaller port range is excluded fr
Monitor and Maintain Interfaces You can display interface statistics with the monitor interface command. This command displays an ongoing list of the interface status (up/down), number of packets, traffic statistics, etc. Command Syntax Command Mode Purpose monitor interface interface EXEC Privilege View interface statistics.
m - Change mode                  c - Clear screen
l - Page up                      a - Page down
T - Increase refresh interval    t - Decrease refresh interval
q - Quit
Maintenance Using TDR The time domain reflectometer (TDR) is supported on all Dell Networking switch/routers. TDR is a diagnostic tool for resolving link issues; it helps detect obvious open or short conditions within any of the four copper pairs. TDR sends a signal onto the physical cable and examines the reflection of the signal that returns.
a temporary stop in data transmission. A situation may arise in which a sending device transmits data faster than the destination device can accept it. The destination sends a pause frame back to the source, stopping the sender’s transmission for a period of time. The globally assigned 48-bit multicast address 01-80-C2-00-00-01 is used to send and receive pause frames.
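The pause frame described above is an IEEE 802.3x MAC Control frame: it is sent to the reserved multicast address 01-80-C2-00-00-01 with EtherType 0x8808, opcode 0x0001, and a pause time expressed in quanta of 512 bit times. A minimal sketch of its layout (illustrative only, not Dell OS code):

```python
import struct

# IEEE 802.3x PAUSE frame: reserved multicast DA, MAC Control EtherType 0x8808,
# opcode 0x0001, and a pause time in 512-bit-time quanta.
PAUSE_DA = bytes.fromhex("0180c2000001")

def build_pause_frame(src_mac: bytes, quanta: int) -> bytes:
    """Build the header and control fields of an 802.3x pause frame."""
    return PAUSE_DA + src_mac + struct.pack("!HHH", 0x8808, 0x0001, quanta)

frame = build_pause_frame(bytes.fromhex("000102030405"), quanta=0xFFFF)
print(frame[:6].hex(":"))   # 01:80:c2:00:00:01
print(len(frame))           # 18
```

A quanta value of 0 un-pauses the link immediately, while 0xFFFF requests the longest pause; real frames are then padded to the 64-byte Ethernet minimum.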
– negotiate: enable pause-negotiation with the egress port of the peer device. If the negotiate command is not used, pause-negotiation is disabled. NOTE: 40-Gigabit interfaces do not support pause-negotiation. The default is rx off. MTU Size The Aggregator auto-configures interfaces to use a maximum MTU size of 12,000 bytes. If a packet includes a Layer 2 header, the difference in bytes between the link MTU and IP MTU must be enough to include the Layer 2 header.
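The rule above can be checked with simple arithmetic: an untagged Ethernet header plus FCS takes 18 bytes, and one 802.1Q tag adds 4 more, so the link MTU must exceed the IP MTU by 18 or 22 bytes respectively. A quick sketch (illustrative only):

```python
def max_ip_mtu(link_mtu: int, tagged: bool) -> int:
    """IP MTU available once the Layer 2 overhead is subtracted from the
    link MTU: 18 bytes untagged, 22 bytes with one 802.1Q tag."""
    overhead = 22 if tagged else 18
    return link_mtu - overhead

print(max_ip_mtu(1518, tagged=False))   # 1500
print(max_ip_mtu(1522, tagged=True))    # 1500
print(max_ip_mtu(12000, tagged=False))  # 11982
```

The last line matches the Aggregator defaults shown in the show interfaces output elsewhere in this guide (MTU 12000 bytes, IP MTU 11982 bytes).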
For example, the VLAN contains tagged members with a link MTU of 1522 and an IP MTU of 1500 and untagged members with a link MTU of 1518 and an IP MTU of 1500. The VLAN’s Link MTU cannot be higher than 1518 bytes and its IP MTU cannot be higher than 1500 bytes. Auto-Negotiation on Ethernet Interfaces Setting Speed and Duplex Mode of Ethernet Interfaces By default, auto-negotiation of speed and duplex mode is enabled on 1GbE and 10GbE Ethernet interfaces on an Aggregator.
show interface status Command Example:
Dell#show interfaces status
Port     Description   Status   Speed   Duplex   Vlan
Te 0/1                 Down     Auto    Auto     --
Te 0/2                 Down     Auto    Auto     --
Te 0/3                 Down     Auto    Auto     --
Te 0/4                 Down     Auto    Auto     --
Te 0/5                 Down     Auto    Auto     --
Te 0/6                 Down     Auto    Auto     --
Te 0/7                 Down     Auto    Auto     --
Te 0/8                 Down     Auto    Auto     --
Te 0/9                 Down     Auto    Auto     --
Te 0/10                Down     Auto    Auto     --
Te 0/11                Down     Auto    Auto     --
Te 0/12                Down     Auto    Auto     --
Te 0/13                Down     Auto    Auto     --
[output omitted]
In the above example, several ports display “Auto” in the speed field, includ
Speed 100 interface, config not ignored Te 0/49) supported on this interface, config ignored Te 0/49) speed auto interfaceconfig mode Supported Not Not supported supported Not supported Error messages not thrown wherever it says not supported speed 1000 interfaceconfig mode Supported Supported Supported Supported speed 10000 interfaceconfig mode Supported Supported Not Supported Not supported Error messages not thrown wherever it says not supported negotiation auto interfaceconfig mode Suppo
forced-slave Dell(conf-if-autoneg)# Force port to slave mode Viewing Interface Information Displaying Non-Default Configurations The show [ip | running-config] interfaces configured command displays only interfaces that have non-default configurations. The following example illustrates the show commands that support the configured keyword.
Command Syntax Command Mode Purpose clear counters [interface] EXEC Privilege Clear the counters used in the show interface commands for all VRRP groups, VLANs, and physical interfaces or selected ones. Without an interface specified, the command clears all interface counters. • (OPTIONAL) Enter the following interface keywords and slot/port or number information: • For a Loopback interface, enter the keyword loopback followed by a number from 0 to 16383.
9 iSCSI Optimization An Aggregator enables internet small computer system interface (iSCSI) optimization with default iSCSI parameter settings (Default iSCSI Optimization Values) and is auto-provisioned to support: Detection and Auto configuration for Dell EqualLogic Arrays iSCSI Optimization: Operation To display information on iSCSI configuration and sessions, use show commands. iSCSI optimization enables quality-of-service (QoS) treatment for iSCSI traffic.
• iSCSI QoS — A user-configured iSCSI class of service (CoS) profile is applied to all iSCSI traffic. Classifier rules are used to direct the iSCSI data traffic to queues that can be given preferential QoS treatment over other data passing through the switch. Preferential treatment helps to avoid session interruptions during times of congestion that would otherwise cause dropped iSCSI packets. • iSCSI DCBx TLVs are supported.
Information Monitored in iSCSI Traffic Flows iSCSI optimization examines the following data in packets and uses the data to track the session and create the classifier entries that enable QoS treatment: • Initiator’s IP Address • Target’s IP Address • ISID (Initiator defined session identifier) • Initiator’s IQN (iSCSI qualified name) • Target’s IQN • Initiator’s TCP Port • Target’s TCP Port If no iSCSI traffic is detected for a session during a user-configurable aging period, the session data
• iSCSI LLDP monitoring starts to automatically detect EqualLogic arrays. iSCSI optimization requires LLDP to be enabled. LLDP is enabled by default when an Aggregator autoconfigures. The following message displays when you enable iSCSI on a switch and describes the configuration changes that are automatically performed: %STKUNIT0-M:CP %IFMGR-5-IFM_ISCSI_ENABLE: iSCSI has been enabled causing flow control to be enabled on all interfaces.
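The session tracking described earlier in this chapter (monitored flow fields plus a user-configurable aging period) can be sketched as a table keyed on those fields, with entries removed when they are not refreshed within the aging period. All names in this sketch are hypothetical and for illustration only; this is not Dell OS code.

```python
import time

class ISCSISessionTable:
    """Illustrative session table keyed on the monitored iSCSI flow fields.
    Entries not refreshed within the aging period are expired."""

    def __init__(self, aging_seconds: float):
        self.aging = aging_seconds
        self.sessions = {}   # (flow key) -> last-seen timestamp

    def observe(self, initiator_ip, target_ip, isid, init_port, tgt_port):
        """Record (or refresh) a session when matching traffic is seen."""
        key = (initiator_ip, target_ip, isid, init_port, tgt_port)
        self.sessions[key] = time.monotonic()

    def expire(self) -> int:
        """Remove sessions idle longer than the aging period; return the count."""
        now = time.monotonic()
        stale = [k for k, seen in self.sessions.items() if now - seen > self.aging]
        for k in stale:
            del self.sessions[k]
        return len(stale)

table = ISCSISessionTable(aging_seconds=0.01)
table.observe("10.0.0.1", "10.0.0.9", "400001370000", 49152, 3260)
time.sleep(0.02)
print(table.expire())   # 1
```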
---------------------------------------------------------------------------------------Target: iqn.2001-05.com.equallogic:0-8a0906-0f60c2002-0360018428d48c94-iom011 Initiator: iqn.1991-05.com.microsoft:win-x9l8v27yajg ISID: 400001370000. show iscsi sessions detailed Command Example Dell# show iscsi sessions detailed Session 0 : ----------------------------------------------------------------------------Target:iqn.2010-11.com.ixia:ixload:iscsi-TG1 Initiator:iqn.2010-11.com.ixia.
Isolated Networks for Aggregators 10 An Isolated Network is an environment in which servers can communicate only with the uplink interfaces and not with each other, even though they are part of the same VLAN. If the servers in the same chassis need to communicate with each other, they require non-isolated network connectivity between them or their traffic must be routed in the ToR switch. Isolated Networks can be enabled on a per-VLAN basis.
11 Link Aggregation The I/O Aggregator auto-configures with link aggregation groups (LAGs) as follows: • All uplink ports are automatically configured in a single port channel (LAG 128). • Server-facing LAGs are automatically configured if you configure a server for link aggregation control protocol (LACP)-based NIC teaming (Network Interface Controller (NIC) Teaming). No manual configuration is required to configure Aggregator ports in the uplink or a server-facing LAG.
number is assigned based on the first available number in the range from 1 to 127. For each unique remote system-id and port-key combination, a new LAG is formed and the port automatically becomes a member of the LAG. All ports with the same combination of system ID and port key automatically become members of the same LAG. Ports are automatically removed from the LAG if the NIC teaming configuration on a server-facing port changes or if the port goes operationally down.
LACP Example The following illustration shows how LACP operates in an Aggregator stack by auto-configuring the uplink LAG 128 for the connection to a top-of-rack (ToR) switch and a server-facing LAG for the connection to an installed server that you configured for LACP-based NIC teaming. Figure 17.
Link Aggregation Control Protocol (LACP) This chapter contains commands for Dell Networking's implementation of the link aggregation control protocol (LACP) for creating dynamic link aggregation groups (LAGs), known as port-channels in the Dell Networking Operating System (OS), based on the standards specified in the IEEE 802.3 Carrier sense multiple access with collision detection (CSMA/CD) access method and physical layer specifications. NOTE: For static LAG commands, refer to the Interfaces chapter.
You can add any physical interface to a port channel if the interface configuration is minimal. You can configure only the following commands on an interface if it is a member of a port channel: • description • shutdown/no shutdown • mtu • ip mtu (if the interface is on a Jumbo-enabled system) NOTE: A logical port channel interface cannot have flow control. Flow control can only be present on the physical interfaces if they are part of a port channel.
Hardware address is 00:01:e8:01:46:fa Internet address is 1.1.120.
To reassign an interface to a new port channel, use the following commands. 1. Remove the interface from the first port channel. INTERFACE PORT-CHANNEL mode no channel-member interface 2. Change to the second port channel INTERFACE mode. INTERFACE PORT-CHANNEL mode interface port-channel id number 3. Add the interface to the second port channel.
Configuring VLAN Tags for Member Interfaces To configure and verify VLAN tags for individual members of a port channel, perform the following: 1. Configure VLAN membership on individual ports INTERFACE mode Dell(conf-if-te-0/2)#vlan tagged 2,3-4 2. Use the switchport command in INTERFACE mode to enable Layer 2 data transmissions through an individual interface INTERFACE mode Dell(conf-if-te-0/2)#switchport 3. Verify the manually configured VLAN membership (show interfaces switchport interface command).
Configuring Auto LAG You can enable or disable auto LAG on the server-facing interfaces. By default, auto LAG is enabled. This functionality is supported on the Aggregator in Standalone, Stacking, and VLT modes. To configure auto LAG, use the following commands: 1. Enable the auto LAG on all the server ports. CONFIGURATION mode io-aggregator auto-lag enable Dell(config)# io-aggregator auto-lag enable To disable the auto LAG on all the server ports, use the no io-aggregator auto-lag enable command.
Mode of IPv4 Address Assignment : NONE DHCP Client-ID :f8b156071d8e MTU 12000 bytes, IP MTU 11982 bytes LineSpeed auto Auto-lag is disabled Flowcontrol rx on tx off ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show interface" counters 00:12:53 Queueing strategy: fifo Input Statistics: 0 packets, 0 bytes 0 64-byte pkts, 0 over 64-byte pkts, 0 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 0 Multicasts, 0 Broadcasts 0 runts, 0 giants, 0 throttles 0 CRC, 0 overru
example, based on your network deployment, you may want the uplink LAG bundle to be activated only if a certain number of member interface links is also in the up state. If you enable this setting, the uplink LAG bundle is brought up only when the specified minimum number of links are up and the LAG bundle is moved to the down state when the number of active links in the LAG becomes less than the specified number of interfaces.
Optimizing Traffic Disruption Over LAG Interfaces On IOA Switches in VLT Mode When you use the write memory command while an Aggregator operates in VLT mode, the VLT LAG configurations are saved in nonvolatile storage (NVS). By restoring the settings saved in NVS, the VLT ports come up quicker on the primary VLT node and traffic disruption is reduced. The delay in restoring the VLT LAG parameters is reduced (90 seconds by default) on the secondary VLT peer node before it becomes operational.
LAG bundle. The functionality to detect the working efficiency of the LAG bundle interfaces is automatically activated on all the port channels, except the port channel that is configured as a VLT interconnect link, during the booting of the switch. This functionality is supported on I/O Aggregators in stacking, standalone, and VLT modes and it is not supported in programmable MUX (PMUX) mode. By default, this capability is enabled on all of the port channels set up on the switch.
show interfaces port-channel 128 Command Example Dell# show interfaces port-channel 128 Port-channel 128 is up, line protocol is up Created by LACP protocol Hardware address is 00:01:e8:e1:e1:c1, Current address is 00:01:e8:e1:e1:c1 Interface index is 1107755136 Minimum number of links to bring Port-channel up is 1 Internet address is not set Mode of IP Address Assignment : NONE DHCP Client-ID :lag1280001e8e1e1c1 MTU 12000 bytes, IP MTU 11982 bytes LineSpeed 40000 Mbit Members in this channel: Te 0/41(U) Te
Oper: State ADEGIKNP Key 128 Priority 32768 Partner Admin: State BDFHJLMP Key 0 Priority 0 Oper: State ACEGIKNP Key 128 Priority 32768 Port Te 0/43 is enabled, LACP is enabled and mode is lacp Port State: Bundle Actor Admin: State ADEHJLMP Key 128 Priority 32768 Oper: State ADEGIKNP Key 128 Priority 32768 Partner Admin: State BDFHJLMP Key 0 Priority 0 Oper: State ACEGIKNP Key 128 Priority 32768 Port Te 0/44 is enabled, LACP is enabled and mode is lacp Port State: Bundle Actor Admin: State ADEHJLMP Key 128 P
Port Te 0/53 is disabled, LACP is disabled and mode is lacp Port State: Bundle Actor Admin: State ADEHJLMP Key 128 Priority 32768 Oper: State ADEHJLMP Key 128 Priority 32768 Partner is not present Port Te 0/54 is disabled, LACP is disabled and mode is lacp Port State: Bundle Actor Admin: State ADEHJLMP Key 128 Priority 32768 Oper: State ADEHJLMP Key 128 Priority 32768 Partner is not present Port Te 0/55 is disabled, LACP is disabled and mode is lacp Port State: Bundle Actor Admin: State ADEHJLMP Key 128 Pri
show lacp 1 Command Example Dell# show lacp 1 Port-channel 1 admin up, oper up, mode lacp Actor System ID: Priority 32768, Address 0001.e8e1.e1c3 Partner System ID: Priority 65535, Address 24b6.fd87.
Layer 2 12 The Aggregator supports CLI commands to manage the MAC address table: • Clearing the MAC Address Entries • Displaying the MAC Address Table The Aggregator auto-configures with support for Network Interface Controller (NIC) Teaming. NOTE: On an Aggregator, all ports are configured by default as members of all (4094) VLANs, including the default VLAN. All VLANs operate in Layer 2 mode.
Displaying the MAC Address Table To display the MAC address table, use the following command. • Display the contents of the MAC address table. EXEC Privilege mode NOTE: This command is available only in PMUX mode. show mac-address-table [address | aging-time [vlan vlan-id]| count | dynamic | interface | static | vlan] – address: displays the specified entry. – aging-time: displays the configured aging-time.
Figure 18. Redundant NOCs with NIC Teaming MAC Address Station Move When you use NIC teaming, consider that the server MAC address is originally learned on Port 0/1 of the switch (see figure below). If the NIC fails, the same MAC address is learned on Port 0/5 of the switch. The MAC address is disassociated with one port and re-associated with another in the ARP table; in other words, the ARP entry is “moved”. The Aggregator is auto-configured to support MAC Address station moves.
Figure 19. MAC Address Station Move MAC Move Optimization Station-move detection takes 5000ms because this is the interval at which the detection algorithm runs.
Link Layer Discovery Protocol (LLDP) 13 Link layer discovery protocol (LLDP) advertises connectivity and management from the local station to the adjacent stations on an IEEE 802 LAN. LLDP facilitates multi-vendor interoperability by using standard management tools to discover and make available a physical topology for network management. The Dell Networking operating software implementation of LLDP is based on IEEE standard 802.1AB.
Figure 20. Type, Length, Value (TLV) Segment TLVs are encapsulated in a frame called an LLDP data unit (LLDPDU), which is transmitted from one LLDP-enabled device to its LLDP-enabled neighbors. LLDP is a one-way protocol. LLDP-enabled devices (LLDP agents) can transmit and/or receive advertisements, but they cannot solicit and do not respond to advertisements. There are five types of TLVs (as shown in the following table). All types are mandatory in the construction of an LLDPDU except Optional TLVs.
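A TLV as described above packs a 7-bit type and a 9-bit length into its first two octets, followed by the value itself. The sketch below (illustrative only, not Dell OS code) encodes and decodes that header:

```python
import struct

def encode_tlv(tlv_type: int, value: bytes) -> bytes:
    """Encode an LLDP TLV: 7-bit type and 9-bit length packed into two bytes."""
    header = (tlv_type << 9) | (len(value) & 0x1FF)
    return struct.pack("!H", header) + value

def decode_tlv(data: bytes):
    """Return (type, value, remaining bytes) for the TLV at the head of data."""
    header, = struct.unpack("!H", data[:2])
    tlv_type, length = header >> 9, header & 0x1FF
    return tlv_type, data[2:2 + length], data[2 + length:]

# Type 5 = System Name (an optional TLV); type 0 with length 0 = End of LLDPDU
frame = encode_tlv(5, b"Aggregator") + encode_tlv(0, b"")
t, v, rest = decode_tlv(frame)
print(t, v)   # 5 b'Aggregator'
```

The 9-bit length field is what limits a single TLV value to 511 octets, and the type 0/length 0 TLV is the mandatory End Of LLDPDU marker that closes every LLDPDU.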
Figure 21. LLDPDU Frame Optional TLVs The Dell Networking Operating System (OS) supports the following optional TLVs: Management TLVs, IEEE 802.1 and 802.3 organizationally specific TLVs, and TIA-1057 organizationally specific TLVs. Management TLVs A management TLV is an optional TLV sub-type. This kind of TLV contains essential management information about the sender. Organizationally Specific TLVs A professional organization or a vendor can define organizationally specific TLVs.
• LLDP is not hitless. Viewing the LLDP Configuration To view the LLDP configuration, use the following command. • Display the LLDP configuration.
Te 0/2 Te 0/3 - 00:00:c9:b1:3b:82 00:00:c9:ad:f6:12 00:00:c9:b1:3b:82 00:00:c9:ad:f6:12 Dell#show lldp neighbors detail ======================================================================== Local Interface Te 0/2 has 1 neighbor Total Frames Out: 16843 Total Frames In: 17464 Total Neighbor information Age outs: 0 Total Multiple Neighbors Detected: 0 Total Frames Discarded: 0 Total In Error Frames: 0 Total Unrecognized TLVs: 0 Total TLVs Discarded: 0 Next packet will be sent after 16 seconds The neighb
Command Syntax Command Mode Purpose clear lldp counters [interface] EXEC Privilege Clear counters for LLDP frames sent to and received from neighboring devices on all Aggregator interfaces or on a specified interface. interface specifies a 10GbE uplink port in the format: tenGigabitEthernet slot/port. Debugging LLDP You can view the TLVs that your system is sending and receiving. To view the TLVs, use the following commands. • View a readable version of the TLVs.
Relevant Management Objects Dell Networking OS supports all IEEE 802.1AB MIB objects. The following tables list the objects associated with: • received and transmitted TLVs • the LLDP configuration on the local agent • IEEE 802.1AB Organizationally Specific TLVs • received and transmitted LLDP-MED TLVs Table 9.
MIB Object Category: LLDP Variable / LLDP MIB Object / Description
- statsFramesInTotal (lldpStatsRxPortFramesTotal): Total number of LLDP frames received through the port.
- statsFramesOutTotal (lldpStatsTxPortFramesTotal): Total number of LLDP frames transmitted through the port.
- statsTLVsDiscardedTotal (lldpStatsRxPortTLVsDiscardedTotal): Total number of TLVs received and then discarded.
TLV Type 8, Management Address TLV (TLV Variable: Local / Remote LLDP MIB Object):
- enabled capabilities: Local lldpLocSysCapEnabled / Remote lldpRemSysCapEnabled
- management address length: Local lldpLocManAddrLen / Remote lldpRemManAddrLen
- management address subtype: Local lldpLocManAddrSubtype / Remote lldpRemManAddrSubtype
- management address: Local lldpLocManAddr / Remote lldpRemManAddr
- interface numbering subtype: Local lldpLocManAddrIfSubtype / Remote lldpRemManAddrIfS
- interface number
- OID
TLV Type 127, VLAN Name TLV (TLV Variable: Local / Remote LLDP MIB Object):
- PPVID: Local lldpXdot1LocProtoVlanId / Remote lldpXdot1RemProtoVlanId
- VID: Local lldpXdot1LocVlanId / Remote lldpXdot1RemVlanId
- VLAN name length: Local lldpXdot1LocVlanName / Remote lldpXdot1RemVlanName
- VLAN name: Local lldpXdot1LocVlanName / Remote lldpXdot1RemVlanName
Table 12.
LLDP-MED TLV variables (TLV Variable: Local / Remote LLDP-MED MIB Object):
- Tagged Flag: Local lldpXMedLocMediaPolicyTagged / Remote lldpXMedRemMediaPolicyTagged
- VLAN ID: Local lldpXMedLocMediaPolicyVlanID / Remote lldpXMedRemMediaPolicyVlanID
- Priority: Local lldpXMedLocMediaPolicyPriority / Remote lldpXMedRemMediaPolicyPriority
- DSCP: Local lldpXMedLocMediaPolicyDscp / Remote lldpXMedRemMediaPolicyDscp
- Location subtype: Local lldpXMedLocLocationSubtype / Remote lldpXMedRemLocationSubtype
- Location info: Local lldpXMedLocLocationInfo / Remote lldpXMedRem
LLDP-MED power TLV variables (TLV Variable: Local / Remote LLDP-MED MIB Object):
- Power priority: Local lldpXMedLocXPoEPSEPortPDPriority / Remote lldpXMedRemXPoEPSEPowerPriority, lldpXMedRemXPoEPDPowerPriority
- Power Value: Local lldpXMedLocXPoEPSEPortPowerAv, lldpXMedLocXPoEPDPowerReq / Remote lldpXMedRemXPoEPSEPowerAv, lldpXMedRemXPoEPDPowerReq
14 Port Monitoring The Aggregator supports user-configured port monitoring. See Configuring Port Monitoring for the configuration commands to use. Port monitoring copies all incoming or outgoing packets on one port and forwards (mirrors) them to another port. The source port is the monitored port (MD) and the destination port is the monitoring port (MG). Configuring Port Monitoring To configure port monitoring, use the following commands. 1.
In the following example, the host and server are exchanging traffic which passes through the uplink interface 1/1. Port 1/1 is the monitored port and port 1/42 is the destination port, which is configured to only monitor traffic received on tengigabitethernet 1/1 (host-originated traffic). Figure 24.
SessionID  Source      Destination  Direction  Mode       Type
---------  ----------  -----------  ---------  ---------  ----------
1          TenGig 0/1  TenGig 0/8   both       interface  Port-based
Dell(conf-mon-sess-1)#mon ses 2
Dell(conf-mon-sess-2)#source tengig 0/1 destination tengig 0/8 direction both
% Error: MD port is already being monitored.
NOTE: There is no limit to the number of monitoring sessions per system, provided that there are only four destination ports per port-pipe.
15 Security for M I/O Aggregator Security features are supported on the M I/O Aggregator. This chapter describes several ways to provide access security to the Dell Networking system. For details about all the commands described in this chapter, refer to the Security chapter in the Dell Networking OS Command Reference Guide. Understanding Banner Settings This functionality is supported on the M I/O Aggregator.
AAA Authentication Dell Networking OS supports a distributed client/server system implemented through authentication, authorization, and accounting (AAA) to help secure networks against unauthorized access.
way, and does so to ensure that users are not locked out of the system if a network-wide issue prevents access to these servers. 1. Define an authentication method-list (method-list-name) or specify the default. CONFIGURATION mode aaa authentication login {method-list-name | default} method1 [... method4] The default method-list is applied to all terminal lines. Possible methods are: • enable: use the password you defined using the enable secret or enable password command in CONFIGURATION mode.
Enabling AAA Authentication — RADIUS To enable authentication from the RADIUS server, and use TACACS+ as a backup, use the following commands. 1. Enable RADIUS and set up TACACS+ as backup. CONFIGURATION mode aaa authentication enable default radius tacacs 2. Establish the RADIUS host address and password. CONFIGURATION mode radius-server host x.x.x.x key some-password 3. Establish the TACACS+ host address and password. CONFIGURATION mode tacacs-server host x.x.x.
AAA Authorization The Dell Networking OS enables AAA new-model by default. You can set authorization to be either local or remote. Different combinations of authentication and authorization yield different results. By default, the system sets both to local. Privilege Levels Overview Limiting access to the system is one method of protecting the system and your network. However, at times, you might need to allow others access to the router and you can limit that access to a subset of commands.
For a complete listing of all commands related to privilege levels and passwords, refer to the Security chapter in the Dell Networking OS Command Reference Guide. Configuring a Username and Password In the Dell Networking OS, you can assign a specific username to limit user access to the system. To configure a username and password, use the following command. • Assign a user name and password.
Configuring Custom Privilege Levels In addition to assigning privilege levels to the user, you can configure the privilege levels of commands so that they are visible at different privilege levels. Within the Dell Networking OS, commands have certain privilege levels. With the privilege command, you can change a command’s default level or reset its privilege level back to the default. Assign the launch keyword (for example, configure) for the keyword’s command mode.
The following example shows a configuration to allow a user john to view only EXEC mode commands and all snmp-server commands. Because the snmp-server commands are enable level commands and, by default, found in CONFIGURATION mode, also assign the launch command for CONFIGURATION mode, configure, to the same privilege level as the snmp-server commands. Line 1: The user john is assigned privilege level 8 and assigned a password. Line 2: All other users are assigned a password to access privilege level 8.
Dell(conf)#? end Exit from Configuration mode Specifying LINE Mode Password and Privilege You can specify a password authentication of all users on different terminal lines. The user’s privilege level is the same as the privilege level assigned to the terminal line, unless a more specific privilege level is assigned to the user. To specify a password for the terminal line, use the following commands. • Configure a custom privilege level for the terminal lines.
• Access-Accept — the RADIUS server authenticates the user. • Access-Reject — the RADIUS server does not authenticate the user. If an error occurs in the transmission or reception of RADIUS packets, you can view the error by enabling the debug radius command. Transactions between the RADIUS server and the client are encrypted (the users’ passwords are not sent in plain text). RADIUS uses UDP as the transport protocol between the RADIUS server host and the client.
Defining a AAA Method List to be Used for RADIUS To configure RADIUS to authenticate or authorize users on the system, create a AAA method list. The default method list is applied to all terminal lines automatically, so you do not need to apply it explicitly. To create a method list, use the following commands. • Enter a text string (up to 16 characters long) as the name of the method list you wish to use with the RADIUS authentication method.
– auth-port port-number: the range is from 0 to 65535. Enter a UDP port number. The default is 1812. – retransmit retries: the range is from 0 to 100. Default is 3. – timeout seconds: the range is from 0 to 1000. Default is 5 seconds. – key [encryption-type] key: enter 0 for plain text or 7 for encrypted text, and a string for the key. The key can be up to 42 characters long. This key must match the key configured on the RADIUS server host.
• – retries: the range is from 0 to 100. Default is 3 retries. Configure the time interval the system waits for a RADIUS server host response. CONFIGURATION mode radius-server timeout seconds – seconds: the range is from 0 to 1000. Default is 5 seconds. To view the configuration of RADIUS communication parameters, use the show running-config command in EXEC Privilege mode. Monitoring RADIUS To view information on RADIUS transactions, use the following command.
Use this command multiple times to configure multiple TACACS+ server hosts. 2. Enter a text string (up to 16 characters long) as the name of the method list you wish to use with the TACACS+ authentication method. CONFIGURATION mode aaa authentication login {method-list-name | default} tacacs+ [...method3] The TACACS+ method must not be the last method specified. 3. Enter LINE mode. CONFIGURATION mode line {aux 0 | console 0 | vty number [end-number]} 4. Assign the method-list to the terminal line.
on vty0 (10.11.9.209) %RPM0-P:CP %SEC-3-AUTHENTICATION_ENABLE_SUCCESS: Enable password authentication success on vty0 ( 10.11.9.209 ) AAA Accounting Authentication, authorization, and accounting (AAA) accounting is part of the AAA security model. For details about commands related to AAA security, refer to the Security chapter in the Dell Networking OS Command Reference Guide.
Configuring Accounting of EXEC and Privilege-Level Command Usage The network access server monitors the accounting functions defined in the TACACS+ attribute/value (AV) pairs. • Configure AAA accounting to monitor accounting functions defined in TACACS+. CONFIGURATION mode aaa accounting system default start-stop tacacs+ aaa accounting command 15 default start-stop tacacs+ System accounting can use only the default method list.
Task ID 1, EXEC Accounting record, 00:00:39 Elapsed, service=shell Active accounted actions on tty3, User admin Priv 1 Task ID 2, EXEC Accounting record, 00:00:26 Elapsed, service=shell Dell# Monitoring TACACS+ To view information on TACACS+ transactions, use the following command. • View TACACS+ transactions to troubleshoot problems.
To view the TACACS+ configuration, use the show running-config tacacs+ command in EXEC Privilege mode. To delete a TACACS+ server host, use the no tacacs-server host {hostname | ip-address} command. freebsd2# telnet 2200:2200:2200:2200:2200::2202 Trying 2200:2200:2200:2200:2200::2202... Connected to 2200:2200:2200:2200:2200::2202. Escape character is '^]'.
Specifying an SSH Version The following example uses the ip ssh server version 2 command to enable SSH version 2 and the show ip ssh command to confirm the setting. Dell(conf)#ip ssh server version 2 Dell(conf)#do show ip ssh SSH server : disabled. SSH server version : v2. Password Authentication : enabled. Hostbased Authentication : disabled. RSA Authentication : disabled. To disable SSH server functions, use the no ip ssh server enable command.
• Using RSA Authentication of SSH • Configuring Host-Based SSH Authentication Important Points to Remember • If you enable more than one method, the order in which the methods are preferred is based on the ssh_config file on the Unix machine. • When you enable all three authentication methods, password authentication is the backup method when the RSA method fails. • The files known_hosts and known_hosts2 are generated when a user tries to SSH using version 1 or version 2, respectively.
4. Enable RSA authentication. EXEC Privilege mode ip ssh rsa-authentication enable 5. Bind the public keys to RSA authentication. EXEC Privilege mode ip ssh rsa-authentication my-authorized-keys flash://public_key Example of Generating RSA Keys admin@Unix_client#ssh-keygen -t rsa Generating public/private rsa key pair. Enter file in which to save the key (/home/admin/.ssh/id_rsa): /home/admin/.ssh/id_rsa already exists.
admin@Unix_client# ls moduli sshd_config ssh_host_dsa_key.pub ssh_host_key.pub ssh_host_rsa_key.pub ssh_config ssh_host_dsa_key ssh_host_key ssh_host_rsa_key admin@Unix_client# cat ssh_host_rsa_key.pub ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA8K7jLZRVfjgHJzUOmXxuIbZx/ AyWhVgJDQh39k8v3e8eQvLnHBIsqIL8jVy1QHhUeb7GaDlJVEDAMz30myqQbJgXBBRTWgBpLWwL/ doyUXFufjiL9YmoVTkbKcFmxJEMkE3JyHanEi7hg34LChjk9hL1by8cYZP2kYS2lnSyQWk= admin@Unix_client# ls id_rsa id_rsa.pub shosts admin@Unix_client# cat shosts 10.16.127.
If the IP address in the RSA key does not match the IP address from which you attempt to log in, the following message appears. In this case, verify that the name and IP address of the client is contained in the file /etc/hosts: RSA Authentication Error. Telnet To use Telnet with SSH, first enable SSH, as previously described. By default, the Telnet daemon is enabled. If you want to disable the Telnet daemon, use the following command, or disable Telnet in the startup config.
3. Assign an access class. 4. Enter a privilege level. You can assign line authentication on a per-VTY basis; it is a simple password authentication that uses an access-class for authorization. Configure local authentication globally and configure access classes on a per-user basis. Dell Networking OS can assign different access classes to different users by username. Until users attempt to log in, Dell Networking OS does not know if they will be assigned a VTY line.
Dell(config-line-vty)#login authentication tacacsmethod Dell(config-line-vty)# Dell(config-line-vty)#access-class deny10 Dell(config-line-vty)#end (same applies for radius and line authentication) VTY MAC-SA Filter Support Dell Networking OS supports MAC access lists which permit or deny users based on their source MAC address. With this approach, you can implement a security policy based on the source MAC address. To apply a MAC ACL on a VTY line, use the same access-class command as IP ACLs.
16 Simple Network Management Protocol (SNMP) Network management stations use SNMP to retrieve or alter management data from network elements. A datum of management information is called a managed object; the value of a managed object can be static or variable. Network elements store managed objects in a database called a management information base (MIB).
Setting up SNMP Dell Networking OS supports SNMP version 1 and version 2 which are community-based security models. The primary difference between the two versions is that version 2 supports two additional protocol operations (informs operation and snmpgetbulk query) and one additional object (counter64 object). Creating a Community For SNMPv1 and SNMPv2, create a community to enable the community-based security in the Dell Networking OS.
• Read the value of the managed object directly below the specified object. snmpgetnext -v version -c community agent-ip {identifier.instance | descriptor.instance} • Read the value of many objects at once. snmpwalk -v version -c community agent-ip {identifier.instance | descriptor.instance} In the following example, the value “4” displays in the OID before the IP address for IPv4. For an IPv6 IP address, a value of “16” displays.
To display the ports in a VLAN, send an snmpget request for the object dot1qStaticEgressPorts using the interface index as the instance number, as shown in the following example. Example of Viewing the Ports in a VLAN in SNMP snmpget -v2c -c mycommunity 10.11.131.185 .1.3.6.1.2.1.17.7.1.4.3.1.2.1107787786 SNMPv2-SMI::mib-2.17.7.1.4.3.1.2.
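The Hex-STRING returned for dot1qStaticEgressPorts is a port bitmap. Following the standard Q-BRIDGE PortList convention (within each octet, the most significant bit corresponds to the lowest-numbered port), it can be decoded with a sketch like this (the helper name is ours):

```python
def portlist_to_ports(hex_string: str) -> list[int]:
    """Decode a Q-BRIDGE PortList Hex-STRING into bridge port numbers.
    Each octet covers 8 ports; the MSB is the lowest-numbered port."""
    ports = []
    for i, octet in enumerate(bytes.fromhex(hex_string.replace(" ", ""))):
        for bit in range(8):
            if octet & (0x80 >> bit):
                ports.append(i * 8 + bit + 1)
    return ports

# "40 00" sets only the second bit of the first octet, i.e. bridge port 2
member_ports = portlist_to_ports("40 00")
```

Mapping bridge port numbers back to interface names still requires walking the dot1dBasePortIfIndex table.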
Fetching Dynamic MAC Entries using SNMP The Aggregator supports the RFC 1493 dot1d table for the default VLAN and the dot1q table for all other VLANs. NOTE: The table contains none of the other information provided by the show vlan command, such as port speed or whether the ports are tagged or untagged. NOTE: The 802.1q Q-BRIDGE MIB defines VLANs with regard to 802.1d, as 802.1d itself does not define them.
>snmpwalk -v 2c -c techpubs 10.11.131.162 .1.3.6.1.2.1.17.4.3.1 SNMPv2-SMI::mib-2.17.4.3.1.1.0.1.232.6.149.172 = Hex-STRING: 00 01 E8 06 95 AC Example of Fetching Dynamic MAC Addresses on a Non-default VLANs In the following example, TenGigabitEthernet 0/7 is moved to VLAN 1000, a non-default VLAN. To fetch the MAC addresses learned on non-default VLANs, use the object dot1qTpFdbTable. The instance number is the VLAN number concatenated with the decimal conversion of the MAC address.
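As described above, the dot1qTpFdbTable instance number is the VLAN number concatenated with the decimal conversion of the MAC address. A short sketch (the helper name is ours) shows how that instance suffix is built:

```python
def fdb_instance(vlan: int, mac: str) -> str:
    """Build the dot1qTpFdbTable instance suffix: the VLAN ID followed by
    each MAC octet rendered in decimal, dot-separated."""
    octets = [str(int(b, 16)) for b in mac.split(":")]
    return ".".join([str(vlan)] + octets)

# MAC 00:01:e8:06:95:ac learned on non-default VLAN 1000
instance = fdb_instance(1000, "00:01:e8:06:95:ac")
```

Appending this suffix to the dot1qTpFdbTable OID gives the exact object to query with snmpget, instead of walking the whole table.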
are not given. The interface is physical, so this must be represented by a 0 bit, and the unused bit is always 0. These two bits are not given because they are the most significant bits, and leading zeros are often omitted. For interface indexing, slot and port numbering begins with binary one. If the Dell Networking system begins slot and port numbering from 0, binary 1 represents slot and port 0. In S4810, the first interface is 0/0, but in the Aggregator the first interface is 0/1.
dot3aCurAggVlanId SNMPv2-SMI::enterprises.6027.3.2.1.1.4.1.1.1.0.0.0.0.0.1.1 dot3aCurAggMacAddr SNMPv2-SMI::enterprises.6027.3.2.1.1.4.1.2.1.0.0.0.0.0.1.1 00 00 00 01 dot3aCurAggIndex SNMPv2-SMI::enterprises.6027.3.2.1.1.4.1.3.1.0.0.0.0.0.1.1 dot3aCurAggStatus SNMPv2-SMI::enterprises.6027.3.2.1.1.4.1.4.1.0.0.0.0.0.1.1 active, 2 – status inactive = INTEGER: 1 = Hex-STRING: 00 00 = INTEGER: 1 = INTEGER: 1 << Status For L3 LAG, you do not have this support. SNMPv2-MIB::sysUpTime.
Unit Slot Expected Inserted Next Boot Status/Power(On/Off) -----------------------------------------------------------------------1 0 SFP+ SFP+ AUTO Good/On 1 1 QSFP+ QSFP+ AUTO Good/On * - Mismatch Dell# The status of the MIBS is as follows: $ snmpwalk -c public -v 2c 10.16.130.148 1.3.6.1.2.1.47.1.1.1.1.2 SNMPv2-SMI::mib-2.47.1.1.1.1.2.1 = "" SNMPv2-SMI::mib-2.47.1.1.1.1.2.2 = STRING: "PowerConnect I/O-Aggregator" SNMPv2-SMI::mib-2.47.1.1.1.1.2.3 = STRING: "Module 0" SNMPv2-SMI::mib-2.47.1.1.1.1.2.
SNMPv2-SMI::mib-2.47.1.1.1.1.2.76 = STRING: "Unit: 1 Port 9 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.77 = STRING: "Unit: 1 Port 10 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.78 = STRING: "Unit: 1 Port 11 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.79 = STRING: "Unit: 1 Port 12 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.80 = STRING: "Unit: 1 Port 13 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.81 = STRING: "Unit: 1 Port 14 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.
• An additional 1 byte is reserved for future use. Fetching the Switchport Configuration and the Logical Interface Configuration Important Points to Remember • SNMP should be configured on the chassis, and the chassis management interface should be up with an IP address. • If a port is configured in a VLAN, the respective bit for that port is set to 1 in that VLAN. • In the Aggregator, all the server ports and uplink LAG 128 are in switchport. Hence, the respective bits are set to 1.
SNMP Traps for Link Status To enable SNMP traps for link status changes, use the snmp-server enable traps snmp linkdown linkup command. MIB Support to Display the Available Memory Size on Flash Dell Networking provides more MIB objects to display the available memory size on flash memory. The following table lists the MIB object that contains the available memory size on flash memory. Table 15.
MIB Object OID Description chSysCoresFileName 1.3.6.1.4.1.6027.3.19.1.2.9.1.2 Contains the core file names and the file paths. chSysCoresTimeCreated 1.3.6.1.4.1.6027.3.19.1.2.9.1.3 Contains the time at which core files are created. chSysCoresStackUnitNumber 1.3.6.1.4.1.6027.3.19.1.2.9.1.4 Contains information that includes which stack unit or processor the core file was originated from. chSysCoresProcess 1.3.6.1.4.1.6027.3.19.1.2.9.1.
17 Stacking An Aggregator auto-configures to operate in standalone mode. To use an Aggregator in a stack, you must manually configure it using the CLI to operate in stacking mode. Stacking is supported only on the 40GbE ports on the base module. Stacking is limited to six Aggregators in the same or different M1000e chassis. To configure a stack, you must use the CLI. Stacking provides a single point of management for high availability and higher throughput.
Figure 25. A Two-Aggregator Stack Stack Management Roles The stack elects the management units for the stack management. • Stack master — primary management unit • Standby — secondary management unit The master holds the control plane and the other units maintain a local copy of the forwarding databases.
• Switch failure • Inter-switch stacking link failure • Switch insertion • Switch removal If the master switch goes offline, the standby replaces it as the new master. NOTE: For the Aggregator, the entire stack has only one management IP address. Stack Master Election The stack elects a master and standby unit at bootup time based on MAC address. The unit with the higher MAC value becomes master. To view which switch is the stack master, enter the show system command.
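The bootup election described above, in which the unit with the numerically highest MAC address becomes master, can be sketched as follows (illustrative only; the unit IDs and MAC addresses are made up, and real elections also consider configured unit priorities):

```python
def elect_master(unit_macs: dict[int, str]) -> int:
    """Pick the stack master: the unit whose management MAC address has
    the highest numeric value wins the bootup election (sketch only)."""
    return max(unit_macs, key=lambda u: int(unit_macs[u].replace(":", ""), 16))

units = {0: "00:1e:c9:f1:00:9b", 1: "00:1e:c9:f1:04:82"}
master = elect_master(units)
```

Because 04:82 compares higher than 00:9b in the fifth octet, unit 1 wins here despite having a higher unit number.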
from standby to master. The lack of a standby unit triggers an election within the remaining units for a standby role. After the former master switch recovers, despite having a higher priority or MAC address, it does not recover its master role but instead takes the next available role. MAC Addressing All port interfaces in the stack use the MAC address of the management interface on the master switch.
Stacking Port Numbers By default, each Aggregator in Standalone mode is numbered stack-unit 0. Stack-unit numbers are assigned to member switches when the stack comes up. The following example shows the numbers of the 40GbE stacking ports on an Aggregator.
Figure 26.
Configuring a Switch Stack To configure and bring up a switch stack, follow these steps: 1. Connect the 40GbE ports on the base module of two Aggregators using 40G direct attach or QSFP fibre cables. 2. Configure each Aggregator to operate in stacking mode. 3. Reload each Aggregator, one after the other in quick succession. Stacking Prerequisites Before you cable and configure a stack of MXL 10/40GbE switches, review the following prerequisites.
stack-unit unit-number priority 1-14 Dell(conf)# stack-unit 0 priority 12 Setting the priority determines which switch becomes the management (Master) switch. The switch with the highest priority number is elected Master. The default priority is 0. NOTE: It is best practice to assign priority values to all switches before stacking them, in order to acquire and retain complete control over each unit’s role in the stack. 2. Configure the stack-group for each stack-unit.
2. Connect a 40GbE base port on the second Aggregator to a 40GbE port on the first Aggregator. The resulting ring topology allows the entire stack to function as a single switch with resilient fail-over capabilities. If you do not connect the last switch to the first switch (Step 4), the stack operates in a daisy chain topology with less resiliency. Any failure in a non-edge stack unit causes a split stack.
Adding a Stack Unit You can add a new unit to an existing stack both when the unit has no stacking ports (stack groups) configured and when the unit already has stacking ports configured. If the units to be added to the stack have been previously used, they are assigned the smallest available unit ID in the stack. To add a standalone Aggregator to a stack, follow these steps: 1. Power on the switch. 2.
EXEC Privilege mode • reset stack-unit unit-number Reset a stack-unit when the unit is in a problem state. EXEC Privilege mode reset stack-unit unit-number {hard} Removing an Aggregator from a Stack and Restoring Quad Mode To remove an Aggregator from a stack and return the 40GbE stacking ports to 4x10GbE quad mode, follow the steps below: 1. Disconnect the stacking cables from the unit. The unit can be powered on or off and can be online or offline. 2.
After you restart the Aggregator, the 4-Port 10 GbE Ethernet modules or the 40GbE QSFP+ port that is split into four 10GbE SFP+ ports cannot be configured to be part of the same uplink LAG bundle that is set up with the uplink speed of 40 GbE. In such a condition, you can perform a hot-swap of the 4-port 10 GbE Flex IO modules with a 2-port 40 GbE Flex IO module, which causes the module to become a part of the LAG bundle that is set up with 40 GbE as the uplink speed without another reboot.
-----------------------------------------------0 10G 40G Merging Two Operational Stacks The recommended procedure for merging two operational stacks is as follows: 1. Always power off all units in one stack before connecting to another stack. 2. Add the units as a group by unplugging one stacking cable in the operational stack and physically connecting all unpowered units. 3. Completely cable the stacking connections, making sure the redundant link is also in place.
Example of the show system brief Command Dell# show system brief Stack MAC : 00:1e:c9:f1:00:9b -- Stack Info -Unit UnitType Status ReqTyp CurTyp Version Ports ----------------------------------------------------------------------------------0 Management online I/O-Aggregator I/O-Aggregator 8-3-17-46 56 1 Standby online I/O-Aggregator I/O-Aggregator 8-3-17-46 56 2 Member not present 3 Member not present 4 Member not present 5 Member not present Example of the show system Command Dell# show system Stack MAC
-----------------------------------------0 0 SFP+ SFP+ AUTO Good 0 1 QSFP+ QSFP+ AUTO Good * - Mismatch Example of the show system stack-unit stack-group configured Command Dell# show system stack-unit 1 stack-group configured Configured stack groups in stack-unit 1 --------------------------------------0 1 4 5 Example of the show system stack-unit stack-group Command Dell#show system stack-unit 1 stack-group Stack group Ports -----------------------------0 0/33 1 0/37 2 0/41 3 0/45 4 0/49 5 0/53 Dell# Exam
show system stack-ports 2. Displays the master standby unit status, failover configuration, and result of the last master-standby synchronization; allows you to verify the readiness for a stack failover. show redundancy 3. Displays input and output flow statistics on a stacked port. show hardware stack-unit unit-number stack-port port-number 4. Clears statistics on the specified stack unit. The valid stack-unit numbers are from 0 to 5. clear hardware stack-unit unit-number counters 5.
-- Last Data Block Sync Record: ------------------------------------------------Stack Unit Config: succeeded Sep 03 1993 09:36:52 Start-up Config: succeeded Sep 03 1993 09:36:52 (Latest sync of config.
Unplugged Stacking Cable • Problem: A stacking cable is unplugged from a member switch. The stack loses half of its bandwidth from the disconnected switch. • Resolution: Intra-stack traffic is re-routed on a another link using the redundant stacking port on the switch. A recalculation of control plane and data plane connections is performed. Master Switch Fails • Problem: The master switch fails due to a hardware fault, software crash, or power loss. • Resolution: A failover procedure begins: 1.
Stack Unit in Card-Problem State Due to Incorrect Dell Networking OS Version • Problem: A stack unit enters a Card-Problem state because the switch has a different Dell Networking OS version than the master unit. The switch does not come online as a stack unit. • Resolution: To restore a stack unit with an incorrect Dell Networking OS version as a member unit, disconnect the stacking cables on the switch and install the correct Dell Networking OS version.
Specify the system partition on the master switch into which you want to copy the Dell Networking OS image. The system then prompts you to upgrade all member units with the new Dell Networking OS version. The valid values are a: and b:. 3. Reboot all stack units to load the Dell Networking OS image from the same partition on all switches in the stack. CONFIGURATION mode boot system stack-unit all primary system partition 4. Save the configuration. EXEC Privilege write memory 5.
Upgrading a Single Stack Unit Upgrading a single stacked switch is necessary when the unit was disabled due to an incorrect Dell Networking OS version. This procedure upgrades the image in the boot partition of the member unit from the corresponding partition in the master unit. To upgrade an individual stack unit with a new Dell Networking OS version, follow the below steps: 1.
18 Broadcast Storm Control On the Aggregator, the broadcast storm control feature is enabled by default on all ports, and disabled on a port when an iSCSI storage device is detected. Broadcast storm control is re-enabled as soon as the connection with an iSCSI device ends. Broadcast traffic on Layer 2 interfaces is limited or suppressed during a broadcast storm. You can view the status of a broadcast-storm control operation by using the show io-aggregator broadcast storm-control status command.
19 System Time and Date The Aggregator auto-configures the hardware and software clocks with the current time and date. If necessary, you can manually set and maintain the system time and date using the CLI commands described in this chapter. • Setting the Time for the Software Clock • Setting the Time Zone • Setting Daylight Savings Time Setting the Time for the Software Clock You can change the order of the month and day parameters to enter the time and date as time day month year.
• Set the clock to the appropriate timezone. CONFIGURATION mode clock timezone timezone-name offset – timezone-name: Enter the name of the timezone. Do not use spaces. – offset: Enter one of the following: * a number from 1 to 23 as the number of hours in addition to UTC for the timezone. * a minus sign (-) then a number from 1 to 23 as the number of hours behind UTC.
Example of the clock summer-time Command Dell(conf)#clock summer-time pacific date Mar 14 2012 00:00 Nov 7 2012 00:00 Dell(conf)# Setting Recurring Daylight Saving Time Set a date (and time zone) on which to convert the switch to daylight saving time on a specific day every year. If you have already set daylight saving for a one-time setting, you can set that date and time as the recurring setting with the clock summer-time time-zone recurring command.
Example of the clock summer-time recurring Command Dell(conf)#clock summer-time pacific recurring Mar 14 2012 00:00 Nov 7 2012 00:00 Dell(conf)# NOTE: If you press ENTER after entering the recurring command parameter, and you have already set a one-time daylight saving time/date, the system uses that time and date as the recurring setting.
Uplink Failure Detection (UFD) 20 Feature Description UFD provides detection of the loss of upstream connectivity and, if used with network interface controller (NIC) teaming, automatic recovery from a failed link. A switch provides upstream connectivity for devices, such as servers. If a switch loses its upstream connectivity, downstream devices also lose their connectivity.
Figure 27. Uplink Failure Detection How Uplink Failure Detection Works UFD creates an association between upstream and downstream interfaces. The association of uplink and downlink interfaces is called an uplink-state group. An interface in an uplink-state group can be a physical interface or a port-channel (LAG) aggregation of physical interfaces. An enabled uplink-state group tracks the state of all assigned upstream interfaces.
Figure 28. Uplink Failure Detection Example If only one of the upstream interfaces in an uplink-state group goes down, a specified number of downstream ports associated with the upstream interface are put into a Link-Down state. You can configure this number; by default, it is calculated from the ratio of the upstream port bandwidth to the downstream port bandwidth in the same uplink-state group.
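The default disable count described above can be sketched as follows. This is an illustrative calculation only, not the switch's actual implementation; the rounding behavior is an assumption.

```python
def ports_to_disable(upstream_bw_gbps: float, downstream_bw_gbps: float) -> int:
    """Sketch of the default UFD behavior described above: the number of
    downstream ports brought down per failed upstream port is derived from
    the upstream-to-downstream bandwidth ratio (rounding is an assumption)."""
    return max(1, round(upstream_bw_gbps / downstream_bw_gbps))

# Example: one failed 40G uplink would take down four 10G server-facing ports.
print(ports_to_disable(40, 10))  # 4
```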
Using UFD, you can configure the automatic recovery of downstream ports in an uplink-state group when the link status of an upstream port changes. The tracking of upstream link status does not have a major impact on central processing unit (CPU) usage. UFD and NIC Teaming To implement a rapid failover solution, you can use uplink failure detection on a switch with network adapter teaming on a server. For more information, refer to Network Interface Controller (NIC) Teaming.
– For an example of a debug log message, refer to Clearing a UFD-Disabled Interface. Configuring Uplink Failure Detection (PMUX mode) To configure UFD, use the following commands. 1. Create an uplink-state group and enable the tracking of upstream links on the switch/router. CONFIGURATION mode uplink-state-group group-id • group-id: the values are from 1 to 16. To delete an uplink-state group, use the no uplink-state-group group-id command. 2.
By default, auto-recovery of UFD-disabled downstream ports is enabled. To disable auto-recovery, use the no downstream auto-recover command. 5. Specify the time (in seconds) to wait for the upstream port channel (LAG 128) to come back up before server ports are brought down. UPLINK-STATE-GROUP mode defer-timer seconds NOTE: This command is available in Standalone and VLT modes only. The range is from 1 to 120. 6. (Optional) Enter a text description of the uplink-state group.
clear ufd-disable {interface interface | uplink-state-group group-id}: reenables all UFD-disabled downstream interfaces in the group. The range is from 1 to 16. Example of Syslog Messages Before and After Entering the clear ufd-disable uplink-state-group Command (S50) The following example shows the Syslog messages that display when you clear the UFD-Disabled state from all disabled downstream interfaces in an uplink-state group by using the clear ufd-disable uplink-state-group group-id command.
Displaying Uplink Failure Detection To display information on the UFD feature, use any of the following commands. • Display status information on a specified uplink-state group or all groups. EXEC mode show uplink-state-group [group-id] [detail] – group-id: The values are 1 to 16. • – detail: displays additional status information on the upstream and downstream interfaces in each group. Display the current status of a port or port-channel interface assigned to an uplink-state group.
Downstream Interfaces : Uplink State Group : 3 Status: Enabled, Up Upstream Interfaces : Tengig 0/46(Up) Tengig 0/47(Up) Downstream Interfaces : Te 13/0(Up) Te 13/1(Up) Te 13/3(Up) Te 13/5(Up) Te 13/6(Up) Uplink State Group : 5 Status: Enabled, Down Upstream Interfaces : Tengig 0/0(Dwn) Tengig 0/3(Dwn) Tengig 0/5(Dwn) Downstream Interfaces : Te 13/2(Dis) Te 13/4(Dis) Te 13/11(Dis) Te 13/12(Dis) Te 13/13(Dis) Te 13/14(Dis) Te 13/15(Dis) Uplink State Group : 6 Status: Enabled, Up Upstream Interfaces : Downstr
Dell(conf-uplink-state-group-16)# show configuration
!
uplink-state-group 16
no enable
description test
downstream disable links all
downstream TengigabitEthernet 0/40
upstream TengigabitEthernet 0/41
upstream Port-channel 8
Sample Configuration: Uplink Failure Detection The following example shows a sample configuration of UFD on a switch/router in which you configure as follows:
• Configure uplink-state group 3.
• Add downstream links Gigabitethernet 0/1, 0/2, 0/5, 0/9, 0/11, and 0/12.
Upstream Interfaces : Te 0/3(Up) Te 0/4(Up) Downstream Interfaces : Te 0/1(Up) Te 0/2(Up) Te 0/5(Up) Te 0/9(Up) Te 0/11(Up) Te 0/12(Up) < After a single uplink port fails > Dell#show uplink-state-group detail (Up): Interface up (Dwn): Interface down (Dis): Interface disabled Uplink State Group : 3 Status: Enabled, Up Upstream Interfaces : Te 0/3(Dwn) Te 0/4(Up) Downstream Interfaces : Te 0/1(Dis) Te 0/2(Dis) Te 0/5(Up) Te 0/9(Up) Te 0/11(Up) Te 0/12(Up) Uplink Failure Detection (SMUX mode) In Standalone or
PMUX Mode of the IO Aggregator 21 This chapter describes the various configurations applicable in PMUX mode. Introduction This document provides configuration instructions and examples for the Programmable MUX (PMUX mode) for the Dell Networking M I/O Aggregator using Dell Networking OS version 9.3(0.0).
Dell#show system stack-unit 0 iom-mode Unit Boot-Mode Next-Boot -----------------------------------------------0 standalone standalone Dell# 2. Change IOA mode to PMUX mode. Dell(conf)# stack-unit 0 iom-mode programmable-mux Where stack-unit 0 defines the default stack-unit number. 3. Delete the startup configuration file. Dell# delete startup-config 4. Reboot the IOA by entering the reload command. Dell# reload 5. Repeat the above steps for each member of the IOA in PMUX mode.
Multiple Uplink LAGs with 10G Member Ports The following sample commands configure multiple dynamic uplink LAGs with 10G member ports based on LACP. 1. Bring up all the ports. Dell#configure Dell(conf)#int range tengigabitethernet 0/1 - 56 Dell(conf-if-range-te-0/1-56)#no shutdown 2. Associate the member ports into LAG-10 and 11.
x - Dot1x untagged, X - Dot1x tagged
o - OpenFlow untagged, O - OpenFlow tagged
G - GVRP tagged, M - Vlan-stack, H - VSN tagged
i - Internal untagged, I - Internal tagged, v - VLT untagged, V - VLT tagged

    NUM    Status   Description   Q Ports
*   1      Active                 U Po10(Te 0/4-5)
                                  U Po11(Te 0/6)
    1000   Active                 T Po10(Te 0/4-5)
    1001   Active                 T Po11(Te 0/6)
Dell#
5. Show LAG member ports utilization.
Te 0/53 Te 0/54 Te 0/55 Te 0/56 obsolete after a save and reload. [confirm yes/no]:yes Please save and reset unit 0 for the changes to take effect. Dell(conf)# 2. Save the configuration. Dell#write memory ! 01:05:48: %STKUNIT0-M:CP %FILEMGR-5-FILESAVED: Copied running-config to startup-config in flash by default Dell#reload Proceed with reload [confirm yes/no]: yes 3. Configure the port-channel with 40G member ports.
6. Show the VLAN status.
Virtual Link Trunking (VLT) in PMUX Mode VLT allows the physical links between two devices (known as VLT nodes or peers) within a VLT domain to be considered a single logical link to connected external devices. For VLT operations, use the following configurations on both the primary and secondary VLT. Ensure the VLTi links are connected and administratively up. VLTi connects the VLT peers for VLT data exchange. 1. Configure VLTi.
Multicast peer-routing timeout: 150 seconds
Dell#
5. Configure the secondary VLT. NOTE: Repeat steps 1 through 4 on the secondary VLT, ensuring that you use a different backup destination and unit ID.
11   Active   T Po128(Te 0/41-42)
              T Te 0/1
12   Active   T Po128(Te 0/41-42)
              T Te 0/1
13   Active   T Po128(Te 0/41-42)
              T Te 0/1
14   Active   T Po128(Te 0/41-42)
              T Te 0/1
15   Active   T Po128(Te 0/41-42)
              T Te 0/1
20   Active   U Po128(Te 0/41-42)
              U Te 0/1
Dell#
You can remove the inactive VLANs that have no member ports using the following command: Dell#configure Dell(conf)#no interface vlan vlan-id (vlan-id: an inactive VLAN with no member ports) You can remove the tagged VLANs using the no vlan tagged command.
-- Stack Info --
Unit  UnitType    Status       ReqTyp          CurTyp          Version  Ports
-----------------------------------------------------------------------------
 0    Management  online       I/O-Aggregator  I/O-Aggregator  <>       56
 1    Standby     online       I/O-Aggregator  I/O-Aggregator  <>       56
 2    Member      not present
 3    Member      not present
 4    Member      not present
 5    Member      not present
Dell#
Configuring an NPIV Proxy Gateway Prerequisite: Before you configure an NPIV proxy gateway on an IOA or MXL switch with the FC Flex IO module, ensure the following:
Creating a DCB Map Configure the priority-based flow control (PFC) and enhanced traffic selection (ETS) settings in a DCB map before you can apply them on downstream server-facing ports. Task Command Command Mode Create a DCB map to specify the PFC and ETS settings for groups of dot1p priorities.
Apply the DCB map on an Ethernet port. Repeat this step to apply a DCB map to more than one port. dcb-map name INTERFACE Creating an FCoE VLAN Create a dedicated VLAN to send and receive FC traffic over FCoE links between servers and a fabric over an NPG. The NPG receives FCoE traffic and forwards de-capsulated FC frames over FC links to SAN switches in a specified fabric. When you apply an FCoE map to an Ethernet port, the port is automatically configured as a tagged member of the FCoE VLAN.
is from 0 to 128. The default is 128.
Enable the monitoring of FIP keepalive messages (if it is disabled) to detect if other FCoE devices are reachable: keepalive. The default is enabled.
Configure the time (in seconds) used to transmit FIP keepalive advertisements: fka-adv-period seconds. The range is 8 to 90 seconds. The default is 8 seconds.
You can apply a DCB or FCoE map to a range of Ethernet or Fibre Channel interfaces by using the interface range command; for example: Dell(config)# interface range tengigabitEthernet 1/12 - 23 , tengigabitEthernet 2/24 - 35 Dell(config)# interface range fibrechannel 0/0 - 3 , fibrechannel 0/8 - 11 Enter the keywords interface range then an interface type and port range. The port range must contain spaces before and after the dash. Separate each interface type and port range with a space, comma, and space.
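When generating configuration off-box, the range notation can be expanded into individual interface names before emitting CLI lines. The helper below is hypothetical (the function name and behavior are assumptions, not part of Dell Networking OS):

```python
def expand_port_range(iface_type: str, slot_port: str, end_port: int):
    """Expand a CLI-style range such as 'tengigabitEthernet 1/12 - 23'
    into individual interface names (hypothetical helper)."""
    slot, start = (int(x) for x in slot_port.split("/"))
    return [f"{iface_type} {slot}/{p}" for p in range(start, end_port + 1)]

ports = expand_port_range("tengigabitEthernet", "1/12", 23)
print(len(ports))  # 12
print(ports[0])    # tengigabitEthernet 1/12
```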
show fcoe-map [brief | map-name] Displays the FC and FCoE configuration parameters in FCoE maps. show qos dcb-map map-name Displays the configuration parameters in a specified DCB map. show npiv devices [brief] Displays information on FCoE and FC devices currently logged in to the NPG. show fc switch Displays the FC mode of operation and worldwide node (WWN) name. For more information about NPIV Proxy Gateway information, refer to the 9.3(0.0) Addendum.
• INTERFACE level configurations override all CONFIGURATION level configurations. • LLDP is not hitless. LLDP Compatibility • Spanning tree and force10 ring protocol “blocked” ports allow LLDPDUs. • 802.1X controlled ports do not allow LLDPDUs until the connected device is authenticated. CONFIGURATION versus INTERFACE Configurations All LLDP configuration commands are available in PROTOCOL LLDP mode, which is a sub-mode of the CONFIGURATION mode and INTERFACE mode.
protocol lldp 2. Enable LLDP. PROTOCOL LLDP mode no disable Disabling and Undoing LLDP To disable or undo LLDP, use the following command. • Disable LLDP globally or for an interface. disable To undo an LLDP configuration, precede the relevant command with the keyword no. Advertising TLVs You can configure the system to advertise TLVs out of all interfaces or out of specific interfaces. • If you configure the system globally, all interfaces send LLDPDUs with the specified TLVs.
– video-signaling – voice – voice-signaling In the following example, LLDP is enabled globally. R1 and R2 are transmitting periodic LLDPDUs that contain management, 802.1, and 802.3 TLVs. Figure 29. Configuring LLDP Viewing the LLDP Configuration To view the LLDP configuration, use the following command. • Display the LLDP configuration.
! protocol lldp R1(conf-if-te-0/3-lldp)# Viewing Information Advertised by Adjacent LLDP Agents To view brief information about adjacent devices or to view all the information that neighbors are advertising, use the following commands. • Display brief information about adjacent devices. • show lldp neighbors Display all of the information that neighbors are advertising.
======================================================================== Configuring LLDPDU Intervals LLDPDUs are transmitted periodically; the default interval is 30 seconds. To configure LLDPDU intervals, use the following command. • Configure a non-default transmit interval.
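As background (from IEEE 802.1AB, not stated explicitly in this guide), the TTL advertised in each LLDPDU is the transmit interval multiplied by the hold-time multiplier set with the multiplier command (default 4). A quick sketch of that arithmetic:

```python
def lldp_ttl(tx_interval: int = 30, hold_multiplier: int = 4) -> int:
    # IEEE 802.1AB: advertised TTL = msgTxInterval * msgTxHold
    return tx_interval * hold_multiplier

print(lldp_ttl())       # 120 seconds with the defaults
print(lldp_ttl(30, 5))  # 150 seconds after 'multiplier 5'
```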
no disable R1(conf-lldp)#multiplier ? <2-10> Multiplier (default=4) R1(conf-lldp)#multiplier 5 R1(conf-lldp)#show config ! protocol lldp advertise dot1-tlv port-protocol-vlan-id port-vlan-id advertise dot3-tlv max-frame-size advertise management-tlv system-capabilities system-description multiplier 5 no disable R1(conf-lldp)#no multiplier R1(conf-lldp)#show config ! protocol lldp advertise dot1-tlv port-protocol-vlan-id port-vlan-id advertise dot3-tlv max-frame-size advertise management-tlv system-capabilit
Figure 30. The debug lldp detail Command — LLDPDU Packet Dissection Virtual Link Trunking (VLT) VLT allows physical links between two chassis to appear as a single virtual link to the network core. VLT eliminates the requirement for Spanning Tree protocols by allowing link aggregation group (LAG) terminations on two separate distribution or core switches, and by supporting a loop-free topology.
NOTE: When you launch the VLT link, the VLT peer-ship is not established if any of the following is TRUE:
• The VLT System-MAC addresses configured on the two VLT peers do not match.
• The VLT Unit-Ids configured on the two VLT peers are identical.
• The VLT System-MAC or Unit-Id is configured on only one of the VLT peers.
• The VLT domain ID is not the same on both peers.
If the VLT peer-ship is already established, changing the System-MAC or Unit-Id does not cause the VLT peer-ship to go down.
upstream network. L2/L3 control plane protocols and system management features function normally in VLT mode. Features such as VRRP and internet group management protocol (IGMP) snooping require state information coordinating between the two VLT chassis. IGMP and VLT configurations must be identical on both sides of the trunk to ensure the same behavior on both sides. VLT Terminology The following are key VLT terms.
– ARP tables are synchronized between the VLT peer nodes. – VLT peer switches operate as separate chassis with independent control and data planes for devices attached on non-VLT ports. – One chassis in the VLT domain is assigned a primary role; the other chassis takes the secondary role. The primary and secondary roles are required for scenarios when connectivity between the chassis is lost. VLT assigns the primary chassis role according to the lowest MAC address. You can configure the primary role.
– If the link between the VLT peer switches is established, changing the VLT system MAC address or the VLT unit-id causes the link between the VLT peer switches to become disabled. However, removing the VLT system MAC address or the VLT unit-id may disable the VLT ports if you happen to configure the unit ID or system MAC address on only one VLT peer at any time.
– In a VLT domain, the following software features are supported on VLT physical ports: 802.1p, LLDP, flow control, port monitoring, and jumbo frames. • Software features not supported with VLT – In a VLT domain, the following software features are supported on non-VLT ports: 802.1x, DHCP snooping, FRRP, IPv6 dynamic routing, ingress and egress QOS.
When the bandwidth usage drops below the 80% threshold, the system generates another syslog message (shown in the following message) and an SNMP trap. %STKUNIT0-M:CP %VLTMGR-6-VLT-LAG-ICL: Overall Bandwidth utilization of VLT-ICL-LAG (port-channel 25) reaches below threshold. Bandwidth usage (74%) VLT and Stacking You cannot enable stacking with VLT. If you enable stacking on a unit on which you want to enable VLT, you must first remove the unit from the existing stack.
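The threshold-crossing behavior in the syslog messages above can be sketched as follows. The 80% figure comes from the text; the simple crossing logic is an assumption, not the switch's implementation.

```python
ICL_THRESHOLD_PCT = 80.0  # per the syslog messages above

def icl_alarm(alarmed: bool, used_gbps: float, capacity_gbps: float):
    """Raise an alarm when ICL utilization reaches the threshold and clear
    it when utilization drops back below (illustrative sketch only)."""
    pct = 100.0 * used_gbps / capacity_gbps
    if not alarmed and pct >= ICL_THRESHOLD_PCT:
        return True, f"utilization above threshold ({pct:.0f}%)"
    if alarmed and pct < ICL_THRESHOLD_PCT:
        return False, f"utilization below threshold ({pct:.0f}%)"
    return alarmed, None
```

For example, an alarmed 40G ICL dropping to 29.6 Gbps (74%) would clear the alarm, matching the "Bandwidth usage (74%)" message above.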
Non-VLT ARP Sync In the Dell Networking OS version 9.2(0.0), ARP entries (including ND entries) learned on other ports are synced with the VLT peer to support station move scenarios. Prior to Dell Networking OS version 9.2(0.0), only ARP entries learned on VLT ports were synced between peers. Additionally, ARP entries resulting from station movements from VLT to non-VLT ports or to different non-VLT ports are learned on the non-VLT port and synced with the peer node.
show interfaces interface – interface: specify one of the following interface types: * Fast Ethernet: enter fastethernet slot/port. * 1-Gigabit Ethernet: enter gigabitethernet slot/port. * 10-Gigabit Ethernet: enter tengigabitethernet slot/port. * Port channel: enter port-channel {1-128}.
ICL Link Status: Up
HeartBeat Status: Up
VLT Peer Status: Up
Local Unit Id: 1
Version: 5(1)
Local System MAC address: 00:01:e8:8a:e7:e7
Remote System MAC address: 00:01:e8:8a:e9:70
Configured System MAC address: 00:0a:0a:01:01:0a
Remote system version: 5(1)
Delay-Restore timer: 90 seconds

Example of the show vlt detail Command
Dell_VLTpeer1# show vlt detail
Local LAG Id  Peer LAG Id  Local Status  Peer Status  Active VLANs
------------  -----------  ------------  -----------  ------------
100           100          UP            UP           10, 20,
127           2
Dell_VLTpeer2# show running-config vlt ! vlt domain 30 peer-link port-channel 60 back-up destination 10.11.200.
Configure the port channel to an attached device. Dell_VLTpeer1(conf)#interface port-channel 110 Dell_VLTpeer1(conf-if-po-110)#no ip address Dell_VLTpeer1(conf-if-po-110)#switchport Dell_VLTpeer1(conf-if-po-110)#channel-member fortyGigE 0/52 Dell_VLTpeer1(conf-if-po-110)#no shutdown Dell_VLTpeer1(conf-if-po-110)#vlt-peer-lag port-channel 110 Dell_VLTpeer1(conf-if-po-110)#end Verify that the port channels used in the VLT domain are assigned to the same VLAN.
Verify that the port channels used in the VLT domain are assigned to the same VLAN.
Description: Remote VLT port channel status
Behavior at Peer Up: N/A
Behavior During Run Time: N/A
Action to Take: Use the show vlt detail and show vlt brief commands to view the VLT port channel status information.

Description: System MAC mismatch
Behavior at Peer Up: A syslog error message and an SNMP trap are generated.
Behavior During Run Time: A syslog error message and an SNMP trap are generated.
Action to Take: Verify that the unit ID of VLT peers is not the same on both units and that the MAC address is the same on both units.
FC Flex IO Modules 22 This part provides a generic, broad-level description of the operations, capabilities, and configuration commands of the Fibre Channel (FC) Flex IO module.
slots of the MXL 10/40GbE Switch and it provides four FC ports per module. If you insert only one FC Flex IO module, four ports are supported; if you insert two FC Flex IO modules, eight ports are supported. By installing an FC Flex IO module, you can enable the MXL 10/40GbE Switch and I/O Aggregator to directly connect to an existing FC SAN network.
FC Flex IO Module Capabilities and Operations The FC Flex IO module has the following characteristics: • You can install one or two FC Flex IO modules on the MXL 10/40GbE Switch or I/O Aggregator. Each module supports four FC ports. • Each port can operate in 2Gbps, 4Gbps, or 8Gbps of Fibre Channel speed. • All ports on an FC Flex IO module can function in the NPIV mode that enables connectivity to FC switches or directors, and also to multiple SAN topologies.
• With both FC Flex IO modules present in the MXL or I/O Aggregator switches, the power supply requirement and maximum thermal output are the same as these parameters needed for the M1000 chassis. • Each port on the FC Flex IO module contains status indicators to denote the link status and transmission activity. For traffic that is being transmitted, the port LED shows a blinking green light. The Link LED displays solid green when a proper link with the peer is established.
• On I/O Aggregators, uplink failure detection (UFD) is disabled if FC Flex IO module is present to allow server ports to communicate with the FC fabric even when the Ethernet upstream ports are not operationally up. • Ensure that the NPIV functionality is enabled on the upstream switches that operate as FC switches or FCoE forwarders (FCF) before you connect the FC port of the MXL or I/O Aggregator to these upstream switches.
the FCoE frames. The module directly switches any non-FCoE or non-FIP traffic, and only FCoE frames are processed and transmitted out of the Ethernet network. When the external device sends FCoE data frames to the switch that contains the FC Flex IO module, the destination MAC address represents one of the Ethernet MAC addresses assigned to FC ports. Based on the destination address, the FCoE header is removed from the incoming packet and the FC frame is transmitted out of the FC port.
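The Ethernet MAC addresses assigned to FC ports mentioned above are fabric-provided MAC addresses (FPMAs), built by concatenating the 24-bit FC-MAP with the 24-bit FC-ID assigned at fabric login (per FC-BB-5). A small sketch with illustrative values:

```python
def fpma(fc_map: int, fc_id: int) -> str:
    """Fabric-provided MAC address: FC-MAP in the upper 24 bits,
    FC-ID in the lower 24 bits (the values used below are illustrative)."""
    mac = (fc_map << 24) | fc_id
    return ":".join(f"{(mac >> shift) & 0xFF:02x}" for shift in range(40, -1, -8))

print(fpma(0x0EFC00, 0x010203))  # 0e:fc:00:01:02:03
```

Because the FC-ID is fabric-assigned and unique, the resulting MAC uniquely identifies each login on the FCoE VLAN.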
Installing and Configuring Flowchart for FC Flex IO Modules
To see if a switch is running the latest Dell Networking OS version, use the show version command. To download a Dell Networking OS version, go to http://support.dell.com. Installation Site Preparation Before installing the switch or switches, make sure that the chosen installation location meets the following site requirements: • Clearance — There is adequate front and rear clearance for operator access. Allow clearance for cabling, power connections, and ventilation.
Interconnectivity of FC Flex IO Modules with Cisco MDS Switches In a network topology that contains Cisco MDS switches, FC Flex IO modules that are plugged into the MXL and I/O Aggregator switches enable interoperation for a robust, effective deployment of the NPIV proxy gateway and FCoE-FC bridging behavior.
Figure 31. Case 1: Deployment Scenario of Configuring FC Flex IO Modules Figure 32. Case 2: Deployment Scenario of Configuring FC Flex IO Modules Fibre Channel over Ethernet for FC Flex IO Modules FCoE provides a converged Ethernet network that allows the combination of storage-area network (SAN) and LAN traffic on a Layer 2 link by encapsulating Fibre Channel data into Ethernet frames.
Ethernet local area network (LAN) (IP cloud) for data — as well as FC links to one or more storage area network (SAN) fabrics. FCoE works with the Ethernet enhancements provided in Data Center Bridging (DCB) to support lossless (no-drop) SAN and LAN traffic. In addition, DCB provides flexible bandwidth sharing for different traffic types, such as LAN and SAN, according to 802.1p priority classes of service. DCBx should be enabled on the system before the FIP snooping feature is enabled.
• With the introduction of 10GbE links, FCoE is being implemented for server connections to optimize performance. However, a SAN traditionally uses Fibre Channel to transmit storage traffic. FCoE servers require an efficient and scalable bridging feature to access FC storage arrays, which an NPG provides. NPIV Proxy Gateway Operation Consider a sample scenario of NPG operation.
• Virtualization of FC N ports on an NPG so that they appear as FCoE FCFs to downstream servers. • NPIV service to perform the association and aggregation of FCoE servers to upstream F ports on core switches (through N ports on the NPG). Server FLOGIs and FDISCs, which are received over MXL 10/40GbE Switch and M I/O Aggregator with the FC Flex IO module ENode ports, are converted into FDISCs addressed to the upstream F ports on core switches.
Term Description Fibre Channel fabric Network of Fibre Channel devices and storage arrays that interoperate and communicate. FCF Fibre Channel forwarder: FCoE-enabled switch that can forward FC traffic to both downstream FCoE and upstream FC devices. An NPIV proxy gateway functions as an FCF to export upstream F port configurations to downstream server CNA ports.
An FCoE map applies the following parameters on server-facing Ethernet and fabric-facing FC ports on the MXL 10/40GbE Switch and M I/O Aggregator with the FC Flex IO module: • The dedicated FCoE VLAN used to transport FCoE storage traffic. • The FC-MAP value used to generate a fabric-provided MAC address. • The association between the FCoE VLAN ID and FC fabric ID where the desired storage arrays are installed.
transit with FIP snooping is automatically enabled on all VLANs on the switch, using the default FCoE transit settings. Task Command Command Mode Enable an MXL 10/40GbE Switch and M I/O Aggregator with the FC Flex IO module for the Fibre Channel protocol.
Step Task Command Command Mode All priorities that map to the same egress queue must be in the same priority group. Important Points to Remember • If you remove a dot1p priority-to-priority group mapping from a DCB map (no priority pgid command), the PFC and ETS parameters revert to their default values on the interfaces on which the DCB map is applied. By default, PFC is not applied on specific 802.1p priorities; ETS assigns equal bandwidth to each 802.1p priority.
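The equal-share default noted above (ETS assigns equal bandwidth to each 802.1p priority) works out to 12.5% per priority; a trivial sketch of that default allocation:

```python
def default_ets_bandwidth(num_priorities: int = 8):
    # Default per the text: ETS assigns equal bandwidth to each 802.1p priority.
    share = 100.0 / num_priorities
    return [share] * num_priorities

print(default_ets_bandwidth())  # eight equal shares of 12.5% each
```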
Step Task Command 1 Create the dedicated VLAN for FCoE traffic. interface vlan vlan- CONFIGURATION id Range: 2-4094. Command Mode VLAN 1002 is commonly used to transmit FCoE traffic. When you apply an FCoE map to an Ethernet port (Applying an FCoE map on server-facing Ethernet ports), the port is automatically configured as a tagged member of the FCoE VLAN.
Range: 0EFC00–0EFCFF. Default: None.
5 Configure the priority used by a server CNA to select the FCF for a fabric login (FLOGI): fcf-priority priority. Range: 1-255. Default: 128. FCoE MAP mode.
6 Enable the monitoring of FIP keep-alive messages (if it is disabled) to detect if other FCoE devices are reachable: keepalive. Default: FIP keepalive monitoring is enabled. FCoE MAP mode.
7 Configure the time interval (in seconds) used to transmit FIP keepalive advertisements.
Applying an FCoE Map on Fabric-facing FC Ports The MXL 10/40GbE Switch and M I/O Aggregator, with the FC Flex IO module FC ports, are configured by default to operate in N port mode to connect to an F port on an FC switch in a fabric. You can apply only one FCoE map on an FC port. When you apply an FCoE map on a fabric-facing FC port, the FC port becomes part of the FCoE fabric, whose settings in the FCoE map are configured on the port and exported to downstream server CNA ports.
Dell(config-dcbx-name)# priority-group 4 strict-priority pfc off Dell(conf-dcbx-name)# priority-pgid 0 0 0 1 2 4 4 4 2. Apply the DCB map on a downstream (server-facing) Ethernet port: Dell(config)# interface tengigabitethernet 1/0 Dell(config-if-te-0/0)#dcb-map SAN_DCB_MAP 3. Create the dedicated VLAN to be used for FCoE traffic: Dell(conf)#interface vlan 1002 4.
Command Description NOTE: Although the show interface status command displays the Fiber Channel (FC) interfaces with the abbreviated label of 'Fc' in the output, if you attempt to specify a FC interface by using the interface fc command in the CLI interface, an error message is displayed. You must configure FC interfaces by using the interface fi command in CONFIGURATION mode. show fcoe-map [brief | mapname] Displays the Fibre Channel and FCoE configuration parameters in FCoE maps.
Table 21. show interfaces status Field Descriptions
Port: Server-facing 10GbE Ethernet (Te), 40GbE Ethernet (Fo), or fabric-facing Fibre Channel (Fc) port with slot/port information.
Description: Text description of the port.
Status: Operational status of the port. Ethernet ports: up (transmitting FCoE and LAN storage traffic) or down (not transmitting traffic).
Table 22. show fcoe-map Field Descriptions
Fabric-Name: Name of a SAN fabric.
Fabric ID: The ID number of the SAN fabric to which FC traffic is forwarded.
VLAN ID: The dedicated VLAN used to transport FCoE storage traffic between servers and a fabric over the NPG. The configured VLAN ID must be the same as the fabric ID.
VLAN priority: FCoE traffic uses VLAN priority 3. (This setting is not user-configurable.)
Table 23. show qos dcb-map Field Descriptions
State: Complete (all mandatory DCB parameters are correctly configured) or In progress (the DCB map configuration is not complete; some mandatory parameters are not configured).
PFC Mode: PFC configuration in the DCB map: On (enabled) or Off.
PG: Priority group configured in the DCB map.
TSA: Transmission scheduling algorithm used in the DCB map: Enhanced Transmission Selection (ETS).
Fabric-Map: Name of the FCoE map containing the FCoE/FC configuration parameters for the server CNA-fabric connection.
Login Method: Method used by the server CNA to log in to the fabric; for example:
FLOGI: the ENode logged in using a fabric login (FLOGI).
FDISC: the ENode logged in using a fabric discovery (FDISC).
FCF MAC: Fibre Channel forwarder MAC; the MAC address of the MXL 10/40GbE Switch or M I/O Aggregator with the FC Flex IO module FCF interface.
Fabric Intf: The fabric-facing MXL 10/40GbE Switch or M I/O Aggregator with the FC Flex IO module Fibre Channel port (slot/port) on which FCoE traffic is transmitted to the specified fabric.
Upgrade Procedures 23 To find the upgrade procedures, go to the Dell Networking OS Release Notes for your system type to see all the requirements needed to upgrade to the desired Dell Networking OS version. To upgrade your system type, follow the procedures in the Dell Networking OS Release Notes. Get Help with Upgrades Direct any questions or concerns about the Dell Networking OS upgrade procedures to the Dell Technical Support Center. You can reach Technical Support: • On the web: http://support.dell.
24 Debugging and Diagnostics This chapter contains the following sections:
• Debugging Aggregator Operation
• Software Show Commands
• Offline Diagnostics
• Trace Logs
• Show Hardware Commands
Debugging Aggregator Operation This section describes common troubleshooting procedures to use for error conditions that may arise during Aggregator operation. All interfaces on the Aggregator are operationally down This section describes how you can troubleshoot the scenario in which all the interfaces are down.
0/5(Up) 0/10(Up) Te 0/15(Up) Te 0/20(Dwn) Te 0/25(Dwn) Te 0/30(Dwn) 2. Te 0/6(Dwn) Te 0/7(Dwn) Te 0/8(Up) Te 0/9(Up) Te Te 0/11(Dwn) Te 0/12(Dwn) Te 0/13(Up) Te 0/14(Dwn) Te 0/16(Up) Te 0/17(Dwn) Te 0/18(Dwn) Te 0/19(Dwn) Te 0/21(Dwn) Te 0/22(Dwn) Te 0/23(Dwn) Te 0/24(Dwn) Te 0/26(Dwn) Te 0/27(Dwn) Te 0/28(Dwn) Te 0/29(Dwn) Te 0/31(Dwn) Te 0/32(Dwn) Verify that the downstream port channel in the top-of-rack switch that connect to the Aggregator is configured correctly.
x - Dot1x untagged, X - Dot1x tagged
G - GVRP tagged, M - Trunk, H - VSN tagged
i - Internal untagged, I - Internal tagged, v - VLT untagged, V - VLT tagged
Name: TenGigabitEthernet 0/1
802.1QTagged: Hybrid
SMUX port mode: Auto VLANs enabled
Vlan membership:
Q Vlans
U 1
T 2-4094
Native VlanId: 1
2. Assign the port to a specified group of VLANs (vlan tagged command) and re-display the port mode status.
Copyright (c) 1999-2014 by Dell Inc. All Rights Reserved. Build Time: Thu Jul 5 11:20:28 PDT 2012 Build Path: /sites/sjc/work/build/buildSpaces/build05/E8-3-17/SW/SRC/Cp_src/ Tacacs st-sjc-m1000e-3-72 uptime is 17 hour(s), 1 minute(s) System Type: I/O-Aggregator Control Processor: MIPS RMI XLP with 2147483648 bytes of memory. 256M bytes of boot flash memory. 1 34-port GE/TE (XL) 56 Ten GigabitEthernet/IEEE 802.
Offline Diagnostics The offline diagnostics test suite is useful for isolating faults and debugging hardware. The diagnostics tests are grouped into three levels:
• Level 0 — Level 0 diagnostics check for the presence of various components and perform essential path verifications. In addition, Level 0 diagnostics verify the identification registers of the components on the board.
• Level 1 — A smaller set of diagnostic tests.
the unit will be operationally down, except for running Diagnostics. Please make sure that stacking/fanout not configured for Diagnostics execution. Also reboot/online command is necessary for normal operation after the offline command is issued.
Proceed with Offline [confirm yes/no]:yes
Dell#

2. Confirm the offline status.
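Taken together, an offline diagnostics session might look like the following sketch. The stack-unit number and test level are illustrative, and exact diag keyword syntax can vary by release:

```
! Take the unit offline (all ports go down, except for running diagnostics)
Dell# offline stack-unit 0
Proceed with Offline [confirm yes/no]:yes
! Confirm the offline status
Dell# show system brief
! Run the Level 0 test set
Dell# diag stack-unit 0 level0
! Return the unit to normal operation after diagnostics complete
Dell# online stack-unit 0
```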
flash: 2143281152 bytes total (2069291008 bytes free)

Using the Show Hardware Commands

The show hardware command tree consists of commands used with the Aggregator switch. These commands display information from a hardware sub-component and from hardware-based feature tables.

NOTE: Use the show hardware commands only under the guidance of the Dell Technical Assistance Center.

• View internal interface status of the stack-unit CPU port which connects to the external management interface.
This view helps identify the stack unit, port pipe, and port that may experience internal drops.

• View the input and output statistics for a stack-port interface.
  EXEC Privilege mode
  show hardware stack-unit {0-5} stack-port {33–56}
• View the counters in the field processors of the stack unit.
  EXEC Privilege mode
  show hardware stack-unit {0-5} unit {0-0} counters
• View the details of the FP Devices and Hi gig ports on the stack-unit.
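As a quick reference, the commands listed above can be run together from EXEC Privilege mode. The stack-unit, stack-port, and unit numbers below are illustrative values from the documented ranges:

```
! Input and output statistics for one stack-port interface
Dell# show hardware stack-unit 0 stack-port 33
! Counters in the field processors of the stack unit
Dell# show hardware stack-unit 0 unit 0 counters
```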
SFP 49 BR Nominal
SFP 49 Length(9um) Km
SFP 49 Length(9um) 100m
SFP 49 Length(50um) 10m
SFP 49 Length(62.
When the system detects a genuine over-temperature condition, it powers off the card. To recognize this condition, look for the following system messages:

CHMGR-2-MAJOR_TEMP: Major alarm: chassis temperature high (temperature reaches or exceeds threshold of [value]C)
CHMGR-2-TEMP_SHUTDOWN_WARN: WARNING! temperature is [value]C; approaching shutdown threshold of [value]C

To view the programmed alarm threshold levels, including the shutdown value, use the show alarms threshold command.
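To check the current temperature against the programmed thresholds, the following EXEC-mode commands can be used. The show alarms threshold command is named in this section; show environment is assumed to be available on this platform:

```
! Display programmed alarm thresholds, including the shutdown value
Dell# show alarms threshold
! Display current temperature readings and fan status
Dell# show environment
```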
Recognize an Under-Voltage Condition If the system detects an under-voltage condition, it sends an alarm. To recognize this condition, look for the following system message: %CHMGR-1-CARD_SHUTDOWN: Major alarm: Line card 2 down - auto-shutdown due to under voltage. This message indicates that the specified card is not receiving enough power. In response, the system first shuts down Power over Ethernet (PoE).
Buffer Tuning

Buffer tuning allows you to modify the way your switch allocates buffers from its available memory and helps prevent packet drops during a temporary burst of traffic. The application-specific integrated circuits (ASICs) implement the key functions of queuing, feature lookups, and forwarding lookups in hardware.
Figure 33. Buffer Tuning Points Deciding to Tune Buffers Dell Networking recommends exercising caution when configuring any non-default buffer settings, as tuning can significantly affect system performance. The default values work for most cases. As a guideline, consider tuning buffers if traffic is bursty (and coming from several interfaces). In this case: • Reduce the dedicated buffer on all queues/interfaces. • Increase the dynamic buffer on all interfaces.
  BUFFER PROFILE mode
  buffer dedicated

• Change the maximum number of dynamic buffers an interface can request.
  BUFFER PROFILE mode
  buffer dynamic
• Change the number of packet-pointers per queue.
  BUFFER PROFILE mode
  buffer packet-pointers
• Apply the buffer profile to a CSF to FP link.
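A hedged sketch of a buffer-profile configuration built from the commands above. The profile name and all numeric values are illustrative, and the exact argument syntax for each subcommand may differ by release, so check the command reference before applying:

```
! Create a buffer profile for front-panel ports and enter BUFFER PROFILE mode
Dell(conf)# buffer-profile fp example-profile
! Reduce dedicated buffers per queue and raise the dynamic ceiling (values illustrative)
Dell(conf-buffer-profile)# buffer dedicated queue0 3 queue1 3 queue2 3 queue3 3
Dell(conf-buffer-profile)# buffer dynamic 1000
Dell(conf-buffer-profile)# buffer packet-pointers queue0 1000 queue1 1000
```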
Queue#    Dedicated Buffer (Kilobytes)
0         2.50
1         2.50
2         2.50
3         2.50
4         9.38
5         9.38
6         9.38
7         9.
Using a Pre-Defined Buffer Profile

The Dell Networking OS provides two pre-defined buffer profiles, one for single-queue (for example, non-quality-of-service [QoS]) applications and one for four-queue (for example, QoS) applications. You must reload the system for the global buffer profile to take effect; a message similar to the following displays:

% Info: For the global pre-defined buffer profile to take effect, please save the config and reload the system.
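Activating a pre-defined profile and making it take effect might look like the following sketch. The 1Q keyword for the single-queue profile is an assumption based on other Dell Networking OS releases and should be verified against the command reference:

```
! Apply the global single-queue pre-defined buffer profile (keyword assumed)
Dell(conf)# buffer-profile global 1Q
% Info: For the global pre-defined buffer profile to take effect, please save the config and reload the system.
! Save the configuration and reload so the profile takes effect
Dell# copy running-config startup-config
Dell# reload
```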
!
buffer fp-uplink stack-unit 0 port-set 0 buffer-policy fsqueue-hig
buffer fp-uplink stack-unit 0 port-set 1 buffer-policy fsqueue-hig
!
Interface range gi 0/1 - 48
buffer-policy fsqueue-fp

Dell#show run int Tengig 0/10
!
interface TenGigabitEthernet 0/10
no ip address

Troubleshooting Packet Loss

The show hardware stack-unit command is intended primarily to troubleshoot packet loss. To troubleshoot packet loss, use the following commands.
Dell#show hardware stack-unit 0 drops unit 0
Port#  :Ingress Drops  :IngMac Drops  :Total Mmu Drops  :EgMac Drops  :Egress Drops
1      0               0              0                 0             0
2      0               0              0                 0             0
3      0               0              0                 0             0
4      0               0              0                 0             0
5      0               0              0                 0             0
6      0               0              0                 0             0
7      0               0              0                 0             0
8      0               0              0                 0             0

Dell#show hardware stack-unit 0 drops unit 0 port 1
--- Ingress Drops ---
Ingress Drops            : 30
IBP CBP Full Drops       : 0
PortSTPnotFwd Drops      : 0
IPv4 L3 Discards         : 0
Policy Discards          : 0
Packets dropped by FP    : 14
(L2+L3) Drops            : 0
Port bitmap zero Drops   : 16
Rx VLAN Drops            : 0
--- Ingress MAC coun
noMbuf            : 0
noClus            : 0
recvd             : 0
dropped           : 0
recvToNet         : 0
rxError           : 0
rxDatapathErr     : 0
rxPkt(COS0)       : 0
rxPkt(COS1)       : 0
rxPkt(COS2)       : 0
rxPkt(COS3)       : 0
rxPkt(COS4)       : 0
rxPkt(COS5)       : 0
rxPkt(COS6)       : 0
rxPkt(COS7)       : 0
rxPkt(UNIT0)      : 0
rxPkt(UNIT1)      : 0
rxPkt(UNIT2)      : 0
rxPkt(UNIT3)      : 0
transmitted       : 0
txRequested       : 0
noTxDesc          : 0
txError           : 0
txReqTooLarge     : 0
txInternalError   : 0
txDatapathErr     : 0
txPkt(COS0)       : 0
txPkt(COS1)       : 0
txPkt(COS2)       : 0
txPkt(COS3)       : 0
txPkt(COS4)       : 0
txPkt(COS5)       : 0
txPkt(COS6)       : 0
txPkt(COS7)       : 0
txPkt(UNIT0)      : 0
The
0 runts, 0 giants, 0 throttles
0 CRC, 0 overrun, 0 discarded
Output Statistics:
1649714 packets, 1948622676 bytes, 0 underruns
0 64-byte pkts, 27234 over 64-byte pkts, 107970 over 127-byte pkts
34 over 255-byte pkts, 504838 over 511-byte pkts, 1009638 over 1023-byte pkts
0 Multicasts, 0 Broadcasts, 1649714 Unicasts
0 throttles, 0 discarded, 0 collisions
Rate info (interval 45 seconds):
Input 00.00 Mbits/sec, 2 packets/sec, 0.00% of line-rate
Output 00.06 Mbits/sec, 8 packets/sec, 0.
--- Ingress MAC counters ---
Ingress FCSDrops         : 0
Ingress MTUExceeds       : 0

--- MMU Drops ---
HOL DROPS                : 0
TxPurge CellErr          : 0
Aged Drops               : 0

--- Egress MAC counters ---
Egress FCS Drops         : 0

--- Egress FORWARD PROCESSOR Drops ---
IPv4 L3UC Aged & Drops   : 0
TTL Threshold Drops      : 0
INVALID VLAN CNTR Drops  : 0
L2MC Drops               : 0
PKT Drops of ANY Conditions : 0
Hg MacUnderflow          : 0
TX Err PKT Counter       : 0

Restoring the Factory Default Settings

Restoring factory defaults deletes the existing NVRAM settings, startup configura
Power-cycling the unit(s). ....
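The factory-restore step above is typically driven by a single EXEC Privilege command; the exact keyword set shown here (stack-unit number and the clear-all option) is illustrative and varies by release, so verify it against the command reference before use:

```
! Delete NVRAM settings and the startup configuration, then power-cycle the unit
Dell# restore factory-defaults stack-unit 0 clear-all
```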
25 Standards Compliance

This chapter describes standards compliance for Dell Networking products.

NOTE: Unless noted, when a standard cited here is listed as supported by the Dell Networking Operating System (OS), the system also supports predecessor standards. One way to search for predecessor standards is to use the http://tools.ietf.org/ website. Click "Browse and search IETF documents," enter an RFC number, and inspect the top of the resulting document for obsolescence citations to related RFCs.
General Internet Protocols The following table lists the Dell Networking OS support per platform for general internet protocols. Table 29.
RFC#   Full Name
1519   Classless Inter-Domain Routing (CIDR): an Address Assignment and Aggregation Strategy
1542   Clarifications and Extensions for the Bootstrap Protocol
1812   Requirements for IP Version 4 Routers
2131   Dynamic Host Configuration Protocol
2338   Virtual Router Redundancy Protocol (VRRP)
3021   Using 31-Bit Prefixes on IPv4 Point-to-Point Links
3046   DHCP Relay Agent Information Option
3069   VLAN Aggregation for Efficient IP Address Allocation
3128   Protection Against a Variant of th
RFC#   Full Name
2571   An Architecture for Describing Simple Network Management Protocol (SNMP) Management Frameworks
2572   Message Processing and Dispatching for the Simple Network Management Protocol (SNMP)
2574   User-based Security Model (USM) for version 3 of the Simple Network Management Protocol (SNMPv3)
2575   View-based Access Control Model (VACM) for the Simple Network Management Protocol (SNMP)
2576   Coexistence Between Version 1, Version 2, and Version 3 of the Internet-standard Network Manage
RFC#   Full Name
3418   Management Information Base (MIB) for the Simple Network Management Protocol (SNMP)
3434   Remote Monitoring MIB Extensions for High Capacity Alarms, High-Capacity Alarm Table (64 bits)
ANSI/TIA-1057   The LLDP Management Information Base extension module for TIA-TR41.4 Media Endpoint Discovery information
draft-grant-tacacs-02   The TACACS+ Protocol
IEEE 802.
RFC#   Full Name
IEEE 802.1Qaz   Management Information Base extension module for IEEE 802.1 organizationally defined discovery information (LLDP-EXT-DOT1-DCBX-MIB)
IEEE 802.1Qbb   Priority-based Flow Control module for managing IEEE 802.1Qbb

MIB Location

You can find Force10 MIBs under the Force10 MIBs subhead on the Documentation page of iSupport: https://www.force10networks.com/csportal20/KnowledgeBase/Documentation.