Dell PowerEdge Configuration Guide for the M I/O Aggregator 9.10(0.0)
Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your computer.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2016 Dell Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws.
Contents

1 About this Guide
  Audience
  Conventions
4 Configuration Cloning
  Cloning Configuration Output Status
5 Data Center Bridging (DCB)
  Supported Modes
7 FIP Snooping
  Supported Modes
  Fibre Channel over Ethernet
VLANs
  Configuring VLAN Membership
  Displaying VLAN Membership
  Adding an Interface to a Tagged VLAN
  Adding an Interface to an Untagged VLAN
Link Aggregation
  LACP Modes
  Auto-Configured LACP Timeout
  LACP Example
Link Layer Discovery Protocol (LLDP)
  Extended Power via MDI TLV
  LLDP Operation
  Viewing the LLDP Configuration
Security
  Troubleshooting SSH
  Telnet
  VTY Line and Access-Class Configuration
Stacking
  Configuring Priority and stack-group
  Cabling Stacked Switches
  Accessing the CLI
24 PMUX Mode of the IO Aggregator
  I/O Aggregator (IOA) Programmable MUX (PMUX) Mode
  Configuring and Changing to PMUX Mode
  Configuring the Commands without a Separate User Account
Configuring an NPIV Proxy Gateway
  Enabling Fibre Channel Capability on the Switch
  Creating a DCB Map
  Important Points to Remember
30 Standards Compliance
  IEEE Compliance
  RFC and I-D Compliance
1 About this Guide

This guide describes the supported protocols and software features, and provides configuration instructions and examples, for the Dell Networking M I/O Aggregator running Dell Networking OS version 9.7(0.0). The M I/O Aggregator is installed in a Dell PowerEdge M1000e Enclosure. For information about how to install and perform the initial switch configuration, refer to the Getting Started Guides on the Dell Support website at http://www.dell.
* (Exception). This symbol is a note associated with additional text on the page that is marked with an asterisk.
2 Before You Start

To install the Aggregator in a Dell PowerEdge M1000e Enclosure, use the instructions in the Dell PowerEdge M I/O Aggregator Getting Started Guide that is shipped with the product. The I/O Aggregator (also known as the Aggregator) installs with zero-touch configuration. After you power it on, the Aggregator boots up with default settings and auto-configures with software features enabled.
Stacking mode

Select this mode to configure Stacking mode CLI commands.

CONFIGURATION mode

stack-unit unit iom-mode stack

Example:
Dell(conf)#stack-unit 0 iom-mode stack

For more information on the Stacking mode, see Stacking.
• Internet group management protocol (IGMP) snooping.
• Jumbo frames: Ports are set to a maximum MTU of 12,000 bytes by default.
• Link tracking: Uplink-state group 1 is automatically configured. In uplink-state group 1, server-facing ports auto-configure as downstream interfaces; the uplink port-channel (LAG 128) auto-configures as an upstream interface. Server-facing links are auto-configured to be brought up only if the uplink port-channel is up.
10 seconds by default. If you have configured a VLAN, you can reduce the defer time by changing the defer-timer value, or remove it by using the no defer-timer command. NOTE: If installed servers do not have connectivity to a switch, check the Link Status LEDs of the uplink ports on the Aggregator. If all LEDs are on, check the LACP configuration on the ToR switch that is connected to the Aggregator to ensure that LACP is correctly configured.
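A minimal sketch of adjusting the defer timer follows. The command context is an assumption (the timer is assumed to be attached to the auto-configured uplink-state-group 1); the timer value is illustrative:

Dell#configure
Dell(conf)#uplink-state-group 1
Dell(conf-uplink-state-group-1)#defer-timer 5
! or remove the defer timer entirely:
Dell(conf-uplink-state-group-1)#no defer-timer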
Deploying FN I/O Module This section provides design and configuration guidance for deploying the Dell PowerEdge FN I/O Module (FN IOM). By default the FN IOM is in Standalone Mode.
3 Verify the connection. By default, the network ports on the PowerEdge FC-Series servers installed in the FX2 chassis are down until the uplink port channel is operational on the FN IOM system. This is due to Uplink Failure Detection: when upstream connectivity fails, the FN IOM disables the downstream links.
Mode of IPv4 Address Assignment : NONE
DHCP Client-ID :f8b1566efc59
MTU 12000 bytes, IP MTU 11982 bytes
LineSpeed auto
Flowcontrol rx off tx off
ARP type: ARPA, ARP Timeout 04:00:00
Last clearing of "show interface" counters 01:26:42
Queueing strategy: fifo
Input Statistics:
     941 packets, 98777 bytes
     83 64-byte pkts, 591 over 64-byte pkts, 267 over 127-byte pkts
     0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts
     694 Multicasts, 247 Broadcasts, 0 Unicasts
     0 runts, 0 giants, 0 throttles
     0 CRC, 0
By default on the FN IOM, the external Ethernet ports are preconfigured in port channel 128 with LACP enabled. Port channel 128 is in hybrid (trunk) mode. To bring up the downstream (server) ports on the FN IOM, port channel 128 must be up. Port channel 128 comes up when it is connected to a correctly configured port channel on an upstream switch. To bring up port channel 128, connect any combination of the FN IOM's external Ethernet ports (ports TenGigabitethernet 0/9-12) to the upstream switch.
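Before checking the server-facing ports, it can help to confirm that the uplink LAG itself is operational. A quick status check might look like the following sketch (the output columns shown vary by release and are abbreviated here):

Dell#show interfaces port-channel 128 brief
  LAG  Mode  Status  Uptime  Ports
  128  L2    up      ...     Te 0/9 (Up) Te 0/10 (Up)

If the LAG reports down, verify the LACP configuration on the upstream switch before troubleshooting the downstream links.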
3 Configuration Fundamentals The Dell Networking Operating System (OS) command line interface (CLI) is a text-based interface you can use to configure interfaces and protocols. The CLI is structured in modes for security and management purposes. Different sets of commands are available in each mode, and you can limit user access to modes using privilege levels. In Dell Networking OS, after you enable a command, it is entered into the running configuration file.
CLI Modes

Different sets of commands are available in each mode. A command found in one mode cannot be executed from another mode, except for EXEC mode commands preceded by the do command (refer to the do Command section). The Dell Networking OS CLI is divided into three major mode levels:

• EXEC mode is the default mode and has a privilege level of 1, which is the most restricted level.
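For example, an EXEC-level show command can be run from CONFIGURATION mode by prefixing it with do (the command shown is illustrative):

Dell(conf)#do show clock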
CLI Command Mode: CONFIGURATION
Prompt: Dell(conf)#
Access Command: From EXEC privilege mode, enter the configure command.
To leave the mode: From every mode except EXEC and EXEC Privilege, enter the exit command to move up one mode; from any other mode, use the end command to return directly to EXEC Privilege mode.

NOTE: Access all of the following modes from CONFIGURATION mode.
Example of Viewing Disabled Commands

Dell(conf)# interface managementethernet 0/0
Dell(conf-if-ma-0/0)# ip address 192.168.5.6/16
Dell(conf-if-ma-0/0)#
Dell(conf-if-ma-0/0)#show config
!
interface ManagementEthernet 0/0
 ip address 192.168.5.
Short-Cut Key Combination: Action
CNTL-A: Moves the cursor to the beginning of the command line.
CNTL-B: Moves the cursor back one character.
CNTL-D: Deletes the character at the cursor.
CNTL-E: Moves the cursor to the end of the line.
CNTL-F: Moves the cursor forward one character.
CNTL-I: Completes a keyword.
CNTL-K: Deletes all characters from the cursor to the end of the command line.
CNTL-L: Re-enters the previous command.
0 Pause Tx pkts, 0 Pause Rx pkts
0 Pause Tx pkts, 0 Pause Rx pkts
0 Pause Tx pkts, 0 Pause Rx pkts
0 Pause Tx pkts, 0 Pause Rx pkts
0 Pause Tx pkts, 0 Pause Rx pkts
0 Pause Tx pkts, 0 Pause Rx pkts

NOTE: Dell accepts a space or no space before and after the pipe. To filter a phrase with spaces, underscores, or ranges, enclose the phrase with double quotation marks.

The except keyword displays text that does not match the specified text.
Configuring a Unique Host Name on the System While you can manually configure a host name for the system, you can also configure the system to have a unique host name. The unique host name is a combination of the platform type and the serial number of the system. The unique host name appears in the command prompt. The running configuration gets updated with the feature unique-name command. It also overwrites any existing host name configured on the system using the hostname command.
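A minimal session to enable this behavior might look like the following sketch. The feature unique-name command is taken from the description above; the resulting prompt is illustrative:

Dell#configure
Dell(conf)#feature unique-name

After the command is applied, the prompt reflects the platform type and serial number rather than any host name set with the hostname command.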
4 Configuration Cloning

Configuration Cloning enables you to clone the configuration from one Aggregator to one or more Aggregators. On the source Aggregator, the running configuration is check-pointed, extracted, and then downloaded to the target Aggregator for further use. The target Aggregator checks the compatibility of the cloning file based on the version, mode, and optional modules.
• Cloning detailed status displays a string that gives a detailed description of the cloning status. When multiple error or warning messages are present, they are separated by the ; delimiter.
• Cloning status codes are useful when there are multiple warning or failure messages. Each warning or failure message is given a code number; this status can list the message codes, which can be decoded when the cloning status string cannot accommodate all the errors and warnings.
Cloning state (captured in command output): Warning
Cloning status (captured in command output): Minor release version mismatch
Applicability: Target

If the compatibility check passes, the target Aggregator strips the cloning header and proceeds to parse the actual configuration in the cloning file. It goes through the configuration command by command and checks whether any command or feature requires a reboot.
5 Data Center Bridging (DCB) On an I/O Aggregator, data center bridging (DCB) features are auto-configured in standalone mode. You can display information on DCB operation by using show commands. NOTE: DCB features are not supported on an Aggregator in stacking mode.
• LAN traffic consists of a large number of flows that are generally insensitive to latency requirements, while certain applications, such as streaming video, are more sensitive to latency. Ethernet functions as a best-effort network that may drop packets in case of network congestion. IP networks rely on transport protocols (for example, TCP) for reliable data transmission with the associated cost of greater processing overhead and performance impact.
• By default, PFC is enabled when you enable DCB. When you enable DCB globally, you cannot simultaneously enable TX and RX flow control on the interface, and link-level flow control is disabled.
• Buffer space is allocated and de-allocated only when you configure a PFC priority on the port.
• PFC delay constraints place an upper limit on the transmit time of a queue after receiving a message to pause a specified priority.
The following table lists the traffic groupings ETS uses to select multiprotocol traffic for transmission.

Table 4. ETS Traffic Groupings

Priority group: A group of 802.1p priorities used for bandwidth allocation and queue scheduling. All 802.1p priority traffic in a group must have the same traffic handling requirements for latency and frame loss.
Group ID: A 4-bit identifier assigned to each priority group. The range is from 0 to 7.
Data Center Bridging in a Traffic Flow The following figure shows how DCB handles a traffic flow on an interface. Figure 3. DCB PFC and ETS Traffic Handling Enabling Data Center Bridging DCB is automatically configured when you configure FCoE or iSCSI optimization. Data center bridging supports converged enhanced Ethernet (CEE) in a data center network. DCB is disabled by default. It must be enabled to support CEE.
CONFIGURATION mode

dcb stack-unit all pfc-buffering pfc-ports 64 pfc-queues 2

NOTE: To save the PFC buffering configuration changes, save the configuration and reboot the system.

NOTE: Dell Networking OS Behavior: DCB is not supported if you enable link-level flow control on one or more interfaces. For more information, refer to Flow Control Using Ethernet Pause Frames.
Important Points to Remember

• If you remove a dot1p priority-to-priority group mapping from a DCB map (no priority pgid command), the PFC and ETS parameters revert to their default values on the interfaces on which the DCB map is applied. By default, PFC is not applied on specific 802.1p priorities; ETS assigns equal bandwidth to each 802.1p priority. As a result, PFC and lossless port queues are disabled on 802.1p priorities.
INTERFACE mode

pfc priority priority-range

The maximum number of lossless queues supported on an Ethernet port is 2. Separate priority values with a comma. Specify a priority range with a dash, for example: pfc priority 3,5-7.

You cannot configure PFC using the pfc priority command on an interface on which a DCB map has been applied or which is already configured for lossless queues (pfc no-drop queues command).
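A minimal interface-level session using the command form above might look like this (the interface is illustrative, and the command is accepted only on interfaces that have no DCB map or no-drop queue configuration applied, as noted above):

Dell(conf)#interface tengigabitethernet 0/7
Dell(conf-if-te-0/7)#pfc priority 3,5-7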
pfc no-drop queues queue-range

The maximum number of lossless queues globally supported on a port is 2. You cannot configure PFC no-drop queues on an interface on which a DCB map with PFC enabled has been applied, or which is already configured for PFC using the pfc priority command.

Data Center Bridging: Default Configuration

Before you configure PFC and ETS on a switch, take the following default settings into account: DCB is enabled.
interface TenGigabitEthernet 0/4
 mtu 12000
 portmode hybrid
 switchport
 auto vlan
 flowcontrol rx on tx off
 dcb-map DCB_MAP_PFC_OFF
!
 protocol lldp
  advertise management-tlv management-address system-name
  dcbx port-role auto-downstream
 no shutdown
Dell#

When DCB is Enabled

When an interface receives a DCBx protocol packet, it automatically enables DCB and disables link-level flow control. The dcb-map and flow control configurations are removed as shown in the following example.
Enabling DCB

To configure the Aggregator so that all interfaces are DCB-enabled and flow control is disabled, use the dcb enable command.

Disabling DCB

To configure the Aggregator so that all interfaces are DCB-disabled and flow control is enabled, use the no dcb enable command.

dcb enable auto-detect on-next-reload Command Example

Dell#dcb enable auto-detect on-next-reload

Configuring Priority-Based Flow Control

PFC provides a flow control mechanism based on the 802.1p priorities.
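Taken together, the enable and disable commands named above can be issued from CONFIGURATION mode as in the following sketch:

Dell#configure
Dell(conf)#dcb enable
! later, to disable DCB and re-enable flow control on all interfaces:
Dell(conf)#no dcb enable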
DCB INPUT POLICY mode

pfc mode on

The default is PFC mode on.

5 (Optional) Enter a text description of the input policy.

DCB INPUT POLICY mode

description text

The maximum is 32 characters.

6 Exit DCB input policy configuration mode.

DCB INPUT POLICY mode

exit

7 Enter interface configuration mode.

CONFIGURATION mode

interface type slot/port

8 Apply the input policy with the PFC configuration to an ingress interface.
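Pulling the steps above together, a sketch of the full sequence might look like this. The policy name pfc-in is illustrative, and the dcb-input and dcb-policy input command forms are assumptions; only the pfc mode on, description, exit, and interface steps are taken directly from the procedure:

Dell(conf)#dcb-input pfc-in
Dell(conf-dcb-in)#pfc mode on
Dell(conf-dcb-in)#description server_lossless_policy
Dell(conf-dcb-in)#exit
Dell(conf)#interface tengigabitethernet 0/3
Dell(conf-if-te-0/3)#dcb-policy input pfc-in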
• In a switch stack, configure all stacked ports with the same PFC configuration. A DCB input policy for PFC applied to an interface may become invalid if you reconfigure dot1p-queue mapping. This situation occurs when the new dot1p-queue assignment exceeds the maximum number (2) of lossless queues supported globally on the switch. In this case, all PFC configurations received from PFC-enabled peers are removed and resynchronized with the peer devices.
For example, storage traffic is sensitive to frame loss; interprocess communication (IPC) traffic is latency-sensitive. ETS allows different traffic types to coexist without interruption in the same converged link.

NOTE: The IEEE 802.1Qaz, CEE, and CIN versions of ETS are supported.

ETS is implemented on an Aggregator as follows:

• Traffic in priority groups is assigned to strict-queue or WERR scheduling in a dcb-map and is managed using the ETS bandwidth-assignment algorithm.
Hierarchical Scheduling in ETS Output Policies

ETS supports up to three levels of hierarchical scheduling. For example, you can apply ETS output policies with the following configurations:

Priority group 1: Assigns traffic to one priority queue with 20% of the link bandwidth and strict-priority scheduling.
Priority group 2: Assigns traffic to one priority queue with 30% of the link bandwidth.
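As a sketch, a dcb-map carrying two bandwidth-allocated priority groups like those above could look like the following. The map name and the priority-to-group mapping are assumptions; the priority-group and priority-pgid command forms follow the dcb-map commands referenced elsewhere in this chapter:

Dell(conf)#dcb-map ETS_DEMO
Dell(conf-dcbmap-ETS_DEMO)#priority-group 1 bandwidth 20 pfc on
Dell(conf-dcbmap-ETS_DEMO)#priority-group 2 bandwidth 30 pfc off
Dell(conf-dcbmap-ETS_DEMO)#priority-pgid 2 2 2 1 2 2 2 2

Here dot1p priority 3 is mapped to priority group 1 (lossless) and all other priorities to priority group 2, as an illustration only.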
DCBx Port Roles

The following DCBx port roles are auto-configured on an Aggregator to propagate DCB configurations learned from peer DCBx devices internally to other switch ports:

Auto-upstream: The port advertises its own configuration to DCBx peers and receives its configuration from DCBx peers (ToR or FCF device). The port also propagates its configuration to other ports on the switch. The first auto-upstream port that is capable of receiving a peer configuration is elected as the configuration source.
NOTE: On a DCBx port, application priority TLV advertisements are handled as follows:

• The application priority TLV is transmitted only if the priorities in the advertisement match the configured PFC priorities on the port.
• On auto-upstream and auto-downstream ports:
  • If a configuration source is elected, the ports send an application priority TLV based on the application priority TLV received on the configuration-source port.
Propagation of DCB Information When an auto-upstream or auto-downstream port receives a DCB configuration from a peer, the port acts as a DCBx client and checks if a DCBx configuration source exists on the switch. • If a configuration source is found, the received configuration is checked against the currently configured values that are internally propagated by the configuration source.
Figure 4. DCBx Sample Topology DCBx Prerequisites and Restrictions The following prerequisites and restrictions apply when you configure DCBx operation on a port: • DCBx requires LLDP in both send (TX) and receive (RX) modes to be enabled on a port interface. If multiple DCBx peer ports are detected on a local DCBx interface, LLDP is shut down.
DCBx Error Messages

The following syslog messages appear when an error in DCBx operation occurs.

LLDP_MULTIPLE_PEER_DETECTED: DCBx is operationally disabled after detecting more than one DCBx peer on the port interface.
LLDP_PEER_AGE_OUT: DCBx is disabled as a result of LLDP timing out on a DCBx peer interface.
DSM_DCBx_PEER_VERSION_CONFLICT: A local port expected to receive the IEEE, CIN, or CEE version in a DCBx TLV from a remote peer but received a different, conflicting DCBx version.
Command: show interface port-type slot/port pfc {summary | detail}
Output: Displays the PFC configuration applied to ingress traffic on an interface, including priorities and link delay. To clear PFC TLV counters, use the clear pfc counters {stack-unit unit-number | tengigabitethernet slot/port} command.

Command: show interface port-type slot/port ets {summary | detail}
Output: Displays the ETS configuration applied to egress traffic on an interface, including priority groups with priorities and bandwidth allocation.
Admin mode is on
Admin is enabled
Remote is enabled, Priority list is 4
Remote Willing Status is enabled
Local is enabled
Oper status is Recommended
PFC DCBx Oper status is Up
State Machine Type is Feature
TLV Tx Status is enabled
PFC Link Delay 45556 pause quantams
Application Priority TLV Parameters :
--------------------------------------
FCOE TLV Tx Status is disabled
ISCSI TLV Tx Status is disabled
Local FCOE PriorityMap is 0x8
Local ISCSI PriorityMap is 0x10
Remote FCOE PriorityMap is 0x8
Remote ISCSI P
Fields and Descriptions

PFC DCBx Oper status: Operational status for the exchange of the PFC configuration on the local port: match (up) or mismatch (down).
State Machine Type: Type of state machine used for DCBx exchanges of PFC parameters: Feature (for legacy DCBx versions) or Symmetric (for an IEEE version).
TLV Tx Status: Status of PFC TLV advertisements: enabled or disabled.
PFC Link Delay: Link delay (in quanta) used to pause specified priority traffic.
Example of the show interface ets summary Command

Dell# show interfaces te 0/0 ets summary
Interface TenGigabitEthernet 0/0
Max Supported TC Groups is 4
Number of Traffic Classes is 8
Admin mode is on
Admin Parameters :
------------------
Admin is enabled
TC-grp  Priority#        Bandwidth
0       0,1,2,3,4,5,6,7  100%
1                        0%
2                        0%
3                        0%
4                        0%
5                        0%
6                        0%
7                        0%
Priority#  Bandwidth  TSA
0
1
2
3
4
5
6
7
Remote Parameters:
------------------
Remote is disabled
Local Parameters :
------------------
Local is enabled
TC-grp  Priority#
0       0,1,2
Admin is enabled
TC-grp  Priority#        Bandwidth  TSA
0       0,1,2,3,4,5,6,7  100%       ETS
1                        0%         ETS
2                        0%         ETS
3                        0%         ETS
4                        0%         ETS
5                        0%         ETS
6                        0%         ETS
7                        0%         ETS
Remote Parameters:
------------------
Remote is disabled
Local Parameters :
------------------
Local is enabled
PG-grp  Priority#        Bandwidth  TSA
0       0,1,2,3,4,5,6,7  100%       ETS
1                        0%         ETS
2                        0%         ETS
3                        0%         ETS
4                        0%         ETS
5                        0%         ETS
6                        0%         ETS
7                        0%         ETS
Oper status is init
ETS DCBX Oper status is Down
State Machine Type is Asymmetric
Conf TLV Tx Status is enabled
Reco TLV Tx Status is enab
Field and Description

• Internally propagated: ETS configuration parameters were received from the configuration source.
ETS DCBx Oper status: Operational status of the ETS configuration on the local port: match or mismatch.
State Machine Type: Type of state machine used for DCBx exchanges of ETS parameters: Feature (for legacy DCBx versions) or Asymmetric (for an IEEE version).
Conf TLV Tx Status: Status of ETS Configuration TLV advertisements: enabled or disabled.
Stack unit 1 stack port all
Max Supported TC Groups is 4
Number of Traffic Classes is 1
Admin mode is on
Admin Parameters:
-----------------
Admin is enabled
TC-grp  Priority#        Bandwidth  TSA
------------------------------------------------
0       0,1,2,3,4,5,6,7  100%       ETS
1
2
3
4
5
6
7
8

Example of the show interface DCBx detail Command

Dell# show interface tengigabitethernet 0/4 dcbx detail
Dell#show interface te 0/4 dcbx detail
E-ETS Configuration TLV enabled
e-ETS Configuration TLV disabled
R-ETS Recommendation T
Field and Description

Interface: Interface type with chassis slot and port number.
Port-Role: Configured DCBx port role: auto-upstream or auto-downstream.
DCBx Operational Status: Operational status (enabled or disabled) used to elect a configuration source and internally propagate a DCB configuration. The DCBx operational status is the combination of the PFC and ETS operational status.
Field and Description

Total DCBx Frames unrecognized: Number of unrecognizable DCBx frames received.
PFC TLV Statistics: Input PFC TLV pkts: Number of PFC TLVs received.
PFC TLV Statistics: Output PFC TLV pkts: Number of PFC TLVs transmitted.
PFC TLV Statistics: Error PFC pkts: Number of PFC error packets received.
PFC TLV Statistics: PFC Pause Tx pkts: Number of PFC pause frames transmitted.
PFC TLV Statistics: PFC Pause Rx pkts: Number of PFC pause frames received.
NOTE: Dell Networking does not recommend mapping all ingress traffic to a single queue when using PFC and ETS. However, Dell Networking does recommend using Ingress traffic classification using the service-class dynamic dot1p command (honor dot1p) on all DCB-enabled interfaces. If you use L2 class maps to map dot1p priority traffic to egress queues, take into account the default dot1p-queue assignments in the following table and the maximum number of two lossless queues supported on a port.
Reason and Description

Admin Mode OFF (Remote): Remote Admin Mode is disabled.
Waiting for ACK from Peer: For a legacy DCBx version, a peer has not acknowledged the reception of a sent packet. This reason displays only when a remote peer is willing to receive a DCB configuration.
Error Bit set: For a legacy DCBx version, a peer has sent packets with an error bit set. This reason displays only when a remote peer is willing to receive a DCB configuration.
3 Configure the number of PFC queues.

CONFIGURATION mode

Dell(conf)#dcb enable pfc-queues 4

The number of ports supported depends on the buffer available for the configured number of lossless queues. For each priority, you can specify the shared buffer threshold limit, the ingress buffer size, the buffer limit for pausing the acceptance of packets, and the buffer offset limit for resuming the acceptance of received packets.
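Combining this step with the stack-unit buffering command shown earlier in this chapter, a sketch of a PFC buffering setup could look like the following. The port and queue counts are illustrative only, and the buffering change requires a configuration save and a reload to take effect:

Dell(conf)#dcb enable pfc-queues 4
Dell(conf)#dcb stack-unit all pfc-buffering pfc-ports 64 pfc-queues 4
Dell(conf)#exit
Dell#copy running-config startup-config
Dell#reload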
6 Dynamic Host Configuration Protocol (DHCP) The Aggregator is auto-configured to operate as a dynamic host configuration protocol (DHCP) client. The DHCP server, DHCP relay agent, and secure DHCP features are not supported. The DHCP is an application layer protocol that dynamically assigns IP addresses and other configuration parameters to network end-stations (hosts) based on configuration policies determined by network administrators.
4 After receiving a DHCPREQUEST, the server binds the client's unique identifier (the hardware address plus IP address) to the accepted configuration parameters and stores the data in a database called a binding table. The server then broadcasts a DHCPACK message, which signals to the client that it may begin using the assigned parameters. There are additional messages that are used in case the DHCP negotiation deviates from the process previously described and shown in the illustration below.
Debugging DHCP Client Operation

To enable debug messages for DHCP client operation, enter the following debug commands:

• Enable the display of log messages for all DHCP packets sent and received on DHCP client interfaces.
  EXEC Privilege mode
  [no] debug ip dhcp client packets [interface type slot/port]
• Enable the display of log messages for the following events on DHCP client interfaces: IP address acquisition, IP address release, renewal of IP address and lease time, and release of an IP address.
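For example, to watch DHCP packet activity on the management interface only (the interface name is illustrative):

Dell#debug ip dhcp client packets interface managementethernet 0/0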
1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_PKT: DHCP RELEASE sent in Interface Ma 0/0
1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :Transitioned to state STOPPED
1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :DHCP IP RELEASED CMD sent to FTOS in state STOPPED
Dell# renew dhcp int Ma 0/0
Dell#1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :DHCP RENEW CMD Received in state S
You can also manually configure an IP address for the VLAN 1 default management interface using the CLI. If no user-configured IP address exists for the default VLAN management interface and the default VLAN IP address is not in the startup configuration, the Aggregator automatically obtains one using DHCP.

• The default VLAN 1, with all ports configured as members, is the only L3 interface on the Aggregator.
DHCP Client on a VLAN The following conditions apply on a VLAN that operates as a DHCP client: • The default VLAN 1 with all ports auto-configured as members is the only L3 interface on the Aggregator. • When the default management VLAN has a DHCP-assigned address and you reconfigure the default VLAN ID number, the Aggregator: • Sends a DHCP release to the DHCP server to release the IP address. • Sends a DHCP request to obtain a new IP address.
Option Number and Description

DHCP Message Type (Option 53) values:
• 3: DHCPREQUEST
• 4: DHCPDECLINE
• 5: DHCPACK
• 6: DHCPNACK
• 7: DHCPRELEASE
• 8: DHCPINFORM

Parameter Request List (Option 55): Clients use this option to tell the server which parameters they require. It is a series of octets where each octet is a DHCP option code.
Renewal Time (Option 58): Specifies the amount of time after the IP address is granted that the client attempts to renew its lease with the original server.
For routers between the relay agent and the DHCP server, enter the trust-downstream option.

Releasing and Renewing DHCP-based IP Addresses

On an Aggregator configured as a DHCP client, you can release a dynamically assigned IP address without removing the DHCP client operation on the interface. To manually acquire a new IP address from the DHCP server, use the following command.

• Release a dynamically acquired IP address while retaining the DHCP client configuration on the interface.
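As shown in the debug output earlier in this chapter, the release and renew commands are issued from EXEC Privilege mode. A typical sequence on the management interface (interface name illustrative) might be:

Dell#release dhcp interface managementethernet 0/0
Dell#renew dhcp interface managementethernet 0/0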
                                    Renew Time
                                    ==========
Vl 1   10.1.1.254/24   0.0.0.0      ----NA----
                                    08-26-2011 16:21:50
10.1.1.
7 FIP Snooping

This chapter describes FIP snooping concepts and configuration procedures.
FIP enables FCoE devices to discover one another, initialize and maintain virtual links over an Ethernet network, and access storage devices in a storage area network. FIP satisfies the Fibre Channel requirement for point-to-point connections by creating a unique virtual link for each connection between an FCoE end-device and an FCF via a transit switch. FIP provides functionality for discovering and logging in to an FCF.
FIP Snooping on Ethernet Bridges In a converged Ethernet network, intermediate Ethernet bridges can snoop on FIP packets during the login process on an FCF. Then, using ACLs, a transit bridge can permit only authorized FCoE traffic to be transmitted between an FCoE end-device and an FCF. An Ethernet bridge that provides these functions is called a FIP snooping bridge (FSB). On a FIP snooping bridge, ACLs are created dynamically as FIP login frames are processed.
Figure 8. FIP Snooping on an Aggregator

The following sections describe how to configure the FIP snooping feature on a switch that functions as a FIP snooping bridge so that it can perform the following functions:
• Perform FIP snooping (allowing and parsing FIP frames) globally on all VLANs or on a per-VLAN basis.
• Set the FCoE MAC address prefix (FC-MAP) value used by an FCF to assign a MAC address to an FCoE end-device (server ENode or storage device) after a server successfully logs in.
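The two functions above can be sketched as the following configuration sequence (the VLAN ID is an example, 0e:fc:00 is the well-known default FC-MAP value, and command availability depends on the operating mode):

```
Dell(conf)# feature fip-snooping              ! enable the FIP snooping feature globally
Dell(conf)# fip-snooping enable               ! snoop FIP frames on all VLANs
Dell(conf)# interface vlan 100                ! or enable it per VLAN instead
Dell(conf-if-vl-100)# fip-snooping enable
Dell(conf-if-vl-100)# fip-snooping fc-map 0e:fc:00   ! FC-MAP prefix the FCF uses for FCoE MACs
```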
How FIP Snooping is Implemented As soon as the Aggregator is activated in an M1000e chassis as a switch-bridge, existing VLAN-specific and FIP snooping autoconfigurations are applied. The Aggregator snoops FIP packets on VLANs enabled for FIP snooping and allows legitimate sessions. By default, all FCoE and FIP frames are dropped unless specifically permitted by existing FIP snooping-generated ACLs.
• VLAN membership:
  • The Aggregator auto-configures the VLANs which handle FCoE traffic. You can reconfigure VLAN membership on a port (vlan tagged command).
  • Each FIP snooping port is auto-configured to operate in Hybrid mode so that it accepts both tagged and untagged VLAN frames.
  • Tagged VLAN membership is auto-configured on each FIP snooping port that sends and receives FCoE traffic and has links with an FCF, ENode server, or another FIP snooping bridge.
NOTE: All these configurations are available only in PMUX mode.

NOTE: To disable the FIP snooping feature or FIP snooping on VLANs, use the no version of a command; for example, no feature fip-snooping or no fip-snooping enable.

Displaying FIP Snooping Information

Use the show commands from the table below to display information on FIP snooping. Table 6.
0e:fc:00:01:00:01  01:00:01  31:00:0e:fc:00:00:00:00  21:00:0e:fc:00:00:00:00
0e:fc:00:01:00:02  01:00:02  41:00:0e:fc:00:00:00:00  21:00:0e:fc:00:00:00:00
0e:fc:00:01:00:03  01:00:03  41:00:0e:fc:00:00:00:01  21:00:0e:fc:00:00:00:00
0e:fc:00:01:00:04  01:00:04  41:00:0e:fc:00:00:00:02  21:00:0e:fc:00:00:00:00
0e:fc:00:01:00:05  01:00:05  41:00:0e:fc:00:00:00:03  21:00:0e:fc:00:00:00:00

show fip-snooping sessions Command Description

Field            Description
ENode MAC        MAC address of the ENode.
FCF MAC          MAC address of the FCF.
FCF Interface    Slot/port number of the interface to which the FCF is connected.
VLAN             VLAN ID number used by the session.
FC-MAP           FC-Map value advertised by the FCF.
ENode Interface  Slot/port number of the interface connected to the ENode.
FKA_ADV_PERIOD   Period of time (in milliseconds) during which FIP keep-alive advertisements are transmitted.
No of ENodes     Number of ENodes connected to the FCF.
FC-ID            Fibre Channel session ID assigned by the FCF.
Number of Multicast Discovery Solicits
Number of Unicast Discovery Solicits
Number of FLOGI
Number of FDISC
Number of FLOGO
Number of Enode Keep Alive
Number of VN Port Keep Alive
Number of Multicast Discovery Advertisement
Number of Unicast Discovery Advertisement
Number of FLOGI Accepts
Number of FLOGI Rejects
Number of FDISC Accepts
Number of FDISC Rejects
Number of FLOGO Accepts
Number of FLOGO Rejects
Number of CVL
Number of FCF Discovery Timeouts
Number of VN Port Session Timeouts
Number of Session
Field Description Number of FLOGO Rejects Number of FIP FLOGO reject frames received on the interface. Number of CVLs Number of FIP clear virtual link frames received on the interface. Number of FCF Discovery Timeouts Number of FCF discovery timeouts that occurred on the interface. Number of VN Port Session Timeouts Number of VN port session timeouts that occurred on the interface.
FIP Snooping Example The below illustration shows an Aggregator used as a FIP snooping bridge for FCoE traffic between an ENode (server blade) and an FCF (ToR switch). The ToR switch operates as an FCF and FCoE gateway. Figure 9. FIP Snooping on an Aggregator In the above figure, DCBX and PFC are enabled on the Aggregator (FIP snooping bridge) and on the FCF ToR switch. On the FIP snooping bridge, DCBX is configured as follows: • A server-facing port is configured for DCBX in an auto-downstream role.
Debugging FIP Snooping To enable debug messages for FIP snooping events, enter the debug fip-snooping command. 1 Enable FIP snooping debugging for all event types or for a specified event type, where: • all enables all debugging options. • acl enables debugging only for ACL-specific events. • error enables debugging only for error conditions. • ifm enables debugging only for IFM events. • info enables debugging only for information events. • ipc enables debugging only for IPC events.
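A minimal debugging session built from the options above (a sketch; the no form is assumed to disable what was enabled):

```
Dell# debug fip-snooping all      ! log every FIP snooping event
Dell# debug fip-snooping acl      ! or restrict logging to ACL-specific events
Dell# no debug fip-snooping all   ! switch the debug messages off again
```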
8 Internet Group Management Protocol (IGMP) On an Aggregator, IGMP snooping is auto-configured. You can display information on IGMP by using the show ip igmp command. Multicast is based on identifying many hosts by a single destination IP address. Hosts represented by the same IP address are a multicast group. The internet group management protocol (IGMP) is a Layer 3 multicast protocol that hosts use to join or leave a multicast group.
Figure 10. IGMP Version 2 Packet Format

Joining a Multicast Group

There are two ways that a host may join a multicast group: it may respond to a general query from its querier, or it may send an unsolicited report to its querier.
• Responding to an IGMP Query:
  • One router on a subnet is elected as the querier. The querier periodically multicasts (to the all-multicast-systems address 224.0.0.1) a general query to all hosts on the subnet.
To accommodate these protocol enhancements, the IGMP version 3 packet structure is different from version 2. Queries (shown below in query packet format) are still sent to the all-systems address 224.0.0.1, but reports (shown below in report packet format) are sent to the all-IGMP-version-3-capable-multicast-routers address 224.0.0.22. Figure 11. IGMP version 3 Membership Query Packet Format Figure 12.
Figure 13. IGMP Membership Reports: Joining and Filtering Leaving and Staying in Groups The below illustration shows how multicast routers track and refresh the state change in response to group-specific and general queries. • Host 1 sends a message indicating it is leaving group 224.1.1.1 and that the included filter for 10.11.1.1 and 10.11.1.2 is no longer necessary.
in a waste of bandwidth. IGMP snooping enables switches to use information in IGMP packets to generate a forwarding table that associates ports with multicast groups, so that received multicast frames are forwarded only to interested receivers. How IGMP Snooping is Implemented on an Aggregator • IGMP snooping is enabled by default on the switch. • Dell Networking OS supports version 1, version 2, and version 3 hosts.
Command Output clear ip igmp snooping groups [groupaddress | interface] Clears IGMP information for group addresses and IGMP-enabled interfaces. show ip igmp snooping groups Command Example Dell# show ip igmp snooping groups Total Number of Groups: 2 IGMP Connected Group Membership Group Address Interface 226.0.0.1 Vlan 1500 226.0.0.
show ip igmp snooping mrouter Command Example

Dell# show ip igmp snooping mrouter
Interface      Router Ports
Vlan 1000      Po 128
Dell#
9 Interfaces This chapter describes 100/1000/10000 Mbps Ethernet, 10 Gigabit Ethernet, and 40 Gigabit Ethernet interface types, both physical and logical, and how to configure them with the Dell Networking Operating Software (OS).
• Viewing Interface Information
• Enabling the Management Address TLV on All Interfaces of an Aggregator
• Enhanced Validation of Interface Ranges
• Enhanced Control of Remote Fault Indication Processing

Interface Auto-Configuration

An Aggregator auto-configures interfaces as follows:
• All interfaces operate as layer 2 interfaces at 10GbE in standalone mode.
• FlexIO module interfaces support only uplink connections. You can only use the 40GbE ports on the base module for stacking.
NOTE: To end output from the system, such as the output from the show interfaces command, enter CTRL+C and the Dell Networking Operating System (OS) returns to the command prompt. NOTE: The CLI output may be incorrectly displayed as 0 (zero) for the Rx/Tx power values. Perform a simple network management protocol (SNMP) query to obtain the correct power information. The following example shows the configuration and status information for one interface.
stack-unit 1 port 37 portmode quad --More-- Disabling and Re-enabling a Physical Interface By default, all port interfaces on an Aggregator are operationally enabled (no shutdown) to send and receive Layer 2 traffic. You can reconfigure a physical interface to shut it down by entering the shutdown command. To re-enable the interface, enter the no shutdown command.
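For example (the port number is arbitrary):

```
Dell(conf)# interface tengigabitethernet 0/4
Dell(conf-if-te-0/4)# shutdown       ! administratively disable the port
Dell(conf-if-te-0/4)# no shutdown    ! return it to the default enabled state
```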
To view the interfaces in Layer 2 mode, use the show interfaces switchport command in EXEC mode. Management Interfaces An Aggregator auto-configures with a DHCP-based IP address for in-band management on VLAN 1 and remote out-of-band (OOB) management. The Aggregator management interface has both a public IP and private IP address on the internal Fabric D interface. The public IP address is exposed to the outside world for WebGUI configurations/WSMAN and other proprietary traffic.
INTERFACE mode ip address dhcp To access the management interface from another LAN, you must configure the management route command to point to the management interface. There is only one management interface for the whole stack. To display the routing table for a given port, use the show ip route command from EXEC Privilege mode.
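A sketch of pointing a remote LAN at the management interface (the addresses are examples; confirm the management route argument order for your release):

```
! Allow hosts on 10.10.10.0/24 to reach the OOB management interface via gateway 10.1.1.1
Dell(conf)# management route 10.10.10.0/24 10.1.1.1
Dell# show ip route           ! verify the routing table from EXEC Privilege mode
```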
Table 8. VLAN Defaults Feature Default Mode Layer 2 (no IP address is assigned) Default VLAN ID VLAN 1 Default VLAN When an Aggregator boots up, all interfaces are up in Layer 2 mode and placed in the default VLAN as untagged interfaces. Only untagged interfaces can belong to the default VLAN. By default, VLAN 1 is the default VLAN. To change the default VLAN ID, use the default vlan-id <1–4094> command in CONFIGURATION mode. You cannot delete the default VLAN.
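For example, to make VLAN 10 the new default VLAN (the VLAN ID is chosen arbitrarily):

```
Dell(conf)# default vlan-id 10   ! untagged interfaces now fall into VLAN 10 instead of VLAN 1
```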
Information contained in the tag header allows the system to prioritize traffic and to forward information to ports associated with a specific VLAN ID. Tagged interfaces can belong to multiple VLANs, while untagged interfaces can belong only to one VLAN. Configuring VLAN Membership By default, all Aggregator ports are member of all (4094) VLANs, including the default untagged VLAN 1.
G - GVRP tagged, M - Vlan-stack, H - VSN tagged
i - Internal untagged, I - Internal tagged, v - VLT untagged, V - VLT tagged

    NUM     Status     Description   Q  Ports
*   1       Inactive
    20      Active                   U  Po32()
                                     U  Te 0/3,5,13,53-56
    1002    Active                   T  Te 0/3,13,55-56
Dell#

NOTE: A VLAN is active only if the VLAN contains interfaces and those interfaces are operationally up. In the above example, VLAN 1 is inactive because it does not contain any interfaces. The other VLANs listed contain enabled interfaces and are active.
x - Dot1x untagged, X - Dot1x tagged
G - GVRP tagged, M - Vlan-stack, H - VSN tagged
i - Internal untagged, I - Internal tagged, v - VLT untagged, V - VLT tagged, C - CMC tagged

    NUM    Status    Description   Q  Ports
    4      Active                  U  Po1(Te 0/16)
                                   T  Po128(Te 0/33,39,51,56)
                                   T  Te 0/1-15,17-32
Dell#

VLAN Configuration on Physical Ports and Port-Channels Unlike other Dell Networking OS platforms, IOA allows VLAN configurations on port and port-channel levels. This allows you to assign VLANs to a port/port-channel.
7 Show the VLAN configurations.
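The per-port VLAN assignment described above can be sketched as follows (the VLAN IDs are examples; the vlan tagged and vlan untagged interface commands are assumed from the auto-configuration notes earlier in this guide):

```
Dell(conf)# interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)# vlan tagged 20     ! carry VLAN 20 tagged on this port
Dell(conf-if-te-0/1)# vlan untagged 30   ! carry VLAN 30 untagged
Dell(conf-if-te-0/1)# show config        ! confirm the membership
```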
Port Channel Benefits A port channel interface provides many benefits, including easy management, link redundancy, and sharing. Port channels are transparent to network configurations and can be modified and managed as one interface. With this feature, you can create larger-capacity interfaces by utilizing a group of lower-speed links. For example, you can build a 40-Gigabit interface by aggregating four 10-Gigabit Ethernet interfaces together.
In this example, you can change the common speed of the port channel by changing its configuration so the first enabled interface referenced in the configuration is a 1000 Mb/s speed interface. You can also change the common speed of the port channel by setting the speed of the TenGig 0/1 interface to 1000 Mb/s. Uplink Port Channel: VLAN Membership The tagged VLAN membership of the uplink LAG is automatically configured based on the VLAN configuration of all server-facing ports (ports 1 to 32).
0 64-byte pkts, 133 over 64-byte pkts, 3980 over 127-byte pkts 9123852 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 4113 Multicasts, 9123852 Broadcasts, 0 Unicasts 0 throttles, 0 discarded, 0 collisions, 0 wreddrops Rate info (interval 299 seconds): Input 00.00 Mbits/sec, 1 packets/sec, 0.00% of line-rate Output 34.00 Mbits/sec, 12318 packets/sec, 0.
can associate multicast MAC or hardware addresses to an interface range and VLANs by using the mac-address-table static multicast-mac-address vlan vlan-id output-range interface command.
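The command described above might be used as follows (the MAC address, VLAN, and port range are illustrative; check the exact range syntax on your system):

```
! Bind a multicast MAC to VLAN 10 with ports Te 0/1 and Te 0/4 as the output range
Dell(conf)# mac-address-table static 01:00:5e:00:00:05 vlan 10 output-range tengigabitethernet 0/1 , tengigabitethernet 0/4
```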
Monitor and Maintain Interfaces You can display interface statistics with the monitor interface command. This command displays an ongoing list of the interface status (up/down), number of packets, traffic statistics, and so on. 1 View interface statistics. Enter the type of interface and slot/port information: • For a 10GbE interface, enter the keyword TenGigabitEthernet followed by the slot/port numbers; for example, interface tengigabitethernet 0/7.
Maintenance Using TDR The time domain reflectometer (TDR) is supported on all Dell Networking switch/routers. TDR is a diagnostic tool for resolving link issues; it helps detect obvious open or short conditions within any of the four copper pairs. TDR sends a signal onto the physical cable and examines the reflection of the signal that returns.
• When DCB is disabled on an interface, PFC, ETS, and DCBX are also disabled. • When DCBX protocol packets are received, interfaces automatically enable DCB and disable link level flow control. • DCB is required for PFC, ETS, DCBX, and FCoE initialization protocol (FIP) snooping to operate. Link-level flow control uses Ethernet pause frames to signal the other end of the connection to pause data transmission for a certain amount of time as specified in the frame.
MTU Size The Aggregator auto-configures interfaces to use a maximum MTU size of 12,000 bytes. If a packet includes a Layer 2 header, the difference in bytes between the link MTU and IP MTU must be enough to include the Layer 2 header. For example, for VLAN packets, if the IP MTU is 1400, the link MTU must be no less than 1422. 1400-byte IP MTU + 22-byte VLAN Tag = 1422-byte link MTU The MTU range is 592-12000, with a default of 1554.
EXEC mode or EXEC Privilege mode [Use the command on the remote system that is equivalent to the first command.] 3 Access CONFIGURATION mode. EXEC Privilege mode config 4 Access the port. CONFIGURATION mode interface interface slot/port 5 Set the local port speed. INTERFACE mode speed {100 | 1000 | 10000 | auto} NOTE: If you use an active optical cable (AOC), you can convert the QSFP+ port to a 10 Gigabit SFP+ port or 1 Gigabit SFP port. You can use the speed command to enable the required speed.
Dell(conf-if-te-0/1)#speed 100 Dell(conf-if-te-0/1)#duplex full Dell(conf-if-te-0/1)#no negotiation auto Dell(conf-if-te-0/1)#show config ! interface TenGigabitEthernet 0/1 no ip address speed 100 duplex full no shutdown Auto-Negotiation on Ethernet Interfaces Setting Speed and Duplex Mode of Ethernet Interfaces By default, auto-negotiation of speed and duplex mode is enabled on 1GbE and 10GbE Ethernet interfaces on an Aggregator.
INTERFACE mode duplex {half | full} 7 Disable auto-negotiation on the port. If the speed is set to 1000, you do not need to disable auto-negotiation. INTERFACE mode no negotiation auto 8 Verify configuration changes. INTERFACE mode show config NOTE: The show interfaces status command displays link status, but not administrative status. For link and administrative status, use the show ip interface [interface | brief] [configuration] command.
Table 11.
Viewing Interface Information Displaying Non-Default Configurations. The show [ip | running-config] interfaces configured command displays only interfaces that have non-default configurations. The below example illustrates the possible show commands that have the available configured keyword.
• For a VLAN, enter the keyword vlan followed by a number from 1 to 4094. EXEC Privilege mode clear counters [interface] When you enter this command, you must confirm that you want Dell Networking OS to clear the interface counters for the interface (refer to the below clearing interface example).
10 iSCSI Optimization An Aggregator enables internet small computer system interface (iSCSI) optimization with default iSCSI parameter settings (Default iSCSI Optimization Values) and is auto-provisioned to support: iSCSI Optimization: Operation To display information on iSCSI configuration and sessions, use show commands. iSCSI optimization enables quality-of-service (QoS) treatment for iSCSI traffic.
• iSCSI monitoring sessions — the switch monitors and tracks active iSCSI sessions in connections on the switch, including port information and iSCSI session information. • iSCSI QoS — A user-configured iSCSI class of service (CoS) profile is applied to all iSCSI traffic. Classifier rules are used to direct the iSCSI data traffic to queues that can be given preferential QoS treatment over other data passing through the switch.
• Target’s IP Address • ISID (Initiator defined session identifier) • Initiator’s IQN (iSCSI qualified name) • Target’s IQN • Initiator’s TCP Port • Target’s TCP Port If no iSCSI traffic is detected for a session during a user-configurable aging period, the session data clears.
Displaying iSCSI Optimization Information To display information on iSCSI optimization, use the show commands detailed in the below table: Table 12. Displaying iSCSI Optimization Information Command Output show iscsi Displays the currently configured iSCSI settings. show iscsi sessions Displays information on active iSCSI sessions on the switch that have been established since the last reload.
Initiator:iqn.2010-11.com.ixia.ixload:initiator-iscsi-35 Up Time:00:00:01:22(DD:HH:MM:SS) Time for aging out:00:00:09:31(DD:HH:MM:SS) ISID:806978696102 Initiator Initiator Target Target Connection ID IP Address TCP Port IP Address TCPPort 10.10.0.53 33432 10.10.0.
11 Isolated Networks for Aggregators An Isolated Network is an environment in which servers can only communicate with the uplink interfaces and not with each other, even though they are part of the same VLAN. If the servers in the same chassis need to communicate with each other, they require non-isolated network connectivity between them or their traffic needs to be routed in the ToR switch. Isolated Networks can be enabled on a per-VLAN basis.
12 Link Aggregation Unlike IOA Automated modes (Standalone and VLT modes), the IOA Programmable MUX (PMUX) can support multiple uplink LAGs. You can provision multiple uplink LAGs. The I/O Aggregator auto-configures with link aggregation groups (LAGs) as follows: • All uplink ports are automatically configured in a single port channel (LAG 128).
The Dell Networking OS implementation of LACP is based on the standards specified in the IEEE 802.3: “Carrier sense multiple access with collision detection (CSMA/CD) access method and physical layer specifications.” LACP functions by constantly exchanging custom MAC protocol data units (PDUs) across local area network (LAN) Ethernet links. The protocol packets are only exchanged between ports that you configure as LACP-capable.
LACP Example The below illustration shows how the LACP operates in an Aggregator stack by auto-configuring the uplink LAG 128 for the connection to a top of rack (ToR) switch and a server-facing LAG for the connection to an installed server that you configured for LACP-based NIC teaming. Figure 17.
Configuration Tasks for Port Channel Interfaces To configure a port channel (LAG), use the commands similar to those found in physical interfaces. By default, no port channels are configured in the startup configuration. In VLT mode, port channel configurations are allowed in the startup configuration.
When an interface is added to a port channel, Dell Networking OS recalculates the hash algorithm. To add a physical interface to a port channel, use the following commands. 1 Add the interface to a port channel. INTERFACE PORT-CHANNEL mode channel-member interface This command is applicable only in PMUX mode. The interface variable is the physical interface type and slot/port information. 2 Double check that the interface was added to the port channel.
When more than one interface is added to a Layer 2-port channel, Dell Networking OS selects one of the active interfaces in the port channel to be the primary port. The primary port replies to flooding and sends protocol data units (PDUs). An asterisk in the show interfaces port-channel brief command indicates the primary port. As soon as a physical interface is added to a port channel, the properties of the port channel determine the properties of the physical interface.
channel-member TenGigabitEthernet 0/8 shutdown Dell(conf-if-po-3)# Configuring the Minimum Oper Up Links in a Port Channel You can configure the minimum links in a port channel (LAG) that must be in “oper up” status to consider the port channel to be in “oper up” status. To set the “oper up” status of your links, use the following command. • Enter the number of links in a LAG that must be in “oper up” status. INTERFACE mode minimum-links number The default is 1.
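For example, to require two live members before a port channel is reported up (the port-channel number and threshold are examples):

```
Dell(conf)# interface port-channel 128
Dell(conf-if-po-128)# minimum-links 2   ! port channel is oper up only with at least 2 members up
```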
Deleting or Disabling a Port Channel To delete or disable a port channel, use the following commands. • Delete a port channel. CONFIGURATION mode no interface portchannel channel-number • Disable a port channel. shutdown When you disable a port channel, all interfaces within the port channel are operationally down also. Configuring Auto LAG You can enable or disable auto LAG on the server-facing interfaces. By default, auto LAG is enabled.
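A sketch combining the operations above (the port-channel and interface numbers are examples, and the auto-lag enable keyword is an assumption; verify it on your release):

```
Dell(conf)# no interface port-channel 3     ! delete port channel 3 entirely
Dell(conf)# interface tengigabitethernet 0/2
Dell(conf-if-te-0/2)# no auto-lag enable    ! stop this server port from auto-joining a LAG
Dell(conf-if-te-0/2)# auto-lag enable       ! restore the default behavior
```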
TenGigabitEthernet 0/1 is up, line protocol is down(error-disabled[UFD]) Hardware is DellEth, address is f8:b1:56:07:1d:8e Current address is f8:b1:56:07:1d:8e Server Port AdminState is Up Pluggable media not present Interface index is 15274753 Internet address is not set Mode of IPv4 Address Assignment : NONE DHCP Client-ID :f8b156071d8e MTU 12000 bytes, IP MTU 11982 bytes LineSpeed auto Auto-lag is disabled Flowcontrol rx on tx off ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show interface" cou
Configuring the Minimum Number of Links to be Up for Uplink LAGs to be Active You can activate the LAG bundle for uplink interfaces or ports (the uplink port-channel is LAG 128) on the I/O Aggregator only when a minimum number of member interfaces of the LAG bundle is up. For example, based on your network deployment, you may want the uplink LAG bundle to be activated only if a certain number of member interface links is also in the up state.
Optimizing Traffic Disruption Over LAG Interfaces On IOA Switches in VLT Mode When you use the write memory command while an Aggregator operates in VLT mode, the VLT LAG configurations are saved in nonvolatile storage (NVS). By restoring the settings saved in NVS, the VLT ports come up quicker on the primary VLT node and traffic disruption is reduced. The delay in restoring the VLT LAG parameters is reduced (90 seconds by default) on the secondary VLT peer node before it becomes operational.
Interface Te 0/5 Te 0/7 Line Protocol Up Up Utilization[In Percent] 0 0 Monitoring the Member Links of a LAG Bundle You can examine and view the operating efficiency and the traffic-handling capacity of member interfaces of a LAG or port channel bundle. This method of analyzing and tracking the number of packets processed by the member interfaces helps you manage and distribute the packets that are handled by the LAG bundle.
Created by LACP protocol Hardware address is 00:01:e8:e1:e1:c1, Current address is 00:01:e8:e1:e1:c1 Interface index is 1107755136 Minimum number of links to bring Port-channel up is 1 Internet address is not set Mode of IP Address Assignment : NONE DHCP Client-ID :lag1280001e8e1e1c1 MTU 12000 bytes, IP MTU 11982 bytes LineSpeed 40000 Mbit Members in this channel: Te 0/41(U) Te 0/42(U) Te 0/43(U) Te 0/44(U) ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show interface" counters 00:11:50 Queueing str
Actor   Admin: State ADEHJLMP Key 128 Priority 32768
        Oper:  State ADEGIKNP Key 128 Priority 32768
Partner Admin: State BDFHJLMP Key 0 Priority 0
        Oper:  State ACEGIKNP Key 128 Priority 32768

Port Te 0/45 is disabled, LACP is disabled and mode is lacp
Port State: Bundle
Actor   Admin: State ADEHJLMP Key 128 Priority 32768
        Oper:  State ADEHJLMP Key 128 Priority 32768
Partner is not present

Port Te 0/46 is disabled, LACP is disabled and mode is lacp
Port State: Bundle
Actor   Admin: State ADEHJLMP Key 128 Priority
Port State: Bundle Actor Admin: State ADEHJLMP Key 128 Priority 32768 Oper: State ADEHJLMP Key 128 Priority 32768 Partner is not present show interfaces port-channel 1 Command Example Dell# show interfaces port-channel 1 Port-channel 1 is up, line protocol is up Created by LACP protocol Hardware address is 00:01:e8:e1:e1:c1, Current address is 00:01:e8:e1:e1:c1 Interface index is 1107755009 Minimum number of links to bring Port-channel up is 1 Internet address is not set Mode of IP Address Assignment : NONE
Multiple Uplink LAGs Unlike IOA Automated modes (Standalone, VLT, and Stacking Modes), the IOA Programmable MUX can support multiple uplink LAGs. You can provision multiple uplink LAGs. NOTE: In order to avoid loops, only disjoint VLANs are allowed between the uplink ports/uplink LAGs and uplink-to-uplink switching is disabled. Multiple Uplink LAGs with 10G Member Ports The following sample commands configure multiple dynamic uplink LAGs with 10G member ports based on LACP. 1 Bring up all the ports.
G - GVRP tagged, M - Vlan-stack, H - VSN tagged
i - Internal untagged, I - Internal tagged, v - VLT untagged, V - VLT tagged

    NUM     Status    Description   Q  Ports
*   1       Active                  U  Po10(Te 0/4-5)
                                    U  Po11(Te 0/6)
    1000    Active                  T  Po10(Te 0/4-5)
    1001    Active                  T  Po11(Te 0/6)
Dell#

5 Show LAG member ports utilization.
3 Configure the port-channel with 40G member ports.
13 Layer 2 The Aggregator supports CLI commands to manage the MAC address table: • Clearing the MAC Address Entries • Displaying the MAC Address Table The Aggregator auto-configures with support for Network Interface Controller (NIC) Teaming. NOTE: On an Aggregator, all ports are configured by default as members of all (4094) VLANs, including the default VLAN. All VLANs operate in Layer 2 mode.
• all: deletes all dynamic entries. • interface: deletes all entries for the specified interface. • vlan: deletes all entries for the specified VLAN. Displaying the MAC Address Table To display the MAC address table, use the following command. • Display the contents of the MAC address table. EXEC Privilege mode NOTE: This command is available only in PMUX mode.
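For instance (the VLAN and interface values are examples; confirm the keyword order on your release):

```
Dell# clear mac-address-table dynamic all        ! flush every learned entry
Dell# clear mac-address-table dynamic vlan 20    ! flush entries for VLAN 20 only
Dell# clear mac-address-table dynamic interface tengigabitethernet 0/1   ! flush one port's entries
```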
Figure 18. Redundant NOCs with NIC Teaming MAC Address Station Move When you use NIC teaming, consider that the server MAC address is originally learned on Port 0/1 of the switch (see figure below). If the NIC fails, the same MAC address is learned on Port 0/5 of the switch. The MAC address is disassociated with one port and re-associated with another in the ARP table; in other words, the ARP entry is “moved”. The Aggregator is auto-configured to support MAC Address station moves. Figure 19.
MAC Move Optimization Station-move detection takes 5000ms because this is the interval at which the detection algorithm runs.
14 Link Layer Discovery Protocol (LLDP) Link layer discovery protocol (LLDP) advertises connectivity and management from the local station to the adjacent stations on an IEEE 802 LAN. LLDP facilitates multi-vendor interoperability by using standard management tools to discover and make available a physical topology for network management. The Dell Networking operating software implementation of LLDP is based on IEEE standard 802.1AB.
Figure 20. Type, Length, Value (TLV) Segment TLVs are encapsulated in a frame called an LLDP data unit (LLDPDU), which is transmitted from one LLDP-enabled device to its LLDP-enabled neighbors. LLDP is a one-way protocol. LLDP-enabled devices (LLDP agents) can transmit and/or receive advertisements, but they cannot solicit and do not respond to advertisements. There are five types of TLVs (as shown in the below table). All types are mandatory in the construction of an LLDPDU except Optional TLVs.
Configure LLDP Configuring LLDP is a two-step process. 1 Enable LLDP globally. 2 Advertise TLVs out of an interface. Related Configuration Tasks • Viewing the LLDP Configuration • Viewing Information Advertised by Adjacent LLDP Agents • Configuring LLDPDU Intervals • Configuring a Time to Live • Debugging LLDP Important Points to Remember • LLDP is enabled by default. • Dell Networking systems support up to eight neighbors per interface.
no Negate a command or set its defaults show Show LLDP configuration Dell(conf-if-te-0/3-lldp)# Enabling LLDP LLDP is enabled by default. Enable and disable LLDP globally or per interface. If you enable LLDP globally, all UP interfaces send periodic LLDPDUs. To enable LLDP, use the following command. 1 Enter Protocol LLDP mode. CONFIGURATION or INTERFACE mode protocol lldp 2 Enable LLDP. PROTOCOL LLDP mode no disable Disabling and Undoing LLDP To disable or undo LLDP, use the following command.
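Putting the two steps together (LLDP is on by default, so the disable line is only needed to turn it off):

```
Dell(conf)# protocol lldp
Dell(conf-lldp)# no disable   ! enable LLDP globally (the default state)
Dell(conf-lldp)# disable      ! or switch LLDP off entirely
```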
• guest-voice-signaling • location-identification • power-via-mdi • softphone-voice • streaming-video • video-conferencing • video-signaling • voice • voice-signaling In the following example, LLDP is enabled globally. R1 and R2 are transmitting periodic LLDPDUs that contain management, 802.1, and 802.3 TLVs. Figure 22. Configuring LLDP Optional TLVs The Dell Networking Operating System (OS) supports the following optional TLVs: Management TLVs, IEEE 802.1 and 802.
Figure 23. Organizationally Specific TLVs IEEE Organizationally Specific TLVs Eight TLV types have been defined by the IEEE 802.1 and 802.3 working groups as a basic part of LLDP; the IEEE OUI is 00-80-C2. You can configure the Dell Networking system to advertise any or all of these TLVs. Table 15. Optional TLV Types Type TLV Description 4 Port description A user-defined alphanumeric string that describes the port. The Dell Networking OS does not currently support this TLV.
Type TLV Description of auto-negotiation. This TLV is not available in the Dell Networking OS implementation of LLDP, but is available and mandatory (nonconfigurable) in the LLDP-MED implementation. 127 Power via MDI Dell Networking supports the LLDP-MED protocol, which recommends that the Power via MDI TLV not be implemented, and therefore Dell Networking implements the Extended Power via MDI TLV only.
Bit Position TLV Dell Networking OS Support 6–15 reserved No Table 17. LLDP-MED Device Types Value Device Type 0 Type Not Defined 1 Endpoint Class 1 2 Endpoint Class 2 3 Endpoint Class 3 4 Network Connectivity 5–255 Reserved LLDP-MED Network Policies TLV A network policy in the context of LLDP-MED is a device’s VLAN configuration and associated Layer 2 and Layer 3 configurations.
Type Application Description 6 Video Conferencing Specify this application type for dedicated video conferencing and other similar appliances supporting real-time interactive video. 7 Streaming Video Specify this application type for broadcast or multicast-based video content distribution and other similar applications supporting streaming video. 8 Video Signaling Specify this application type only if video control packets use a separate network policy than video data. 9–255 Reserved — Figure 25.
• Dell Networking OS supports a maximum of 8000 total neighbors per system. If the number of interfaces multiplied by eight exceeds the maximum, the system does not configure more than 8000. • LLDP is not hitless. Viewing the LLDP Configuration To view the LLDP configuration, use the following command. • Display the LLDP configuration.
Total Frames Out: 16843 Total Frames In: 17464 Total Neighbor information Age outs: 0 Total Multiple Neighbors Detected: 0 Total Frames Discarded: 0 Total In Error Frames: 0 Total Unrecognized TLVs: 0 Total TLVs Discarded: 0 Next packet will be sent after 16 seconds The neighbors are given below: ----------------------------------------------------------------------Remote Chassis ID Subtype: Mac address (4) Remote Chassis ID: 00:00:c9:b1:3b:82 Remote Port Subtype: Mac address (3) Remote Port ID: 00:00:c9:b1
hello 10 Dell(conf-lldp)# Dell(conf-lldp)#no hello Dell(conf-lldp)#show config ! protocol lldp Dell(conf-lldp)# Configuring a Time to Live The information received from a neighbor expires after a specific amount of time (measured in seconds) called a time to live (TTL). The TTL is the product of the LLDPDU transmit interval (hello) and an integer called a multiplier. The default multiplier is 4, which results in a default TTL of 120 seconds. • Adjust the TTL value. CONFIGURATION mode or INTERFACE mode.
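The TTL arithmetic described above is simple enough to sketch. The helper below is an illustration only, not part of the switch software; it assumes the default multiplier of 4 and the default hello interval of 30 seconds implied by the 120-second default TTL.

```python
def lldp_ttl(hello_interval: int, multiplier: int = 4) -> int:
    """Return the TTL (in seconds) advertised in LLDPDUs:
    the transmit interval multiplied by the TTL multiplier."""
    return hello_interval * multiplier

# Defaults from the text: hello 30 s with multiplier 4 gives a 120 s TTL.
print(lldp_ttl(30))   # 120
# With 'hello 10' configured, as in the example above:
print(lldp_ttl(10))   # 40
```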
clear lldp counters [interface] Debugging LLDP You can view the TLVs that your system is sending and receiving. To view the TLVs, use the following commands. • View a readable version of the TLVs. debug lldp brief • View a readable version of the TLVs plus a hexadecimal version of the entire LLDPDU. debug lldp detail Figure 27. The debug lldp detail Command — LLDPDU Packet Dissection Relevant Management Objects Dell Networking OS supports all IEEE 802.1AB MIB objects.
Table 19. LLDP Configuration MIB Objects MIB Object Category LLDP Variable LLDP MIB Object Description LLDP Configuration adminStatus lldpPortConfigAdminStatus Whether you enable the local LLDP agent for transmit, receive, or both. msgTxHold lldpMessageTxHoldMultiplier Multiplier value. msgTxInterval lldpMessageTxInterval Transmit Interval value. rxInfoTTL lldpRxInfoTTL Time to live for received TLVs. txInfoTTL lldpTxInfoTTL Time to live for transmitted TLVs.
TLV Type  TLV Name             TLV Variable       System  LLDP MIB Object
4         Port Description     port description   Local   lldpLocPortDesc
                                                  Remote  lldpRemPortDesc
5         System Name                             Local   lldpLocSysName
                                                  Remote  lldpRemSysName
6         System Description                      Local   lldpLocSysDesc
                                                  Remote  lldpRemSysDesc
7         System Capabilities                     Local   lldpLocSysCapSupported
                                                  Remote  lldpRemSysCapSupported
                                                  Local   lldpLocSysCapEnabled
                                                  Remote  lldpRemSysCapEnabled
8         Management Address                      Local   lldpLocManAddrLen
                                                  Remote  lldpRemManAddrLen
                                                  Local   lldpLocManAddrSubtype
                                                  Remote  lldpRemManAddrSubtype
                                                  Local   lldpLocManAddr
                                                  Remote  lldpRemManAddr
TLV Type 127 TLV Name VLAN Name TLV Variable VID VLAN name length VLAN name System LLDP MIB Object Remote lldpXdot1RemProtoVlanId Local lldpXdot1LocVlanId Remote lldpXdot1RemVlanId Local lldpXdot1LocVlanName Remote lldpXdot1RemVlanName Local lldpXdot1LocVlanName Remote lldpXdot1RemVlanName Table 22.
TLV Sub-Type 3 TLV Name Location Identifier TLV Variable System LLDP-MED MIB Object DSCP Value Local lldpXMedLocMediaPolicyDs cp Remote lldpXMedRemMediaPolicyD scp Local lldpXMedLocLocationSubty pe Remote lldpXMedRemLocationSubt ype Local lldpXMedLocLocationInfo Remote lldpXMedRemLocationInfo Local lldpXMedLocXPoEDeviceTy pe Remote lldpXMedRemXPoEDeviceT ype Local lldpXMedLocXPoEPSEPow erSource Location Data Format Location ID Data 4 Extended Power via MDI Power Device Type Po
15 Object Tracking IPv4 or IPv6 object tracking is available on Dell Networking OS. Object tracking allows the Dell Networking OS client processes, such as virtual router redundancy protocol (VRRP), to monitor tracked objects (for example, interface or link status) and take appropriate action when the state of an object changes. NOTE: In Dell Networking OS release version 9.7(0.0), object tracking is supported only on VRRP.
Figure 28. Object Tracking Example When you configure a tracked object, such as an IPv4/IPv6 route or interface, you specify an object number to identify the object. Optionally, you can also specify: • UP and DOWN thresholds used to report changes in a route metric. • A time delay before changes in a tracked object’s state are reported to a client. Track Layer 2 Interfaces You can create an object to track the line-protocol state of a Layer 2 interface.
Track IPv4 and IPv6 Routes You can create an object that tracks an IPv4 or IPv6 route entry in the routing table. Specify a tracked route by its IPv4 or IPv6 address and prefix-length. Optionally specify a tracked route by a virtual routing and forwarding (VRF) instance name if the route to be tracked is part of a VRF. The next-hop address is not part of the definition of the tracked object.
Tracking a Layer 2 Interface You can create an object that tracks the line-protocol state of a Layer 2 interface and monitors its operational status (UP or DOWN). You can track the status of any of the following Layer 2 interfaces: • 1 Gigabit Ethernet: Enter gigabitethernet slot/port in the track interface interface command (see Step 1). • 10 Gigabit Ethernet: Enter tengigabitethernet slot/port.
Tracking a Layer 3 Interface You can create an object that tracks the routing status of an IPv4 or IPv6 Layer 3 interface. You can track the routing status of any of the following Layer 3 interfaces: • For a 10-Gigabit Ethernet interface, enter the keyword TenGigabitEthernet then the slot/port information. • For a port channel interface, enter the keywords port-channel then a number. • For a VLAN interface, enter the keyword vlan then a number from 1 to 4094.
Dell(conf-track-101)#description NYC metro Dell(conf-track-101)#end Dell#show track 101 Track 101 Interface TenGigabitEthernet 7/2 ip routing Description: NYC metro The following is an example of configuring object tracking for an IPv6 interface: Examples of Configuring Object Tracking for an IPv4 or IPv6 Interface Dell(conf)#track 103 interface tengigabitethernet 7/11 ipv6 routing Dell(conf-track-103)#description Austin access point Dell(conf-track-103)#end Dell#show track 103 Track 103 Interface TenGigabi
The tracking process uses a protocol-specific resolution value to convert the actual metric in the routing table to a scaled metric in the range from 0 to 255. The resolution value is user-configurable and calculates the scaled metric by dividing a route’s cost by the resolution value set for the route type: • For ISIS, you can set the resolution in the range from 1 to 1000, where the default is 10. • For OSPF, you can set the resolution in the range from 1 to 1592, where the default is 1.
Example of the show track resolution Command Dell#show track resolution IP Route Resolution ISIS 1 OSPF 1 IPv6 Route Resolution ISIS 1 172 Object Tracking
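The metric scaling described above reduces to a division and a clamp. The sketch below is an illustration under stated assumptions, not the switch's implementation: it assumes integer division and that results are capped at the top of the 0–255 range mentioned in the text.

```python
def scaled_metric(route_cost: int, resolution: int) -> int:
    """Convert a routing-table metric to the 0-255 scaled range by
    dividing the route's cost by the protocol's resolution value."""
    return min(route_cost // resolution, 255)

# With the default ISIS resolution of 10, a route cost of 200 scales to 20:
print(scaled_metric(200, 10))     # 20
# Very large costs saturate at the top of the range:
print(scaled_metric(100000, 10))  # 255
```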
16 Port Monitoring The Aggregator supports user-configured port monitoring. See Configuring Port Monitoring for the configuration commands to use. Port monitoring copies all incoming or outgoing packets on one port and forwards (mirrors) them to another port. The source port is the monitored port (MD) and the destination port is the monitoring port (MG).
In the following example, the host and server are exchanging traffic which passes through the uplink interface 1/1. Port 1/1 is the monitored port and port 1/42 is the destination port, which is configured to only monitor traffic received on tengigabitethernet 1/1 (host-originated traffic). Figure 29. Port Monitoring Example Important Points to Remember • Port monitoring is supported on physical ports only; virtual local area network (VLAN) and port-channel interfaces do not support port monitoring.
NOTE: There is no limit to the number of monitoring sessions per system, provided that there are only four destination ports per port-pipe. If each monitoring session has a unique destination port, the maximum number of sessions is four per port-pipe. Port Monitoring The Aggregator supports multiple source-destination statements in a monitor session, but there may only be one destination port in a monitoring session.
20   TenGig 0/15   TenGig 0/3    rx   interface   Port-based
30   TenGig 0/16   TenGig 0/37   rx   interface   Port-based
100  TenGig 0/25   TenGig 0/38   tx   interface   Port-based
110  TenGig 0/26   TenGig 0/39   tx   interface   Port-based
300  TenGig 0/17   TenGig 0/1    tx   interface   Port-based
Dell(conf-mon-sess-300)#
A source port may only be monitored by one destination port (% Error: Exceeding max MG ports for this MD port pipe.), but a destination port may monitor more than one source port.
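The one-destination-per-source rule above can be checked mechanically. The sketch below is our own simplified model of that rule, not the switch's validation code; the port names are hypothetical.

```python
def source_conflicts(pairs):
    """Return the source ports that appear with more than one
    destination (the case the switch rejects with the MG-ports error).
    `pairs` is a list of (source, destination) monitor statements."""
    seen = {}
    conflicts = set()
    for src, dst in pairs:
        if src in seen and seen[src] != dst:
            conflicts.add(src)
        seen.setdefault(src, dst)
    return sorted(conflicts)

sessions = [("Te 0/15", "Te 0/3"), ("Te 0/16", "Te 0/37"),
            ("Te 0/15", "Te 0/38")]   # Te 0/15 mirrored to two destinations
print(source_conflicts(sessions))     # ['Te 0/15']
```

Note that the reverse direction is allowed: a single destination monitoring several sources produces no conflict here, matching the text.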
17 Security The Aggregator provides many security features. This chapter describes several ways to provide access security to the Dell Networking system. For details about all the commands described in this chapter, see the Security chapter in the Dell PowerEdge Command Line Reference Guide for the M I/O Aggregator . Supported Modes Standalone, PMUX, VLT, Stacking NOTE: You can also perform some of the configurations using the Web GUI - Dell Blade IO Manager.
Accessing the I/O Aggregator Using the CMC Console Only This functionality is supported on the Aggregator. You can enable the option to access and administer an Aggregator only using the chassis management controller (CMC) interface, and prevent the usage of the CLI interface of the device to configure and monitor settings. You can configure the restrict-access session command to disable access of the Aggregator using a Telnet or SSH session; the device is accessible only using the CMC GUI.
Only the console port behaves this way, and does so to ensure that users are not locked out of the system if a network-wide issue prevents access to these servers. 1 Define an authentication method-list (method-list-name) or specify the default. CONFIGURATION mode aaa authentication login {method-list-name | default} method1 [... method4] The default method-list is applied to all terminal lines.
Enabling AAA Authentication — RADIUS To enable authentication from the RADIUS server, and use TACACS as a backup, use the following commands. 1 Enable RADIUS and set up TACACS as backup. CONFIGURATION mode aaa authentication enable default radius tacacs 2 Establish the RADIUS host address and password. CONFIGURATION mode radius-server host x.x.x.x key some-password 3 Establish the TACACS+ host address and password. CONFIGURATION mode tacacs-server host x.x.x.
Privilege Levels Overview Limiting access to the system is one method of protecting the system and your network. However, at times, you might need to allow others access to the router and you can limit that access to a subset of commands. In the Dell Networking OS, you can configure a privilege level for users who need limited access to the system. Every command in the Dell Networking OS is assigned a privilege level of 0, 1, or 15. You can configure up to 16 privilege levels.
• name: Enter a text string up to 63 characters long.
• access-class access-list-name: Restrict access by access-class.
• nopassword: Do not require a password for the user to log in.
• encryption-type: Enter 0 for plain text or 7 for encrypted text.
• password: Enter a string. Specify the password for the user.
• privilege level: The range is from 0 to 15.
• secret: Specify the secret for the user.
To view the usernames, use the show users command in EXEC Privilege mode.
2 • privilege level: the range is from 0 to 15. • nopassword: do not require the user to enter a password. • encryption-type: enter 0 for plain text or 7 for encrypted text. • password: enter a text string. • secret: specify the secret for the user. Configure a password for privilege level. CONFIGURATION mode enable password [level level] [encryption-mode] password Configure the optional and required parameters: • level level: specify a level from 0 to 15. Level 15 includes all levels.
username john password 0 john privilege 8 ! The following example shows the Telnet session for user john. The show privilege command output confirms that john is in privilege level 8. In EXEC Privilege mode, john can access only the commands listed. In CONFIGURATION mode, john can access only the snmp-server commands. Example of Privilege Level Login and Available Commands apollo% telnet 172.31.1.53 Trying 172.31.1.53... Connected to 172.31.1.53. Escape character is '^]'.
enable or enable privilege-level • If you do not enter a privilege level, the system sets it to 15 by default. Move to a lower privilege level. EXEC Privilege mode disable level-number • level-number: The level-number you wish to set. If you enter disable without a level-number, your security level is 1. RADIUS Remote authentication dial-in user service (RADIUS) is a distributed client/server protocol.
• The administrator changes the idle-time of the line on which the user has logged in. • The idle-time is lower than the RADIUS-returned idle-time. ACL Configuration Information The RADIUS server can specify an ACL. If an ACL is configured on the RADIUS server, and if that ACL is present, the user may be allowed access based on that ACL. If the ACL is absent, authorization fails, and a message is logged indicating this.
Defining a AAA Method List to be Used for RADIUS To configure RADIUS to authenticate or authorize users on the system, create a AAA method list. Default method lists do not need to be explicitly applied to the line, so they are not mandatory. To create a method list, use the following commands. • Enter a text string (up to 16 characters long) as the name of the method list you wish to use with the RADIUS authentication method.
• retransmit retries: the range is from 0 to 100. Default is 3. • timeout seconds: the range is from 0 to 1000. Default is 5 seconds. • key [encryption-type] key: enter 0 for plain text or 7 for encrypted text, and a string for the key. The key can be up to 42 characters long. This key must match the key configured on the RADIUS server host. If you do not configure these optional parameters, the global default values for all RADIUS host are applied.
Monitoring RADIUS To view information on RADIUS transactions, use the following command. • View RADIUS transactions to troubleshoot problems. EXEC Privilege mode debug radius TACACS+ Dell Networking OS supports the terminal access controller access control system (TACACS+) client, including support for login authentication. Configuration Task List for TACACS+ The following list includes the configuration tasks for TACACS+ functions.
login authentication {method-list-name | default} Example of a Failed Authentication To view the configuration, use the show config in LINE mode or the show running-config tacacs+ command in EXEC Privilege mode. If authentication fails using the primary method, Dell Networking OS employs the second method (or third method, if necessary) automatically. For example, if the TACACS+ server is reachable, but the server key is invalid, Dell Networking OS proceeds to the next authentication method.
Dell(conf)#aaa authentication exec tacacsauthorization tacacs+ Dell(conf)#tacacs-server host 25.1.1.2 key Force Dell(conf)# Dell(conf)#line vty 0 9 Dell(config-line-vty)#login authentication tacacsmethod Dell(config-line-vty)#end Specifying a TACACS+ Server Host To specify a TACACS+ server host and configure its communication parameters, use the following command. • Enter the host name or IP address of the TACACS+ server host.
hostname is the IP address or host name of the remote device. Enter an IPv4 address in dotted decimal format (A.B.C.D) or an IPv6 address. • SSH V2 is enabled by default on all the modes. • Display SSH connection information. EXEC Privilege mode show ip ssh Specifying an SSH Version The following example uses the ip ssh server version 2 command to enable SSH version 2 and the show ip ssh command to confirm the setting. Dell(conf)#ip ssh server version 2 Dell(conf)#do show ip ssh SSH server : enabled.
Other SSH related command include: • crypto key generate : generate keys for the SSH server. • debug ip ssh : enables collecting SSH debug information. • ip scp topdir : identify a location for files used in secure copy transfer. • ip ssh authentication-retries : configure the maximum number of attempts that should be used to authenticate a user. • ip ssh connection-rate-limit : configure the maximum number of incoming SSH connections per minute.
RSA Vty Authentication : disabled. Encryption HMAC Remote IP Using RSA Authentication of SSH The following procedure authenticates an SSH client based on an RSA key using RSA authentication. This method uses SSH version 2. 1 On the SSH client (Unix machine), generate an RSA key, as shown in the following example. 2 Copy the public key id_rsa.pub to the Dell Networking system. 3 Disable password authentication if enabled.
CONFIGURATION mode ip ssh pub-key-file flash://filename or ip ssh rhostsfile flash://filename Examples of Creating shosts and rhosts The following example shows creating shosts. admin@Unix_client# cd /etc/ssh admin@Unix_client# ls moduli sshd_config ssh_host_dsa_key.pub ssh_host_key.pub ssh_host_rsa_key.pub ssh_config ssh_host_dsa_key ssh_host_key ssh_host_rsa_key admin@Unix_client# cat ssh_host_rsa_key.
Enable host-based authentication on the server (Dell Networking system) and the client (Unix machine). The following message appears if you attempt to log in via SSH and host-based is disabled on the client. In this case, verify that host-based authentication is set to “Yes” in the file ssh_config (root permission is required to edit this file): permission denied (host based). If the IP address in the RSA key does not match the IP address from which you attempt to log in, the following message appears.
them from the VTY line with a deny-all access class. After users identify themselves, Dell Networking OS retrieves the access class from the local database and applies it. (Dell Networking OS then can close the connection if a user is denied access.) NOTE: If a VTY user logs in with RADIUS authentication, the privilege level is applied from the RADIUS server only if you configure RADIUS authentication. The following example shows how to allow or deny a Telnet connection to a user.
Dell(config-line-vty)#access-class sourcemac Dell(config-line-vty)#end 198 Security
18 Simple Network Management Protocol (SNMP) Network management stations use SNMP to retrieve or alter management data from network elements. A datum of management information is called a managed object; the value of a managed object can be static or variable. Network elements store managed objects in a database called a management information base (MIB).
NOTE: IOA supports only Read-only mode. Important Points to Remember • Typically, 5-second timeout and 3-second retry values on an SNMP server are sufficient for both local area network (LAN) and wide area network (WAN) applications. If you experience a timeout with these values, increase the timeout value to greater than 3 seconds, and increase the retry value to greater than 2 on your SNMP server.
• Configure the user with view privileges only (no password or privacy privileges). CONFIGURATION mode snmp-server user name group-name 3 noauth • Configure an SNMP group with view privileges only (no password or privacy privileges). CONFIGURATION mode snmp-server group group-name 3 noauth auth read name write name • Configure an SNMPv3 view.
• RFC 1157-defined traps — coldStart, warmStart, linkDown, linkUp, authenticationFailure, and egpNeighborLoss. • Force10 enterpriseSpecific environment traps — fan, supply, and temperature. • Force10 enterpriseSpecific protocol traps — bgp, ecfm, stp, and xstp. To configure the system to send SNMP notifications, use the following commands. 1 Configure the Dell Networking system to send notifications to an SNMP server.
CARD_MISMATCH: Mismatch: line card %d is type %s - type %s required.
customer1 at Level 7 VLAN 1000 %ECFM-5-ECFM_RDI_ALARM: RDI Defect detected by MEP 3 in Domain customer1 at Level 7 VLAN 1000 entity Enable entity change traps Trap SNMPv2-MIB::sysUpTime.0 = Timeticks: (1487406) 4:07:54.06, SNMPv2-MIB::snmpTrapOID.0 = OID: SNMPv2-SMI::mib-2.47.2.0.1, SNMPv2-SMI::enterprises.6027.3.6.1.1.2.0 = INTEGER: 4 Trap SNMPv2-MIB::sysUpTime.0 = Timeticks: (1488564) 4:08:05.64, SNMPv2-MIB::snmpTrapOID.0 = OID: SNMPv2-SMI::mib-2.47.2.0.1, SNMPv2-SMI::enterprises.6027.3.6.1.1.2.
Copyright (c) 1999-2012 by Dell Inc. All Rights Reserved. Build Time: Sat Jul 28 03:20:24 PDT 2012 SNMPv2-MIB::sysObjectID.0 = OID: SNMPv2-SMI::enterprises.6027.1.4.2 DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (77916) 0:12:59.16 SNMPv2-MIB::sysContact.0 = STRING: SNMPv2-MIB::sysName.0 = STRING: FTOS SNMPv2-MIB::sysLocation.0 = STRING: SNMPv2-MIB::sysServices.
NUM Status Description 10 Inactive Q Ports U Tengig 0/2 [Unix system output] > snmpget -v2c -c mycommunity 10.11.131.185 .1.3.6.1.2.1.17.7.1.4.3.1.2.1107787786 SNMPv2-SMI::mib-2.17.7.1.4.3.1.2.1107787786 = Hex-STRING: 40 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 The value 40 is in the first set of 7 hex pairs, indicating that these ports are in Stack Unit 0.
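The Hex-STRING above is a bit mask in which each bit marks one port, most-significant bit first. The decoder below is a hypothetical illustration of that layout (the function name is ours, not Dell's): 0x40 in the first byte has its second bit set, matching port Te 0/2 in the output above.

```python
def ports_from_hex(hex_string: str) -> list[int]:
    """Decode an SNMP Hex-STRING port mask into 1-based port numbers.
    Each byte covers eight ports; the most-significant bit of a byte
    is the lowest-numbered port in that group of eight."""
    raw = bytes.fromhex(hex_string.replace(" ", ""))
    ports = []
    for byte_index, byte in enumerate(raw):
        for bit in range(8):
            if byte & (0x80 >> bit):
                ports.append(byte_index * 8 + bit + 1)
    return ports

# The mask from the walk above: 0x40 in the first byte -> port 2.
print(ports_from_hex("40 00 00 00"))   # [2]
```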
1 00:01:e8:06:95:ac Dynamic Tengig 0/7 Active ----------------Query from Management Station--------------------->snmpwalk -v 2c -c techpubs 10.11.131.162 .1.3.6.1.2.1.17.4.3.1 SNMPv2-SMI::mib-2.17.4.3.1.1.0.1.232.6.149.172 = Hex-STRING: 00 01 E8 06 95 AC Example of Fetching Dynamic MAC Addresses on a Non-default VLANs In the following example, TenGigabitEthernet 0/7 is moved to VLAN 1000, a non-default VLAN. To fetch the MAC addresses learned on non-default VLANs, use the object dot1qTpFdbTable.
Example of Deriving the Interface Index Number Dell#show interface tengig 1/21 TenGigabitEthernet 1/21 is up, line protocol is up Hardware is Dell Force10Eth, address is 00:01:e8:0d:b7:4e Current address is 00:01:e8:0d:b7:4e Interface index is 72925242 [output omitted] Monitor Port-Channels To check the status of a Layer 2 port-channel, use f10LinkAggMib (.1.3.6.1.4.1.6027.3.2). In the following example, Po 1 is a switchport and Po 2 is in Layer 3 mode.
6027.3.1.1.4.1.2 = STRING: "OSTATE_UP: Changed interface state to up: Tengig 0/1" 2010-02-10 14:22:40 10.16.130.4 [10.16.130.4]: SNMPv2-MIB::sysUpTime.0 = Timeticks: (8500934) 23:36:49.34 SNMPv2-MIB::snmpTrapOID.0 = OID: IF-MIB::linkUp IF-MIB::ifIndex.1107755009 = INTEGER: 1107755009 SNMPv2-SMI::enterprises.6027.3.1.1.4.1.2 = STRING: "OSTATE_UP: Changed interface state to up: Po 1" Entity MIBS The Entity MIB provides a mechanism for presenting hierarchies of physical entities using SNMP tables.
SNMPv2-SMI::mib-2.47.1.1.1.1.2.32 = STRING: "Unit: 0 Port 29 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.33 = STRING: "Unit: 0 Port 30 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.34 = STRING: "Unit: 0 Port 31 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.35 = STRING: "Unit: 0 Port 32 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.36 = STRING: "40G QSFP+ port" SNMPv2-SMI::mib-2.47.1.1.1.1.2.37 = STRING: "Unit: 0 Port 33 40G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.41 = STRING: "40G QSFP+ port" SNMPv2-SMI::mib-2.47.1.1.1.1.
Standard VLAN MIB When the Aggregator is in Standalone mode, where all the 4000 VLANs are part of all server side interfaces as well as the single uplink LAG, it takes a long time (30 seconds or more) for external management entities to discover the entire VLAN membership table of all the ports. Support for current status OID in the standard VLAN MIB is expected to simplify and speed up this process.
In standalone mode, there are 4000 VLANs, by default. The SNMP output displays all 4000 VLANs. To view a particular VLAN, issue the snmp query with the VLAN interface ID. Dell#show interface vlan 1010 | grep “Interface index” Interface index is 1107526642 Use the output of the above command in the snmp query. snmpwalk -Os -c public -v 1 10.16.151.151 1.3.6.1.2.1.17.7.1.4.2.1.4.0.1107526642 mib-2.17.7.1.4.2.1.4.0.
MIB Object OID Description chSysCoresFileName 1.3.6.1.4.1.6027.3.19.1.2.9.1.2 Contains the core file names and the file paths. chSysCoresTimeCreated 1.3.6.1.4.1.6027.3.19.1.2.9.1.3 Contains the time at which core files are created. chSysCoresStackUnitNumber 1.3.6.1.4.1.6027.3.19.1.2.9.1.4 Contains information that includes which stack unit or processor the core file was originated from. chSysCoresProcess 1.3.6.1.4.1.6027.3.19.1.2.9.1.
19 Stacking An Aggregator auto-configures to operate in standalone mode. To use an Aggregator in a stack, you must manually configure it using the CLI to operate in stacking mode. In automated Stack mode, the base module 40GbE ports (33 and 37) operate as stack links; this assignment is fixed. In Programmable MUX (PMUX) mode, you can select either the base or optional modules (ports 33 — 56). An Aggregator supports a maximum of six stacking units.
Figure 30. A Two-Aggregator Stack Stack Management Roles The stack elects the management units for the stack management. • Stack master — primary management unit • Standby — secondary management unit The master holds the control plane and the other units maintain a local copy of the forwarding databases.
• Switch removal If the master switch goes off line, the standby replaces it as the new master. NOTE: For the Aggregator, the entire stack has only one management IP address. Stack Master Election The stack elects a master and standby unit at bootup time based on MAC address. The unit with the higher MAC value becomes master. To view which switch is the stack master, enter the show system command. The following example shows sample output from an established stack.
MAC Addressing All port interfaces in the stack use the MAC address of the management interface on the master switch. The MAC address of the chassis in which the master Aggregator is installed is used as the stack MAC address. The stack continues to use the master’s chassis MAC address even after a failover. The MAC address is not refreshed until the stack is reloaded and a different unit becomes the stack master.
Stacking Port Numbers By default, each Aggregator in Standalone mode is numbered stack-unit 0. Stack-unit numbers are assigned to member switches when the stack comes up. The following example shows the numbers of the 40GbE stacking ports on an Aggregator. Figure 31. Stack Groups on an Aggregator Stacking in PMUX Mode PMUX stacking allows the stacking of two or more IOA units. This allows grouping of multiple units for high availability. IOA supports a maximum of six stacking units.
Dell(conf)#stack-unit 0 stack-group 0 Dell(conf)#00:37:46: %STKUNIT0-M:CP %IFMGR-6-STACK_PORTS_ADDED: Ports Fo 0/33 have been configured as stacking ports. Please save and reset stack-unit 0 for config to take effect Dell(conf)#stack-unit 0 stack-group 1 Dell(conf)#00:37:57: %STKUNIT0-M:CP %IFMGR-6-STACK_PORTS_ADDED: Ports Fo 0/37 have been configured as stacking ports.
2 The switch with the highest MAC address at boot time. 3 A unit is selected as Standby by the administrator, and a fail over action is manually initiated or occurs due to a Master unit failure. No record of previous stack mastership is kept when a stack loses power. As it reboots, the election process will once again determine the Master and Standby switches. As long as the priority has not changed on any members, the stack will retain the same Master and Standby.
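When priorities do not decide the election, mastership falls to the highest MAC address, which reduces to a numeric comparison. The sketch below is an illustration only (the unit numbers and MAC values are hypothetical, and the real election also honors priority and boot order as described above), not the switch's election code.

```python
def elect_master(unit_macs: dict) -> int:
    """Return the stack-unit number whose MAC address has the highest
    numeric value -- the tie-break described in the text."""
    return max(unit_macs,
               key=lambda unit: int(unit_macs[unit].replace(":", ""), 16))

# Hypothetical two-unit stack: unit 1 has the higher MAC, so it wins.
units = {0: "00:1e:c9:f1:00:9b", 1: "00:1e:c9:f1:04:82"}
print(elect_master(units))   # 1
```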
Cabling Stacked Switches Before you configure MXL switches in a stack, connect the 40G direct attach or QSFP cables and transceivers to connect 40GbE ports on two Aggregators in the same or different chassis. Cabling Restrictions The following restrictions apply when setting up a stack of Aggregators: • Only daisy-chain or ring topologies are supported; star and full mesh topologies are not supported. • Stacking is supported only on 40GbE links by connecting 40GbE ports on the base module.
Login: username Password: ***** Dell> enable Dell# configure 3 Configure the Aggregator to operate in stacking mode. CONFIGURATION mode stack-unit 0 iom-mode stack 4 Repeat Steps 1 to 3 on the second Aggregator in the stack. 5 Log on to the CLI and reboot each switch, one after another, in as short a time as possible. EXEC PRIVILEGE mode reload NOTE: If the stacked switches all reboot at approximately the same time, the switch with the highest MAC address is automatically elected as the master switch.
• If the new unit has been configured with a stack number that is already assigned to a stack member, the stack avoids a numbering conflict by assigning the new switch the first available stack number. • If the stack has been provisioned for the stack number that is assigned to the new unit, the pre-configured provisioning must match the switch type. If there is a conflict between the provisioned switch type and the new unit, a mismatch error message is displayed.
the Reset button, which causes the factory default settings to be applied when the device comes up online) from the CMC for the uplink speed to be effective. This functionality to set the uplink speed is available from the CLI or the CMC interface when the Aggregator functions as a simple MUX or a VLT node, with all of the uplink interfaces configured to be member links in the same LAG bundle.
Merging Two Operational Stacks The recommended procedure for merging two operational stacks is as follows: 1 Always power off all units in one stack before connecting to another stack. 2 Add the units as a group by unplugging one stacking cable in the operational stack and physically connecting all unpowered units. 3 Completely cable the stacking connections, making sure the redundant link is also in place.
Example of the show system Command Dell# show system Stack MAC : 00:1e:c9:f1:00:9b Reload Type : normal-reload [Next boot : normal-reload] -- Unit 0 -Unit Type : Management Unit Status : online Next Boot : online Required Type : I/O-Aggregator - 34-port GE/TE (XL) Current Type : I/O-Aggregator - 34-port GE/TE (XL) Master priority : 0 Hardware Rev : Num Ports : 56 Up Time : 2 hr, 41 min FTOS Version : 8-3-17-46 Jumbo Capable : yes POE Capable : no Burned In MAC : 00:1e:c9:f1:00:9b No Of MACs : 3 -- Unit 1 -U
Dell# 5 0/53
Example of the show system stack-ports (ring) Command
Dell# show system stack-ports
Topology: Ring
Interface  Connection  Link Speed (Gb/s)  Admin Status  Link Status  Trunk Group
0/33       1/33        40                 up            up
0/37       1/37        40                 up            up
1/33       0/33        40                 up            up
1/37       0/37        40                 up            up
Example of the show system stack-ports (daisy chain) Command
Dell# show system stack-ports
Topology: Daisy Chain
Interface  Connection  Link Speed (Gb/s)  Admin Status  Link Status  Trunk Group
0/33                   40                 up
0/37       1/37        40                 up
1/33                   40                 up
1/37       0/37        40                 up
Stack-unit State: Stack-unit SW Version: Link to Peer: Active (Indicates Master Unit.) E8-3-16-46 Up -- PEER Stack-unit Status --------------------------------------------------------Stack-unit State: Standby (Indicates Standby Unit.) Peer stack-unit ID: 1 Stack-unit SW Version: E8-3-16-46 -- Stack-unit Redundancy Configuration ---------------------------------------------------------Primary Stack-unit: mgmt-id 0 Auto Data Sync: Full Hot (Failover Failover type with redundancy.
The following syslog messages are generated when a member unit fails: Dell#May 31 01:46:17: %STKUNIT3-M:CP %IPC-2-STATUS: target stack unit 4 not responding May 31 01:46:17: %STKUNIT3-M:CP %CHMGR-2-STACKUNIT_DOWN: Major alarm: Stack unit 4 down - IPC timeout Dell#May 31 01:46:17: %STKUNIT3-M:CP %IFMGR-1-DEL_PORT: Removed port: Te 4/1-32,41-48, Fo 4/ 49,53 Dell#May 31 01:46:18: %STKUNIT5-S:CP %IFMGR-1-DEL_PORT: Removed port: Te 4/1-32,41-48, Fo 4/ 49,53 Unplugged Stacking Cable • Problem: A stacking cable
Stack Unit in Card-Problem State Due to Incorrect Dell Networking OS Version • Problem: A stack unit enters a Card-Problem state because the switch has a different Dell Networking OS version than the master unit. The switch does not come online as a stack unit. • Resolution: To restore a stack unit with an incorrect Dell Networking OS version as a member unit, disconnect the stacking cables on the switch and install the correct Dell Networking OS version.
CONFIGURATION mode
boot system stack-unit all primary system partition
4 Save the configuration.
EXEC Privilege mode
write memory
5 Reload the stack unit to activate the new Dell Networking OS version.
CONFIGURATION mode
reload

Example of Upgrading all Stacked Switches

The following example shows how to upgrade all switches in a stack, including the master switch.
Dell# upgrade system ftp: A:
Address or name of remote host []: 10.11.200.241
Source file name []: //FTOS-XL-8.3.17.0.
4 Reset the stack unit to activate the new Dell Networking OS version. EXEC Privilege mode power-cycle stack-unit unit-number Example of Upgrading a Single Stack Unit The following example shows how to upgrade an individual stack unit.
20 Storm Control

The storm control feature allows you to control unknown-unicast, multicast, and broadcast traffic on Layer 2 and Layer 3 physical interfaces. Dell Networking OS Behavior: The Dell Networking OS supports broadcast control (the storm-control broadcast command) for Layer 2 and Layer 3 traffic. The minimum number of packets per second (PPS) that storm control can limit is two.
• Configure the packets per second (pps) of multicast traffic allowed on C-Series and S-Series networks only. CONFIGURATION mode storm-control multicast packets_per_second in • Configure the packets per second of unknown-unicast traffic allowed in or out of the network. CONFIGURATION mode storm-control unknown-unicast [packets_per_second in] Configuring Storm Control from INTERFACE Mode To configure storm control, use the following command.
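Based on the command syntax above, a storm-control configuration might look like the following sketch. The PPS values are illustrative only, and the broadcast form is assumed to mirror the multicast syntax shown above:

```text
Dell(conf)#storm-control broadcast 1000 in
Dell(conf)#storm-control multicast 1000 in
Dell(conf)#storm-control unknown-unicast 1000 in
```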
21 Broadcast Storm Control On the Aggregator, the broadcast storm control feature is enabled by default on all ports, and disabled on a port when an iSCSI storage device is detected. Broadcast storm control is re-enabled as soon as the connection with an iSCSI device ends. Broadcast traffic on Layer 2 interfaces is limited or suppressed during a broadcast storm. You can view the status of a broadcast-storm control operation by using the show io-aggregator broadcast storm-control status command.
22 System Time and Date The Aggregator auto-configures the hardware and software clocks with the current time and date. If necessary, you can manually set and maintain the system time and date using the CLI commands described in this chapter.
Setting the Timezone

Coordinated Universal Time (UTC) is the time standard based on International Atomic Time, and is commonly known as Greenwich Mean Time. When setting the system time, you must specify the offset between UTC and your local timezone. For example, San Jose, CA is in the Pacific timezone, which has a UTC offset of -8. To set the clock timezone, use the following command.
• Set the clock to the appropriate timezone.
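For example, to set the Pacific timezone with the -8 offset described above (a sketch; the timezone name is a user-defined label, and the clock timezone name offset command form is assumed):

```text
Dell(conf)#clock timezone pacific -8
```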
Example of the clock summer-time Command Dell(conf)#clock summer-time pacific date Mar 14 2012 00:00 Nov 7 2012 00:00 Dell(conf)# Setting Recurring Daylight Saving Time Set a date (and time zone) on which to convert the switch to daylight saving time on a specific day every year. If you have already set daylight saving for a one-time setting, you can set that date and time as the recurring setting with the clock summer-time time-zone recurring command.
Dell(conf)#clock summer-time pacific recurring Dell(conf)# Configuring the Offset-Threshold for NTP Audit Log You can configure the system to send an audit log message to a syslog server if the time difference from the NTP server is greater than a threshold value (offset-threshold). However, time synchronization still occurs. To configure the offset-threshold, follow this procedure.
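A minimal sketch of the offset-threshold configuration, assuming the ntp offset-threshold command; the 9-second threshold is an illustrative value only:

```text
Dell(conf)#ntp offset-threshold 9
```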
23 Uplink Failure Detection (UFD) Supported Modes Standalone, PMUX, VLT, Stacking Topics: • Feature Description • How Uplink Failure Detection Works • UFD and NIC Teaming • Important Points to Remember • Uplink Failure Detection (SMUX mode) • Configuring Uplink Failure Detection (PMUX mode) • Clearing a UFD-Disabled Interface (in PMUX mode) • Displaying Uplink Failure Detection • Sample Configuration: Uplink Failure Detection Feature Description UFD provides detection of the loss of upstream connectivity.
Figure 32. Uplink Failure Detection How Uplink Failure Detection Works UFD creates an association between upstream and downstream interfaces. The association of uplink and downlink interfaces is called an uplink-state group. An interface in an uplink-state group can be a physical interface or a port-channel (LAG) aggregation of physical interfaces. An enabled uplink-state group tracks the state of all assigned upstream interfaces.
Figure 33. Uplink Failure Detection Example

If only one of the upstream interfaces in an uplink-state group goes down, a specified number of downstream ports associated with the upstream interface are put into a Link-Down state. You can configure this number, which is calculated from the ratio of the upstream port bandwidth to the downstream port bandwidth in the same uplink-state group.
For example, as shown previously, the switch/router with UFD detects the uplink failure and automatically disables the associated downstream link port to the server. The server with NIC teaming detects the disabled link and automatically switches over to the backup link in order to continue to transmit traffic upstream.

Important Points to Remember

When you configure UFD, the following conditions apply.
• You can configure up to 16 uplink-state groups.
UPLINK-STATE-GROUP mode
defer-timer seconds

Dell(conf)#uplink-state-group 1
Dell(conf-uplink-state-group-1)#defer-timer 20
Dell(conf-uplink-state-group-1)#show config
!
uplink-state-group 1
 downstream TenGigabitEthernet 0/1-32
 upstream Port-channel 128
 defer-timer 20

Configuring Uplink Failure Detection (PMUX mode)

To configure UFD, use the following commands.
1 Create an uplink-state group and enable the tracking of upstream links on the switch/router.
By default, auto-recovery of UFD-disabled downstream ports is enabled. To disable auto-recovery, use the no downstream auto-recover command.
5 Specify the time (in seconds) to wait for the upstream port channel (LAG 128) to come back up before server ports are brought down.
UPLINK-STATE-GROUP mode
defer-timer seconds
NOTE: This command is available in Standalone and VLT modes only. The range is from 1 to 120.
6 (Optional) Enter a text description of the uplink-state group.
00:10:12: %STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Changed interface state to down: Te 0/1
00:10:12: %STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Changed interface state to down: Te 0/2
00:10:12: %STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Changed interface state to down: Te 0/3
00:10:13: %STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Changed uplink state group state to down: Group 3
00:10:13: %STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Downstream interface set to UFD error-disabled: Te 0/4
00:10:13: %STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Downstream interface set to UFD error-disabled: Te 0/5
00:10:13: %STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Downstream interface set to UFD error-disabled: Te 0/6
Uplink State Group: 1  Status: Enabled, Up
Uplink State Group: 3  Status: Enabled, Up
Uplink State Group: 5  Status: Enabled, Down
Uplink State Group: 6  Status: Enabled, Up
Uplink State Group: 7  Status: Enabled, Up
Uplink State Group: 16 Status: Disabled, Up

Dell# show uplink-state-group 16
Uplink State Group: 16 Status: Disabled, Up

Dell#show uplink-state-group detail
(Up): Interface up   (Dwn): Interface down   (Dis): Interface disabled
Uplink State Group : 1  Status: Enabled, Up
Upstream Interfaces :
Downstream Interfaces :
Examples of Viewing UFD Output

Dell#show running-config uplink-state-group
!
uplink-state-group 1
 no enable
 downstream TenGigabitEthernet 0/0
 upstream TenGigabitEthernet 0/1
Dell#

Dell(conf-uplink-state-group-16)# show configuration
!
uplink-state-group 16
 no enable
 description test
 downstream disable links all
 downstream TengigabitEthernet 0/40
 upstream TengigabitEthernet 0/41
 upstream Port-channel 8

Sample Configuration: Uplink Failure Detection

The following example shows a sample configuration of UFD o
Upstream Interfaces   : Te 0/3(Up) Te 0/4(Up)
Downstream Interfaces : Te 0/1(Up) Te 0/2(Up) Te 0/5(Up) Te 0/9(Up) Te 0/11(Up) Te 0/12(Up)

< After a single uplink port fails >

Dell#show uplink-state-group detail
(Up): Interface up   (Dwn): Interface down   (Dis): Interface disabled
Uplink State Group    : 3  Status: Enabled, Up
Upstream Interfaces   : Te 0/3(Dwn) Te 0/4(Up)
Downstream Interfaces : Te 0/1(Dis) Te 0/2(Dis) Te 0/5(Up) Te 0/9(Up) Te 0/11(Up) Te 0/12(Up)

Uplink Failure Detection (UFD) 249
24 PMUX Mode of the IO Aggregator This chapter provides an overview of the PMUX mode. I/O Aggregator (IOA) Programmable MUX (PMUX) Mode IOA PMUX is a mode that provides flexibility of operation with added configurability. This involves creating multiple LAGs, configuring VLANs on uplinks and the server side, configuring data center bridging (DCB) parameters, and so forth. By default, IOA starts up in IOA Standalone mode.
Configuring the Commands without a Separate User Account Starting with Dell Networking OS version 9.3(0.0), you can configure the PMUX mode CLI commands without having to configure a new, separate user profile. The user profile you defined to access and log in to the switch is sufficient to configure the PMUX mode commands. The IOA PMUX Mode CLI Commands section lists the PMUX mode CLI commands that you can now configure without a separate user account.
• Optimized forwarding with virtual router redundancy protocol (VRRP). • Provides link-level resiliency. • Assures high availability. CAUTION: Dell Networking does not recommend enabling Stacking and VLT simultaneously. If you enable both features at the same time, unexpected behavior occurs. As shown in the following example, VLT presents a single logical Layer 2 domain from the perspective of attached devices that have a virtual link trunk terminating on separate chassis in the VLT domain.
EXEC mode

Dell# show interfaces port brief
Codes: L - LACP Port-channel
       O - OpenFlow Controller Port-channel

  LAG  Mode  Status  Uptime    Ports
L 127  L2    up      00:18:22  Fo 0/33
                               Fo 0/37
  128  L2    up      00:00:00  Fo 0/41 (Up)<<<<<<<
Delay-Restore Abort Threshold  : 60 seconds
Peer-Routing                   : Disabled
Peer-Routing-Timeout timer     : 0 seconds
Multicast peer-routing timeout : 150 seconds
Dell#

5 Configure the secondary VLT.
NOTE: Repeat steps 1 through 4 on the secondary VLT, ensuring you use a different backup destination and unit-id.
14   Active   T  Te 0/1
              T  Po128(Te 0/41-42)
15   Active   T  Te 0/1
              T  Po128(Te 0/41-42)
20   Active   T  Te 0/1
              U  Po128(Te 0/41-42)
              U  Te 0/1
Dell#

You can remove the inactive VLANs that have no member ports using the following commands:
Dell#configure
Dell(conf)#no interface vlan vlan-id    <- vlan-id of an inactive VLAN with no member ports

You can remove the tagged VLANs using the no vlan tagged command. You can remove the untagged VLANs using the no vlan untagged command in the physical port/port-channel.
• A VLT domain supports two chassis members, which appear as a single logical device to network access devices connected to VLT ports through a port channel.
• A VLT domain consists of the two core chassis, the interconnect trunk, backup link, and the LAG members connected to attached devices.
• Each VLT domain has a unique MAC address that you create or VLT creates automatically.
• ARP tables are synchronized between the VLT peer nodes.
• VLT backup link
  • In the backup link between peer switches, heartbeat messages are exchanged between the two chassis for health checks.
  • The default time interval between heartbeat messages over the backup link is 1 second. You can configure this interval. The range is from 1 to 5 seconds.
  • DSCP marking on heartbeat messages is CS6.
Primary and Secondary VLT Peers

Primary and Secondary VLT Peers are supported on the Aggregator. To prevent issues when connectivity between peers is lost, you can designate Primary and Secondary roles for VLT peers. You can elect or configure the Primary Peer. By default, the peer with the lowest MAC address is selected as the Primary Peer. If the VLTi link fails, the status of the remote VLT Primary Peer is checked using the backup link.
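Electing a peer as Primary might be sketched as follows, assuming the primary-priority command in VLT DOMAIN mode; the domain ID and priority value are illustrative only:

```text
Dell(conf)#vlt domain 1
Dell(conf-vlt-domain)#primary-priority 4096
```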
The delay-restore feature waits for all saved configurations to be applied, then starts a configurable timer. After the timer expires, the VLT ports are enabled one-by-one in a controlled manner. The delay between bringing up each VLT port-channel is proportional to the number of physical members in the port-channel. The default is 90 seconds.
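The delay-restore timer described above can be adjusted in VLT DOMAIN mode. A sketch with an illustrative 120-second value (the command form is assumed; the default noted above is 90 seconds):

```text
Dell(conf)#vlt domain 1
Dell(conf-vlt-domain)#delay-restore 120
```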
EXEC mode show vlt role • Display the current configuration of all VLT domains or a specified group on the switch. EXEC mode show running-config vlt • Display statistics on VLT operation. EXEC mode show vlt statistics • Display the current status of a port or port-channel interface used in the VLT domain. EXEC mode show interfaces interface • interface: specify one of the following interface types: • Fast Ethernet: enter fastethernet slot/port. • 1-Gigabit Ethernet: enter gigabitethernet slot/port.
Delay-Restore Abort Threshold  : 60 seconds
Peer-Routing                   : Disabled
Peer-Routing-Timeout timer     : 0 seconds
Multicast peer-routing timeout : 150 seconds
Dell#

Example of the show vlt detail Command

Dell_VLTpeer1# show vlt detail
Local LAG Id  Peer LAG Id  Local Status  Peer Status  Active VLANs
------------  -----------  ------------  -----------  ------------
100           100          UP            UP           10, 20, 30
127           2            UP            UP           20, 30

Dell_VLTpeer2# show vlt detail
Local LAG Id  Peer LAG Id  Local Status
------------  -----------  ------------
2             127
100           100
ICL Hello's Sent:             148
ICL Hello's Received:         98

Dell_VLTpeer2# show vlt statistics
VLT Statistics
---------------
HeartBeat Messages Sent:      994
HeartBeat Messages Received:  978
ICL Hello's Sent:             89
ICL Hello's Received:         89

Additional VLT Sample Configurations

To configure VLT, create a VLT domain, configure a backup link and interconnect trunk, and connect the peer switches in a VLT domain to an attached access device (switch or server).
Table 28. Troubleshooting VLT

Description: Bandwidth monitoring
Behavior at Peer Up: A syslog error message and an SNMP trap are generated when the VLTi bandwidth usage goes above the 80% threshold and when it drops below 80%.
Behavior During Run Time: A syslog error message and an SNMP trap are generated when the VLTi bandwidth usage goes above its threshold.
Action to Take: Depending on the traffic that is received, the traffic can be offloaded in the VLTi.

Description: Domain ID mismatch
Behavior at Peer Up: The VLT peer does not boot up.
25 FC Flex IO Modules This part provides a generic, broad-level description of the operations, capabilities, and configuration commands of the Fiber Channel (FC) Flex IO module. Topics: • FC Flex IO Modules • Understanding and Working of the FC Flex IO Modules • Fibre Channel over Ethernet for FC Flex IO Modules FC Flex IO Modules This part provides a generic, broad-level description of the operations, capabilities, and configuration commands of the Fiber Channel (FC) Flex IO module.
The FC Flex IO module uses the same baseboard hardware as the IOA and the M1000e chassis. You can insert the FC Flex IO module into any of the optional module slots of the IOA; each module provides four FC ports. If you insert only one FC Flex IO module, four ports are supported; if you insert two FC Flex IO modules, eight ports are supported. By installing an FC Flex IO module, you can enable the IOA to directly connect to an existing FC SAN network.
• A maximum of 64 server fabric login (FLOGI) or fabric discovery (FDISC) requests per server MAC address can be forwarded by the FC Flex IO module to the FC core switch. Without user configuration, only 32 server login sessions are permitted for each server MAC address. To increase the total number of sessions to 64, use the max sessions command.
• A distance of up to 300 meters is supported at 8 Gbps for Fibre Channel traffic.
• Fc-map: 0x0efc00 • Fcf-priority: 128 • Fka-adv-period: 8000mSec • Keepalive: enable • Vlan priority: 3 • On an IOA, the FCoE virtual local area network (VLAN) is automatically configured. • With FC Flex IO modules on an IOA, the following DCB maps are applied on all of the ENode facing ports.
Processing of Data Traffic The Dell Networking OS determines the module type that is plugged into the slot. Based on the module type, the software performs the appropriate tasks. The FC Flex IO module encapsulates and decapsulates the FCoE frames. The module directly switches any non-FCoE or non-FIP traffic, and only FCoE frames are processed and transmitted out of the Ethernet network.
Figure 35. Installing and Configuring Flowchart for FC Flex IO Modules To see if a switch is running the latest Dell Networking OS version, use the show version command. To download a Dell Networking OS version, go to http://support.dell.com. Installation Site Preparation Before installing the switch or switches, make sure that the chosen installation location meets the following site requirements: • Clearance — There is adequate front and rear clearance for operator access.
2 Relative Humidity — The operating relative humidity is 8 percent to 85 percent (non‑condensing) with a maximum humidity gradation of 10 percent per hour.
• When the FC fabric discovery (FDISC) accept message is received from the SAN side, the FC Flex IO module converts the FDISC message again into an FLOGI accept message and transmits it to the CNA. • Internal tables of the switch are then programmed to enable the gateway device to forward FCoE traffic directly back and forth between the devices.
Fibre Channel over Ethernet for FC Flex IO Modules FCoE provides a converged Ethernet network that allows the combination of storage-area network (SAN) and LAN traffic on a Layer 2 link by encapsulating Fibre Channel data into Ethernet frames. The Fibre Channel (FC) Flex IO module is supported on Dell Networking Operating System (OS) I/O Aggregator (IOA).
26 FC FLEXIO FPORT FC FlexIO FPort is now supported on the Dell Networking OS. Topics: • FC FLEXIO FPORT • Configuring Switch Mode to FCF Port Mode • Name Server • FCoE Maps • Creating an FCoE Map • Zoning • Creating Zone and Adding Members • Creating Zone Alias and Adding Members • Creating Zonesets • Activating a Zoneset • Displaying the Fabric Parameters FC FLEXIO FPORT The switch is a blade switch which is plugged into the Dell M1000 Blade server chassis.
Configuring Switch Mode to FCF Port Mode To configure switch mode to Fabric services, use the following commands. 1 Configure Switch mode to FCF Port. CONFIGURATION mode feature fc fport domain id 2 NOTE: Enable remote-fault-signaling rx off command in FCF FPort mode on interfaces connected to the Compellent and MDF storage devices. 2 Create an FCoE map with the parameters used in the communication between servers and a SAN fabric.
4 Configure the FCoE mapped address prefix (FC-MAP) value which is used to identify FCoE traffic transmitted on the FCoE VLAN for the specified fabric. FCOE MAP mode fc-map fc-map-value 5 Configure the SAN fabric to which the FC port connects by entering the name of the FCoE map applied to the interface. INTERFACE mode fcoe-map {tengigabitEthernet slot/port | fortygigabitEthernet slot/port} The FCoE map contains FCoE and FC parameter settings (refer to FCoE Maps).
• The association between the FCoE VLAN ID and FC fabric ID where the desired storage arrays are installed. Each Fibre Channel fabric serves as an isolated SAN topology within the same physical network. • A server uses the priority to select an upstream FCoE forwarder (FCF priority). • FIP keepalive (FKA) advertisement timeout. NOTE: In each FCoE map, the fabric ID, FC-MAP value, and FCoE VLAN must be unique. To access one SAN fabric, use one FCoE map.
fc-map fc-map-value You must enter a unique MAC address prefix as the FC-MAP value for each fabric. The range is from 0EFC00 to 0EFCFF. The default is none. 5 Configure the priority used by a server CNA to select the FCF for a fabric login (FLOGI). FCoE MAP mode fcf-priority priority The range is from 1 to 255. The default is 128. 6 Enable the monitoring FIP keep-alive messages (if it is disabled) to detect if other FCoE devices are reachable.
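Putting the FCoE map steps together, a configuration might be sketched as follows. The map name and fabric/VLAN IDs are illustrative, and the prompt strings follow the examples used elsewhere in this guide:

```text
Dell(conf)#fcoe-map SAN_FABRIC_A
Dell(config-fcoe-name)#fabric-id 1002 vlan 1002
Dell(config-fcoe-name)#fc-map 0efc00
Dell(config-fcoe-name)#fcf-priority 128
```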
ZONE CONFIGURATION mode member word The member can be WWPN (00:00:00:00:00:00:00:00), port ID (000000), or alias name (word). Example of Creating a Zone and Adding Members Dell(conf)#fc zone z1 Dell(conf-fc-zone-z1)#member 11:11:11:11:11:11:11:11 Dell(conf-fc-zone-z1)#member 020202 Dell(conf-fc-zone-z1)#exit Creating Zone Alias and Adding Members To create a zone alias and add devices to the alias, follow these steps. 1 Create a zone alias name.
Activating a Zoneset

Activating a zoneset makes the zones within it effective. On a switch, only one zoneset can be active. Any changes in an activated zoneset do not take effect until it is re-activated. By default, the fcoe-map fabric map-name does not have any active zonesets.
1 Enter the fc-fabric command in fcoe-map mode to activate or deactivate the zoneset.
Example of the show fcoe-map Command

Dell(conf)#do show fcoe-map map
Fabric Name       map
Fabric Type       Fport
Fabric Id         1002
Vlan Id           1002
Vlan priority     3
FC-MAP            0efc00
FKA-ADV-Period    8
Fcf Priority      128
Config-State      ACTIVE
Oper-State        UP
=======================================================
Switch Config Parameters
=======================================================
DomainID          2
=======================================================
Switch Zoning Parameters
=======================================================
Example of the show fc zoneset active Command Dell#show fc zoneset active Active Zoneset: fcoe_srv_fc_tgt ZoneName ZoneMember ================================== brcd_sanb 10:00:8c:7c:ff:21:5f:8d 20:02:00:11:0d:03:00:00 Dell# Example of the show fc zone Command Dell#show fc zone ZoneName ZoneMember ============================== brcd_sanb brcd_cna1_wwpn1 sanb_p2tgt1_wwpn Dell# Example of the show fc alias Command Dell(conf)#do show fc alias ZoneAliasName ZoneMember ===========================================
27 NPIV Proxy Gateway The N-port identifier virtualization (NPIV) Proxy Gateway (NPG) feature provides FCoE-FC bridging capability on the Aggregator, allowing server CNAs to communicate with SAN fabrics over the Aggregator.
An FX2 chassis FC port is configured as an N (node) port that logs in to an F (fabric) port on the upstream FC core switch and creates a channel for N-port identifier virtualization. NPIV allows multiple N-port fabric logins at the same time on a single, physical Fibre Channel link. Converged Network Adapter (CNA) ports on servers connect to the FX2 chassis Ten-Gigabit Ethernet ports and log in to an upstream FC core switch through the N port.
Table 29. Aggregator with the NPIV Proxy Gateway: Terms and Definitions Term Description FC port Fibre Channel port on the Aggregator that operates in autosensing, 2, 4, or 8-Gigabit mode. On an NPIV proxy gateway, an FC port can be used as a downlink for a server connection and an uplink for a fabric connection. F port Port mode of an FC port connected to an end node (N) port on an Aggregator with the NPIV proxy gateway.
FCoE Maps

An FCoE map is used to identify the SAN fabric to which FCoE storage traffic is sent. Using an FCoE map, an Aggregator with the NPG operates as an FCoE-FC bridge between an FC SAN and FCoE network by providing FCoE-enabled servers and switches with the necessary parameters to log in to a SAN fabric. An FCoE map applies the following parameters on server-facing Ethernet and fabric-facing FC ports on the Aggregator:
• The dedicated FCoE VLAN used to transport FCoE storage traffic.
• The association between the FCoE VLAN ID and the FC fabric ID where the desired storage arrays are installed.
• The FC-MAP value used to generate fabric-provided MAC addresses.
• The priority used by a server CNA to select an upstream FCoE forwarder (FCF priority).
• The FIP keepalive (FKA) advertisement timeout.
State   :Complete
PfcMode :ON
--------------------
PG:0  TSA:ETS  BW:30  PFC:OFF
      Priorities:0 1 2 5 6 7
PG:1  TSA:ETS  BW:30  PFC:OFF
      Priorities:4
PG:2  TSA:ETS  BW:40  PFC:ON
      Priorities:3

Default FCoE map

Dell(conf)#do show fcoe-map
Fabric Name       SAN_FABRIC
Fabric Id         1002
Vlan Id           1002
Vlan priority     3
FC-MAP            0efc00
FKA-ADV-Period    8
Fcf Priority      128
Config-State      ACTIVE
Oper-State        UP
Members
  Fc 0/9
  Te 0/4

Dell(conf)#do show qos dcb-map DCB_MAP_PFC_OFF
----------------------
State   :In-Progress
PfcMode :O
CONFIGURATION mode dcb-map name 2 Configure the PFC setting (on or off) and the ETS bandwidth percentage allocated to traffic in each priority group. Configure whether the priority group traffic should be handled with strict-priority scheduling. The sum of all allocated bandwidth percentages must be 100 percent. Strict-priority traffic is serviced first. Afterward, bandwidth allocated to other priority groups is made available and allocated according to the specified percentages.
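As a sketch, a DCB map matching this description might be configured as follows. The map name, bandwidth percentages, and the priority-to-group mapping are illustrative, and the priority-group/priority-pgid command forms are assumed:

```text
Dell(conf)#dcb-map SAN_DCB_MAP
Dell(conf-dcbmap-SAN_DCB_MAP)#priority-group 0 bandwidth 60 pfc off
Dell(conf-dcbmap-SAN_DCB_MAP)#priority-group 1 bandwidth 40 pfc on
Dell(conf-dcbmap-SAN_DCB_MAP)#priority-pgid 0 0 0 1 0 0 0 0
```

Here priority 3 (typically FCoE traffic) is mapped to priority group 1 with PFC enabled; all other priorities share priority group 0 with PFC off. The two bandwidth percentages sum to 100, as required.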
Repeat this step to apply a DCB map to more than one port or port channel. INTERFACE mode dcb-map name Dell# interface tengigabitEthernet 0/0 Dell(config-if-te-0/0)# dcb-map SAN_DCB1 Creating an FCoE VLAN Create a dedicated VLAN to send and receive Fibre Channel traffic over FCoE links between servers and a fabric over an NPG. The NPG receives FCoE traffic and forwards decapsulated FC frames over FC links to SAN switches in a specified fabric. 1 Create the dedicated VLAN for FCoE traffic. Range: 2–4094.
4 Specify the FC-MAP value used to generate a fabric-provided MAC address, which is required to send FCoE traffic from a server on the FCoE VLAN to the FC fabric specified in Step 2. Enter a unique MAC address prefix as the FC-MAP value for each fabric. Range: 0EFC00–0EFCFF. Default: None. FCoE MAP mode fc-map fc-map-value 5 Configure the priority used by a server CNA to select the FCF for a fabric login (FLOGI). Range: 1–255. Default: 128.
no shutdown

Applying an FCoE Map on Fabric-facing FC Ports

The FC ports on the Aggregator are configured by default to operate in N port mode to connect to an F port on an FC switch in a fabric. You can apply only one FCoE map on an FC port. When you apply an FCoE map on a fabric-facing FC port, the FC port becomes part of the FCoE fabric, whose settings in the FCoE map are configured on the port and exported to downstream server CNA ports.
2 Apply the DCB map on a downstream (server-facing) Ethernet port: Dell(config-if-te-0/0)#dcb-map SAN_DCB_MAP 3 Create the dedicated VLAN to be used for FCoE traffic: Dell(conf)#interface vlan 1002 4 Configure an FCoE map to be applied on downstream (server-facing) Ethernet and upstream (core-facing) FC ports: Dell(config)# fcoe-map SAN_FABRIC_A Dell(config-fcoe-name)# fabric-id 1002 vlan 1002 Dell(config-fcoe-name)# description "SAN_FABRIC_A" Dell(config-fcoe-name)# fc-map 0efc00 Dell(config-fcoe-name
show interfaces status Command Example

Dell# show interfaces status
Port     Description  Status  Speed       Duplex  Vlan
Te 0/1                Up      10000 Mbit  Full    1-4094
Te 0/2                Down    Auto        Auto    1-1001,1003-4094
Te 0/3                Up      10000 Mbit  Full    1-1001,1003-4094
Te 0/4                Down    Auto        Auto    1-1001,1003-4094
Te 0/5                Up      10000 Mbit  Full    1-4094
Te 0/6                Up      10000 Mbit  Full    1-4094
Te 0/7                Up      10000 Mbit  Full    1-4094
Te 0/8   toB300       Down    Auto        Auto    1-1001,1003-4094
Fc 0/9                Up      8000 Mbit   Full    --
Fc 0/10               Up      8000 Mbit   Full    --
Te 0/11               Down    Auto        Auto    --
Te 0/12               Down    Auto        Auto    --
Fcf Priority      128
Config-State      ACTIVE
Oper-State        UP
Members
  Fc 0/9
  Te 0/11
  Te 0/12

Table 32. show fcoe-map Field Descriptions

Field          Description
Fabric-Name    Name of a SAN fabric.
Fabric ID      The ID number of the SAN fabric to which FC traffic is forwarded.
VLAN ID        The dedicated VLAN used to transport FCoE storage traffic between servers and a fabric over the NPG. The configured VLAN ID must be the same as the fabric ID.
VLAN priority  FCoE traffic uses VLAN priority 3.
Table 33. show qos dcb-map Field Descriptions Field Description State Complete: All mandatory DCB parameters are correctly configured. In progress: The DCB map configuration is not complete. Some mandatory parameters are not configured. PFC Mode PFC configuration in the DCB map: On (enabled) or Off. PG Priority group configured in the DCB map. TSA Transmission scheduling algorithm used in the DCB map: Enhanced Transmission Selection (ETS).
Field Description Status Operational status of the link between a server CNA port and a SAN fabric: Logged In - Server has logged in to the fabric and is able to transmit FCoE traffic.
Field Description Enode WWNN Worldwide node name of the server CNA. FCoE MAC Fabric-provided MAC address (FPMA). The FPMA consists of the FC-MAP value in the FCoE map and the FC-ID provided by the fabric after a successful FLOGI. In the FPMA, the most significant bytes are the FC-MAP; the least significant bytes are the FC-ID. FC-ID FC port ID provided by the fabric. LoginMethod Method used by the server CNA to log in to the fabric; for example, FLOGI or FDISC.
28 Upgrade Procedures To find the upgrade procedures, go to the Dell Networking OS Release Notes for your system type to see all the requirements needed to upgrade to the desired Dell Networking OS version. To upgrade your system type, follow the procedures in the Dell Networking OS Release Notes. Get Help with Upgrades Direct any questions or concerns about the Dell Networking OS upgrade procedures to the Dell Technical Support Center. You can reach Technical Support: • On the web: http://support.dell.
29 Debugging and Diagnostics

This chapter contains the following sections:
L 128  L2L3  up  17:36:24  Te 0/33 (Up)  Te 0/35 (Up)
                           Te 0/36 (Up)  Te 0/39 (Up)
                           Te 0/51 (Up)  Te 0/53 (Up)
                           Te 0/54 (Up)  Te 0/56 (Up)

Dell#show uplink-state-group 1 detail
(Up): Interface up   (Dwn): Interface down   (Dis): Interface disabled
Uplink State Group    : 1  Status: Enabled, Up
Defer Timer           : 10 sec
Upstream Interfaces   : Po 128(Up)
Downstream Interfaces : Te 0/1(Up) Te 0/2(Up) Te 0/3(Dwn) Te 0/4(Dwn) Te 0/5(Up)
                        Te 0/6(Dwn) Te 0/7(Dwn) Te 0/8(Up) Te 0/9(Up) Te 0/10(Up)
                        Te 0/11(Dwn) Te
1 Display the current port mode for Aggregator L2 interfaces (show interfaces switchport interface command).

Dell#show interfaces switchport tengigabitethernet 0/1
Codes: U - Untagged, T - Tagged
       x - Dot1x untagged, X - Dot1x tagged
       G - GVRP tagged, M - Trunk, H - VSN tagged
       i - Internal untagged, I - Internal tagged, v - VLT untagged, V - VLT tagged

Name: TenGigabitEthernet 0/1
802.
256M bytes of boot flash memory. 1 34-port GE/TE (XL) 56 Ten GigabitEthernet/IEEE 802.
packets are transmitted through those components. These diagnostics also perform snake tests using virtual local area network (VLAN) configurations.
NOTE: Diagnostics are not allowed in Stacking mode, including on stack members. Avoid stacking before executing the diagnostic tests in the chassis.

Important Points to Remember
• You can only perform offline diagnostics on an offline standalone unit. You cannot perform diagnostics if the ports are configured in a stacking group.
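A sketch of running offline diagnostics on a standalone unit, assuming the offline stack-unit and diag stack-unit commands; the unit number and test level are illustrative, and the power-cycle shown earlier in the upgrade procedure is one way to return the unit to service:

```text
Dell#offline stack-unit 0
Dell#diag stack-unit 0 level0
Dell#power-cycle stack-unit 0
```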
3  Member  not present
4  Member  not present
5  Member  not present
Dell#

Trace Logs

In addition to the syslog buffer, the Dell Networking OS buffers trace messages, which are continuously written by various software tasks to report hardware and software events and status information. Each trace message provides the date, time, and name of the Dell Networking OS process. All messages are stored in a ring buffer. You can save the messages to a file either manually or automatically after failover.
show hardware stack-unit {0-5} buffer unit {0-1} total-buffer • View the forwarding plane statistics containing the packet buffer usage per port per stack unit. EXEC Privilege mode show hardware stack-unit {0-5} buffer unit {0-1} port {1-64 | all} buffer-info • View the forwarding plane statistics containing the packet buffer statistics per COS per port.
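For example, using the syntax above to inspect the buffers on stack unit 0 (the unit and port numbers are illustrative):

```text
Dell#show hardware stack-unit 0 buffer unit 0 total-buffer
Dell#show hardware stack-unit 0 buffer unit 0 port 1 buffer-info
Dell#show hardware stack-unit 0 buffer unit 0 port all buffer-info
```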
Environmental Monitoring Aggregator components use environmental monitoring hardware to detect transmit power readings, receive power readings, and temperature updates. To receive periodic power updates, you must enable the following command. • Enable environmental monitoring.
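In Dell Networking OS, enabling periodic optic power updates is typically done with the enable optic-info-update interval command; this form is an assumption here, and the 300-second interval is illustrative:

```text
Dell(conf)#enable optic-info-update interval 300
```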
SFP 49 Rx LOS state = False
SFP 49 Tx Fault state = False

Recognize an Over-Temperature Condition

An over-temperature condition occurs for one of two reasons: the card genuinely is too hot, or a sensor has malfunctioned. Inspect cards adjacent to the one reporting the condition to discover the cause.
• If directly adjacent cards are not at normal temperature, suspect a genuine overheating condition.
• If directly adjacent cards are at normal temperature, suspect a faulty sensor.
Dell#

Recognize an Under-Voltage Condition
If the system detects an under-voltage condition, it sends an alarm. To recognize this condition, look for the following system message:
%CHMGR-1-CARD_SHUTDOWN: Major alarm: Line card 2 down - auto-shutdown due to under voltage.
This message indicates that the specified card is not receiving enough power. In response, the system first shuts down Power over Ethernet (PoE).
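If you collect syslog output centrally, the alarm above can be matched mechanically. The regular expression below is derived only from the single sample message shown in this section, so treat the pattern as an assumption to validate against your own logs.

```python
import re

SAMPLE = ("%CHMGR-1-CARD_SHUTDOWN: Major alarm: Line card 2 down - "
          "auto-shutdown due to under voltage.")

def parse_undervoltage(line):
    """Return the line-card number from an under-voltage shutdown alarm,
    or None if the line is not that alarm. Pattern inferred from the
    one sample message in the guide."""
    m = re.search(r"Line card (\d+) down - auto-shutdown due to under voltage",
                  line)
    return int(m.group(1)) if m else None

print(parse_undervoltage(SAMPLE))              # prints 2
print(parse_undervoltage("unrelated log line"))  # prints None
```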
Buffer Tuning
Buffer tuning allows you to modify the way your switch allocates buffers from its available memory and helps prevent packet drops during a temporary burst of traffic. Application-specific integrated circuits (ASICs) implement the key functions of queuing, feature lookups, and forwarding lookups in hardware.
Figure 38. Buffer Tuning Points

Deciding to Tune Buffers
Dell Networking recommends exercising caution when configuring any non-default buffer settings, as tuning can significantly affect system performance. The default values work for most cases. As a guideline, consider tuning buffers if traffic is bursty and comes from several interfaces. In this case:
• Reduce the dedicated buffer on all queues/interfaces.
• Increase the dynamic buffer on all interfaces.
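The arithmetic behind the guideline above can be sketched as follows: total buffer memory is fixed, so shrinking the per-queue dedicated reservation enlarges the shared dynamic pool that bursty interfaces draw from. All numbers below are illustrative assumptions, not actual ASIC buffer sizes.

```python
def dynamic_pool_kb(total_kb, interfaces, queues_per_intf, dedicated_kb):
    """Whatever memory is not reserved as dedicated per-queue buffer
    remains in the shared dynamic pool available to absorb bursts.
    Sizes are hypothetical; real ASIC carving is more involved."""
    reserved = interfaces * queues_per_intf * dedicated_kb
    if reserved > total_kb:
        raise ValueError("dedicated reservation exceeds total buffer")
    return total_kb - reserved

# Shrinking dedicated buffer from 3 KB to 1 KB per queue
# (64 interfaces x 8 queues) grows the shared pool:
print(dynamic_pool_kb(9216, 64, 8, 3.0))  # -> 7680.0
print(dynamic_pool_kb(9216, 64, 8, 1.0))  # -> 8704.0
```

This is why the two bullets go together: the dedicated buffer you give up is exactly the dynamic buffer you gain.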
BUFFER PROFILE mode
buffer dynamic
• Change the number of packet-pointers per queue.
  BUFFER PROFILE mode
  buffer packet-pointers
• Apply the buffer profile to a CSF to FP link.
  CONFIGURATION mode
  buffer csf linecard

Dell Networking OS Behavior: If you attempt to apply a buffer profile to a nonexistent port-pipe, the system displays the following message:
%DIFFSERV-2-DSA_BUFF_CARVING_INVALID_PORT_SET: Invalid FP port-set 2 for linecard 2. Valid range of port-set is <0-1>.
Example of Viewing the Buffer Profile (Interface)
Dell#show buffer-profile detail int gi 0/10
Interface Gi 0/10
Buffer-profile fsqueue-fp
Dynamic buffer 1256.00 (Kilobytes)
Queue#  Dedicated Buffer (Kilobytes)  Buffer Packets
0       3.00                          256
1       3.00                          256
2       3.00                          256
3       3.00                          256
4       3.00                          256
5       3.00                          256
6       3.00                          256
7       3.00                          256

Example of Viewing the Buffer Profile (Linecard)
Dell#show buffer-profile detail fp-uplink stack-unit 0 port-set 0
Linecard 0 Port-set 0
Buffer-profile fsqueue-hig
Dynamic Buffer 1256.
CONFIGURATION mode
buffer-profile global [1Q|4Q]

Sample Buffer Profile Configuration
The two general types of network environments are sustained data transfers and voice/data. Dell Networking recommends a single-queue approach for data transfers.
Displaying Drop Counters
To display drop counters, use the following commands.
• Identify which stack unit, port pipe, and port is experiencing internal drops.
  show hardware stack-unit 0–11 drops [unit 0 [port 0–63]]
• Display drop counters.
  show hardware stack-unit drops unit port
• Identify which interface is experiencing internal drops.
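The drop-counter dumps these commands produce are long, and most entries are zero. A small filter that keeps only nonzero counters makes problem queues stand out. The `name : value` line format assumed below matches the example output in this chapter, but verify it against your own platform's output.

```python
import re

def nonzero_drops(show_output):
    """Parse 'counter name :value' lines from a drop-counter dump and
    return only the nonzero counters. Line format is an assumption
    based on the sample output in this chapter."""
    drops = {}
    for line in show_output.splitlines():
        m = re.match(r"\s*(.+?)\s*:\s*(\d+)\s*$", line)
        if m and int(m.group(2)) > 0:
            drops[m.group(1)] = int(m.group(2))
    return drops

sample = """HOL DROPS on COS6 :0
HOL DROPS on COS7 :12
Aged Drops :0
Egress FCS Drops :3"""
print(nonzero_drops(sample))  # -> {'HOL DROPS on COS7': 12, 'Egress FCS Drops': 3}
```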
HOL DROPS on COS6
HOL DROPS on COS7
HOL DROPS on COS8
HOL DROPS on COS9
HOL DROPS on COS10
HOL DROPS on COS11
HOL DROPS on COS12
HOL DROPS on COS13
HOL DROPS on COS14
HOL DROPS on COS15
HOL DROPS on COS16
HOL DROPS on COS17
TxPurge CellErr
Aged Drops
--- Egress MAC counters ---
Egress FCS Drops
--- Egress FORWARD PROCESSOR ---
IPv4 L3UC Aged & Drops
TTL Threshold Drops
INVALID VLAN CNTR Drops
L2MC Drops
PKT Drops of ANY Conditions
Hg MacUnderflow
TX Err PKT Counter
--- Error counters ---
Internal Mac Transmit Errors
txReqTooLarge    :0
txInternalError  :0
txDatapathErr    :0
txPkt(COS0)      :0
txPkt(COS1)      :0
txPkt(COS2)      :0
txPkt(COS3)      :0
txPkt(COS4)      :0
txPkt(COS5)      :0
txPkt(COS6)      :0
txPkt(COS7)      :0
txPkt(UNIT0)     :0

The show hardware stack-unit cpu party-bus statistics command displays input and output statistics on the party bus, which carries inter-process communication traffic between CPUs.

Example of Viewing Party Bus Statistics
Dell#show hardware stack-unit 2 cpu party-bus statistics
Input Statistics: 27550 packets, 2559298 byt
You must enable this utility to be able to configure the parameters for buffer statistics tracking. By default, buffer statistics tracking is disabled.
2 Enable the buffer statistics tracking utility and enter the Buffer Statistics Snapshot configuration mode.
  CONFIGURATION mode
  Dell(conf)#buffer-stats-snapshot
  Dell(conf)#no disable
Unit 1 unit: 3 port: 33 (interface Fo 1/176)
---------------------------------------
Q# TYPE    Q#    TOTAL BUFFERED CELLS
---------------------------------------
MCAST      3     0

Unit 1 unit: 3 port: 37 (interface Fo 1/180)
---------------------------------------
Q# TYPE    Q#    TOTAL BUFFERED CELLS
---------------------------------------

4 Use show hardware buffer-stats-snapshot resource interface interface {priority-group {id | all} | queue {ucast {id | all} | mcast {id | all} | all}} to view buffer statistics tracking resource i
* Warning - Restoring factory defaults will delete the existing     *
* persistent settings (stacking, fanout, etc.)                      *
* After restoration the unit(s) will be powercycled immediately.    *
* Proceed with caution !                                            *
*********************************************************************
Proceed with factory settings? Confirm [yes/no]:yes

-- Restore status --
Unit  Nvram   Config
------------------------
0     Success

Power-cycling the unit(s).
....
30 Standards Compliance This chapter describes standards compliance for Dell Networking products. NOTE: Unless noted, when a standard cited here is listed as supported by the Dell Networking Operating System (OS), the system also supports predecessor standards. One way to search for predecessor standards is to use the http://tools.ietf.org/ website. Click “Browse and search IETF documents,” enter an RFC number, and inspect the top of the resulting document for obsolescence citations to related RFCs.
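The predecessor-standard lookup described in the note can be automated once you have the obsolescence metadata. The sketch below walks a hand-entered `obsoletes` mapping; it is an illustrative excerpt, not the full IETF index, though the VRRP chain shown (RFC 5798 obsoletes RFC 3768, which obsoletes RFC 2338) is real.

```python
def predecessor_chain(rfc, obsoletes):
    """Walk an 'RFC -> list of RFCs it obsoletes' mapping and return
    every predecessor reachable from the starting RFC. The mapping is
    supplied by the caller (e.g. scraped from the RFC index)."""
    chain, seen = [], {rfc}
    stack = list(obsoletes.get(rfc, []))
    while stack:
        r = stack.pop()
        if r not in seen:
            seen.add(r)
            chain.append(r)
            stack.extend(obsoletes.get(r, []))  # follow the chain further back
    return chain

# Tiny excerpt: the VRRP lineage.
obsoletes = {5798: [3768], 3768: [2338]}
print(predecessor_chain(5798, obsoletes))  # -> [3768, 2338]
```

This mirrors the manual procedure in the note: start at the current RFC, read its "Obsoletes" header, and repeat on each predecessor until the chain ends.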
General Internet Protocols The following table lists the Dell Networking OS support per platform for general internet protocols. Table 39.
RFC#   Full Name
1812   Requirements for IP Version 4 Routers
2131   Dynamic Host Configuration Protocol
2338   Virtual Router Redundancy Protocol (VRRP)
3021   Using 31-Bit Prefixes on IPv4 Point-to-Point Links
3046   DHCP Relay Agent Information Option
3069   VLAN Aggregation for Efficient IP Address Allocation
3128   Protection Against a Variant of the Tiny Fragment Attack

Network Management
The following table lists the Dell Networking OS support per platform for network management protocols.
Table 41.
RFC#   Full Name
2575   View-based Access Control Model (VACM) for the Simple Network Management Protocol (SNMP)
2576   Coexistence Between Version 1, Version 2, and Version 3 of the Internet-standard Network Management Framework
2578   Structure of Management Information Version 2 (SMIv2)
2579   Textual Conventions for SMIv2
2580   Conformance Statements for SMIv2
2618   RADIUS Authentication Client MIB, except the following four counters: radiusAuthClientInvalidServerAddresses radiusAuthClientMalformedAcces
RFC#       Full Name
sFlow.org  sFlow Version 5
sFlow.