Dell PowerEdge Configuration Guide for the M I/O Aggregator 9.5(0.0)
Notes, Cautions, and Warnings
NOTE: A NOTE indicates important information that helps you make better use of your computer.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
Copyright © 2014 Dell Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws.
Contents

1 About this Guide .... 13
    Audience .... 13
    Conventions .... 13
    Related Documents
Configuring Priority-Based Flow Control .... 29
Enhanced Transmission Selection .... 32
    Configuring Enhanced Transmission Selection .... 33
    Configuring DCB Maps and its Attributes
Viewing DHCP Statistics and Lease Information .... 69
6 FIP Snooping .... 71
    Fibre Channel over Ethernet .... 71
    Ensuring Robustness in a Converged Ethernet Network
Configuring a Static Route for a Management Interface .... 97
VLAN Membership .... 98
    Default VLAN .... 98
    Port-Based VLANs
How the LACP is Implemented on an Aggregator .... 122
    Uplink LAG .... 122
    Server-Facing LAGs .... 122
    LACP Modes
Configuring Port Monitoring .... 152
Important Points to Remember .... 153
Port Monitoring .... 154
15 Security for M I/O Aggregator
Example of Sample Entity MIBS outputs .... 185
Standard VLAN MIB .... 187
    Enhancements .... 187
    Fetching the Switchport Configuration and the Logical Interface Configuration
20 Uplink Failure Detection (UFD) .... 215
    Feature Description .... 215
    How Uplink Failure Detection Works .... 216
    UFD and NIC Teaming
CONFIGURATION versus INTERFACE Configurations .... 243
    Enabling LLDP .... 244
    Advertising TLVs .... 244
    Viewing the LLDP Configuration
Important Points to Remember .... 295
    Running Offline Diagnostics .... 296
Trace Logs .... 297
Auto Save on Crash or Rollover
About this Guide 1
This guide describes the supported protocols and software features, and provides configuration instructions and examples, for the Dell Networking M I/O Aggregator running Dell Networking OS version 9.4(0.0). The M I/O Aggregator is installed in a Dell PowerEdge M1000e Enclosure. For information about how to install and perform the initial switch configuration, refer to the Getting Started Guides on the Dell Support website at http://www.dell.
• Dell Networking OS Getting Started Guide for the M I/O Aggregator
• Release Notes for the M I/O Aggregator
Before You Start 2
To install the Aggregator in a Dell PowerEdge M1000e Enclosure, use the instructions in the Dell PowerEdge M I/O Aggregator Getting Started Guide that is shipped with the product. The I/O Aggregator (also known as Aggregator) installs with zero-touch configuration. After you power it on, an Aggregator boots up with default settings and auto-configures with software features enabled.
Default Settings
The I/O Aggregator provides zero-touch configuration with the following default configuration settings:
• default user name (root)
• password (calvin)
• VLAN (vlan1) and IP address for in-band management (DHCP)
• IP address for out-of-band (OOB) management (DHCP)
• read-only SNMP community name (public)
• broadcast storm control (enabled in Standalone and VLT modes and disabled in Stacking mode)
• IGMP multicast flooding (enabled)
• VLAN configuration (in Standalone mode, all ports are members of all (4094) VLANs and untagged VLAN 1)
• Internet small computer system interface (iSCSI) optimization.
• Internet group management protocol (IGMP) snooping.
• Jumbo frames: Ports are set to a maximum MTU of 12,000 bytes by default.
• Link tracking: Uplink-state group 1 is automatically configured. In uplink-state group 1, server-facing ports auto-configure as downstream interfaces; the uplink port-channel (LAG 128) auto-configures as an upstream interface.
An Aggregator also detects iSCSI storage devices on all interfaces and auto-configures to optimize performance. Performance optimization operations, such as jumbo frame size support on all interfaces, disabling of storm control, and enabling of spanning-tree portfast on interfaces connected to an iSCSI EqualLogic (EQL) storage device, are applied automatically.

Link Aggregation
All uplink ports are configured in a single LAG (LAG 128).
Server-Facing LAGs
The tagged VLAN membership of a server-facing LAG is automatically configured based on the server-facing ports that are members of the LAG. The untagged VLAN of a server-facing LAG is configured based on the untagged VLAN to which the lowest numbered server-facing port in the LAG belongs.
NOTE: Dell Networking recommends configuring the same VLAN membership on all LAG member ports.

Where to Go From Here
You can customize the Aggregator for use in your data center network as necessary.
3 Configuration Fundamentals The Dell Networking Operating System (OS) command line interface (CLI) is a text-based interface you can use to configure interfaces and protocols. The CLI is structured in modes for security and management purposes. Different sets of commands are available in each mode, and you can limit user access to modes using privilege levels. In Dell Networking OS, after you enable a command, it is entered into the running configuration file.
• CONFIGURATION mode allows you to configure security features, time settings, set logging and SNMP functions, configure static ARP and MAC addresses, and set line cards on the system. Beneath CONFIGURATION mode are submodes that apply to interfaces, protocols, and features. The following example shows the submode command structure.
CLI Command Mode    Prompt    Access Command
• From every mode except EXEC and EXEC Privilege, enter the exit command.
NOTE: Access all of the following modes from CONFIGURATION mode.
Undoing Commands
When you enter a command, the command line is added to the running configuration file (running-config). To disable a command and remove it from the running-config, enter the no command, then the original command. For example, to delete an IP address configured on an interface, use the no ip address ip-address command.
NOTE: Use the help or ? command as described in Obtaining Help.
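A minimal sketch of undoing a command (the interface and address shown are illustrative only):

Dell(conf)#interface managementethernet 0/0
Dell(conf-if-ma-0/0)#ip address 10.1.1.10/24
Dell(conf-if-ma-0/0)#no ip address 10.1.1.10/24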
timezone    Configure time zone
Dell(conf)#clock

Entering and Editing Commands
Notes for entering commands.
• The CLI is not case-sensitive.
• You can enter partial CLI keywords.
– Enter the minimum number of letters to uniquely identify a command. For example, you cannot enter cl as a partial keyword because both the clock and class-map commands begin with the letters “cl.” You can enter clo, however, as a partial keyword because only one command begins with those three letters.
Command History Dell Networking OS maintains a history of previously-entered commands for each mode. For example: • When you are in EXEC mode, the UP and DOWN arrow keys display the previously-entered EXEC mode commands. • When you are in CONFIGURATION mode, the UP or DOWN arrows keys recall the previously-entered CONFIGURATION mode commands.
The find keyword displays the output of the show command beginning from the first occurrence of specified text. The following example shows this command used in combination with the show linecard all command.
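A minimal usage sketch (the match string 0 is illustrative, and the command output is omitted here):

Dell#show linecard all | find 0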
Data Center Bridging (DCB) 4 On an I/O Aggregator, data center bridging (DCB) features are auto-configured in standalone mode. You can display information on DCB operation by using show commands. NOTE: DCB features are not supported on an Aggregator in stacking mode.
NOTE: In Dell Networking OS version 9.4.0.x, only the PFC, ETS, and DCBx features are supported in data center bridging. Priority-Based Flow Control In a data center network, priority-based flow control (PFC) manages large bursts of one traffic type in multiprotocol links so that it does not affect other traffic types and no frames are lost due to congestion. When PFC detects congestion on a queue for a specified priority, it sends a pause frame for the 802.1p priority traffic to the transmitting device.
• By default, PFC is enabled on an interface with no dot1p priorities configured. You can configure the PFC priorities if the switch negotiates with a remote peer using DCBX. During DCBX negotiation with a remote peer: – DCBx communicates with the remote peer by link layer discovery protocol (LLDP) type, length, value (TLV) to determine current policies, such as PFC support and enhanced transmission selection (ETS) BW allocation.
To configure PFC and apply a PFC input policy to an interface, follow these steps. 1. Create a DCB input policy to apply pause or flow control for specified priorities using a configured delay time. CONFIGURATION mode dcb-input policy-name The maximum is 32 alphanumeric characters. 2. Configure the link delay used to pause specified priority traffic. DCB INPUT POLICY mode pfc link-delay value One quantum is equal to a 512-bit transmission. The range (in quanta) is from 712 to 65535.
7. Enter interface configuration mode. CONFIGURATION mode interface type slot/port 8. Apply the input policy with the PFC configuration to an ingress interface. INTERFACE mode dcb-policy input policy-name 9. Repeat Steps 1 to 8 on all PFC-enabled peer interfaces to ensure lossless traffic service. Dell Networking OS Behavior: As soon as you apply a DCB policy with PFC enabled on an interface, DCBx starts exchanging information with PFC-enabled peers. The IEEE802.
(2) of lossless queues supported globally on the switch. In this case, all PFC configurations received from PFC-enabled peers are removed and resynchronized with the peer devices. Traffic may be interrupted when you reconfigure PFC no-drop priorities in an input policy or reapply the policy to an interface. Enhanced Transmission Selection Enhanced transmission selection (ETS) supports optimized bandwidth allocation between traffic types in multiprotocol (Ethernet, FCoE, SCSI) links.
Traffic Groupings: Description
• (continued): traffic in a group must have the same traffic handling requirements for latency and frame loss.
• Group ID: A 4-bit identifier assigned to each priority group. The range is from 0 to 7.
• Group bandwidth: Percentage of available bandwidth allocated to a priority group.
• Group transmission selection algorithm (TSA): Type of queue scheduling a priority group uses.
In the Dell Networking OS, ETS is implemented as follows:
• ETS supports groups of 802.
4. Apply the DCB output policy to an interface.

Configuring DCB Maps and its Attributes
This topic contains the following sections that describe how to configure a DCB map, apply the configured DCB map to a port, configure PFC without a DCB map, and configure lossless queues.

DCB Map: Configuration Procedure
A DCB map consists of PFC and ETS parameters. By default, PFC is not enabled on any 802.1p priority and ETS allocates equal bandwidth to each priority.
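The following is a minimal sketch of a DCB map; the map name, bandwidth percentages, and dot1p-to-priority-group mapping are illustrative, and the priority-group and priority-pgid syntax is assumed from the configuration steps rather than reproduced from them:

Dell(conf)#dcb-map SAN_DCB1
Dell(conf-dcbmap-SAN_DCB1)#priority-group 0 bandwidth 60 pfc off
Dell(conf-dcbmap-SAN_DCB1)#priority-group 1 bandwidth 20 pfc on
Dell(conf-dcbmap-SAN_DCB1)#priority-group 2 bandwidth 20 pfc on
Dell(conf-dcbmap-SAN_DCB1)#priority-pgid 0 0 0 0 1 2 2 2

In this sketch, dot1p priorities 0 through 3 map to priority group 0, priority 4 maps to group 1, and priorities 5 through 7 map to group 2.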
For example, the command priority-pgid 0 0 0 0 3 4 4 4 maps dot1p priorities 0 through 3 to priority group 0; priority group 3 maps to dot1p priority 4; priority group 4 maps to dot1p priorities 5, 6, and 7.

Important Points to Remember
• If you remove a dot1p priority-to-priority group mapping from a DCB map (no priority pgid command), the PFC and ETS parameters revert to their default values on the interfaces on which the DCB map is applied. By default, PFC is not applied on specific 802.1p priorities; ETS assigns equal bandwidth to each 802.1p priority.
• If you apply a DCB map on an interface that is already configured for lossless queues (pfc no-drop queues command), an error message is displayed.

Configuring PFC without a DCB Map
In a network topology that uses the default ETS bandwidth allocation (assigns equal bandwidth to each priority), you can also enable PFC for specific dot1p priorities on individual interfaces without using a DCB map. This type of DCB configuration is useful on interfaces that require PFC for lossless traffic, but do not transmit converged Ethernet traffic.
• If you configure lossless queues on an interface that already has a DCB map with PFC enabled (pfc on), an error message is displayed.
1. Enter INTERFACE configuration mode. (CONFIGURATION mode)
   interface {tengigabitethernet slot/port | fortygigabitethernet slot/port}
2. Open a DCB map and enter DCB map configuration mode. (INTERFACE mode)
   dcb-map name
3. Disable PFC. (DCB MAP mode)
   no pfc mode on
4. Return to interface configuration mode.
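A minimal sketch that applies a DCB map with PFC disabled and then configures a lossless queue; the queue number is illustrative, the map name reuses the one shown in the running-configuration example later in this chapter, and the pfc no-drop queues command name is taken from the section text:

Dell(conf)#interface tengigabitethernet 0/4
Dell(conf-if-te-0/4)#dcb-map DCB_MAP_PFC_OFF
Dell(conf-if-te-0/4)#pfc no-drop queues 1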
PFC parameters: PFC Configuration TLV and Application Priority Configuration TLV.
ETS parameters: ETS Configuration TLV and ETS Recommendation TLV.

Data Center Bridging in a Traffic Flow
The following figure shows how DCB handles a traffic flow on an interface.
Figure 3. DCB PFC and ETS Traffic Handling

Enabling Data Center Bridging
DCB is automatically configured when you configure FCoE or iSCSI optimization. Data center bridging supports converged enhanced Ethernet (CEE) in a data center network.
To enable DCB with PFC buffers on a switch, enter the following commands, save the configuration, and reboot the system to allow the changes to take effect.
1. Enable DCB. (CONFIGURATION mode)
   dcb enable
2. Set PFC buffering on the DCB stack unit. (CONFIGURATION mode)
   dcb stack-unit all pfc-buffering pfc-ports 64 pfc-queues 2
NOTE: To save the PFC buffering configuration changes, save the configuration and reboot the system.
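Taken together, a sketch of the full sequence (the copy and reload commands are the standard Dell Networking OS commands for saving the configuration and rebooting):

Dell#configure
Dell(conf)#dcb enable
Dell(conf)#dcb stack-unit all pfc-buffering pfc-ports 64 pfc-queues 2
Dell(conf)#end
Dell#copy running-config startup-config
Dell#reload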
interface TenGigabitEthernet 0/4 mtu 12000 portmode hybrid switchport auto vlan flowcontrol rx on tx off dcb-map DCB_MAP_PFC_OFF no keepalive ! protocol lldp advertise management-tlv management-address system-name dcbx port-role auto-downstream no shutdown Dell# When DCB is Enabled When an interface receives a DCBx protocol packet, it automatically enables DCB and disables link-level flow control. The dcb-map and flow control configurations are removed as shown in the following example.
To reconfigure the Aggregator so that all interfaces come up with DCB disabled and link-level flow control enabled, use the no dcb enable on-next-reload command. PFC buffer memory is automatically freed.

Enabling Auto-DCB-Enable Mode on Next Reload
To configure the Aggregator so that all interfaces come up in auto-DCB-enable mode with DCB disabled and flow control enabled, use the dcb enable auto-detect on-next-reload command.
dot1p Value in the Incoming Frame    Egress Queue Assignment
2                                    0
3                                    1
4                                    2
5                                    3
6                                    3
7                                    3

How Priority-Based Flow Control is Implemented
Priority-based flow control provides a flow control mechanism based on the 802.1p priorities in converged Ethernet traffic received on an interface and is enabled by default. As an enhancement to the existing Ethernet pause mechanism, PFC stops traffic transmission for specified priorities (CoS values) without impacting other priority classes.
ETS is implemented on an Aggregator as follows:
• Traffic in priority groups is assigned to strict-queue or WERR scheduling in an ETS output policy and is managed using the ETS bandwidth-assignment algorithm. Dell Networking OS de-queues all frames of strict-priority traffic before servicing any other queues. A queue with strict-priority traffic can starve other queues in the same port.
• ETS-assigned bandwidth allocation and scheduling apply only to data queues, not to control queues.
– In the CEE version, the priority group/traffic class group (TCG) ID 15 represents a non-ETS priority group. Any priority group configured with a scheduler type is treated as a strict-priority group and is given the priority-group (TCG) ID 15.
– The CIN version supports two types of strict-priority scheduling:
* Group strict priority: Allows a single priority flow in a priority group to increase its bandwidth usage to the bandwidth total of the priority group.
* Link strict priority: Allows a single priority flow to increase its bandwidth usage to the total bandwidth of the link.
DCBx Port Roles
The following DCBx port roles are auto-configured on an Aggregator to propagate DCB configurations learned from peer DCBx devices internally to other switch ports:
Auto-upstream: The port advertises its own configuration to DCBx peers and receives its configuration from DCBx peers (ToR or FCF device). The port also propagates its configuration to other ports on the switch. The first auto-upstream port that is capable of receiving a peer configuration is elected as the configuration source.
Default DCBX port role: Uplink ports are auto-configured in an auto-upstream role. Server-facing ports are auto-configured in an auto-downstream role. NOTE: On a DCBx port, application priority TLV advertisements are handled as follows: • The application priority TLV is transmitted only if the priorities in the advertisement match the configured PFC priorities on the port.
– No other port is the configuration source. – The port role is auto-upstream. – The port is enabled with link up and DCBx enabled. – The port has performed a DCBx exchange with a DCBx peer. – The switch is capable of supporting the received DCB configuration values through either a symmetric or asymmetric parameter exchange. A newly elected configuration source propagates configuration changes received from a peer to the other auto-configuration ports.
DCBx Example
The following figure shows how DCBx is used on an Aggregator installed in a Dell PowerEdge M1000e chassis in which servers are also installed. The external 40GbE ports on the base module (ports 33 and 37) of two switches are used for uplinks configured as DCBx auto-upstream ports. The Aggregator is connected to third-party, top-of-rack (ToR) switches through 40GbE uplinks. The ToR switches are part of a Fibre Channel storage network.
DCBX Prerequisites and Restrictions The following prerequisites and restrictions apply when you configure DCBx operation on a port: • DCBX requires LLDP in both send (TX) and receive (RX) mode to be enabled on a port interface. If multiple DCBX peer ports are detected on a local DCBX interface, LLDP is shut down. • The CIN version of DCBx supports only PFC, ETS, and FCOE; it does not support iSCSI, backward congestion management (BCN), logical link down (LLD), and network interface virtualization (NIV).
– tlv: enables traces for DCBx TLVs. Verifying the DCB Configuration To display DCB configurations, use the following show commands. Table 3. Displaying DCB Configurations Command Output show dcb [stack-unit unit-number] Displays the data center bridging status, number of PFC-enabled ports, and number of PFC-enabled queues. On the master switch in a stack, you can specify a stack-unit number. The range is from 0 to 5.
6 7 0 0 0 0 0 0 Example of the show interfaces pfc summary Command Dell# show interfaces tengigabitethernet 0/4 pfc summary Interface TenGigabitEthernet 0/4 Admin mode is on Admin is enabled Remote is enabled, Priority list is 4 Remote Willing Status is enabled Local is enabled Oper status is Recommended PFC DCBx Oper status is Up State Machine Type is Feature TLV Tx Status is enabled PFC Link Delay 45556 pause quantams Application Priority TLV Parameters : -------------------------------------FCOE TLV
Fields Description is on, PFC advertisements are enabled to be sent and received from peers; received PFC configuration takes effect. The admin operational status for a DCBx exchange of PFC configuration is enabled or disabled. Remote is enabled; Priority list Remote Willing Status is enabled Operational status (enabled or disabled) of peer device for DCBx exchange of PFC configuration with a list of the configured PFC priorities.
Fields Description Application Priority TLV: Remote FCOE Priority Map Priority bitmap received from the remote DCBX port in FCoE advertisements in application priority TLVs. Application Priority TLV: Remote ISCSI Priority Map Priority bitmap received from the remote DCBX port in iSCSI advertisements in application priority TLVs. PFC TLV Statistics: Input TLV pkts Number of PFC TLVs received. PFC TLV Statistics: Output TLV pkts Number of PFC TLVs transmitted.
Local Parameters : -----------------Local is enabled TC-grp Priority# 0 0,1,2,3,4,5,6,7 1 2 3 4 5 6 7 Bandwidth 100% 0% 0% 0% 0% 0% 0% 0% Priority# Bandwidth 0 13% 1 13% 2 13% 3 13% 4 12% 5 12% 6 12% 7 12% Oper status is init Conf TLV Tx Status is disabled Traffic Class TLV Tx Status is disabled TSA ETS ETS ETS ETS ETS ETS ETS ETS TSA ETS ETS ETS ETS ETS ETS ETS ETS Example of the show interface ets detail Command Dell# show interfaces tengigabitethernet Interface TenGigabitEthernet 0/4 Max Supported TC
7 0% ETS Oper status is init ETS DCBX Oper status is Down State Machine Type is Asymmetric Conf TLV Tx Status is enabled Reco TLV Tx Status is enabled 0 Input Conf TLV Pkts, 0 Output Conf TLV Pkts, 0 Error Conf TLV Pkts 0 Input Reco TLV Pkts, 0 Output Reco TLV Pkts, 0 Error Reco TLV Pkts The following table describes the show interface ets detail command fields. Table 5. show interface ets detail Command Description Field Description Interface Interface type with stack-unit and port number.
Field Description ETS DCBx Oper status Operational status of ETS configuration on local port: match or mismatch. State Machine Type Type of state machine used for DCBx exchanges of ETS parameters: • • Feature: for legacy DCBx versions Asymmetric: for an IEEE version Conf TLV Tx Status Status of ETS Configuration TLV advertisements: enabled or disabled. Reco TLV Tx Status Status of ETS Recommendation TLV advertisements: enabled or disabled.
2 3 4 5 6 7 8 - - Stack unit 1 stack port all Max Supported TC Groups is 4 Number of Traffic Classes is 1 Admin mode is on Admin Parameters: -------------------Admin is enabled TC-grp Priority# Bandwidth TSA -----------------------------------------------0 0,1,2,3,4,5,6,7 100% ETS 1 2 3 4 5 6 7 8 Example of the show interface DCBx detail Command Dell# show interface tengigabitethernet 0/4 dcbx detail Dell#show interface te 0/4 dcbx detail E-ETS Configuration TLV enabled e-ETS Configuration TLV disabled R
Peer DCBX Status: ---------------DCBX Operational Version is 0 DCBX Max Version Supported is 255 Sequence Number: 2 Acknowledgment Number: 2 2 Input PFC TLV pkts, 3 Output PFC TLV pkts, 0 Error PFC pkts, 0 PFC Pause Tx pkts, 0 Pause Rx pkts 2 Input PG TLV Pkts, 3 Output PG TLV Pkts, 0 Error PG TLV Pkts 2 Input Appln Priority TLV pkts, 0 Output Appln Priority TLV pkts, 0 Error Appln Priority TLV Pkts Total DCBX Frames transmitted 27 Total DCBX Frames received 6 Total DCBX Frame errors 0 Total DCBX Frames unr
Field Description Local DCBx Status: Acknowledgment Number Acknowledgement number transmitted in Control TLVs. Local DCBx Status: Protocol State Current operational state of DCBx protocol: ACK or IN-SYNC. Peer DCBx Status: DCBx Operational Version DCBx version advertised in Control TLVs received from peer device. Peer DCBx Status: DCBx Max Version Supported Highest DCBx version supported in Control TLVs received from peer device.
Priority group 1: Assigns traffic to one priority queue with 20% of the link bandwidth and strict-priority scheduling.
Priority group 2: Assigns traffic to one priority queue with 30% of the link bandwidth.
Priority group 3: Assigns traffic to two priority queues with 50% of the link bandwidth and strict-priority scheduling.
Dynamic Host Configuration Protocol (DHCP) 5
The Aggregator is auto-configured to operate as a DHCP client. The DHCP server, DHCP relay agent, and secure DHCP features are not supported. The dynamic host configuration protocol (DHCP) is an application layer protocol that dynamically assigns IP addresses and other configuration parameters to network end-stations (hosts) based on configuration policies determined by network administrators.
already in use. In this case, the client starts the configuration process over by sending a DHCPDISCOVER.
DHCPINFORM: A client uses this message to request configuration parameters when it has been assigned an IP address manually rather than with DHCP. The server responds by unicast.
DHCPNAK: A server sends this message to the client if it is not able to fulfill a DHCPREQUEST; for example, if the requested address is already in use.
1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :DHCP ENABLE CMD Received in state START 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :Transitioned to state SELECTING 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_PKT: DHCP DISCOVER sent in Interface Ma 0/0 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_PKT: Received DHCPOFFER packet in Interface Ma 0/0 with Lease-ip:10.16.134.250, Mask:255.255.0.
Interface Ma 0/0 :DHCP RENEW CMD Received in state STOPPED 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :Transitioned to state SELECTING 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_PKT: DHCP DISCOVER sent in Interface Ma 0/0 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_PKT: Received DHCPOFFER packet in Interface Ma 0/0 with Lease-Ip:10.16.134.250, Mask:255.255.0.0,Server-Id: 10.16.134.
You can override the DHCP-assigned address on the OOB management interface by manually configuring an IP address using the CLI or CMC interface. If no user-configured IP address exists for the OOB interface and the OOB IP address is not in the startup configuration, the Aggregator automatically obtains one using DHCP. You can also manually configure an IP address for the VLAN 1 default management interface using the CLI.
• Management routes added by a DHCP client display with Route Source as DHCP in the show ip management route and show ip management-route dynamic command output.
• A management route added by the DHCP client is removed if you configure a static IP route with the ip route command that replaces it. If you remove the statically configured IP route using the no ip route command, the management route is reinstalled.
Option / Number / Description:
• Subnet Mask (Option 1): Specifies the client’s subnet mask.
• Router (Option 3): Specifies the router IP addresses that may serve as the client’s default gateway.
• Domain Name Server (Option 6): Specifies the domain name servers (DNSs) that are available to the client.
• Domain Name (Option 15): Specifies the domain name that clients should use when resolving hostnames via DNS.
• End (Option 255): Signals the last option in the DHCP packet.
• Option 82: RFC 3046 (the relay agent information option, or Option 82) is used for class-based IP address assignment. The code for the relay agent information option is 82, and it comprises two sub-options, circuit ID and remote ID.
– Circuit ID: The interface on which the client-originated message is received.
– Remote ID: Identifies the host from which the message is received.
• Acquire a new IP address with renewed lease time from a DHCP server. (EXEC Privilege mode)
  renew dhcp interface type slot/port

Viewing DHCP Statistics and Lease Information
To display DHCP client information, enter the following show commands:
• Display statistics about DHCP client interfaces. (EXEC Privilege mode)
  show ip dhcp client statistics interface type slot/port
• Clear DHCP client statistics on a specified or on all interfaces.
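For example, on the OOB management interface (the release dhcp command shown alongside renew dhcp is assumed to be available, as on other Dell Networking OS platforms):

Dell#release dhcp interface managementethernet 0/0
Dell#renew dhcp interface managementethernet 0/0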
08-26-2011 16:21:50          08-27-2011 01:33:39
FIP Snooping 6
FIP snooping is auto-configured on an Aggregator in standalone mode. You can display information on FIP snooping operation and statistics by entering show commands. This chapter describes FIP snooping concepts and configuration procedures.
FIP provides functionality for discovering and logging in to an FCF. After discovering and logging in, FIP allows FCoE traffic to be sent and received between FCoE end-devices (ENodes) and the FCF. FIP uses its own EtherType and frame format. The following illustration of FIP discovery depicts the communication that occurs between an ENode server and an FCoE switch (FCF).
transmitted between an FCoE end-device and an FCF. An Ethernet bridge that provides these functions is called a FIP snooping bridge (FSB). On a FIP snooping bridge, ACLs are created dynamically as FIP login frames are processed. The ACLs are installed on switch ports configured for the following port modes:
• ENode mode for server-facing ports
• FCF mode for a trusted port directly connected to an FCF
You must enable FIP snooping on an Aggregator and configure the FIP snooping parameters, as sketched below.
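On an Aggregator in standalone mode this is auto-configured; where manual configuration applies, a sketch of the typical command sequence follows. The feature fip-snooping, fip-snooping enable, fip-snooping fc-map, and fip-snooping port-mode commands and all values shown are assumptions based on other Dell Networking OS platforms, not taken from this section:

Dell(conf)#feature fip-snooping
Dell(conf)#interface vlan 100
Dell(conf-if-vl-100)#fip-snooping enable
Dell(conf-if-vl-100)#fip-snooping fc-map 0x0efc00
Dell(conf-if-vl-100)#exit
Dell(conf)#interface tengigabitethernet 0/50
Dell(conf-if-te-0/50)#fip-snooping port-mode fcf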
Figure 8. FIP Snooping on an Aggregator
The following sections describe how to configure the FIP snooping feature on a switch that functions as a FIP snooping bridge so that it can perform the following functions:
• Performs FIP snooping (allowing and parsing FIP frames) globally on all VLANs or on a per-VLAN basis.
• Process FIP VLAN discovery requests and responses, advertisements, solicitations, FLOGI/FDISC requests and responses, FLOGO requests and responses, keep-alive packets, and clear virtual-link messages.

FIP Snooping in a Switch Stack
FIP snooping supports switch stacking as follows:
• A switch stack configuration is synchronized with the standby stack unit.
• Dynamic population of the FCoE database (ENode, Session, and FCF tables) is synchronized with the standby stack unit.
Bridge-to-FCF Links A port directly connected to an FCF is auto-configured in FCF mode. Initially, all FCoE traffic is blocked; only FIP frames are allowed to pass. FCoE traffic is allowed on the port only after a successful FLOGI request/response and confirmed use of the configured FC-MAP value for the VLAN.
Displaying FIP Snooping Information
Use the show commands in the following table to display information on FIP snooping.
Command / Output:
show fip-snooping sessions [interface vlan vlan-id]: Displays information on FIP-snooped sessions on all VLANs or a specified VLAN, including the ENode interface and MAC address, the FCF interface and MAC address, VLAN ID, FCoE MAC address and FCoE session ID number (FC-ID), worldwide node name (WWNN) and the worldwide port name (WWPN).
ENode MAC          ENode Intf  FCF MAC            FCF Intf  VLAN  FCoE MAC           FC-ID     Port WWPN
aa:bb:cc:00:00:00  Te 0/42     aa:bb:cd:00:00:00  Te 0/43   100   0e:fc:00:01:00:01  01:00:01  31:00:0e:fc:00:00:00:00
aa:bb:cc:00:00:00  Te 0/42     aa:bb:cd:00:00:00  Te 0/43   100   0e:fc:00:01:00:02  01:00:02  41:00:0e:fc:00:00:00:00
aa:bb:cc:00:00:00  Te 0/42     aa:bb:cd:00:00:00  Te 0/43   100   0e:fc:00:01:00:03  01:00:03  41:00:0e:fc:00:00:00:01
aa:bb:cc:00:00:00  Te 0/42     aa:bb:cd:00:00:00  Te 0/43   100   0e:fc:00:01:00:04  01:00:04  41:00:0e:fc:00:00:00:02
aa:bb:cc:00:00:00  Te 0/42     aa:bb:cd:00:00:00  Te 0/43   100   0e:fc:00:01:00:05  01:00:05  41:00:0e:fc:00:00:00:03
Field: Description
ENode MAC: MAC address of the ENode.
ENode Interface: Slot/port number of the interface connected to the ENode.
FCF MAC: MAC address of the FCF.
VLAN: VLAN ID number used by the session.
FC-ID: Fibre Channel session ID assigned by the FCF.

show fip-snooping fcf Command Example
Dell# show fip-snooping fcf
FCF MAC FCF Interface No.
Number of VN Port Keep Alive                         :3349
Number of Multicast Discovery Advertisement          :4437
Number of Unicast Discovery Advertisement            :2
Number of FLOGI Accepts                              :2
Number of FLOGI Rejects                              :0
Number of FDISC Accepts                              :16
Number of FDISC Rejects                              :0
Number of FLOGO Accepts                              :0
Number of FLOGO Rejects                              :0
Number of CVL                                        :0
Number of FCF Discovery Timeouts                     :0
Number of VN Port Session Timeouts                   :0
Number of Session failures due to Hardware Config    :0
Dell(conf)#

Dell# show fip-snooping statistics int tengigabitethernet 0/1
show fip-snooping statistics Command Description Field Description Number of Vlan Requests Number of FIP-snooped VLAN request frames received on the interface. Number of VLAN Notifications Number of FIP-snooped VLAN notification frames received on the interface. Number of Multicast Discovery Solicits Number of FIP-snooped multicast discovery solicit frames received on the interface.
Number of VN Port Session Timeouts Number of VN port session timeouts that occurred on the interface. Number of Session failures due to Hardware Config Number of session failures due to hardware configuration that occurred on the interface. show fip-snooping system Command Example Dell# show fip-snooping system Global Mode FCOE VLAN List (Operational) FCFs Enodes Sessions : : : : : Enabled 1, 100 1 2 17 NOTE: NPIV sessions are included in the number of FIP-snooped sessions displayed.
FIP Snooping Example
The following illustration shows an Aggregator used as a FIP snooping bridge for FCoE traffic between an ENode (server blade) and an FCF (ToR switch). The ToR switch operates as an FCF and FCoE gateway.
Figure 9. FIP Snooping on an Aggregator
In the above figure, DCBx and PFC are enabled on the Aggregator (FIP snooping bridge) and on the FCF ToR switch. On the FIP snooping bridge, DCBx is configured as follows:
• A server-facing port is configured for DCBx in an auto-downstream role.
The DCBX configuration on the FCF-facing port is detected by the server-facing port and the DCB PFC configuration on both ports is synchronized. For more information about how to configure DCBX and PFC on a port, refer to FIP Snooping After FIP packets are exchanged between the ENode and the switch, a FIP snooping session is established. ACLS are dynamically generated for FIP snooping on the FIP snooping bridge/switch.
Internet Group Management Protocol (IGMP) 7
On an Aggregator, IGMP snooping is auto-configured. You can display information on IGMP by using the show ip igmp command. Multicast is based on identifying many hosts by a single destination IP address. Hosts represented by the same IP address are a multicast group. The internet group management protocol (IGMP) is a Layer 3 multicast protocol that hosts use to join or leave a multicast group.
Figure 10. IGMP Version 2 Packet Format

Joining a Multicast Group
There are two ways that a host may join a multicast group: it may respond to a general query from its querier, or it may send an unsolicited report to its querier.
• Responding to an IGMP Query.
– One router on a subnet is elected as the querier. The querier periodically multicasts (to the all-multicast-systems address 224.0.0.1) a general query to all hosts on the subnet.
• To enable filtering, routers must keep track of more state information, that is, the list of sources that must be filtered. An additional query type, the group-and-source-specific query, keeps track of state changes, while the group-specific and general queries still refresh existing state.
• Reporting is more efficient and robust.
• The host’s third message indicates that it is only interested in traffic from sources 10.11.1.1 and 10.11.1.2. Because this request again prevents all other sources from reaching the subnet, the router sends another group-and-source query so that it can satisfy all other hosts. There are no other interested hosts, so the request is recorded. Figure 13.
Figure 14. IGMP Membership Queries: Leaving and Staying in Groups

IGMP Snooping
IGMP snooping is auto-configured on an Aggregator. Multicast packets are addressed with multicast MAC addresses, which represent a group of devices rather than one unique device. Switches forward multicast frames out of all ports in a VLAN by default, even if there are only a small number of interested hosts, resulting in a waste of bandwidth.
Disabling Multicast Flooding If the switch receives a multicast packet that has an IP address of a group it has not learned (unregistered frame), the switch floods that packet out of all ports on the VLAN. To disable multicast flooding on all VLAN ports, enter the no ip igmp snooping flood command in global configuration mode. When multicast flooding is disabled, unregistered multicast data traffic is forwarded to only multicast router ports on all VLANs.
Source address 1.1.1.2 Member Ports: Po 1 Interface Group Uptime Expires Router mode Last reporter Last reporter mode Last report received Group source list Source address 1.1.1.2 Member Ports: Po 1 Dell# Uptime 00:00:21 Expires 00:01:48 Uptime 00:00:04 Expires 00:02:05 Vlan 1600 226.0.0.1 00:00:04 Never INCLUDE 1.1.1.
8 Interfaces
This chapter describes 100/1000/10000 Mbps Ethernet, 10 Gigabit Ethernet, and 40 Gigabit Ethernet interface types, both physical and logical, and how to configure them with the Dell Networking Operating System (OS).
– The tagged VLAN membership of a server-facing LAG is automatically configured based on the server-facing ports that are members of the LAG. The untagged VLAN of a server-facing LAG is auto-configured based on the untagged VLAN to which the lowest numbered server-facing port in the LAG belongs. • All interfaces are auto-configured as members of all (4094) VLANs and untagged VLAN 1. All VLANs are up and can send or receive layer 2 traffic.
MTU 12000 bytes, IP MTU 11982 bytes LineSpeed 1000 Mbit Flowcontrol rx off tx off ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show interface" counters 11:04:02 Queueing strategy: fifo Input Statistics: 0 packets, 0 bytes 0 64-byte pkts, 0 over 64-byte pkts, 0 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 0 Multicasts, 0 Broadcasts 0 runts, 0 giants, 0 throttles 0 CRC, 0 overrun, 0 discarded Output Statistics: 14856 packets, 2349010 bytes, 0 underruns 0 64-by
Disabling and Re-enabling a Physical Interface
By default, all port interfaces on an Aggregator are operationally enabled (no shutdown) to send and receive Layer 2 traffic. You can reconfigure a physical interface to shut it down by entering the shutdown command. To re-enable the interface, enter the no shutdown command.
1. (CONFIGURATION mode) interface interface: Enter the keyword interface followed by the type of interface and slot/port information.
2. (INTERFACE mode) shutdown / no shutdown: Disable or re-enable the interface.
advertise management-tlv system-name
 dcbx port-role auto-downstream
 no shutdown
Dell(conf-if-te-0/1)#

To view the interfaces in Layer 2 mode, use the show interfaces switchport command in EXEC mode.

Management Interfaces
An Aggregator auto-configures with a DHCP-based IP address for in-band management on VLAN 1 and remote out-of-band (OOB) management. The IOM management interface has both a public IP and private IP address on the internal Fabric D interface.
Slot range: 0-0
To configure an IP address on a management interface, use either of the following commands in MANAGEMENT INTERFACE mode:
• ip address ip-address mask (INTERFACE mode): Configure an IP address and mask on the interface. For ip-address mask, enter an address in dotted-decimal format (A.B.C.D); the mask must be in /prefix format (/x).
• ip address dhcp (INTERFACE mode): Acquire an IP address from the DHCP server.
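For example (the address shown matches the DHCP lease used in examples elsewhere in this guide and is illustrative):

Dell(conf)#interface managementethernet 0/0
Dell(conf-if-ma-0/0)#ip address 10.16.134.250/16
Dell(conf-if-ma-0/0)#no shutdown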
VLAN Membership
A virtual LAN (VLAN) is a logical broadcast domain or logical grouping of interfaces in a LAN in which all data received is kept locally and broadcast to all members of the group. In Layer 2 mode, VLANs move traffic at wire speed and can span multiple devices. Dell Networking OS supports up to 4093 port-based VLANs and one default VLAN, as specified in IEEE 802.1Q. VLANs provide the following benefits:
• Improved security because you can isolate groups of users into different VLANs.
VLANs and Port Tagging
To add an interface to a VLAN, it must be in Layer 2 mode. After you place an interface in Layer 2 mode, it is automatically placed in the default VLAN. Dell Networking OS supports IEEE 802.1Q tagging at the interface level to filter traffic. When you enable tagging, a tag header is added to the frame after the destination and source MAC addresses. That information is preserved as the frame moves through the network.
vlan-id specifies a tagged VLAN number. Range: 2-4094 To reconfigure an interface as a member of only specified untagged VLANs, enter the vlan untagged command in INTERFACE mode: Command Syntax Command Mode Purpose vlan untagged {vlan-id} INTERFACE Add the interface as an untagged member of one or more VLANs, where: vlan-id specifies an untagged VLAN number.
Adding an Interface to a Tagged VLAN
The following example shows you how to add a tagged interface (Te 1/7) to a VLAN (VLAN 2). Enter the vlan tagged command to add interface Te 1/7 to VLAN 2, as shown below. Enter the show vlan command to verify that interface Te 1/7 is a tagged member of VLAN 2.
T Te 0/1-15,17-32
Dell#

Port Channel Interfaces
On an Aggregator, port channels are auto-configured as follows:
• All 10GbE uplink interfaces (ports 33 to 56) are auto-configured to belong to the same 10GbE port channel (LAG 128).
• Server-facing interfaces (ports 1 to 32) auto-configure in LAGs (1 to 127) according to the NIC teaming configuration on the connected servers.
Port channel interfaces support link aggregation, as described in IEEE Standard 802.3ad.
Member ports of a LAG are added and programmed into hardware in a predictable order based on the port ID, instead of in the order in which the ports come up. With this implementation, load balancing yields predictable results across switch resets and chassis reloads. A physical interface can belong to only one port channel at a time. Each port channel must contain interfaces of the same interface type/speed. Port channels can contain a mix of 1000 or 10000 Mbps Ethernet interfaces.
Displaying Port Channel Information To view the port channel’s status and channel members in a tabular format, use the show interfaces port-channel brief command in EXEC Privilege mode. Dell#show int port brief Codes: L - LACP Port-channel LAG 1 Dell# Mode Status L2 down Uptime 00:00:00 Ports Te 0/16 (Down) To display detailed information on a port channel, enter the show interfaces port-channel command in EXEC Privilege mode.
DHCP Client-ID :lag128001ec9f10358 MTU 12000 bytes, IP MTU 11982 bytes LineSpeed 10000 Mbit Members in this channel: Te 1/49(U) ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show interface" counters 00:14:06 Queueing strategy: fifo Input Statistics: 476 packets, 33180 bytes 414 64-byte pkts, 33 over 64-byte pkts, 29 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 476 Multicasts, 0 Broadcasts 0 runts, 0 giants, 0 throttles, 0 CRC, 0 overrun, 0 discarded Output St
• Commas

Create a Single-Range
Creating a Single-Range Bulk Configuration
Dell(conf)# interface range tengigabitethernet 0/1 - 23
Dell(conf-if-range-te-0/1-23)# no shutdown
Dell(conf-if-range-te-0/1-23)#

Create a Multiple-Range
Creating a Multiple-Range Prompt
Dell(conf)#interface range tengigabitethernet 0/5 - 10 , tengigabitethernet 0/1 , vlan 1
Dell(conf-if-range-te-0/5-10,te-0/1,vl-1)#

Exclude a Smaller Port Range
If the interface range has multiple port ranges, the smaller port range is excluded from the prompt.
Monitor and Maintain Interfaces You can display interface statistics with the monitor interface command. This command displays an ongoing list of the interface status (up/down), number of packets, traffic statistics, etc. Command Syntax Command Mode Purpose monitor interface interface EXEC Privilege View interface statistics.
m - Change mode                  c - Clear screen
l - Page up                      a - Page down
T - Increase refresh interval    t - Decrease refresh interval
q - Quit

Maintenance Using TDR
The time domain reflectometer (TDR) is supported on all Dell Networking switches/routers. TDR is an assistance tool to resolve link issues that helps detect obvious open or short conditions within any of the four copper pairs. TDR sends a signal onto the physical cable and examines the reflection of the signal that returns.
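A usage sketch, assuming the tdr-cable-test and show tdr commands available in Dell Networking OS (this section does not list them explicitly):

Dell#tdr-cable-test tengigabitethernet 0/1
Dell#show tdr tengigabitethernet 0/1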
a temporary stop in data transmission. A situation may arise where a sending device may transmit data faster than a destination device can accept it. The destination sends a pause frame back to the source, stopping the sender’s transmission for a period of time. The globally assigned 48-bit Multicast address 01-80-C2-00-00-01 is used to send and receive pause frames.
Port Channels:
• All members must have the same link MTU value and the same IP MTU value.
• The port channel link MTU and IP MTU must be less than or equal to the link MTU and IP MTU values configured on the channel members. For example, if the members have a link MTU of 2100 and an IP MTU of 2000, the port channel’s MTU values cannot be higher than 2100 for link MTU or 2000 bytes for IP MTU.
VLANs:
• All members of a VLAN must have the same IP MTU value.
• Members can have different link MTU values.
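For example, to keep the port channel MTU within the member limits described above (the values and port numbers are illustrative):

Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#mtu 2100
Dell(conf-if-te-0/1)#ip mtu 2000
Dell(conf-if-te-0/1)#exit
Dell(conf)#interface port-channel 128
Dell(conf-if-po-128)#mtu 2100
Dell(conf-if-po-128)#ip mtu 2000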
4. Access the port. (CONFIGURATION mode)
   interface interface slot/port
5. Set the local port speed. (INTERFACE mode)
   speed {100 | 1000 | 10000 | auto}
6. Optionally, set full- or half-duplex. (INTERFACE mode)
   duplex {half | full}
7. Disable auto-negotiation on the port. If the speed is set to 1000, you do not need to disable auto-negotiation. (INTERFACE mode)
   no negotiation auto
8. Verify configuration changes.
Setting Auto-Negotiation Options
The negotiation auto command provides a mode option for configuring an individual port to forced master/forced slave after you enable auto-negotiation.
CAUTION: Ensure that only one end of the node is configured as forced-master and the other is configured as forced-slave. If both are configured the same (that is, both as forced-master or both as forced-slave), the show interface command flaps between an auto-neg-error and forced-master/slave states.
(Table, continued: will an error be thrown?)
duplex half (INTERFACE config mode): Supported | CLI not available | CLI not available | Invalid Input error - CLI not available
duplex full (INTERFACE config mode): Supported | CLI not available | CLI not available | Invalid Input error - CLI not available

Setting Auto-Negotiation Options:
Dell(conf)# int tengig 0/1
Dell(conf-if-te-0/1)#neg auto
Dell(conf-if-autoneg)# ?
end     Exit from configuration mode
exit    Exit from autoneg configuration mode
mode    Specify autoneg mode
no      Negate a command or set its default
Name: TenGigabitEthernet 13/1
802.1QTagged: True
Vlan membership: Vlan 2

Name: TenGigabitEthernet 13/2
802.1QTagged: True
Vlan membership: Vlan 2

Name: TenGigabitEthernet 13/3
802.1QTagged: True
Vlan membership: Vlan 2
--More--

Clearing Interface Counters
The counters in the show interfaces command are reset by the clear counters command. This command does not clear the counters captured by any SNMP program.
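For example, to reset the counters on one of the interfaces shown above (any confirmation prompt is not reproduced here):

Dell#clear counters tengigabitethernet 13/1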
Enabling the Management Address TLV on All Interfaces of an Aggregator
The management address TLV, which is an optional TLV of type 8, denotes the network address of the management interface and is supported by the Dell Networking OS. It is advertised on all the interfaces on an I/O Aggregator in the Link Layer Discovery Protocol (LLDP) data units.
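A sketch of the corresponding LLDP configuration, matching the protocol lldp block shown in the running-configuration examples in this guide:

Dell(conf)#protocol lldp
Dell(conf-lldp)#advertise management-tlv management-address system-name
Dell(conf-lldp)#no shutdown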
9 iSCSI Optimization
An Aggregator enables internet small computer system interface (iSCSI) optimization with default iSCSI parameter settings (see Default iSCSI Optimization Values) and is auto-provisioned to support:
• Detection and Auto-configuration for Dell EqualLogic Arrays
• iSCSI Optimization: Operation
To display information on iSCSI configuration and sessions, use show commands. iSCSI optimization enables quality-of-service (QoS) treatment for iSCSI traffic.
treatment over other data passing through the switch. Preferential treatment helps to avoid session interruptions during times of congestion that would otherwise cause dropped iSCSI packets. • iSCSI DCBx TLVs are supported. The following figure shows iSCSI optimization between servers in an M1000e enclosure and a storage array in which an Aggregator connects installed servers (iSCSI initiators) to a storage array (iSCSI targets) in a SAN network.
Information Monitored in iSCSI Traffic Flows
iSCSI optimization examines the following data in packets and uses the data to track the session and create the classifier entries that enable QoS treatment:
• Initiator’s IP Address
• Target’s IP Address
• ISID (Initiator defined session identifier)
• Initiator’s IQN (iSCSI qualified name)
• Target’s IQN
• Initiator’s TCP Port
• Target’s TCP Port
If no iSCSI traffic is detected for a session during a user-configurable aging period, the session data is cleared.
• iSCSI LLDP monitoring starts to automatically detect EqualLogic arrays. iSCSI optimization requires LLDP to be enabled. LLDP is enabled by default when an Aggregator autoconfigures. The following message displays when you enable iSCSI on a switch and describes the configuration changes that are automatically performed: %STKUNIT0-M:CP %IFMGR-5-IFM_ISCSI_ENABLE: iSCSI has been enabled causing flow control to be enabled on all interfaces.
---------------------------------------------------------------------------------------Target: iqn.2001-05.com.equallogic:0-8a0906-0f60c2002-0360018428d48c94-iom011 Initiator: iqn.1991-05.com.microsoft:win-x9l8v27yajg ISID: 400001370000. show iscsi sessions detailed Command Example Dell# show iscsi sessions detailed Session 0 : ----------------------------------------------------------------------------Target:iqn.2010-11.com.ixia:ixload:iscsi-TG1 Initiator:iqn.2010-11.com.ixia.
Isolated Networks for Aggregators 10
An isolated network is an environment in which servers can only communicate with the uplink interfaces and not with each other, even though they are part of the same VLAN. If the servers in the same chassis need to communicate with each other, they require non-isolated network connectivity between them, or their traffic needs to be routed in the ToR switch. Isolated networks can be enabled on a per-VLAN basis.
11 Link Aggregation The I/O Aggregator auto-configures with link aggregation groups (LAGs) as follows: • All uplink ports are automatically configured in a single port channel (LAG 128). • Server-facing LAGs are automatically configured if you configure server for link aggregation control protocol (LACP)-based NIC teaming (Network Interface Controller (NIC) Teaming). No manual configuration is required to configure Aggregator ports in the uplink or a server-facing LAG.
remote system-id and port-key combination, a new LAG is formed and the port automatically becomes a member of the LAG. All ports with the same combination of system ID and port key automatically become members of the same LAG. Ports are automatically removed from the LAG if the NIC teaming configuration on a server-facing port changes or if the port goes operationally down. Also, a server-facing LAG is removed when the last port member is removed from the LAG.
LACP Example
The following illustration shows how LACP operates in an Aggregator stack by auto-configuring the uplink LAG 128 for the connection to a top-of-rack (ToR) switch and a server-facing LAG for the connection to an installed server that you configured for LACP-based NIC teaming.
Figure 17.
Link Aggregation Control Protocol (LACP)
This chapter contains commands for Dell Networking’s implementation of the link aggregation control protocol (LACP) for creating dynamic link aggregation groups (LAGs), known as port-channels in the Dell Networking Operating System (OS), based on the standards specified in the IEEE 802.3 Carrier sense multiple access with collision detection (CSMA/CD) access method and physical layer specifications.
NOTE: For static LAG commands, refer to the Interfaces chapter.
You can add any physical interface to a port channel if the interface configuration is minimal. You can configure only the following commands on an interface if it is a member of a port channel:
• description
• shutdown/no shutdown
• mtu
• ip mtu (the Aggregator is Jumbo-enabled by default)
NOTE: A logical port channel interface cannot have flow control. Flow control can only be present on the physical interfaces if they are part of a port channel.
Hardware address is 00:01:e8:01:46:fa Internet address is 1.1.120.
To reassign an interface to a new port channel, use the following commands.
1. Remove the interface from the first port channel. (INTERFACE PORT-CHANNEL mode)
   no channel-member interface
2. Change to the second port channel INTERFACE mode. (INTERFACE PORT-CHANNEL mode)
   interface port-channel id number
3. Add the interface to the second port channel. (INTERFACE PORT-CHANNEL mode)
   channel-member interface
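For example, to move port Te 0/5 from port channel 1 to port channel 2 (the port and channel numbers are illustrative; on an Aggregator this applies only where manual port-channel configuration is supported):

Dell(conf)#interface port-channel 1
Dell(conf-if-po-1)#no channel-member tengigabitethernet 0/5
Dell(conf-if-po-1)#exit
Dell(conf)#interface port-channel 2
Dell(conf-if-po-2)#channel-member tengigabitethernet 0/5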
Configuring VLAN Tags for Member Interfaces To configure and verify VLAN tags for individual members of a port channel, perform the following: 1. Configure VLAN membership on individual ports INTERFACE mode Dell(conf-if-te-0/2)#vlan tagged 2,3-4 2. Use the switchport command in INTERFACE mode to enable Layer 2 data transmissions through an individual interface INTERFACE mode Dell(conf-if-te-0/2)#switchport 3. Verify the manually configured VLAN membership (show interfaces switchport interface command).
Configuring the Minimum Number of Links to be Up for Uplink LAGs to be Active
You can activate the LAG bundle for uplink interfaces or ports (the uplink port-channel is LAG 128) on the I/O Aggregator only when a minimum number of member interfaces of the LAG bundle is up. For example, based on your network deployment, you may want the uplink LAG bundle to be activated only if a certain number of member interface links is also in the up state.
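A sketch of the configuration, assuming the minimum-links command is used for this purpose (the command name and the value 2 are assumptions, not taken from this section):

Dell(conf)#interface port-channel 128
Dell(conf-if-po-128)#minimum-links 2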
Output 00.00 Mbits/sec, 0 packets/sec, 0.00% of line-rate
Time since last interface status change: 05:22:28

Optimizing Traffic Disruption Over LAG Interfaces On IOA Switches in VLT Mode
When you use the write memory command while an Aggregator operates in VLT mode, the VLT LAG configurations are saved in nonvolatile storage (NVS). By restoring the settings saved in NVS, the VLT ports come up quicker on the primary VLT node and traffic disruption is reduced.
Monitoring the Member Links of a LAG Bundle You can examine and view the operating efficiency and the traffic-handling capacity of member interfaces of a LAG or port channel bundle. This method of analyzing and tracking the number of packets processed by the member interfaces helps you manage and distribute the packets that are handled by the LAG bundle.
Verifying LACP Operation and LAG Configuration To verify the operational status and configuration of a dynamically created LAG, and LACP operation on a LAG on an Aggregator, enter the show interfaces port-channel port-channel-number and show lacp port-channel-number commands.
O - Receiver is in expired state, P - Receiver is not in expired state Port Te 0/41 is enabled, LACP is enabled and mode is lacp Port State: Bundle Actor Admin: State ADEHJLMP Key 128 Priority 32768 Oper: State ADEGIKNP Key 128 Priority 32768 Partner Admin: State BDFHJLMP Key 0 Priority 0 Oper: State ACEGIKNP Key 128 Priority 32768 Port Te 0/42 is enabled, LACP is enabled and mode is lacp Port State: Bundle Actor Admin: State ADEHJLMP Key 128 Priority 32768 Oper: State ADEGIKNP Key 128 Priority 32768 Partne
Port Te 0/51 is disabled, LACP is disabled and mode is lacp Port State: Bundle Actor Admin: State ADEHJLMP Key 128 Priority 32768 Oper: State ADEHJLMP Key 128 Priority 32768 Partner is not present Port Te 0/52 is disabled, LACP is disabled and mode is lacp Port State: Bundle Actor Admin: State ADEHJLMP Key 128 Priority 32768 Oper: State ADEHJLMP Key 128 Priority 32768 Partner is not present Port Te 0/53 is disabled, LACP is disabled and mode is lacp Port State: Bundle Actor Admin: State ADEHJLMP Key 128 Pri
135 packets, 19315 bytes, 0 underruns 0 64-byte pkts, 79 over 64-byte pkts, 32 over 127-byte pkts 24 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 93 Multicasts, 42 Broadcasts, 0 Unicasts 0 throttles, 0 discarded, 0 collisions, 0 wreddrops Rate info (interval 299 seconds): Input 00.00 Mbits/sec, 0 packets/sec, 0.00% of line-rate Output 00.00 Mbits/sec, 0 packets/sec, 0.
Layer 2 12 The Aggregator supports CLI commands to manage the MAC address table: • Clearing the MAC Address Entries • Displaying the MAC Address Table The Aggregator auto-configures with support for Network Interface Controller (NIC) Teaming. NOTE: On an Aggregator, all ports are configured by default as members of all (4094) VLANs, including the default VLAN. All VLANs operate in Layer 2 mode.
Displaying the MAC Address Table To display the MAC address table, use the following command. • Display the contents of the MAC address table. EXEC Privilege mode NOTE: This command is available only in PMUX mode. show mac-address-table [address | aging-time [vlan vlan-id]| count | dynamic | interface | static | vlan] – address: displays the specified entry. – aging-time: displays the configured aging-time.
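For example, the following sketch displays the full table and the number of learned entries (output depends on your configuration):
Dell#show mac-address-table
Dell#show mac-address-table count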
Figure 18. Redundant NICs with NIC Teaming MAC Address Station Move When you use NIC teaming, consider that the server MAC address is originally learned on Port 0/1 of the switch (see figure below). If the NIC fails, the same MAC address is learned on Port 0/5 of the switch. The MAC address is disassociated from one port and re-associated with another in the ARP table; in other words, the ARP entry is “moved”. The Aggregator is auto-configured to support MAC Address station moves. Figure 19.
Link Layer Discovery Protocol (LLDP) 13 Link layer discovery protocol (LLDP) advertises connectivity and management information from the local station to adjacent stations on an IEEE 802 LAN. LLDP facilitates multi-vendor interoperability by using standard management tools to discover and make available a physical topology for network management. The Dell Networking operating software implementation of LLDP is based on IEEE standard 802.1AB.
Figure 20. Type, Length, Value (TLV) Segment TLVs are encapsulated in a frame called an LLDP data unit (LLDPDU), which is transmitted from one LLDP-enabled device to its LLDP-enabled neighbors. LLDP is a one-way protocol. LLDP-enabled devices (LLDP agents) can transmit and/or receive advertisements, but they cannot solicit and do not respond to advertisements. There are five types of TLVs (as shown in the following table). All types are mandatory in the construction of an LLDPDU except Optional TLVs.
Figure 21. LLDPDU Frame Optional TLVs The Dell Networking Operating System (OS) supports the following optional TLVs: Management TLVs, IEEE 802.1 and 802.3 organizationally specific TLVs, and TIA-1057 organizationally specific TLVs. Management TLVs A management TLV is an optional TLV sub-type. This kind of TLV contains essential management information about the sender. Organizationally Specific TLVs A professional organization or a vendor can define organizationally specific TLVs.
• LLDP is not hitless. Viewing the LLDP Configuration To view the LLDP configuration, use the following command. • Display the LLDP configuration.
Te 0/2 Te 0/3 - 00:00:c9:b1:3b:82 00:00:c9:ad:f6:12 00:00:c9:b1:3b:82 00:00:c9:ad:f6:12 Dell#show lldp neighbors detail ======================================================================== Local Interface Te 0/2 has 1 neighbor Total Frames Out: 16843 Total Frames In: 17464 Total Neighbor information Age outs: 0 Total Multiple Neighbors Detected: 0 Total Frames Discarded: 0 Total In Error Frames: 0 Total Unrecognized TLVs: 0 Total TLVs Discarded: 0 Next packet will be sent after 16 seconds The neighb
Command Syntax Command Mode Purpose clear lldp counters [interface] EXEC Privilege Clear counters for LLDP frames sent to and received from neighboring devices on all Aggregator interfaces or on a specified interface. interface specifies a 10GbE uplink port in the format: tenGigabitEthernet slot/port. Debugging LLDP You can view the TLVs that your system is sending and receiving. To view the TLVs, use the following commands. • View a readable version of the TLVs.
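For example, the following sketch shows the brief form of the command; the same command forms are listed in the LLDP chapter for PMUX mode later in this guide:
Dell#debug lldp brief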
Relevant Management Objects Dell Networking OS supports all IEEE 802.1AB MIB objects. The following tables list the objects associated with: • received and transmitted TLVs • the LLDP configuration on the local agent • IEEE 802.1AB Organizationally Specific TLVs • received and transmitted LLDP-MED TLVs Table 9.
MIB Object Category LLDP Variable LLDP MIB Object Description statsFramesInTotal lldpStatsRxPortFramesTotal Total number of LLDP frames received through the port. statsFramesOutTotal lldpStatsTxPortFramesTotal Total number of LLDP frames transmitted through the port. statsTLVsDiscardedTotal lldpStatsRxPortTLVsDiscard edTotal Total number of TLVs received then discarded.
TLV Type TLV Name TLV Variable System LLDP MIB Object 8 Management Address enabled capabilities Local lldpLocSysCapEnabl ed Remote lldpRemSysCapEnab led Local lldpLocManAddrLen Remote lldpRemManAddrLen Local lldpLocManAddrSubt ype Remote lldpRemManAddrSu btype Local lldpLocManAddr Remote lldpRemManAddr management address length management address subtype management address interface numbering Local subtype interface number OID lldpLocManAddrIfSu btype Remote lldpRemManAddrIfS
TLV Type 127 TLV Name VLAN Name TLV Variable System LLDP MIB Object PPVID Local lldpXdot1LocProtoVl anId Remote lldpXdot1RemProtoV lanId Local lldpXdot1LocVlanId Remote lldpXdot1RemVlanId Local lldpXdot1LocVlanNa me Remote lldpXdot1RemVlanN ame Local lldpXdot1LocVlanNa me Remote lldpXdot1RemVlanN ame VID VLAN name length VLAN name Table 12.
TLV Sub-Type TLV Name TLV Variable System LLDP-MED MIB Object Tagged Flag Local lldpXMedLocMediaP olicyTagged Remote lldpXMedLocMediaP olicyTagged Local lldpXMedLocMediaP olicyVlanID Remote lldpXMedRemMedia PolicyVlanID Local lldpXMedLocMediaP olicyPriority Remote lldpXMedRemMedia PolicyPriority Local lldpXMedLocMediaP olicyDscp Remote lldpXMedRemMedia PolicyDscp Local lldpXMedLocLocatio nSubtype Remote lldpXMedRemLocati onSubtype Local lldpXMedLocLocatio nInfo Remote lldpXMedRem
TLV Sub-Type TLV Name TLV Variable System LLDP-MED MIB Object lldpXMedLocXPoEPS EPortPDPriority Remote lldpXMedRemXPoEP SEPowerPriority lldpXMedRemXPoEP DPowerPriority Power Value Local lldpXMedLocXPoEPS EPortPowerAv lldpXMedLocXPoEP DPowerReq Remote lldpXMedRemXPoEP SEPowerAv lldpXMedRemXPoEP DPowerReq
Port Monitoring 14 The Aggregator supports user-configured port monitoring. See Configuring Port Monitoring for the configuration commands to use. Port monitoring copies all incoming or outgoing packets on one port and forwards (mirrors) them to another port. The source port is the monitored port (MD) and the destination port is the monitoring port (MG). Configuring Port Monitoring To configure port monitoring, use the following commands. 1.
Example of Viewing Port Monitoring Configuration To display information on currently configured port-monitoring sessions, use the show monitor session command from EXEC Privilege mode.
• The destination interface must be an uplink port (ports 9 to 12). • In general, a monitoring port should have no ip address and no shutdown as the only configuration; the Dell Networking OS permits a limited set of commands for monitoring ports. You can display these commands using the ? command. • A monitoring port may not be a member of a VLAN. • There may only be one destination port in a monitoring session. • A source port (MD) can only be monitored by one destination port (MG).
• If the MD port is a Layer 2 port, the frames are tagged with the VLAN ID of the VLAN to which the MD belongs. • If the MD port is a Layer 3 port, the frames are tagged with VLAN ID 4095. • If the MD port is in a Layer 3 VLAN, the frames are tagged with the respective Layer 3 VLAN ID.
15 Security for M I/O Aggregator Security features are supported on the M I/O Aggregator. This chapter describes several ways to provide access security to the Dell Networking system. For details about all the commands described in this chapter, refer to the Security chapter in the Dell Networking OS Command Reference Guide. Understanding Banner Settings This functionality is supported on the M I/O Aggregator.
AAA Authentication Dell Networking OS supports a distributed client/server system implemented through authentication, authorization, and accounting (AAA) to help secure networks against unauthorized access.
way, and does so to ensure that users are not locked out of the system if a network-wide issue prevents access to these servers. 1. Define an authentication method-list (method-list-name) or specify the default. CONFIGURATION mode aaa authentication login {method-list-name | default} method1 [... method4] The default method-list is applied to all terminal lines. Possible methods are: • enable: use the password you defined using the enable secret or enable password command in CONFIGURATION mode.
Enabling AAA Authentication — RADIUS To enable authentication from the RADIUS server, and use TACACS as a backup, use the following commands. 1. Enable RADIUS and set up TACACS as backup. CONFIGURATION mode aaa authentication enable default radius tacacs 2. Establish a RADIUS host address and password. CONFIGURATION mode radius-server host x.x.x.x key some-password 3. Establish a TACACS+ host address and password. CONFIGURATION mode tacacs-server host x.x.x.
the RADIUS server and requests authentication of the user and password. The RADIUS server returns one of the following responses: • • Access-Accept — the RADIUS server authenticates the user. Access-Reject — the RADIUS server does not authenticate the user. If an error occurs in the transmission or reception of RADIUS packets, you can view the error by enabling the debug radius command. Transactions between the RADIUS server and the client are encrypted (the users’ passwords are not sent in plain text).
Defining a AAA Method List to be Used for RADIUS To configure RADIUS to authenticate or authorize users on the system, create a AAA method list. Default method lists do not need to be explicitly applied to the line, so they are not mandatory. To create a method list, use the following commands. • Enter a text string (up to 16 characters long) as the name of the method list you wish to use with the RADIUS authentication method.
– auth-port port-number: the range is from 0 to 65535. Enter a UDP port number. The default is 1812. – retransmit retries: the range is from 0 to 100. Default is 3. – timeout seconds: the range is from 0 to 1000. Default is 5 seconds. – key [encryption-type] key: enter 0 for plain text or 7 for encrypted text, and a string for the key. The key can be up to 42 characters long. This key must match the key configured on the RADIUS server host.
• – retries: the range is from 0 to 100. Default is 3 retries. Configure the time interval the system waits for a RADIUS server host response. CONFIGURATION mode radius-server timeout seconds – seconds: the range is from 0 to 1000. Default is 5 seconds. To view the configuration of RADIUS communication parameters, use the show running-config command in EXEC Privilege mode. Monitoring RADIUS To view information on RADIUS transactions, use the following command.
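For example, the following sketch monitors RADIUS transactions using the debug radius command mentioned earlier in this chapter:
Dell#debug radius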
To select TACACS+ as the login authentication method, use the following commands. 1. Configure a TACACS+ server host. CONFIGURATION mode tacacs-server host {ip-address | host} Enter the IP address or host name of the TACACS+ server. Use this command multiple times to configure multiple TACACS+ server hosts. 2. Enter a text string (up to 16 characters long) as the name of the method list you wish to use with the TACAS+ authentication method.
tacacs-server host 10.10.10.10 timeout 1 Dell(conf)#tacacs-server key angeline Dell(conf)#%RPM0-P:CP %SEC-5-LOGIN_SUCCESS: Login successful for user admin on vty0 (10.11.9.209) %RPM0-P:CP %SEC-3-AUTHENTICATION_ENABLE_SUCCESS: Enable password authentication success on vty0 ( 10.11.9.209 ) %RPM0-P:CP %SEC-5-LOGOUT: Exec session is terminated for user admin on line vty0 (10.11.9.
Suppressing AAA Accounting for Null Username Sessions When you activate AAA accounting, the Dell Networking OS software issues accounting records for all users on the system, including users whose username string is NULL because of protocol translation. An example of this is a user who comes in on a line where the AAA authentication login method-list none command is applied.
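To prevent these records from being generated, the aaa accounting suppress null-username command is typically used. The following is a sketch based on the standard AAA accounting syntax; verify it in the Security chapter of the Dell Networking OS Command Reference Guide.
Dell(conf)#aaa accounting suppress null-username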
To obtain accounting records displaying information about users currently logged in, use the following command. • Step through all active sessions and print all the accounting records for the actively accounted functions.
– timeout seconds: the range is from 0 to 1000. Default is 10 seconds. – key key: enter a string for the key. The key can be up to 42 characters long. This key must match a key configured on the TACACS+ server host. This parameter must be the last parameter you configure. If you do not configure these optional parameters, the default global values are applied. Example of Connecting with a TACACS+ Server Host To specify multiple TACACS+ server hosts, configure the tacacs-server host command multiple times.
accounting exec execAcct Example of Enabling AAA Accounting with a Named Method List Dell(config-line-vty)# accounting commands 15 com15 Dell(config-line-vty)# accounting exec execAcct Monitoring AAA Accounting Dell Networking OS does not support periodic interim accounting because the periodic command can cause heavy congestion when many users are logged in to the network. No specific show command exists for TACACS+ accounting.
To use the SSH client, use the following command. • Open an SSH connection and specify the host name, username, port number, and version of the SSH client. EXEC Privilege mode ssh {hostname} [-l username | -p port-number | -v {1 | 2}] • hostname is the IP address or host name of the remote device. Enter an IPv4 or IPv6 address in dotted decimal format (A.B.C.D). Configure the Dell Networking system as an SCP/SSH server.
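For example, the following hypothetical invocation of the ssh client command shown above opens an SSH version 2 session to a remote device as user admin:
Dell#ssh 10.11.9.1 -l admin -v 2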
3. On Chassis Two, invoke SCP. CONFIGURATION mode copy scp: flash: 4. On Chassis Two, in response to prompts, enter the path to the desired file and enter the port number specified in Step 1. EXEC Privilege mode Example of Using SCP to Copy from an SSH Server on Another Switch Other SSH-related commands include: • crypto key generate: generate keys for the SSH server. • debug ip ssh: enables collecting SSH debug information.
• Configuring Host-Based SSH Authentication Important Points to Remember • If you enable more than one method, the order in which the methods are preferred is based on the ssh_config file on the Unix machine. • When you enable all the three authentication methods, password authentication is the backup method when the RSA method fails. • The files known_hosts and known_hosts2 are generated when a user tries to SSH using version 1 or version 2, respectively.
5. Bind the public keys to RSA authentication. EXEC Privilege mode ip ssh rsa-authentication my-authorized-keys flash://public_key Example of Generating RSA Keys admin@Unix_client#ssh-keygen -t rsa Generating public/private rsa key pair. Enter file in which to save the key (/home/admin/.ssh/id_rsa): /home/admin/.ssh/id_rsa already exists. Overwrite (y/n)? y Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/admin/.ssh/id_rsa.
admin@Unix_client# cat ssh_host_rsa_key.pub ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA8K7jLZRVfjgHJzUOmXxuIbZx/ AyWhVgJDQh39k8v3e8eQvLnHBIsqIL8jVy1QHhUeb7GaDlJVEDAMz30myqQbJgXBBRTWgBpLWwL/ doyUXFufjiL9YmoVTkbKcFmxJEMkE3JyHanEi7hg34LChjk9hL1by8cYZP2kYS2lnSyQWk= admin@Unix_client# ls id_rsa id_rsa.pub shosts admin@Unix_client# cat shosts 10.16.127.
Telnet To use Telnet with SSH, first enable SSH, as previously described. By default, the Telnet daemon is enabled. If you want to disable the Telnet daemon, use the following command, or disable Telnet in the startup config. To enable or disable the Telnet daemon, use the [no] ip telnet server enable command. The Telnet server or client is VRF-aware. You can enable a Telnet server or client to listen to a specific VRF by using the vrf vrf-instance-name parameter in the telnet command.
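For example, the following sketch disables and then re-enables the Telnet daemon:
Dell(conf)#no ip telnet server enable
Dell(conf)#ip telnet server enable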
You can assign line authentication on a per-VTY basis; it is a simple password authentication, using an access-class as authorization. Configure local authentication globally and configure access classes on a per-user basis. Dell Networking OS can assign different access classes to different users by username. Until users attempt to log in, Dell Networking OS does not know if they will be assigned a VTY line.
Dell(config-line-vty)#end (same applies for radius and line authentication) VTY MAC-SA Filter Support Dell Networking OS supports MAC access lists which permit or deny users based on their source MAC address. With this approach, you can implement a security policy based on the source MAC address. To apply a MAC ACL on a VTY line, use the same access-class command as IP ACLs. The following example shows how to deny incoming connections from subnet 10.0.0.0 without displaying a login prompt.
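The original example output is not reproduced here; the following is a sketch of the general pattern, with a hypothetical ACL name:
Dell(conf)#ip access-list standard deny10
Dell(config-std-nacl)#deny 10.0.0.0/8
Dell(config-std-nacl)#exit
Dell(conf)#line vty 0 9
Dell(config-line-vty)#access-class deny10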
16 Simple Network Management Protocol (SNMP) Network management stations use SNMP to retrieve or alter management data from network elements. A datum of management information is called a managed object; the value of a managed object can be static or variable. Network elements store managed objects in a database called a management information base (MIB).
Setting up SNMP Dell Networking OS supports SNMP version 1 and version 2 which are community-based security models. The primary difference between the two versions is that version 2 supports two additional protocol operations (informs operation and snmpgetbulk query) and one additional object (counter64 object). Creating a Community For SNMPv1 and SNMPv2, create a community to enable the community-based security in the Dell Networking OS.
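For example, the following sketch creates a read-only community (the community name is hypothetical):
Dell(conf)#snmp-server community mycommunity ro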
• Read the value of the managed object directly below the specified object. snmpgetnext -v version -c community agent-ip {identifier.instance | descriptor.instance} • Read the value of many objects at once. snmpwalk -v version -c community agent-ip {identifier.instance | descriptor.instance} In the following example, the value “4” displays in the OID before the IP address for IPv4. For an IPv6 IP address, a value of “16” displays.
To display the ports in a VLAN, send an snmpget request for the object dot1qStaticEgressPorts using the interface index as the instance number, as shown in the following example. Example of Viewing the Ports in a VLAN in SNMP snmpget -v2c -c mycommunity 10.11.131.185 .1.3.6.1.2.1.17.7.1.4.3.1.2.1107787786 SNMPv2-SMI::mib-2.17.7.1.4.3.1.2.
Fetching Dynamic MAC Entries using SNMP The Aggregator supports the RFC 1493 dot1d table for the default VLAN and the dot1q table for all other VLANs. NOTE: The table contains none of the other information provided by the show vlan command, such as port speed or whether the ports are tagged or untagged. NOTE: The 802.1q Q-BRIDGE MIB defines VLANs with respect to 802.1d, as 802.1d itself does not define them.
>snmpwalk -v 2c -c techpubs 10.11.131.162 .1.3.6.1.2.1.17.4.3.1 SNMPv2-SMI::mib-2.17.4.3.1.1.0.1.232.6.149.172 = Hex-STRING: 00 01 E8 06 95 AC Example of Fetching Dynamic MAC Addresses on a Non-default VLANs In the following example, TenGigabitEthernet 0/7 is moved to VLAN 1000, a non-default VLAN. To fetch the MAC addresses learned on non-default VLANs, use the object dot1qTpFdbTable. The instance number is the VLAN number concatenated with the decimal conversion of the MAC address.
are not given. The interface is physical, so this must be represented by a 0 bit, and the unused bit is always 0. These two bits are not given because they are the most significant bits, and leading zeros are often omitted. For interface indexing, slot and port numbering begins with binary one. If the Dell Networking system begins slot and port numbering from 0, binary 1 represents slot and port 0. In S4810, the first interface is 0/0, but in the Aggregator the first interface is 0/1.
dot3aCurAggVlanId SNMPv2-SMI::enterprises.6027.3.2.1.1.4.1.1.1.0.0.0.0.0.1.1 dot3aCurAggMacAddr SNMPv2-SMI::enterprises.6027.3.2.1.1.4.1.2.1.0.0.0.0.0.1.1 00 00 00 01 dot3aCurAggIndex SNMPv2-SMI::enterprises.6027.3.2.1.1.4.1.3.1.0.0.0.0.0.1.1 dot3aCurAggStatus SNMPv2-SMI::enterprises.6027.3.2.1.1.4.1.4.1.0.0.0.0.0.1.1 active, 2 – status inactive = INTEGER: 1 = Hex-STRING: 00 00 = INTEGER: 1 = INTEGER: 1 << Status For L3 LAG, you do not have this support. SNMPv2-MIB::sysUpTime.
Unit Slot Expected Inserted Next Boot Status/Power(On/Off) -----------------------------------------------------------------------1 0 SFP+ SFP+ AUTO Good/On 1 1 QSFP+ QSFP+ AUTO Good/On * - Mismatch Dell# The status of the MIBS is as follows: $ snmpwalk -c public -v 2c 10.16.130.148 1.3.6.1.2.1.47.1.1.1.1.2 SNMPv2-SMI::mib-2.47.1.1.1.1.2.1 = "" SNMPv2-SMI::mib-2.47.1.1.1.1.2.2 = STRING: "PowerConnect I/O-Aggregator" SNMPv2-SMI::mib-2.47.1.1.1.1.2.3 = STRING: "Module 0" SNMPv2-SMI::mib-2.47.1.1.1.1.2.
SNMPv2-SMI::mib-2.47.1.1.1.1.2.76 = STRING: "Unit: 1 Port 9 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.77 = STRING: "Unit: 1 Port 10 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.78 = STRING: "Unit: 1 Port 11 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.79 = STRING: "Unit: 1 Port 12 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.80 = STRING: "Unit: 1 Port 13 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.81 = STRING: "Unit: 1 Port 14 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.
• An additional 1 byte is reserved for future. Fetching the Switchport Configuration and the Logical Interface Configuration Important Points to Remember • The SNMP should be configured in the chassis and the chassis management interface should be up with the IP address. • If a port is configured in a VLAN, the respective bit for that port will be set to 1 in the specific VLAN. • In the aggregator, all the server ports and uplink LAG 128 will be in switchport. Hence, the respective bits are set to 1.
SNMP Traps for Link Status To enable SNMP traps for link status changes, use the snmp-server enable traps snmp linkdown linkup command.
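For example, the following sketch enables link-status traps and sends them to a hypothetical management station running SNMPv2c:
Dell(conf)#snmp-server enable traps snmp linkdown linkup
Dell(conf)#snmp-server host 10.16.150.8 traps version 2c mycommunity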
Stacking 17 An Aggregator auto-configures to operate in standalone mode. To use an Aggregator in a stack, you must manually configure it using the CLI to operate in stacking mode. Stacking is supported only on the 40GbE ports on the base module. Stacking is limited to six Aggregators in the same or different M1000e chassis. Stacking provides a single point of management for high availability and higher throughput.
Figure 25. A Two-Aggregator Stack Stack Management Roles The stack elects the management units for the stack management. • Stack master — primary management unit • Standby — secondary management unit The master holds the control plane and the other units maintain a local copy of the forwarding databases.
• Switch failure • Inter-switch stacking link failure • Switch insertion • Switch removal If the master switch goes off line, the standby replaces it as the new master. NOTE: For the Aggregator, the entire stack has only one management IP address. Stack Master Election The stack elects a master and standby unit at bootup time based on MAC address. The unit with the higher MAC value becomes master. To view which switch is the stack master, enter the show system command.
from standby to master. The lack of a standby unit triggers an election within the remaining units for a standby role. After the former master switch recovers, despite having a higher priority or MAC address, it does not recover its master role but instead takes the next available role. MAC Addressing All port interfaces in the stack use the MAC address of the management interface on the master switch.
Stacking Port Numbers By default, each Aggregator in Standalone mode is numbered stack-unit 0. Stack-unit numbers are assigned to member switches when the stack comes up. The following example shows the numbers of the 40GbE stacking ports on an Aggregator.
Figure 26.
Configuring a Switch Stack To configure and bring up a switch stack, follow these steps: 1. Connect the 40GbE ports on the base module of two Aggregators using 40G direct attach or QSFP fibre cables. 2. Configure each Aggregator to operate in stacking mode. 3. Reload each Aggregator, one after the other in quick succession. Stacking Prerequisites Before you cable and configure a stack of MXL 10/40GbE switches, review the following prerequisites.
The resulting ring topology allows the entire stack to function as a single switch with resilient fail-over capabilities. If you do not connect the last switch to the first switch (Step 4), the stack operates in a daisy chain topology with less resiliency. Any failure in a non-edge stack unit causes a split stack. Accessing the CLI To configure a stack, you must access the stack master in one of the following ways.
Adding a Stack Unit You can add a new unit to an existing stack both when the unit has no stacking ports (stack groups) configured and when the unit already has stacking ports configured. If the units to be added to the stack have been previously used, they are assigned the smallest available unit ID in the stack. To add a standalone Aggregator to a stack, follow these steps: 1. Power on the switch. 2.
EXEC Privilege mode reset stack-unit unit-number • Reset a stack-unit when the unit is in a problem state. EXEC Privilege mode reset stack-unit unit-number {hard} Removing an Aggregator from a Stack and Restoring Quad Mode To remove an Aggregator from a stack and return the 40GbE stacking ports to 4x10GbE quad mode, follow these steps: 1. Disconnect the stacking cables from the unit. The unit can be powered on or off and can be online or offline. 2.
After you restart the Aggregator, the 4-Port 10 GbE Ethernet modules or the 40GbE QSFP+ port that is split into four 10GbE SFP+ ports cannot be configured to be part of the same uplink LAG bundle that is set up with the uplink speed of 40 GbE. In such a condition, you can perform a hot-swap of the 4-port 10 GbE Flex IO modules with a 2-port 40 GbE Flex IO module, which causes the module to become a part of the LAG bundle that is set up with 40 GbE as the uplink speed without another reboot.
-----------------------------------------------0 10G 40G Verifying a Stack Configuration The following lists the status of a stacked switch according to the color of the System Status light emitting diodes (LEDs) on its front panel. • • • Blue indicates the switch is operating as the stack master or as a standalone unit. Off indicates the switch is a member or standby unit. Amber indicates the switch is booting or a failure condition has occurred.
Current Type : I/O-Aggregator - 34-port GE/TE (XL) Master priority : 0 Hardware Rev : Num Ports : 56 Up Time : 2 hr, 41 min FTOS Version : 8-3-17-46 Jumbo Capable : yes POE Capable : no Burned In MAC : 00:1e:c9:f1:00:9b No Of MACs : 3 -- Unit 1 -Unit Type : Standby Unit Status : online Next Boot : online Required Type : I/O-Aggregator - 34-port GE/TE (XL) Current Type : I/O-Aggregator - 34-port GE/TE (XL) Master priority : 0 Hardware Rev : Num Ports : 56 Up Time : 2 hr, 27 min FTOS Version : 8-3-17-46 Jumbo
Dell# 5 0/53 Example of the show system stack-ports (ring) Command Dell# show system stack-ports Topology: Ring Interface Connection Link Speed (Gb/s) 0/33 1/33 40 0/37 1/37 40 1/33 0/33 40 1/37 0/37 40 Admin Status up up up up Link Trunk Status Group up up up up Example of the show system stack-ports (daisy chain) Command Dell# show system stack-ports Topology: Daisy Chain Interface Connection Link Speed (Gb/s) 0/33 40 0/37 1/37 40 1/33 40 1/37 0/37 40 Admin Status up up up up Link Trunk Status Gro
Example of the show redundancy Command Dell#show redundancy -- Stack-unit Status --------------------------------------------------------Mgmt ID: 0 Stack-unit ID: 0 Stack-unit Redundancy Role: Primary Stack-unit State: Active (Indicates Master Unit.) Stack-unit SW Version: E8-3-16-46 Link to Peer: Up -- PEER Stack-unit Status --------------------------------------------------------Stack-unit State: Standby (Indicates Standby Unit.
0 throttles, 0 discarded, 0 collisions, 0 wredDrops Rate info (interval 45 seconds): Input 00.00 Mbits/sec, 0 packets/sec, 0.00% of line-rate Output 00.00 Mbits/sec, 0 packets/sec, 0.00% of line-rate Failure Scenarios The following sections describe some of the common fault conditions that can happen in a switch stack and how they are resolved. Stack Member Fails • Problem: A unit that is not the stack master fails in an operational stack.
The following is an example of the stack-link flapping error message. --------------------------------------MANAGMENT UNIT----------------------------------------Error: Stack Port 49 has flapped 5 times within 10 seconds.Shutting down this stack port now. Error: Please check the stack cable/module and power-cycle the stack. 10:55:20: %STKUNIT1-M:CP %KERN-2-INT: Error: Stack Port 50 has flapped 5 times within 10 seconds.Shutting down this stack port now.
4 5 Member not present Member not present Stack Unit in Card-Problem State Due to Configuration Mismatch • • Problem: A stack unit enters a Card-Problem state because there is a configuration mismatch between the logical provisioning stored for the stack-unit number on the master switch and the newly added unit with the same number. Resolution: From the master switch, reload the stack by entering thereload command in EXEC Privilege mode. When the stack comes up, the card problem will be resolved.
................................................................................ ... ................................................................................ ... 31972272 bytes successfully copied System image upgrade completed successfully.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! !!!!!!!!!!!!!!!!!!!!!! !!!!!!!!!!!!!!!!!!!!!! Image upgraded to Stack unit 1 Dell# configure Dell(conf)# boot system stack-unit 1 primary system: A: Dell(conf)# end Dell#Jan 3 14:27:00: %STKUNIT0-M:CP %SYS-5-CONFIG_I: Configured from console Dell# write memory Jan 3 14:27:10: %STKUNIT0-M:CP %FILEMGR-5-FILESAVED: Copied running-config to startup-config in flash by default Synchronizing data to peer Stack-unit !!!! ....
Broadcast Storm Control 18 On the Aggregator, the broadcast storm control feature is enabled by default on all ports, and disabled on a port when an iSCSI storage device is detected. Broadcast storm control is re-enabled as soon as the connection with an iSCSI device ends. Broadcast traffic on Layer 2 interfaces is limited or suppressed during a broadcast storm. You can view the status of a broadcast-storm control operation by using the show io-aggregator broadcast storm-control status command.
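For example, the following sketch checks the current status and disables the feature globally; the no form of the command is an assumption, so verify the exact syntax in the Dell Networking OS Command Reference Guide.
Dell#show io-aggregator broadcast storm-control status
Dell(conf)#no io-aggregator broadcast storm-control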
System Time and Date 19 The Aggregator auto-configures the hardware and software clocks with the current time and date. If necessary, you can manually set and maintain the system time and date using the CLI commands described in this chapter. • Setting the Time for the Software Clock • Setting the Time Zone • Setting Daylight Savings Time Setting the Time for the Software Clock You can change the order of the month and day parameters to enter the time and date as time day month year.
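For example, the following sketch sets the software clock (the date and time are hypothetical; as noted above, the day and month order can be swapped):
Dell#clock set 12:11:00 21 may 2014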
clock timezone timezone-name offset – timezone-name: Enter the name of the timezone. Do not use spaces. – offset: Enter one of the following: * a number from 1 to 23 as the number of hours in addition to UTC for the timezone. * a minus sign (-) then a number from 1 to 23 as the number of hours.
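For example, the following sketch sets the time zone to Pacific Standard Time, eight hours behind UTC:
Dell(conf)#clock timezone PST -8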
Setting Recurring Daylight Saving Time Set a date (and time zone) on which to convert the switch to daylight saving time on a specific day every year. If you have already set daylight saving for a one-time setting, you can set that date and time as the recurring setting with the clock summer-time time-zone recurring command. To set a recurring daylight saving time, use the following command. • Set the clock to the appropriate timezone and adjust to daylight saving time every year.
NOTE: If you press Enter (a carriage return) after entering the recurring command parameter, and you have already set a one-time daylight saving time/date, the system uses that time and date as the recurring setting.
Uplink Failure Detection (UFD) 20 Feature Description UFD provides detection of the loss of upstream connectivity and, if used with network interface controller (NIC) teaming, automatic recovery from a failed link. A switch provides upstream connectivity for devices, such as servers. If a switch loses its upstream connectivity, downstream devices also lose their connectivity.
Figure 27. Uplink Failure Detection How Uplink Failure Detection Works UFD creates an association between upstream and downstream interfaces. The association of uplink and downlink interfaces is called an uplink-state group. An interface in an uplink-state group can be a physical interface or a port-channel (LAG) aggregation of physical interfaces. An enabled uplink-state group tracks the state of all assigned upstream interfaces.
Figure 28. Uplink Failure Detection Example If only one of the upstream interfaces in an uplink-state group goes down, a specified number of downstream ports associated with the upstream interface are put into a Link-Down state. This number is configurable and is calculated as the ratio of the upstream port bandwidth to the downstream port bandwidth in the same uplink-state group.
Using UFD, you can configure the automatic recovery of downstream ports in an uplink-state group when the link status of an upstream port changes. The tracking of upstream link status does not have a major impact on central processing unit (CPU) usage. UFD and NIC Teaming To implement a rapid failover solution, you can use uplink failure detection on a switch with network adapter teaming on a server. For more information, refer to Network Interface Controller (NIC) Teaming.
– For an example of debug log message, refer to Clearing a UFD-Disabled Interface. Configuring Uplink Failure Detection (PMUX mode) To configure UFD, use the following commands. 1. Create an uplink-state group and enable the tracking of upstream links on the switch/router. CONFIGURATION mode uplink-state-group group-id • group-id: values are from 1 to 16. To delete an uplink-state group, use the no uplink-state-group group-id command. 2.
4. (Optional) Enable auto-recovery so that UFD-disabled downstream ports in the uplink-state group come up when a disabled upstream port in the group comes back up. UPLINK-STATE-GROUP mode downstream auto-recover The default is auto-recovery of UFD-disabled downstream ports is enabled. To disable auto-recovery, use the no downstream auto-recover command. 5. Specify the time (in seconds) to wait for the upstream port channel (LAG 128) to come back up before server ports are brought down.
* Where port-range and port-channel-range specify a range of ports separated by a dash (-) and/or individual ports/port channels in any order; for example: tengigabitethernet 1/1-2,5,9,11-12 port-channel 1-3,5 * A comma is required to separate each port and port-range entry. clear ufd-disable {interface interface | uplink-state-group group-id}: reenables all UFD-disabled downstream interfaces in the group. The range is from 1 to 16.
00:11:51: %STKUNIT0-M:CP %IFMGR-5-OSTATE_UP: Changed interface state to up: Te 0/5 00:11:51: %STKUNIT0-M:CP %IFMGR-5-OSTATE_UP: Changed interface state to up: Te 0/6 Displaying Uplink Failure Detection To display information on the UFD feature, use any of the following commands. • Display status information on a specified uplink-state group or all groups. EXEC mode show uplink-state-group [group-id] [detail] – group-id: The values are 1 to 16.
(Up): Interface up (Dwn): Interface down (Dis): Interface disabled Uplink State Group : 1 Status: Enabled, Up Upstream Interfaces : Downstream Interfaces : Uplink State Group : 3 Status: Enabled, Up Upstream Interfaces : Tengig 0/46(Up) Tengig 0/47(Up) Downstream Interfaces : Te 13/0(Up) Te 13/1(Up) Te 13/3(Up) Te 13/5(Up) Te 13/6(Up) Uplink State Group : 5 Status: Enabled, Down Upstream Interfaces : Tengig 0/0(Dwn) Tengig 0/3(Dwn) Tengig 0/5(Dwn) Downstream Interfaces : Te 13/2(Dis) Te 13/4(Dis) Te 13/11(D
no enable downstream TenGigabitEthernet 0/0 upstream TenGigabitEthernet 0/1 Dell# Dell(conf-uplink-state-group-16)# show configuration ! uplink-state-group 16 no enable description test downstream disable links all downstream TengigabitEthernet 0/40 upstream TengigabitEthernet 0/41 upstream Port-channel 8 Sample Configuration: Uplink Failure Detection The following example shows a sample configuration of UFD on a switch/router in which you configure as follows. • Configure uplink-state group 3.
Dell#show uplink-state-group detail (Up): Interface up (Dwn): Interface down (Dis): Interface disabled Uplink State Group : 3 Status: Enabled, Up Upstream Interfaces : Te 0/3(Up) Te 0/4(Up) Downstream Interfaces : Te 0/1(Up) Te 0/2(Up) Te 0/5(Up) Te 0/9(Up) Te 0/11(Up) Te 0/12(Up) < After a single uplink port fails > Dell#show uplink-state-group detail (Up): Interface up (Dwn): Interface down (Dis): Interface disabled Uplink State Group : 3 Status: Enabled, Up Upstream Interfaces : Te 0/3(Dwn) Te 0/4(Up) Do
3. Change the default timer.
PMUX Mode of the IO Aggregator 21 This chapter describes the various configurations applicable in PMUX mode. Introduction This document provides configuration instructions and examples for the Programmable MUX (PMUX) mode of the Dell Networking M I/O Aggregator using Dell Networking OS version 9.3(0.0).
Dell#show system stack-unit 0 iom-mode Unit Boot-Mode Next-Boot -----------------------------------------------0 standalone standalone Dell# 2. Change IOA mode to PMUX mode. Dell(conf)# stack-unit 0 iom-mode programmable-mux Where stack-unit 0 defines the default stack-unit number. 3. Delete the startup configuration file. Dell# delete startup-config 4. Reboot the IOA by entering the reload command. Dell# reload 5. Repeat the above steps for each member of the IOA in PMUX mode.
Multiple Uplink LAGs with 10G Member Ports The following sample commands configure multiple dynamic uplink LAGs with 10G member ports based on LACP. 1. Bring up all the ports. Dell#configure Dell(conf)#int range tengigabitethernet 0/1 - 56 Dell(conf-if-range-te-0/1-56)#no shutdown 2. Associate the member ports into LAG-10 and 11.
4. Configure the port mode, VLAN, and so forth on the port-channel. Dell#configure Dell(conf)#int port-channel 10 Dell(conf-if-po-10)#portmode hybrid Dell(conf-if-po-10)#switchport Dell(conf-if-po-10)#vlan tagged 1000 Dell(conf-if-po-10)#link-bundle-monitor enable Dell#configure Dell(conf)#int port-channel 11 Dell(conf-if-po-11)#portmode hybrid Dell(conf-if-po-11)#switchport Dell(conf-if-po-11)#vlan tagged 1000 % Error: Same VLAN cannot be added to more than one uplink port/LAG.
The following sample commands configure multiple dynamic uplink LAGs with 40G member ports based on LACP. 1. Convert the quad mode (4x10G) ports to native 40G mode. Dell#configure Dell(conf)#no stack-unit 0 port 33 portmode quad Disabling quad mode on stack-unit 0 port 33 will make interface configs of Te 0/33 Te 0/34 Te 0/35 Te 0/36 obsolete after a save and reload. [confirm yes/no]:yes Please save and reset unit 0 for the changes to take effect.
4. Configure the port mode, VLAN, and so forth on the port-channel. Dell#configure Dell(conf)#int port-channel 20 Dell(conf-if-po-20)#portmode hybrid Dell(conf-if-po-20)#switchport Dell(conf-if-po-20)#no shut Dell(conf-if-po-20)#ex Dell(conf)#int port-channel 21 Dell(conf-if-po-21)#portmode hybrid Dell(conf-if-po-21)#switchport Dell(conf-if-po-21)#no shut Dell(conf-if-po-21)#end Dell# 5. Show the port channel status.
Uplink Failure Detection (UFD) UFD provides detection of the upstream connectivity loss and, if used with network interface controller (NIC) teaming, automatic recovery from a failed link. 1. Create the UFD group and associate the downstream and upstream ports. Dell#configure Dell(conf)#uplink-state-group 1 Dell(conf-uplink-state-group-1)# Dell(conf-uplink-state-group-1)#upstream port-channel 128 Dell(conf-uplink-state-group-1)#downstream tengigabitethernet 0/1-32 2.
For VLT operations, use the following configurations on both the primary and secondary VLT. Ensure the VLTi links are connected and administratively up. VLTi connects the VLT peers for VLT data exchange. 1. Configure VLTi. Dell#configure Dell(conf)#int port-channel 127 Dell(conf-if-po-127)# channel-member fortygige 0/33,37 Dell(conf-if-po-127)# no shutdown Dell(conf-if-po-127)# end 2. Configure the VLT domain.
5. Configure the secondary VLT. NOTE: Repeat steps from 1 through 4 on the secondary VLT, ensuring you use the different backup destination and unit-id.
7. Show the VLAN configurations.
NOTE: Prior to configuring the stack-group, ensure the stacking ports are connected and in 40G native mode. 1. Configure stack groups on all stack units. Dell# Dell#configure Dell(conf)#stack-unit 0 stack-group 0 Dell(conf)#00:37:46: %STKUNIT0-M:CP %IFMGR-6-STACK_PORTS_ADDED: Ports Fo 0/33 have been configured as stacking ports.
6. Apply an FCoE map on server-facing Ethernet ports. 7. Apply an FCoE map on fabric-facing FC ports. Enabling Fibre Channel Capability on the Switch You must first enable the FC protocol on an IOA or MXL switch with the FC Flex IO module that you want to configure as an NPG. Task Command Command Mode Enable an IOA or MXL switch with the FC Flex IO module for the FC protocol.
Applying a DCB Map on Server-Facing Ethernet Ports You can apply a DCB map only on a physical Ethernet interface and can apply only one DCB map per interface. Task Command Command Mode Enter Interface Configuration mode on a server-facing port to apply a DCB map. interface {tengigabitEthernet slot/ port | fortygigabitEthernet slot/port} CONFIGURATION Apply the DCB map on an Ethernet port. Repeat this step to apply a DCB map to more than one port.
Add a text description. The maximum is 32 characters. description text FCoE MAP fc-map fc-map-value Specify the FC-MAP value used to generate a fabric-provided MAC address. You must enter a unique MAC address prefix as the FC-MAP value for each fabric. The range is from 0EFC00 to 0EFCFF. The default is None. FCoE MAP Configure the priority used by a server CNA to select the FCF for a fabric login (FLOGI). The range is from 0 to 128. The default is 128.
sending FIP multicast advertisements using the parameters in the FCoE map over server-facing Ethernet ports. A server sees the FC port with its applied FCoE map as an FCF port. Task Command Command Mode Configure a fabric-facing FC port. interface fibrechannel slot/port CONFIGURATION fabric map-name Apply the FCoE and FC fabric configurations in an FCoE map to the port. Repeat this step to apply an FCoE map to more than one FC port. INTERFACE FIBRE_CHANNEL Enable the port for FC transmission.
5. Enable an upstream FC port. Dell(config)# interface fibrechannel 0/0 Dell(config-if-fc-0)# no shutdown Dell(config-if-fc-0)# fabric SAN_FABRIC_A 6. Enable a downstream Ethernet port. Dell(config)#interface tengigabitEthernet 0/0 Dell(conf-if-te-0)# no shutdown Dell(conf-if-te-0)# fcoe-map SAN_FABRIC_A Displaying NPIV Proxy Gateway Information To display NPG operation information, use the following show commands.
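For example, the following sketch displays the FCoE map and logged-in devices; these show command names are assumptions based on the NPG feature set, so verify them in the Dell Networking OS Command Reference Guide.
Dell#show fcoe-map brief
Dell#show npiv devices brief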
Configure LLDP Configuring LLDP is a two-step process. 1. Enable LLDP globally. 2. Advertise TLVs out of an interface. Related Configuration Tasks • Viewing the LLDP Configuration • Viewing Information Advertised by Adjacent LLDP Agents • Configuring LLDPDU Intervals • Configuring a Time to Live • Debugging LLDP Important Points to Remember • LLDP is enabled by default. • Dell Networking systems support up to eight neighbors per interface.
Dell(conf-lldp)#exit Dell(conf)#interface tengigabitethernet 0/3 Dell(conf-if-te-0/3)#protocol lldp Dell(conf-if-te-0/3-lldp)#? advertise Advertise TLVs disable Disable LLDP protocol on this interface end Exit from configuration mode exit Exit from LLDP configuration mode hello LLDP hello configuration mode LLDP mode configuration (default = rx and tx) multiplier LLDP multiplier configuration no Negate a command or set its defaults show Show LLDP configuration Dell(conf-if-te-0/3-lldp)# Enabling LLDP LLDP
To advertise TLVs, use the following commands. 1. Enter LLDP mode. CONFIGURATION or INTERFACE mode protocol lldp 2. Advertise one or more TLVs. PROTOCOL LLDP mode advertise {dcbx-appln-tlv | dcbx-tlv | dot3-tlv | interface-port-desc | management-tlv | med } Include the keyword for each TLV you want to advertise. • For management TLVs: system-capabilities, system-description. • For 802.1 TLVs: port-protocol-vlan-id, port-vlan-id. • For 802.3 TLVs: max-frame-size.
Viewing the LLDP Configuration To view the LLDP configuration, use the following command. • Display the LLDP configuration.
Example of Viewing Details Advertised by Neighbors Dell#show lldp neighbors detail ======================================================================== Local Interface Te 0/4 has 1 neighbor Total Frames Out: 6547 Total Frames In: 4136 Total Neighbor information Age outs: 0 Total Frames Discarded: 0 Total In Error Frames: 0 Total Unrecognized TLVs: 0 Total TLVs Discarded: 0 Next packet will be sent after 7 seconds The neighbors are given below: ------------------------------------------------------------
hello 10 Dell(conf-lldp)# Dell(conf-lldp)#no hello Dell(conf-lldp)#show config ! protocol lldp Dell(conf-lldp)# Configuring a Time to Live The information received from a neighbor expires after a specific amount of time (measured in seconds) called a time to live (TTL). The TTL is the product of the LLDPDU transmit interval (hello) and an integer called a multiplier. The default multiplier is 4, which results in a default TTL of 120 seconds. • Adjust the TTL value. CONFIGURATION mode or INTERFACE mode.
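For example, the following sketch raises the TTL to 200 seconds by combining a 20-second hello with the multiplier command listed in the PROTOCOL LLDP help output shown later in this guide (the values are hypothetical):
Dell(conf-lldp)#hello 20
Dell(conf-lldp)#multiplier 10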
• View a readable version of the TLVs. • debug lldp brief View a readable version of the TLVs plus a hexadecimal version of the entire LLDPDU. debug lldp detail Figure 30. The debug lldp detail Command — LLDPDU Packet Dissection Virtual Link Trunking (VLT) VLT allows physical links between two chassis to appear as a single virtual link to the network core.
NOTE: When you launch the VLT link, the VLT peer-ship is not established if any of the following is TRUE: • • • • The VLT System-MAC configured on both the VLT peers do not match. The VLT Unit-Id configured on both the VLT peers are identical. The VLT System-MAC or Unit-Id is configured only on one of the VLT peers. The VLT domain ID is not the same on both peers. If the VLT peer-ship is already established, changing the System-MAC or Unit-Id does not cause VLT peer-ship to go down.
upstream network. L2/L3 control plane protocols and system management features function normally in VLT mode. Features such as VRRP and internet group management protocol (IGMP) snooping require state information coordinating between the two VLT chassis. IGMP and VLT configurations must be identical on both sides of the trunk to ensure the same behavior on both sides. VLT Terminology The following are key VLT terms.
– ARP tables are synchronized between the VLT peer nodes. – VLT peer switches operate as separate chassis with independent control and data planes for devices attached on non-VLT ports. – One chassis in the VLT domain is assigned a primary role; the other chassis takes the secondary role. The primary and secondary roles are required for scenarios when connectivity between the chassis is lost. VLT assigns the primary chassis role according to the lowest MAC address. You can configure the primary role.
– If the link between the VLT peer switches is established, changing the VLT system MAC address or the VLT unit-id causes the link between the VLT peer switches to become disabled. However, removing the VLT system MAC address or the VLT unit-id may disable the VLT ports if you happen to configure the unit ID or system MAC address on only one VLT peer at any time.
– In a VLT domain, the following software features are supported on VLT physical ports: 802.1p, LLDP, flow control, port monitoring, and jumbo frames. • Software features not supported with VLT – In a VLT domain, the following software features are supported on non-VLT ports: 802.1x, DHCP snooping, FRRP, IPv6 dynamic routing, ingress and egress QOS.
When the bandwidth usage drops below the 80% threshold, the system generates another syslog message (shown in the following message) and an SNMP trap. %STKUNIT0-M:CP %VLTMGR-6-VLT-LAG-ICL: Overall Bandwidth utilization of VLT-ICLLAG (port-channel 25) reaches below threshold. Bandwidth usage (74%) VLT and Stacking You cannot enable stacking with VLT. If you enable stacking on a unit on which you want to enable VLT, you must first remove the unit from the existing stack.
Non-VLT ARP Sync In the Dell Networking OS version 9.2(0.0), ARP entries (including ND entries) learned on other ports are synced with the VLT peer to support station move scenarios. Prior to Dell Networking OS version 9.2.(0.0), only ARP entries learned on VLT ports were synced between peers. Additionally, ARP entries resulting from station movements from VLT to non-VLT ports or to different non-VLT ports are learned on the non-VLT port and synced with the peer node.
show interfaces interface – interface: specify one of the following interface types: * Fast Ethernet: enter fastethernet slot/port. * 1-Gigabit Ethernet: enter gigabitethernet slot/port. * 10-Gigabit Ethernet: enter tengigabitethernet slot/port. * Port channel: enter port-channel {1-128}.
ICL Link Status: HeartBeat Status: VLT Peer Status: Local Unit Id: Version: Local System MAC address: Remote System MAC address: Configured System MAC address: Remote system version: Delay-Restore timer: Up Up Up 1 5(1) 00:01:e8:8a:e7:e7 00:01:e8:8a:e9:70 00:0a:0a:01:01:0a 5(1) 90 seconds Example of the show vlt detail Command Dell_VLTpeer1# show vlt detail Local LAG Id -----------100 127 Peer LAG Id ----------100 2 Local Status Peer Status Active VLANs ------------ ----------- ------------UP UP 10, 20,
Dell_VLTpeer2# show running-config vlt ! vlt domain 30 peer-link port-channel 60 back-up destination 10.11.200.
Configure the port channel to an attached device. Dell_VLTpeer1(conf)#interface port-channel 110 Dell_VLTpeer1(conf-if-po-110)#no ip address Dell_VLTpeer1(conf-if-po-110)#switchport Dell_VLTpeer1(conf-if-po-110)#channel-member fortyGigE 0/52 Dell_VLTpeer1(conf-if-po-110)#no shutdown Dell_VLTpeer1(conf-if-po-110)#vlt-peer-lag port-channel 110 Dell_VLTpeer1(conf-if-po-110)#end Verify that the port channels used in the VLT domain are assigned to the same VLAN.
Verify that the port channels used in the VLT domain are assigned to the same VLAN.
Description Behavior at Peer Up Behavior During Run Time Action to Take Remote VLT port channel status N/A N/A Use the show vlt detail and show vlt brief commands to view the VLT port channel status information. System MAC mismatch A syslog error message and an SNMP trap are generated. A syslog error message and an SNMP trap are generated. Verify that the unit ID of VLT peers is not the same on both units and that the MAC address is the same on both units.
FC Flex IO Modules 22 This part provides a generic, broad-level description of the operations, capabilities, and configuration commands of the Fiber Channel (FC) Flex IO module. FC Flex IO Modules
slots of the MXL 10/40GbE Switch and it provides four FC ports per module. If you insert only one FC Flex IO module, four ports are supported; if you insert two FC Flex IO modules, eight ports are supported. By installing an FC Flex IO module, you can enable the MXL 10/40GbE Switch and I/O Aggregator to directly connect to an existing FC SAN network.
FC Flex IO Module Capabilities and Operations The FC Flex IO module has the following characteristics: • You can install one or two FC Flex IO modules on the MXL 10/40GbE Switch or I/O Aggregator. Each module supports four FC ports. • Each port can operate in 2Gbps, 4Gbps, or 8Gbps of Fibre Channel speed. • All ports on an FC Flex IO module can function in the NPIV mode that enables connectivity to FC switches or directors, and also to multiple SAN topologies.
• With both FC Flex IO modules present in the MXL or I/O Aggregator switches, the power supply requirement and maximum thermal output are the same as these parameters needed for the M1000 chassis. • Each port on the FC Flex IO module contains status indicators to denote the link status and transmission activity. For traffic that is being transmitted, the port LED shows a blinking green light. The Link LED displays solid green when a proper link with the peer is established.
• On I/O Aggregators, uplink failure detection (UFD) is disabled if FC Flex IO module is present to allow server ports to communicate with the FC fabric even when the Ethernet upstream ports are not operationally up. • Ensure that the NPIV functionality is enabled on the upstream switches that operate as FC switches or FCoE forwarders (FCF) before you connect the FC port of the MXL or I/O Aggregator to these upstream switches.
the FCoE frames. The module directly switches any non-FCoE or non-FIP traffic, and only FCoE frames are processed and transmitted out of the Ethernet network. When the external device sends FCoE data frames to the switch that contains the FC Flex IO module, the destination MAC address represents one of the Ethernet MAC addresses assigned to FC ports. Based on the destination address, the FCoE header is removed from the incoming packet and the FC frame is transmitted out of the FC port.
Installing and Configuring Flowchart for FC Flex IO Modules
To see if a switch is running the latest Dell Networking OS version, use the show version command. To download a Dell Networking OS version, go to http://support.dell.com. Installation Site Preparation Before installing the switch or switches, make sure that the chosen installation location meets the following site requirements: • Clearance — There is adequate front and rear clearance for operator access. Allow clearance for cabling, power connections, and ventilation.
Interconnectivity of FC Flex IO Modules with Cisco MDS Switches In a network topology that contains Cisco MDS switches, FC Flex IO modules that are plugged into the MXL and I/O Aggregator switches enable interoperation for a robust, effective deployment of the NPIV proxy gateway and FCoE-FC bridging behavior.
Figure 31. Case 1: Deployment Scenario of Configuring FC Flex IO Modules Figure 32. Case 2: Deployment Scenario of Configuring FC Flex IO Modules Fibre Channel over Ethernet for FC Flex IO Modules FCoE provides a converged Ethernet network that allows the combination of storage-area network (SAN) and LAN traffic on a Layer 2 link by encapsulating Fibre Channel data into Ethernet frames.
Ethernet local area network (LAN) (IP cloud) for data — as well as FC links to one or more storage area network (SAN) fabrics. FCoE works with the Ethernet enhancements provided in Data Center Bridging (DCB) to support lossless (no-drop) SAN and LAN traffic. In addition, DCB provides flexible bandwidth sharing for different traffic types, such as LAN and SAN, according to 802.1p priority classes of service. DCBx should be enabled on the system before the FIP snooping feature is enabled.
switches. This increases the number of required domain IDs, which may exceed the upper limit of 239 domain IDs supported in a SAN network. An NPG avoids the need for additional domain IDs because it is deployed outside the SAN and uses the domain IDs of core switches on its FCoE links.
• With the introduction of 10GbE links, FCoE is being implemented for server connections to optimize performance. However, a SAN traditionally uses Fibre Channel to transmit storage traffic.
NPIV Proxy Gateway: Protocol Services

An MXL 10/40GbE Switch and M I/O Aggregator with the FC Flex IO module NPG provides the following protocol services:
• Fibre Channel service to create N ports and log in to an upstream FC switch.
• FCoE service to perform:
  – Virtualization of FC N ports on an NPG so that they appear as FCoE FCFs to downstream servers.
• NPIV service to perform the association and aggregation of FCoE servers to upstream F ports on core switches (through N ports on the NPG).
CNA port: N-port functionality on an FCoE-enabled server port. A converged network adapter (CNA) can use one or more Ethernet ports. CNAs can encapsulate Fibre Channel frames in Ethernet for FCoE transport and de-encapsulate Fibre Channel frames from FCoE to native Fibre Channel.

DCB map: Template used to configure DCB parameters, including priority-based flow control (PFC) and enhanced transmission selection (ETS), on CEE ports.
FCoE Maps

An FCoE map is used to identify the SAN fabric to which FCoE storage traffic is sent. Using an FCoE map, an MXL 10/40GbE Switch and M I/O Aggregator with the FC Flex IO module NPG operates as an FCoE-FC bridge between an FC SAN and an FCoE network by providing FCoE-enabled servers and switches with the necessary parameters to log in to a SAN fabric.
4. Creating an FCoE VLAN
5. Creating an FCoE map
6. Applying an FCoE map on server-facing Ethernet ports
7. Applying an FCoE map on fabric-facing FC ports

Enabling Fibre Channel Capability on the Switch

Enable the FC Flex IO module on an MXL 10/40GbE Switch or an M I/O Aggregator that you want to configure as an NPG for the Fibre Channel protocol.
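A minimal sketch of this step, assuming the feature fc command enables Fibre Channel capability on a switch with an FC Flex IO module installed (verify the command against the release notes for your version):

Dell#configure
Dell(conf)#feature fc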
Step 3: Specify the priority group ID number to handle VLAN traffic for each dot1p class of service, 0 through 7, leaving a space between each priority group number (priority-pgid command, DCB MAP mode). For example, priority-pgid 0 0 0 1 2 4 4 4 maps dot1p priorities 0, 1, and 2 to priority group 0; dot1p priority 3 to priority group 1; dot1p priority 4 to priority group 2; and dot1p priorities 5, 6, and 7 to priority group 4.
For example:
Dell(config)# interface tengigabitEthernet 0/0
Dell(config-if-te-0/0)# dcb-map SAN_DCB1
Repeat this step to apply a DCB map to more than one port or port channel.

Creating an FCoE VLAN

Create a dedicated VLAN to send and receive Fibre Channel traffic over FCoE links between servers and a fabric over an NPG. The NPG receives FCoE traffic and forwards decapsulated FC frames over FC links to SAN switches in a specified fabric.
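A minimal sketch of creating the dedicated FCoE VLAN (VLAN ID 10 is illustrative and must match the fabric ID configured in the FCoE map):

Dell(conf)#interface vlan 10
Dell(conf-if-vl-10)#no shutdown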
and VLAN ID numbers must be the same. Fabric and VLAN ID range: 2-4094. For example: fabric-id 10 vlan 10.

Step 3: Add a text description of the settings in the FCoE map (description text command, FCoE MAP mode). Maximum: 32 characters.

Step 4: Specify the FC-MAP value used to generate a fabric-provided MAC address, which is required to send FCoE traffic from a server on the FCoE VLAN to the FC fabric specified in Step 2 (fc-map fc-map-value command, FCoE MAP mode). Enter a unique MAC address prefix as the FC-MAP value for each fabric.
slot/port | port-channel num}.

Step 2: Apply the FCoE/FC configuration in an FCoE map on the Ethernet port (fcoe-map map-name command).
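Pulling these steps together, a hedged end-to-end sketch (the map name SAN_FABRIC_A, fabric/VLAN ID 10, FC-MAP value 0efc00, and port numbers are illustrative; the fabric command shown for the fabric-facing FC port is an assumption drawn from the step list above, so verify it in the command-line reference):

Dell(conf)#fcoe-map SAN_FABRIC_A
Dell(conf-fcoe-SAN_FABRIC_A)#fabric-id 10 vlan 10
Dell(conf-fcoe-SAN_FABRIC_A)#description "SAN fabric A"
Dell(conf-fcoe-SAN_FABRIC_A)#fc-map 0efc00
Dell(conf-fcoe-SAN_FABRIC_A)#exit
Dell(conf)#interface tengigabitEthernet 0/0
Dell(conf-if-te-0/0)#fcoe-map SAN_FABRIC_A
Dell(conf-if-te-0/0)#exit
Dell(conf)#interface fibrechannel 0/0
Dell(conf-if-fc-0/0)#fabric SAN_FABRIC_A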
Dell(config)# interface range tengigabitEthernet 1/12 - 23 , tengigabitEthernet 2/24 - 35
Dell(config)# interface range fibrechannel 0/0 - 3 , fibrechannel 0/8 - 11

Enter the keywords interface range followed by an interface type and port range. A port range must contain spaces before and after the dash. Separate each interface type and port range with a space, comma, and space, as shown in the preceding examples.

Sample Configuration

1.
6. Enable a downstream Ethernet port:
Dell(config)#interface tengigabitEthernet 0/0
Dell(conf-if-te-0)# no shutdown

Displaying NPIV Proxy Gateway Information

To display information on NPG operation, use the show commands in the following table.

Table 18. Displaying NPIV Proxy Gateway Information

Command: show interfaces status
Description: Displays the operational status of Ethernet and Fibre Channel interfaces on an MXL 10/40GbE Switch or M I/O Aggregator with the FC Flex IO module NPG.
Port     Description  Status  Speed       Duplex  Vlan
Fc 0/10               Down    Auto        Auto
Fc 0/11               Down    Auto        Auto
Te 1/12               Down    Auto        Auto    --
Te 1/13               Down    Auto        Auto    --
Te 1/14               Down    Auto        Auto    --
Te 1/15               Down    Auto        Auto    --
Te 1/16               Down    Auto        Auto    --
Te 1/17               Down    Auto        Auto    --
Te 1/18               Down    Auto        Auto    --
Te 1/19               Up      10000 Mbit  Full    --
Te 1/20               Down    Auto        Auto    --
Te 1/21               Down    Auto        Auto    --

Table 19. show interfaces status Field Descriptions

Port: Server-facing 10GbE Ethernet (Te), 40GbE Ethernet (Fo), or fabric-facing Fibre Channel (Fc) port with slot/port information.
Fcf Priority: 128
Config-State: ACTIVE
Oper-State: UP
Members: Fc 0/0, Te 0/14, Te 0/16

Table 20. show fcoe-map Field Descriptions

Fabric-Name: Name of a SAN fabric.
Fabric ID: The ID number of the SAN fabric to which FC traffic is forwarded.
VLAN ID: The dedicated VLAN used to transport FCoE storage traffic between servers and a fabric over the NPG. The configured VLAN ID must be the same as the fabric ID.
VLAN priority: FCoE traffic uses VLAN priority 3.
Table 21. show qos dcb-map Field Descriptions

State: Complete (all mandatory DCB parameters are correctly configured) or In progress (the DCB map configuration is not complete; some mandatory parameters are not configured).
PFC Mode: PFC configuration in the DCB map: On (enabled) or Off.
PG: Priority group configured in the DCB map.
TSA: Transmission scheduling algorithm used in the DCB map: Enhanced Transmission Selection (ETS).
Fabric-Map: Name of the FCoE map containing the FCoE/FC configuration parameters for the server CNA-fabric connection.
Login Method: Method used by the server CNA to log in to the fabric; for example, FLOGI (ENode logged in using a fabric login) or FDISC (ENode logged in using a fabric discovery).
FCF MAC: Fibre Channel forwarder MAC; the MAC address of the FCF interface on an MXL 10/40GbE Switch or M I/O Aggregator with the FC Flex IO module.
Fabric Intf: The fabric-facing Fibre Channel port (slot/port) on an MXL 10/40GbE Switch or M I/O Aggregator with the FC Flex IO module on which FCoE traffic is transmitted to the specified fabric.
23 Upgrade Procedures

To find the upgrade procedures, go to the Dell Networking OS Release Notes for your system type to see all the requirements needed to upgrade to the desired Dell Networking OS version. To upgrade your system type, follow the procedures in the Dell Networking OS Release Notes.

Get Help with Upgrades

Direct any questions or concerns about the Dell Networking OS upgrade procedures to the Dell Technical Support Center. You can reach Technical Support:
• On the web: http://support.dell.com
24 Debugging and Diagnostics

This chapter contains the following sections:
• Debugging Aggregator Operation
• Software Show Commands
• Offline Diagnostics
• Trace Logs
• Show Hardware Commands

Debugging Aggregator Operation

This section describes common troubleshooting procedures to use for error conditions that may arise during Aggregator operation.

All interfaces on the Aggregator are operationally down

This section describes how you can troubleshoot the scenario in which all the interfaces are down.
Te 0/10(Up)   Te 0/11(Dwn)  Te 0/12(Dwn)  Te 0/13(Up)   Te 0/14(Dwn)
Te 0/15(Up)   Te 0/16(Up)   Te 0/17(Dwn)  Te 0/18(Dwn)  Te 0/19(Dwn)
Te 0/20(Dwn)  Te 0/21(Dwn)  Te 0/22(Dwn)  Te 0/23(Dwn)  Te 0/24(Dwn)
Te 0/25(Dwn)  Te 0/26(Dwn)  Te 0/27(Dwn)  Te 0/28(Dwn)  Te 0/29(Dwn)
Te 0/30(Dwn)  Te 0/31(Dwn)  Te 0/32(Dwn)

2. Verify that the downstream port channel in the top-of-rack switch that connects to the Aggregator is configured correctly.
i - Internal untagged, I - Internal tagged, v - VLT untagged, V - VLT tagged

Name: TenGigabitEthernet 0/1
802.1QTagged: Hybrid
SMUX port mode: Auto VLANs enabled
Vlan membership:
Q    Vlans
U    1
T    2-4094
Native VlanId: 2

1. Assign the port to a specified group of VLANs (vlan tagged command) and re-display the port mode status.
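A hedged illustration of that step (the port and VLAN range are hypothetical; verify the vlan tagged syntax for your release):

Dell(conf)#interface tengigabitEthernet 0/1
Dell(conf-if-te-0/1)#vlan tagged 2-10
Dell(conf-if-te-0/1)#end
Dell#show interfaces switchport tengigabitEthernet 0/1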
Build Path: /sites/sjc/work/build/buildSpaces/build05/E8-3-17/SW/SRC/Cp_src/Tacacs
st-sjc-m1000e-3-72 uptime is 17 hour(s), 1 minute(s)
System Type: I/O-Aggregator
Control Processor: MIPS RMI XLP with 2147483648 bytes of memory.
256M bytes of boot flash memory.
1 34-port GE/TE (XL)
56 Ten GigabitEthernet/IEEE 802.3 interface(s)
Offline Diagnostics

The offline diagnostics test suite is useful for isolating faults and debugging hardware. The diagnostics tests are grouped into three levels:
• Level 0 — Level 0 diagnostics check for the presence of various components and perform essential path verifications. In addition, Level 0 diagnostics verify the identification registers of the components on the board.
• Level 1 — A smaller set of diagnostic tests.
Running Offline Diagnostics

To run offline diagnostics, use the following commands. For more information, refer to the examples following the steps.
1. Place the unit in the offline state.
   EXEC Privilege mode
   offline stack-unit
   You cannot enter this command on a MASTER or Standby stack unit.
   NOTE: The system reboots when the offline diagnostics complete. This is an automatic process.
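A hedged sequence for a stack member (unit number 1 is illustrative; the diag stack-unit and online stack-unit syntax shown here is an assumption to verify against your CLI reference):

Dell#offline stack-unit 1
Dell#diag stack-unit 1 level0
Dell#online stack-unit 1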
Trace Logs

In addition to the syslog buffer, the Dell Networking OS buffers trace messages, which are continuously written by various software tasks to report hardware and software events and status information. Each trace message provides the date, time, and name of the Dell Networking OS process. All messages are stored in a ring buffer. You can save the messages to a file either manually or automatically after a failover.
• View the modular packet buffer details per stack unit and the mode of allocation (EXEC Privilege mode):
  show hardware stack-unit {0-5} buffer total-buffer
• View the modular packet buffer details per unit and the mode of allocation (EXEC Privilege mode):
  show hardware stack-unit {0-5} buffer unit {0-1} total-buffer
• View the forwarding-plane statistics containing the packet buffer usage per port per stack unit (EXEC Privilege mode):
  show hardware stack-unit {0-5} buffer unit {0-1} port {1-64 | all} buffer-info
• View the forwarding-plane statistics containing the packet buffer statistics per COS per port.
• View the stack-unit internal registers for each port-pipe (EXEC Privilege mode):
  show hardware stack-unit {0-5} unit {0-0} register
• View the tables from the bShell through the CLI without going into the bShell (EXEC Privilege mode):
  show hardware stack-unit {0-5} unit {0-0} table-dump {table name}

Environmental Monitoring

Aggregator components use environmental monitoring hardware to detect transmit power readings, receive power readings, and temperature updates.
SFP 49 Temp High Warning threshold
SFP 49 Voltage High Warning threshold
SFP 49 Bias High Warning threshold
SFP 49 TX Power High Warning threshold
SFP 49 RX Power High Warning threshold
SFP 49 Temp Low Warning threshold
SFP 49 Voltage Low Warning threshold
SFP 49 Bias Low Warning threshold
SFP 49 TX Power Low Warning threshold
SFP 49 RX Power Low Warning threshold
===================================
SFP 49 Temperature
SFP 49 Voltage
SFP 49 Tx Bias Current
SFP 49 Tx Power
SFP 49 Rx Power
===================================
2. Check the airflow through the system. Ensure that the air ducts are clean and that all fans are working correctly.
3. After the software has determined that the temperature levels are within normal limits, you can repower the card safely. To bring the line card back online, use the power-on command in EXEC mode.
In addition, Dell Networking requires that you install blanks in all slots without a line card to control airflow for adequate system cooling.
OID String: .1.3.6.1.4.1.6027.3.10.1.2.5.1.7
OID Name: chSysPortXfpRecvTemp
Description: Displays the temperature of the connected optics.

NOTE: These OIDs only generate values if the optic-info-update-interval command is enabled.

Hardware MIB Buffer Statistics

OID String: .1.3.6.1.4.1.6027.3.16.1.1.4
OID Name: fpPacketBufferTable
Description: View the modular packet buffer details per stack unit and the mode of allocation.

OID String: .1.3.6.1.4.1.6027.3.16.1.1.
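For instance, polling the optics temperature OID with the standard net-snmp tools (the community string and management address below are placeholders):

snmpwalk -v 2c -c public 10.1.1.1 .1.3.6.1.4.1.6027.3.10.1.2.5.1.7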
manager does not reallocate the buffer to an adjacent congested interface, which means that in some cases memory is under-used.
• Dynamic buffer — This pool is shared memory that is allocated as needed, up to a configured limit. Using dynamic buffers provides the benefit of statistical buffer sharing. An interface requests dynamic buffers when its dedicated buffer pool is exhausted. The buffer manager grants the request based on three conditions:
  – The number of used and available dynamic buffers.
Deciding to Tune Buffers

Dell Networking recommends exercising caution when configuring any non-default buffer settings, as tuning can significantly affect system performance. The default values work for most cases.
As a guideline, consider tuning buffers if traffic is bursty and comes from several interfaces. In this case:
• Reduce the dedicated buffer on all queues/interfaces.
• Increase the dynamic buffer on all interfaces.
%S50N:0 %DIFFSERV-2-DSA_DEVICE_BUFFER_UNAVAILABLE: Unable to allocate dedicated buffers for stack-unit 0, port pipe 0, egress port 25 due to unavailability of cells.

Dell Networking OS Behavior: When you remove a buffer profile using the no buffer-profile [fp | csf] command from CONFIGURATION mode, the buffer-profile name still appears in the output of the show buffer-profile [detail | summary] command.
4       3.00              256
5       3.00              256
6       3.00              256
7       3.00              256

Example of Viewing the Buffer Profile (Linecard)

Dell#show buffer-profile detail fp-uplink stack-unit 0 port-set 0
Linecard 0 Port-set 0
Buffer-profile fsqueue-hig
Dynamic Buffer 1256.00 (Kilobytes)
Queue#  Dedicated Buffer  Buffer Packets
        (Kilobytes)
0       3.00              256
1       3.00              256
2       3.00              256
3       3.00              256
4       3.00              256
5       3.00              256
6       3.00              256
7       3.00              256
CONFIGURATION mode
buffer-profile global [1Q|4Q]

Sample Buffer Profile Configuration

The two general types of network environments are sustained data transfers and voice/data. Dell Networking recommends a single-queue approach for data transfers.
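A minimal sketch of applying the single-queue global profile named in the command above (illustrative; confirm that 1Q mode suits your traffic profile before deploying, since buffer tuning can significantly affect performance):

Dell#configure
Dell(conf)#buffer-profile global 1Q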
Displaying Drop Counters

To display drop counters, use the following commands.
• Identify which stack unit, port pipe, and port is experiencing internal drops.
  show hardware stack-unit 0-11 drops [unit 0 [port 0-63]]
• Display drop counters.
--- Egress FORWARD PROCESSOR Drops ---
IPv4 L3UC Aged & Drops       : 0
TTL Threshold Drops          : 0
INVALID VLAN CNTR Drops      : 0
L2MC Drops                   : 0
PKT Drops of ANY Conditions  : 0
Hg MacUnderflow              : 0
TX Err PKT Counter           : 0

Dataplane Statistics

The show hardware stack-unit cpu data-plane statistics command provides insight into the packet types coming to the CPU. The command output in the following example has been augmented, providing detailed RX/TX packet statistics on a per-queue basis.
The show hardware stack-unit cpu party-bus statistics command displays input and output statistics on the party bus, which carries inter-process communication traffic between CPUs.

Example of Viewing Party Bus Statistics

Dell#show hardware stack-unit 2 cpu party-bus statistics
Input Statistics:
  27550 packets, 2559298 bytes
  0 dropped, 0 errors
Output Statistics:
  1649566 packets, 1935316203 bytes
  0 errors

Displaying Stack Port Statistics

The show hardware stack-unit stack-port command displays input and output statistics on a stack port.
Total Mmu Drops    : 0
Total EgMac Drops  : 0
Total Egress Drops : 0

Dell#show hardware stack-unit 0 drops unit 0
Port#  Ingress Drops  IngMac Drops  Total Mmu Drops  EgMac Drops  Egress Drops
1      0              0             0                0            0
2      0              0             0                0            0
3      0              0             0                0            0
4      0              0             0                0            0
5      0              0             0                0            0
6      0              0             0                0            0
7      0              0             0                0            0
8      0              0             0                0            0

Dell#show hardware stack-unit
--- Ingress Drops ---
Ingress Drops           :
IBP CBP Full Drops      :
PortSTPnotFwd Drops     :
IPv4 L3 Discards        :
Policy Discards         :
Packets dropped by FP   :
(L2+L3) Drops           :
Port bitmap zero Drops  :
Rx VLAN Drops           : 0
Important Points to Remember
• When you restore all the units in a stack, all units in the stack are placed into stand-alone mode.
• When you restore a single unit in a stack, only that unit is placed in stand-alone mode. No other units in the stack are affected.
• When you restore the units in stand-alone mode, the units remain in stand-alone mode after the restoration.
• After the restore is complete, the units power cycle immediately.
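A hedged example of the restore operation these points describe (the restore factory-defaults syntax and its options are an assumption to verify in the command-line reference; the command erases the configuration and power cycles the unit):

Dell#restore factory-defaults stack-unit 0 clear-all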
25 Standards Compliance

This chapter describes standards compliance for Dell Networking products.
NOTE: Unless noted, when a standard cited here is listed as supported by the Dell Networking Operating System (OS), the system also supports predecessor standards. One way to search for predecessor standards is to use the http://tools.ietf.org/ website. Click "Browse and search IETF documents," enter an RFC number, and inspect the top of the resulting document for obsolescence citations to related RFCs.
General Internet Protocols

The following table lists the Dell Networking OS support per platform for general internet protocols.

Table 27. General Internet Protocols
RFC#   Full Name
1519   Classless Inter-Domain Routing (CIDR): an Address Assignment and Aggregation Strategy
1542   Clarifications and Extensions for the Bootstrap Protocol
1812   Requirements for IP Version 4 Routers
2131   Dynamic Host Configuration Protocol
2338   Virtual Router Redundancy Protocol (VRRP)
3021   Using 31-Bit Prefixes on IPv4 Point-to-Point Links
3046   DHCP Relay Agent Information Option
3069   VLAN Aggregation for Efficient IP Address Allocation
3128   Protection Against a Variant of the Tiny Fragment Attack
RFC#   Full Name
2571   An Architecture for Describing Simple Network Management Protocol (SNMP) Management Frameworks
2572   Message Processing and Dispatching for the Simple Network Management Protocol (SNMP)
2574   User-based Security Model (USM) for version 3 of the Simple Network Management Protocol (SNMPv3)
2575   View-based Access Control Model (VACM) for the Simple Network Management Protocol (SNMP)
2576   Coexistence Between Version 1, Version 2, and Version 3 of the Internet-standard Network Management Framework
RFC#   Full Name
3418   Management Information Base (MIB) for the Simple Network Management Protocol (SNMP)
3434   Remote Monitoring MIB Extensions for High Capacity Alarms, High-Capacity Alarm Table (64 bits)
ANSI/TIA-1057   The LLDP Management Information Base extension module for TIA-TR41.4 Media Endpoint Discovery information
draft-grant-tacacs-02   The TACACS+ Protocol
IEEE 802.
RFC# Full Name IEEE 802.1Qaz Management Information Base extension module for IEEE 802.1 organizationally defined discovery information (LDP-EXT-DOT1-DCBX-MIB) IEEE 802.1Qbb Priority-based Flow Control module for managing IEEE 802.1Qbb MIB Location You can find Force10 MIBs under the Force10 MIBs subhead on the Documentation page of iSupport: https://www.force10networks.com/csportal20/KnowledgeBase/Documentation.