Dell PowerEdge Configuration Guide for the M I/O Aggregator 9.9(0.
Notes, cautions, and warnings NOTE: A NOTE indicates important information that helps you make better use of your computer. CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem. WARNING: A WARNING indicates a potential for property damage, personal injury, or death. Copyright © 2015 Dell Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws.
Contents 1 About this Guide.............................................................................................................14 Audience........................................................................................................................................................................... 14 Conventions......................................................................................................................................................................
Supported Modes............................................................................................................................................................ 30 Ethernet Enhancements in Data Center Bridging............................................................................................................. 30 Priority-Based Flow Control........................................................................................................................................
Supported Modes.............................................................................................................................................................72 Fibre Channel over Ethernet.............................................................................................................................................72 Ensuring Robustness in a Converged Ethernet Network..................................................................................................
Displaying VLAN Membership.....................................................................................................................................97 Adding an Interface to a Tagged VLAN.......................................................................................................................97 Adding an Interface to an Untagged VLAN.................................................................................................................
LACP Example................................................................................................................................................................ 120 Link Aggregation Control Protocol (LACP)......................................................................................................................120 Configuration Tasks for Port Channel Interfaces........................................................................................................
LLDP Operation.............................................................................................................................................................. 148 Viewing the LLDP Configuration..................................................................................................................................... 148 Viewing Information Advertised by Adjacent LLDP Agents.............................................................................................
Troubleshooting SSH................................................................................................................................................ 186 Telnet..............................................................................................................................................................................186 VTY Line and Access-Class Configuration......................................................................................................................
Master Selection Criteria.......................................................................................................................................... 210 Configuring Priority and stack-group........................................................................................................................ 210 Cabling Stacked Switches.........................................................................................................................................
Sample Configuration: Uplink Failure Detection.............................................................................................................. 236 23 PMUX Mode of the IO Aggregator............................................................................ 238 I/O Aggregator (IOA) Programmable MUX (PMUX) Mode............................................................................................ 238 Configuring and Changing to PMUX Mode.....................................................
NPIV Proxy Gateway Functionality............................................................................................................................271 NPIV Proxy Gateway: Terms and Definitions............................................................................................................. 271 Configuring an NPIV Proxy Gateway..............................................................................................................................
Displaying Stack Port Statistics................................................................................................................................303 Enabling Buffer Statistics Tracking ................................................................................................................................ 303 Restoring the Factory Default Settings..........................................................................................................................
1 About this Guide This guide describes the supported protocols and software features, and provides configuration instructions and examples, for the Dell Networking M I/O Aggregator running Dell Networking OS version 9.7(0.0). The M I/O Aggregator is installed in a Dell PowerEdge M1000e Enclosure. For information about how to install and perform the initial switch configuration, refer to the Getting Started Guides on the Dell Support website at http://www.dell.
Related Documents For more information about the Dell PowerEdge M I/O Aggregator MXL 10/40GbE Switch IO Module, refer to the following documents: • Dell Networking OS Command Line Reference Guide for the M I/O Aggregator • Dell Networking OS Getting Started Guide for the M I/O Aggregator • Release Notes for the M I/O Aggregator About this Guide 15
2 Before You Start To install the Aggregator in a Dell PowerEdge M1000e Enclosure, use the instructions in the Dell PowerEdge M I/O Aggregator Getting Started Guide that is shipped with the product. The I/O Aggregator (also known as the Aggregator) installs with zero-touch configuration. After you power it on, an Aggregator boots up with default settings and auto-configures with software features enabled.
Stacking mode stack-unit unit iom-mode stack CONFIGURATION mode Dell(conf)#stack-unit 0 iom-mode stack Select this mode to configure Stacking mode CLI commands. For more information on the Stacking mode, see Stacking.
• Internet small computer system interface (iSCSI) optimization. • Internet group management protocol (IGMP) snooping. • Jumbo frames: Ports are set to a maximum MTU of 12,000 bytes by default. • Link tracking: Uplink-state group 1 is automatically configured. In uplink-state group 1, server-facing ports auto-configure as downstream interfaces; the uplink port-channel (LAG 128) auto-configures as an upstream interface.
Link Tracking By default, all server-facing ports are tracked by the operational status of the uplink LAG. If the uplink LAG goes down, the Aggregator loses its connectivity and is no longer operational; all server-facing ports are brought down after the specified defer-timer interval, which is 10 seconds by default. If you have configured a VLAN, you can reduce the defer time by changing the defer-timer value, or remove the delay by using the no defer-timer command.
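The defer interval is adjusted under the uplink-state-group configuration. A minimal sketch, assuming group 1 and an illustrative 20-second value (verify the exact syntax against your release):

```
Dell(conf)#uplink-state-group 1
Dell(conf-uplink-state-group-1)#defer-timer 20
Dell(conf-uplink-state-group-1)#no defer-timer
```

The no defer-timer form removes the delay so that server-facing ports are brought down immediately when the uplink LAG fails.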
For detailed information about how to reconfigure specific software settings, refer to the appropriate chapter. Deploying FN I/O Module This section provides design and configuration guidance for deploying the Dell PowerEdge FN I/O Module (FN IOM). By default the FN IOM is in Standalone Mode.
2. Create a LACP LAG on the upstream top-of-rack switch. 3. Verify the connection. By default, the network ports on the PowerEdge FC-Series servers installed in the FX2 chassis remain down until the uplink port channel on the FN IOM is operational. This behavior is due to Uplink Failure Detection: when upstream connectivity fails, the FN IOM disables the downstream links.
Pluggable media not present Interface index is 1048580 Internet address is not set Mode of IPv4 Address Assignment : NONE DHCP Client-ID :f8b1566efc59 MTU 12000 bytes, IP MTU 11982 bytes LineSpeed auto Flowcontrol rx off tx off ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show interface" counters 01:26:42 Queueing strategy: fifo Input Statistics: 941 packets, 98777 bytes 83 64-byte pkts, 591 over 64-byte pkts, 267 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts
By default on the FN IOM, the external Ethernet ports are preconfigured in port channel 128 with LACP enabled. Port channel 128 is in hybrid (trunk) mode. To bring up the downstream (server) ports on the FN IOM, port channel 128 must be up. Port channel 128 comes up when it is connected to a configured port channel on an upstream switch. To bring up port channel 128, connect any combination of the FN IOM's external Ethernet ports (ports TenGigabitethernet 0/9-12) to the upstream switch.
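Before expecting the server-facing ports to come up, you can confirm that the uplink LAG is operational. A sketch (the exact output format varies by release):

```
Dell#show interfaces port-channel 128 brief
```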
3 Configuration Fundamentals The Dell Networking Operating System (OS) command line interface (CLI) is a text-based interface you can use to configure interfaces and protocols. The CLI is structured in modes for security and management purposes. Different sets of commands are available in each mode, and you can limit user access to modes using privilege levels. In Dell Networking OS, after you enable a command, it is entered into the running configuration file.
• INTERFACE submode is the mode in which you configure Layer 2 protocols and IP services specific to an interface. An interface can be physical (10 Gigabit Ethernet) or logical (Null, port channel, or virtual local area network [VLAN]). • LINE submode is the mode in which you configure the console and virtual terminal lines. NOTE: At any time, entering a question mark (?) displays the available command options.
CLI Command Mode Prompt Access Command VIRTUAL TERMINAL Dell(config-line-vty)# line (LINE Modes) The following example shows how to change the command mode from CONFIGURATION mode to INTERFACE configuration mode. Example of Changing Command Modes Dell(conf)#interface tengigabitethernet 0/2 Dell(conf-if-te-0/2)# The do Command You can enter an EXEC mode command from any CONFIGURATION mode (CONFIGURATION, INTERFACE, and so on.
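For example, the do prefix lets you run an EXEC mode show command without leaving INTERFACE configuration mode (the interface shown is illustrative):

```
Dell(conf-if-te-0/2)#do show interfaces tengigabitethernet 0/2
```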
Obtaining Help Obtain a list of keywords and a brief functional description of those keywords at any CLI mode using the ? or help command: • To list the keywords available in the current mode, enter ? at the prompt or after a keyword. • Entering ? after a keyword lists the keywords that can follow it. The output of this command is the same for the help command.
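For example, entering ? after the show keyword lists the keywords that can follow it (the available keywords depend on the mode and release):

```
Dell#show ?
```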
Short-Cut Key Combination Action CNTL-N Return to more recent commands in the history buffer after recalling commands with CTRL-P or the UP arrow key. CNTL-P Recalls commands, beginning with the last command. CNTL-U Deletes the line. CNTL-W Deletes the previous word. CNTL-X Deletes the line. CNTL-Z Ends continuous scrolling of command outputs. Esc B Moves the cursor back one word. Esc F Moves the cursor forward one word. Esc D Deletes all characters from the cursor to the end of the word.
The except keyword displays text that does not match the specified text. The following example shows this command used in combination with the show stack-unit all stack-ports all pfc details command. Example of the except Keyword Dell(conf)#do show stack-unit all stack-ports all pfc details | except 0 Admin mode is On Admin is enabled Local is enabled Link Delay 65535 pause quantum Dell(conf)# The find keyword displays the output of the show command beginning from the first occurrence of specified text.
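A sketch of the find keyword, using an illustrative search string:

```
Dell#show running-config | find vlan
```

The output begins at the first line containing the string vlan and continues to the end of the show output.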
4 Data Center Bridging (DCB) On an I/O Aggregator, data center bridging (DCB) features are auto-configured in standalone mode. You can display information on DCB operation by using show commands. NOTE: DCB features are not supported on an Aggregator in stacking mode. Supported Modes Standalone, Stacking, PMUX, VLT Ethernet Enhancements in Data Center Bridging The following section describes DCB.
• 802.1Qbb - Priority-based Flow Control (PFC) • 802.1Qaz - Enhanced Transmission Selection (ETS) • 802.1Qau - Congestion Notification • Data Center Bridging Exchange (DCBx) protocol NOTE: In Dell Networking OS version 9.4.0.x, only the PFC, ETS, and DCBx features are supported in data center bridging.
– If the negotiation fails and PFC is enabled on the port, any user-configured PFC input policies are applied. If no PFC dcb-map has been previously applied, the PFC default setting is used (no priorities configured). If you do not enable PFC on an interface, you can enable the 802.3x link-level pause function. By default, the link-level pause is disabled, when you disable DCBx and PFC. If no PFC dcb-map has been applied on the interface, the default PFC settings are used.
Traffic Groupings Description Group bandwidth Percentage of available bandwidth allocated to a priority group. Group transmission selection algorithm (TSA) Type of queue scheduling a priority group uses. In the Dell Networking OS, ETS is implemented as follows: • ETS supports groups of 802.1p priorities that have: – PFC enabled or disabled – No bandwidth limit or no ETS processing • Bandwidth allocated by the ETS algorithm is made available after strict-priority groups are serviced.
Data Center Bridging in a Traffic Flow The following figure shows how DCB handles a traffic flow on an interface. Figure 3. DCB PFC and ETS Traffic Handling Enabling Data Center Bridging DCB is automatically configured when you configure FCoE or iSCSI optimization. Data center bridging supports converged enhanced Ethernet (CEE) in a data center network. DCB is disabled by default. It must be enabled to support CEE.
dcb stack-unit all pfc-buffering pfc-ports 64 pfc-queues 2 NOTE: To save the pfc buffering configuration changes, save the configuration and reboot the system. NOTE: Dell Networking OS Behavior: DCB is not supported if you enable link-level flow control on one or more interfaces. For more information, refer to Flow Control Using Ethernet Pause Frames.
• To change the ETS bandwidth allocation configured for a priority group in a DCB map, do not modify the existing DCB map configuration. Instead, first create a new DCB map with the desired PFC and ETS settings, and apply the new map to the interfaces to override the previous DCB map settings. Then, delete the original dot1p priority-priority group mapping.
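Following that recommendation, a new DCB map is created, configured, and then applied to the interface to override the old settings. The map name, bandwidth split, and priority-to-group mapping below are illustrative, and the command syntax should be verified against your release:

```
Dell(conf)#dcb-map SAN_DCB_MAP_2
Dell(conf-dcbmap-SAN_DCB_MAP_2)#priority-group 0 bandwidth 60 pfc on
Dell(conf-dcbmap-SAN_DCB_MAP_2)#priority-group 1 bandwidth 40 pfc off
Dell(conf-dcbmap-SAN_DCB_MAP_2)#priority-pgid 1 1 1 0 1 1 1 1
Dell(conf-dcbmap-SAN_DCB_MAP_2)#exit
Dell(conf)#interface tengigabitethernet 0/4
Dell(conf-if-te-0/4)#dcb-map SAN_DCB_MAP_2
```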
Configuring Lossless Queues DCB also supports the manual configuration of lossless queues on an interface after you disable PFC mode in a DCB map and apply the map on the interface. The configuration of no-drop queues provides flexibility for ports on which PFC is not needed, but lossless traffic should egress from the interface. Lossless traffic egresses out the no-drop queues. Ingress 802.1p traffic from PFC-enabled peers is automatically mapped to the no-drop egress queues.
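A sketch of enabling lossless queues on an interface; the queue numbers and port are illustrative, and the number of lossless queues supported globally on the switch is limited:

```
Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#pfc no-drop queues 1,2
```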
Data Center Bridging: Default Configuration Before you configure PFC and ETS on a switch, take into account the following default settings: DCB is enabled. PFC and ETS are globally enabled by default. The default dot1p priority-queue assignments are applied as follows: Dell(conf)#do show qos dot1p-queue-mapping Dot1p Priority : 0 1 2 3 4 5 6 7 Queue : 0 0 0 1 2 3 3 3 Dell(conf)# NOTE: Dell Networking OS supports four data queues.
! protocol lldp advertise management-tlv management-address system-name dcbx port-role auto-downstream no shutdown Dell# When DCB is Enabled When an interface receives a DCBx protocol packet, it automatically enables DCB and disables link-level flow control. The dcb-map and flow control configurations are removed as shown in the following example.
Disabling DCB To configure the Aggregator so that all interfaces are DCB disabled and flow control enabled, use the no dcb enable command. dcb enable auto-detect on-next-reload Command Example Dell#dcb enable auto-detect on-next-reload Configuring Priority-Based Flow Control PFC provides a flow control mechanism based on the 802.1p priorities in converged Ethernet traffic received on an interface and is enabled by default when you enable DCB.
pfc mode on By default, PFC mode is on. 5. (Optional) Enter a text description of the input policy. DCB INPUT POLICY mode description text The maximum is 32 characters. 6. Exit DCB input policy configuration mode. DCB INPUT POLICY mode exit 7. Enter interface configuration mode. CONFIGURATION mode interface type slot/port 8. Apply the input policy with the PFC configuration to an ingress interface. INTERFACE mode dcb-policy input policy-name 9.
• In a switch stack, configure all stacked ports with the same PFC configuration. A DCB input policy for PFC applied to an interface may become invalid if you reconfigure dot1p-queue mapping. This situation occurs when the new dot1p-queue assignment exceeds the maximum number (2) of lossless queues supported globally on the switch. In this case, all PFC configurations received from PFC-enabled peers are removed and resynchronized with the peer devices.
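Taken together, the numbered steps above might look like the following sketch. The policy name, description, and port are illustrative, and the dcb-input command that creates the input policy is assumed from the surrounding steps:

```
Dell(conf)#dcb-input pfc-policy-1
Dell(conf-in-policy-pfc-policy-1)#pfc mode on
Dell(conf-in-policy-pfc-policy-1)#description "lossless input policy for FCoE priorities"
Dell(conf-in-policy-pfc-policy-1)#exit
Dell(conf)#interface tengigabitethernet 0/3
Dell(conf-if-te-0/3)#dcb-policy input pfc-policy-1
```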
For example, storage traffic is sensitive to frame loss; interprocess communication (IPC) traffic is latency-sensitive. ETS allows different traffic types to coexist without interruption in the same converged link. NOTE: The IEEE 802.1Qaz, CEE, and CIN versions of ETS are supported. ETS is implemented on an Aggregator as follows: • Traffic in priority groups is assigned to strict-queue or WERR scheduling in a dcb-map and is managed using the ETS bandwidth-assignment algorithm.
* Link strict priority: Allows a flow in any priority group to increase to the maximum link bandwidth. CIN supports only the default dot1p priority-queue assignment in a priority group. Hierarchical Scheduling in ETS Output Policies ETS supports up to three levels of hierarchical scheduling. For example, you can apply ETS output policies with the following configurations: Priority group 1 Assigns traffic to one priority queue with 20% of the link bandwidth and strict-priority scheduling.
• Accepts the DCB configuration from a peer if a DCBx port is in “willing” mode to accept a peer’s DCB settings and then internally propagates the received DCB configuration to its peer ports.
NOTE: On a DCBx port, application priority TLV advertisements are handled as follows: • The application priority TLV is transmitted only if the priorities in the advertisement match the configured PFC priorities on the port. • On auto-upstream and auto-downstream ports: – If a configuration source is elected, the ports send an application priority TLV based on the application priority TLV received on the configuration-source port.
Propagation of DCB Information When an auto-upstream or auto-downstream port receives a DCB configuration from a peer, the port acts as a DCBx client and checks if a DCBx configuration source exists on the switch. • If a configuration source is found, the received configuration is checked against the currently configured values that are internally propagated by the configuration source.
Figure 4. DCBx Sample Topology DCBx Prerequisites and Restrictions The following prerequisites and restrictions apply when you configure DCBx operation on a port: • DCBx requires LLDP in both send (TX) and receive (RX) modes to be enabled on a port interface. If multiple DCBx peer ports are detected on a local DCBx interface, LLDP is shut down.
DCBx Error Messages The following syslog messages appear when an error in DCBx operation occurs. LLDP_MULTIPLE_PEER_DETECTED: DCBx is operationally disabled after detecting more than one DCBx peer on the port interface. LLDP_PEER_AGE_OUT: DCBx is disabled as a result of LLDP timing out on a DCBx peer interface. DSM_DCBx_PEER_VERSION_CONFLICT: A local port expected to receive the IEEE, CIN, or CEE version in a DCBx TLV from a remote peer but received a different, conflicting DCBx version.
Command Output show interface port-type slot/port pfc statistics Displays counters for the PFC frames received and transmitted (by dot1p priority class) on an interface. show interface port-type slot/port pfc {summary | detail} Displays the PFC configuration applied to ingress traffic on an interface, including priorities and link delay. To clear PFC TLV counters, use the clear pfc counters {stack-unit unit-number | tengigabitethernet slot/port} command.
Example of the show interfaces pfc summary Command Dell# show interfaces tengigabitethernet 0/4 pfc summary Interface TenGigabitEthernet 0/4 Admin mode is on Admin is enabled Remote is enabled, Priority list is 4 Remote Willing Status is enabled Local is enabled Oper status is Recommended PFC DCBx Oper status is Up State Machine Type is Feature TLV Tx Status is enabled PFC Link Delay 45556 pause quantams Application Priority TLV Parameters : -------------------------------------FCOE TLV Tx Status is disable
Fields Description • Init: Local PFC configuration parameters were exchanged with peer. • Recommend: Remote PFC configuration parameters were received from peer. • Internally propagated: PFC configuration parameters were received from configuration source. PFC DCBx Oper status Operational status for exchange of PFC configuration on local port: match (up) or mismatch (down).
Fields Description Error Appln Priority TLV pkts Number of Application Priority error packets received.
Example of the show interface ets detail Command Dell# show interfaces tengigabitethernet Interface TenGigabitEthernet 0/4 Max Supported TC Groups is 4 Number of Traffic Classes is 8 Admin mode is on Admin Parameters : -----------------Admin is enabled TC-grp Priority# Bandwidth 0 0,1,2,3,4,5,6,7 100% 1 0% 2 0% 3 0% 4 0% 5 0% 6 0% 7 0% 0/4 ets detail TSA ETS ETS ETS ETS ETS ETS ETS ETS Remote Parameters: ------------------Remote is disabled Local Parameters : -----------------Local is enabled PG-grp Prio
Field Description Remote Parameters ETS configuration on remote peer port, including Admin mode (enabled if a valid TLV was received or disabled), priority groups, assigned dot1p priorities, and bandwidth allocation. If the ETS Admin mode is enabled on the remote port for DCBx exchange, the Willing bit received in ETS TLVs from the remote peer is included.
Admin mode is on Admin Parameters: -------------------Admin is enabled TC-grp Priority# Bandwidth TSA -----------------------------------------------0 0,1,2,3,4,5,6,7 100% ETS 1 2 3 4 5 6 7 8 Stack unit 1 stack port all Max Supported TC Groups is 4 Number of Traffic Classes is 1 Admin mode is on Admin Parameters: -------------------Admin is enabled TC-grp Priority# Bandwidth TSA -----------------------------------------------0 0,1,2,3,4,5,6,7 100% ETS 1 2 3 4 5 6 7 8 Example of the show interface DCBx detai
Peer DCBX Status: ---------------DCBX Operational Version is 0 DCBX Max Version Supported is 255 Sequence Number: 2 Acknowledgment Number: 2 2 Input PFC TLV pkts, 3 Output PFC TLV pkts, 0 Error PFC pkts, 0 PFC Pause Tx pkts, 0 Pause Rx pkts 2 Input PG TLV Pkts, 3 Output PG TLV Pkts, 0 Error PG TLV Pkts 2 Input Appln Priority TLV pkts, 0 Output Appln Priority TLV pkts, 0 Error Appln Priority TLV Pkts Total DCBX Frames transmitted 27 Total DCBX Frames received 6 Total DCBX Frame errors 0 Total DCBX Frames unr
Field Description Peer DCBx Status: DCBx Max Version Supported Highest DCBx version supported in Control TLVs received from peer device. Peer DCBx Status: Sequence Number Sequence number transmitted in Control TLVs received from peer device. Peer DCBx Status: Acknowledgment Number Acknowledgement number transmitted in Control TLVs received from peer device. Total DCBx Frames transmitted Number of DCBx frames sent from local port.
Field Description Appln Priority TLV Pkts QoS dot1p Traffic Classification and Queue Assignment DCB supports PFC, ETS, and DCBx to handle converged Ethernet traffic that is assigned to an egress queue according to the following QoS methods: Honor dot1p dot1p priorities in ingress traffic are used at the port or global switch level. Layer 2 class maps dot1p priorities are used to classify traffic in a class map and apply a service policy to an ingress port to map traffic to egress queues.
Reason Description LLDP Rx/Tx is disabled LLDP is disabled (Admin Mode set to rx or tx only) globally or on the interface. Waiting for Peer Waiting for peer or detected peer connection has aged out. Multiple Peer Detected Multiple peer connections detected on the interface. Version Conflict DCBx version on peer version is different than the local or globally configured DCBx version.
Reason Description • Incompatible TC TSA. Configuring the Dynamic Buffer Method To configure the dynamic buffer capability, perform the following steps: 1. Enable the DCB application. By default, DCB is enabled and link-level flow control is disabled on all interfaces. CONFIGURATION mode S6000-109-Dell(conf)#dcb enable 2. Configure the shared PFC buffer size and the total buffer size. A maximum of 4 lossless queues are supported.
Dell (conf-qos-policy-buffer)# queue 4 pause no-drop buffer-size 128000 pause-threshold 103360 resume-threshold 83520 62 Data Center Bridging (DCB)
5 Dynamic Host Configuration Protocol (DHCP) The Aggregator is auto-configured to operate as a dynamic host configuration protocol (DHCP) client. The DHCP server, DHCP relay agent, and secure DHCP features are not supported. DHCP is an application layer protocol that dynamically assigns IP addresses and other configuration parameters to network end-stations (hosts) based on configuration policies determined by network administrators.
DHCPINFORM A client uses this message to request configuration parameters when it is assigned an IP address manually rather than with DHCP. The server responds by unicast. DHCPNAK A server sends this message to the client if it is not able to fulfill a DHCPREQUEST; for example, if the requested address is already in use. In this case, the client starts the configuration process over by sending a DHCPDISCOVER.
EXEC Privilege [no] debug ip dhcp client events [interface type slot/port] The following example shows the packet- and event-level debug messages displayed for the packet transmissions and state transitions on a DHCP client interface.
Dell# renew dhcp int Ma 0/0 Dell#1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :DHCP RENEW CMD Received in state STOPPED 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :Transitioned to state SELECTING 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_PKT: DHCP DISCOVER sent in Interface Ma 0/0 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_PKT: Received DHCPOFFER packet in Interface Ma 0/0 with Lease-Ip
You can also manually configure an IP address for the VLAN 1 default management interface using the CLI. If no user-configured IP address exists for the default VLAN management interface and the default VLAN IP address is not in the startup configuration, the Aggregator automatically obtains one using DHCP. • The default VLAN 1 with all ports configured as members is the only L3 interface on the Aggregator.
NOTE: Management routes added by the DHCP client include the specific routes to reach a DHCP server in a different subnet and the management route. DHCP Client on a VLAN The following conditions apply on a VLAN that operates as a DHCP client: • The default VLAN 1 with all ports auto-configured as members is the only L3 interface on the Aggregator.
Option, Number, and Description
DHCP Message Type (Option 53): • 1: DHCPDISCOVER • 2: DHCPOFFER • 3: DHCPREQUEST • 4: DHCPDECLINE • 5: DHCPACK • 6: DHCPNACK • 7: DHCPRELEASE • 8: DHCPINFORM
Parameter Request List (Option 55): Clients use this option to tell the server which parameters they require. It is a series of octets where each octet is a DHCP option code.
Renewal Time (Option 58)
• Insert Option 82 into DHCP packets. CONFIGURATION mode int ma 0/0 ip add dhcp relay information-option remote-id For routers between the relay agent and the DHCP server, enter the trust-downstream option. Releasing and Renewing DHCP-based IP Addresses On an Aggregator configured as a DHCP client, you can release a dynamically-assigned IP address without removing the DHCP client operation on the interface. To manually acquire a new IP address from the DHCP server, use the following command.
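The release and renew operations described above can be sketched as follows; the management interface is illustrative, and the abbreviated forms shown earlier in this chapter (for example, renew dhcp int Ma 0/0) are equivalent:

```
Dell#release dhcp interface managementethernet 0/0
Dell#renew dhcp interface managementethernet 0/0
```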
Example of the show ip dhcp lease Command Dell# show ip dhcp Interface Lease-IP Def-Router ServerId State Lease Obtnd At Lease Expires At ========= ======== ========= ======== ===== ============== ================ Ma 0/0 0.0.0.0/0 0.0.0.0 0.0.0.0 INIT -----NA--------NA---Vl 1 10.1.1.254/24 0.0.0.0 Renew Time ========== ----NA---08-26-2011 16:21:50 10.1.1.
6 FIP Snooping This chapter describes about the FIP snooping concepts and configuration procedures. Supported Modes Standalone, PMUX, VLT Fibre Channel over Ethernet Fibre Channel over Ethernet (FCoE) provides a converged Ethernet network that allows the combination of storage-area network (SAN) and LAN traffic on a Layer 2 link by encapsulating Fibre Channel data into Ethernet frames.
• FIP discovery: FCoE end-devices and FCFs are automatically discovered. • Initialization: FCoE devices perform fabric login (FLOGI) and fabric discovery (FDISC) to create a virtual link with an FCoE switch. • Maintenance: A valid virtual link between an FCoE device and an FCoE switch is maintained and the link termination logout (LOGO) functions properly. Figure 7.
• Global ACLs are applied on server-facing ENode ports. • Port-based ACLs are applied on ports directly connected to an FCF and on server-facing ENode ports. • Port-based ACLs take precedence over global ACLs. • FCoE-generated ACLs take precedence over user-configured ACLs. A user-configured ACL entry cannot deny FCoE and FIP snooping frames. The below illustration depicts an Aggregator used as a FIP snooping bridge in a converged Ethernet network. The ToR switch operates as an FCF for FCoE traffic.
The following sections describe how to configure the FIP snooping feature on a switch that functions as a FIP snooping bridge so that it can perform the following functions: • Perform FIP snooping (allowing and parsing FIP frames) globally on all VLANs or on a per-VLAN basis. • Set the FCoE MAC address prefix (FC-MAP) value used by an FCF to assign a MAC address to an FCoE end-device (server ENode or storage device) after a server successfully logs in.
FIP Snooping Prerequisites On an Aggregator, FIP snooping requires the following conditions: • A FIP snooping bridge requires DCBx and PFC to be enabled on the switch for lossless Ethernet connections (refer to Data Center Bridging (DCB)). Dell recommends that you also enable ETS, although it is not required. • DCBx and PFC mode are auto-configured on Aggregator ports and FIP snooping is operational on the port.
4. Enter interface configuration mode to configure the port for FIP snooping links. CONFIGURATION mode interface port-type slot/port By default, a port is configured for bridge-to-ENode links. 5. Configure the port for bridge-to-FCF links. INTERFACE or CONFIGURATION mode fip-snooping port-mode fcf NOTE: All these configurations are available only in PMUX mode.
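The FIP snooping steps above might be combined as in the following sketch. The VLAN ID, FC-MAP value, and FCF-facing port are illustrative:

```
Dell(conf)#feature fip-snooping
Dell(conf)#interface vlan 100
Dell(conf-if-vl-100)#fip-snooping enable
Dell(conf-if-vl-100)#fip-snooping fc-map 0efc00
Dell(conf-if-vl-100)#exit
Dell(conf)#interface tengigabitethernet 0/50
Dell(conf-if-te-0/50)#fip-snooping port-mode fcf
```

As noted above, the fip-snooping port-mode fcf configuration is available only in PMUX mode.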
show fip-snooping system Displays information on the status of FIP snooping on the switch (enabled or disabled), including the number of FCoE VLANs, FCFs, ENodes, and currently active sessions.
show fip-snooping vlan Displays information on the FCoE VLANs on which FIP snooping is enabled.
Field             Description
ENode MAC         MAC address of the ENode.
ENode Interface   Slot/port number of the interface connected to the ENode.
FCF MAC           MAC address of the FCF.
VLAN              VLAN ID number used by the session.
FC-ID             Fibre Channel session ID assigned by the FCF.
show fip-snooping fcf Command Example
Dell# show fip-snooping fcf
FCF MAC            FCF Interface  VLAN  FC-MAP    FKA_ADV_PERIOD  No. of ENodes
54:7f:ee:37:34:40  Po 22          100   0e:fc:00  4000            2
Number of FLOGI                              :1
Number of FDISC                              :16
Number of FLOGO                              :0
Number of Enode Keep Alive                   :4416
Number of VN Port Keep Alive                 :3136
Number of Multicast Discovery Advertisement  :0
Number of Unicast Discovery Advertisement    :0
Number of FLOGI Accepts                      :0
Number of FLOGI Rejects                      :0
Number of FDISC Accepts                      :0
Number of FDISC Rejects                      :0
Number of FLOGO Accepts                      :0
Number of FLOGO Rejects
Number of CVL
Number of FCF Discovery Timeouts
Number of VN Port Session Timeouts
Number of Session failures due to Hardware Config
Field                                         Description
Number of VN Port Keep Alives                 Number of FIP-snooped VN port keep-alive frames received on the interface.
Number of Multicast Discovery Advertisements  Number of FIP-snooped multicast discovery advertisements received on the interface.
Number of Unicast Discovery Advertisements    Number of FIP-snooped unicast discovery advertisements received on the interface.
Number of FLOGI Accepts                       Number of FIP FLOGI accept frames received on the interface.
FIP Snooping Example
The following illustration shows an Aggregator used as a FIP snooping bridge for FCoE traffic between an ENode (server blade) and an FCF (ToR switch). The ToR switch operates as an FCF and FCoE gateway.
Figure 9. FIP Snooping on an Aggregator
In the preceding figure, DCBX and PFC are enabled on the Aggregator (FIP snooping bridge) and on the FCF ToR switch. On the FIP snooping bridge, DCBX is configured as follows:
• A server-facing port is configured for DCBX in an auto-downstream role.
Debugging FIP Snooping
To enable debug messages for FIP snooping events, enter the debug fip-snooping command.
1. Enable FIP snooping debugging for all or a specified event type, where:
• all enables all debugging options.
• acl enables debugging only for ACL-specific events.
• error enables debugging only for error conditions.
• ifm enables debugging only for IFM events.
• info enables debugging only for information events.
• ipc enables debugging only for IPC events.
7 Internet Group Management Protocol (IGMP)
On an Aggregator, IGMP snooping is auto-configured. You can display information on IGMP by using the show ip igmp command.
Multicast is based on identifying many hosts by a single destination IP address. Hosts represented by the same IP address are a multicast group. The Internet Group Management Protocol (IGMP) is a Layer 3 multicast protocol that hosts use to join or leave a multicast group.
– One router on a subnet is elected as the querier. The querier periodically multicasts (to all-multicast-systems address 224.0.0.1) a general query to all hosts on the subnet. – A host that wants to join a multicast group responds with an IGMP membership report that contains the multicast address of the group it wants to join (the packet is addressed to the same group).
Figure 12. IGMP version 3 Membership Report Packet Format
Joining and Filtering Groups and Sources
The following illustration shows how multicast routers maintain the group and source information from unsolicited reports.
• The first unsolicited report from the host indicates that it wants to receive traffic for group 224.1.1.1.
• The host’s second report indicates that it is only interested in traffic from group 224.1.1.1, source 10.11.1.1.
Leaving and Staying in Groups
The following illustration shows how multicast routers track and refresh the state change in response to group-and-source-specific and general queries.
• Host 1 sends a message indicating it is leaving group 224.1.1.1 and that the included filters for 10.11.1.1 and 10.11.1.2 are no longer necessary.
Disabling Multicast Flooding If the switch receives a multicast packet that has an IP address of a group it has not learned (unregistered frame), the switch floods that packet out of all ports on the VLAN. To disable multicast flooding on all VLAN ports, enter the no ip igmp snooping flood command in global configuration mode. When multicast flooding is disabled, unregistered multicast data traffic is forwarded to only multicast router ports on all VLANs.
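Following the description above, a minimal sequence to turn off multicast flooding might look like this (the command is the one named in the paragraph; the surrounding mode-change commands are standard):

Dell#configure
Dell(conf)#no ip igmp snooping flood
Dell(conf)#end

After this, unregistered multicast frames are no longer flooded to all VLAN ports and are forwarded only to multicast router ports.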
Last reporter mode Last report received Group source list Source address 1.1.1.2 Member Ports: Po 1 INCLUDE IS_INCL Interface Group Uptime Expires Router mode Last reporter Last reporter mode Last report received Group source list Source address 1.1.1.2 Member Ports: Po 1 Dell# Vlan 1600 226.0.0.1 00:00:04 Never INCLUDE 1.1.1.
8 Interfaces This chapter describes 100/1000/10000 Mbps Ethernet, 10 Gigabit Ethernet, and 40 Gigabit Ethernet interface types, both physical and logical, and how to configure them with the Dell Networking Operating Software (OS).
• All interfaces are auto-configured as members of all (4094) VLANs and untagged VLAN 1.
• All VLANs are up and can send or receive Layer 2 traffic. You can use the Command Line Interface (CLI) or CMC interface to configure only the required VLANs on a port interface.
Aggregator ports are numbered 1 to 56. Ports 1 to 32 are internal server-facing interfaces. Ports 33 to 56 are external ports numbered from the bottom to the top of the Aggregator.
0 runts, 0 giants, 0 throttles 0 CRC, 0 overrun, 0 discarded Output Statistics: 14856 packets, 2349010 bytes, 0 underruns 0 64-byte pkts, 4357 over 64-byte pkts, 8323 over 127-byte pkts 2176 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 12551 Multicasts, 2305 Broadcasts, 0 Unicasts 0 throttles, 0 discarded, 0 collisions, 0 wreddrops Rate info (interval 299 seconds): Input 00.00 Mbits/sec, 0 packets/sec, 0.00% of line-rate Output 00.00 Mbits/sec, 0 packets/sec, 0.
To confirm that the interface is enabled, use the show config command in INTERFACE mode. To leave INTERFACE mode, use the exit command or end command. You cannot delete a physical interface.
The management IP address on the D-fabric provides dedicated management access to the system.
The switch interfaces support Layer 2 traffic over the 10-Gigabit Ethernet interfaces. These interfaces can also become part of virtual interfaces such as VLANs or port channels.
The Aggregator supports the management Ethernet interface as well as the standard interface on any front-end port. You can use either method to connect to the system.
Configuring a Management Interface
On the Aggregator, the dedicated management interface provides management access to the system. You can configure this interface with Dell Networking OS, but the configuration options on this interface are limited.
To view the configured static routes for the management port, use the show ip management-route command in EXEC privilege mode. Dell#show ip management-route all Destination ----------1.1.1.0/24 172.16.1.0/24 172.31.1.0/24 Gateway ------172.31.1.250 172.31.1.
VLANs and Port Tagging
To add an interface to a VLAN, it must be in Layer 2 mode. After you place an interface in Layer 2 mode, it is automatically placed in the default VLAN. Dell Networking OS supports IEEE 802.1Q tagging at the interface level to filter traffic. When you enable tagging, a tag header is added to the frame after the destination and source MAC addresses. That information is preserved as the frame moves through the network.
vlan-id specifies an untagged VLAN number. Range: 2-4094 vlan-range specifies a range of untagged VLANs. Separate VLAN IDs with a comma; specify a VLAN range with a dash; for example, vlan tagged 3,5-7. When you delete a VLAN (using the no vlan vlan-id command), any interfaces assigned to the VLAN are assigned to the default VLAN as untagged interfaces. If you configure additional VLAN membership and save it to the startup configuration, the new VLAN configuration is activated following a system reboot.
T Po128(Te 0/50-51) T Te 1/7 Dell(conf-if-te-1/7) Except for hybrid ports, only a tagged interface can be a member of multiple VLANs. You can assign hybrid ports to two VLANs if the port is untagged in one VLAN and tagged in all others. NOTE: When you remove a tagged interface from a VLAN (using the no vlan tagged command), it remains tagged only if it is a tagged interface in another VLAN.
vlan untagged 20 no shutdown Dell(conf-if-te-0/1)#end Dell# 4. Initialize the port-channel with configurations such as admin up, portmode, and switchport. Dell#configure Dell(conf)#int port-channel 128 Dell(conf-if-po-128)#portmode hybrid Dell(conf-if-po-128)#switchport 5. Configure the tagged VLANs 10 through 15 and untagged VLAN 20 on this port-channel. Dell(conf-if-po-128)#vlan tagged 10-15 Dell(conf-if-po-128)# Dell(conf-if-po-128)#vlan untagged 20 6.
Port Channel Interfaces
On an Aggregator, port channels are auto-configured as follows:
• All 10GbE uplink interfaces (ports 33 to 56) are auto-configured to belong to the same 10GbE port channel (LAG 128).
• Server-facing interfaces (ports 1 to 32) auto-configure in LAGs (1 to 127) according to the NIC teaming configuration on the connected servers.
Port channel interfaces support link aggregation, as described in IEEE Standard 802.3ad.
operational interface in the port channel is a TenGigabit Ethernet interface, all interfaces at 1000 Mbps are kept up, and all 100/1000/10000 interfaces that are not set to 1000 Mbps speed or auto negotiate are disabled. 1GbE and 10GbE Interfaces in Port Channels When both Gigabit and TenGigabitEthernet interfaces are added to a port channel, the interfaces must share a common speed. When interfaces have a configured speed different from the port channel speed, the software disables those interfaces.
Minimum number of links to bring Port-channel up is 1 Internet address is not set Mode of IP Address Assignment : NONE DHCP Client-ID :lag1001ec9f10358 MTU 12000 bytes, IP MTU 11982 bytes LineSpeed 50000 Mbit Members in this channel: Te 1/2(U) Te 1/3(U) Te 1/4(U) Te 1/5(U) Te 1/7(U) ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show interface" counters 00:13:56 Queueing strategy: fifo Input Statistics: 836 packets, 108679 bytes 412 64-byte pkts, 157 over 64-byte pkts, 135 over 127-byte pkts 132 ove
The interface range prompt offers the interface (with slot and port information) for valid interfaces. The maximum size of an interface range prompt is 32 characters. If the prompt size exceeds this maximum, it displays (...) at the end of the output.
NOTE: Non-existing interfaces are excluded from the interface range prompt.
NOTE: When creating an interface range, interfaces appear in the order they were entered and are not sorted.
Commas
The following example shows how to use commas to add different interface types to the range, enabling all Ten Gigabit Ethernet interfaces in the range 0/1 to 0/23 and both Ten Gigabit Ethernet interfaces 1/1 and 1/2.
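A sketch of such a comma-separated range command follows. The interface numbers come from the description above, but the exact prompt string the system displays is indicative only:

Dell(conf)#interface range tengigabitethernet 0/1 - 23 , tengigabitethernet 1/1 - 2
Dell(conf-if-range-te-0/1-23,te-1/1-2)#no shutdown

Commands entered at the range prompt, such as no shutdown here, are applied to every interface in the range.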
Input overrun:    0          0 pps
Output underruns: 0          0 pps
Output throttles: 0          0 pps

m - Change mode                  c - Clear screen
l - Page up                      a - Page down
T - Increase refresh interval    t - Decrease refresh interval
q - Quit

Maintenance Using TDR
The time domain reflectometer (TDR) is supported on all Dell Networking switch/routers. TDR is an assistance tool to resolve link issues that helps detect obvious open or short conditions within any of the four copper pairs.
The globally assigned 48-bit Multicast address 01-80-C2-00-00-01 is used to send and receive pause frames. To allow full duplex flow control, stations implementing the pause operation instruct the MAC to enable reception of frames with a destination address equal to this multicast address. The pause frame is defined by IEEE 802.3x and uses MAC Control frames to carry the pause commands. Ethernet pause frames are supported on full duplex only.
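To control the pause behavior described above on a given port, flow control is configured per direction in INTERFACE mode. A minimal sketch follows; the interface number is illustrative, and rx on with tx off matches the default shown in the show interface output elsewhere in this guide:

Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#flowcontrol rx on tx off

With rx on, the port honors pause frames it receives; with tx off, it does not originate pause frames itself.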
The following table lists the various Layer 2 overheads found in Dell Networking OS and the number of bytes.
Table 8. Difference between Link MTU and IP MTU
Layer 2 Overhead                        Difference between Link MTU and IP MTU
Ethernet (untagged)                     18 bytes
VLAN Tag                                22 bytes
Untagged Packet with VLAN-Stack Header  22 bytes
Tagged Packet with VLAN-Stack Header    26 bytes
Link MTU and IP MTU considerations for port channels and VLANs are as follows.
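As a numeric illustration of the table above: to carry an IP MTU of 1500 bytes in a tagged VLAN frame, the link MTU must be at least 1500 + 22 = 1522 bytes. A sketch of setting that value on a port (interface number illustrative; verify the mtu command range for your release):

Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#mtu 1522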
show interfaces [interface] status
2. Determine the remote interface status.
EXEC mode
EXEC Privilege mode
[Use the command on the remote system that is equivalent to the above command.]
3. Access CONFIGURATION mode.
EXEC Privilege mode
config
4. Access the port.
CONFIGURATION mode
interface interface slot/port
5. Set the local port speed.
INTERFACE mode
speed {100 | 1000 | 10000 | auto}
6. Optionally, set full- or half-duplex.
INTERFACE mode
duplex {half | full}
7.
In the above example, several ports display “Auto” in the speed field, including port 0/1. In the following example, the speed of port 0/1 is set to 100 Mbps, and then its auto-negotiation is disabled.
duplex half interfaceconfig mode Supported CLI not available CLI not available Invalid Input error- CLI not available duplex full interfaceconfig mode Supported CLI not available CLI not available Invalid Input error-CLI not available Setting Auto-Negotiation Options: Dell(conf)# int tengig 0/1 Dell(conf-if-te-0/1)#neg auto Dell(conf-if-autoneg)# ? end Exit from configuration mode exit Exit from autoneg configuration mode mode Specify autoneg mode no Negate a command or set its defaults show Sho
Name: TenGigabitEthernet 13/3 802.1QTagged: True Vlan membership: Vlan 2 --More-- Clearing Interface Counters The counters in the show interfaces command are reset by the clear counters command. This command does not clear the counters captured by any SNMP program. To clear the counters, use the following command in EXEC Privilege mode: 1. Clear the counters used in the show interface commands for all VLANs, and physical interfaces or selected ones.
You can use the following CLI commands to enable or disable processing of received RFI events: Dell(conf-if-te-1/3)#remote-fault-signaling rx ? on Enable off Disable The default is "remote-fault-signaling rx on".
9 iSCSI Optimization
An Aggregator enables internet small computer system interface (iSCSI) optimization with default iSCSI parameter settings (refer to Default iSCSI Optimization Values) and is auto-provisioned to support:
iSCSI Optimization: Operation
To display information on iSCSI configuration and sessions, use show commands. iSCSI optimization enables quality-of-service (QoS) treatment for iSCSI traffic.
The following figure shows iSCSI optimization between servers in an M1000e enclosure and a storage array in which an Aggregator connects installed servers (iSCSI initiators) to a storage array (iSCSI targets) in a SAN network. iSCSI optimization running on the Aggregator is configured to use dot1p priority-queue assignments to ensure that iSCSI traffic in these sessions receives priority treatment when forwarded on Aggregator hardware. Figure 16.
• Target’s TCP Port If no iSCSI traffic is detected for a session during a user-configurable aging period, the session data clears. Detection and Auto configuration for Dell EqualLogic Arrays The iSCSI optimization feature includes auto-provisioning support with the ability to detect directly connected Dell EqualLogic storage arrays and automatically reconfigure the switch to enhance storage traffic flows.
show iscsi sessions detailed [session isid] Displays detailed information on active iSCSI sessions on the switch. To display detailed information on a specified iSCSI session, enter the session’s iSCSI ID.
show run iscsi Displays all globally-configured non-default iSCSI settings in the current Dell Networking OS session.
10 Isolated Networks for Aggregators
An isolated network is an environment in which servers can communicate only with the uplink interfaces and not with each other, even though they are part of the same VLAN. If the servers in the same chassis need to communicate with each other, they require nonisolated network connectivity between them, or their traffic must be routed in the ToR switch. Isolated networks can be enabled on a per-VLAN basis.
11 Link Aggregation Unlike IOA Automated modes (Standalone and VLT modes), the IOA Programmable MUX (PMUX) can support multiple uplink LAGs. You can provision multiple uplink LAGs. The I/O Aggregator auto-configures with link aggregation groups (LAGs) as follows: • All uplink ports are automatically configured in a single port channel (LAG 128).
Uplink LAG When the Aggregator power is on, all uplink ports are configured in a single LAG (LAG 128). Server-Facing LAGs Server-facing ports are configured as individual ports by default. If you configure a server NIC in standalone, stacking, or VLT mode for LACP-based NIC teaming, server-facing ports are automatically configured as part of dynamic LAGs. The LAG range 1 to 127 is reserved for server-facing LAGs.
LACP Example
The following illustration shows how LACP operates in an Aggregator stack by auto-configuring the uplink LAG 128 for the connection to a top of rack (ToR) switch and a server-facing LAG for the connection to an installed server that you configured for LACP-based NIC teaming.
Figure 17.
Configuration Tasks for Port Channel Interfaces
To configure a port channel (LAG), use commands similar to those for physical interfaces. By default, no port channels are configured in the startup configuration. In VLT mode, port channel configurations are allowed in the startup configuration.
To add a physical interface to a port channel, use the following commands.
1. Add the interface to a port channel.
INTERFACE PORT-CHANNEL mode
channel-member interface
This command is applicable only in PMUX mode. The interface variable is the physical interface type and slot/port information.
2. Double check that the interface was added to the port channel.
When more than one interface is added to a Layer 2-port channel, Dell Networking OS selects one of the active interfaces in the port channel to be the primary port. The primary port replies to flooding and sends protocol data units (PDUs). An asterisk in the show interfaces port-channel brief command indicates the primary port. As soon as a physical interface is added to a port channel, the properties of the port channel determine the properties of the physical interface.
shutdown Dell(conf-if-po-3)# Configuring the Minimum Oper Up Links in a Port Channel You can configure the minimum links in a port channel (LAG) that must be in “oper up” status to consider the port channel to be in “oper up” status. To set the “oper up” status of your links, use the following command. • Enter the number of links in a LAG that must be in “oper up” status. INTERFACE mode minimum-links number The default is 1.
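For example, to require at least two member links to be up before the port channel is reported up (the port-channel number is illustrative):

Dell#configure
Dell(conf)#interface port-channel 128
Dell(conf-if-po-128)#minimum-links 2

With this setting, LAG 128 goes to the oper down state whenever fewer than two member links are active.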
• Delete a port channel. CONFIGURATION mode no interface portchannel channel-number • Disable a port channel. shutdown When you disable a port channel, all interfaces within the port channel are operationally down also. Configuring Auto LAG You can enable or disable auto LAG on the server-facing interfaces. By default, auto LAG is enabled. This functionality is supported on the Aggregator in Standalone, Stacking, and VLT modes. To configure auto LAG, use the following commands: 1.
Current address is f8:b1:56:07:1d:8e Server Port AdminState is Up Pluggable media not present Interface index is 15274753 Internet address is not set Mode of IPv4 Address Assignment : NONE DHCP Client-ID :f8b156071d8e MTU 12000 bytes, IP MTU 11982 bytes LineSpeed auto Auto-lag is disabled Flowcontrol rx on tx off ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show interface" counters 00:12:53 Queueing strategy: fifo Input Statistics: 0 packets, 0 bytes 0 64-byte pkts, 0 over 64-byte pkts, 0 over 127
the uplink LAG bundle to be activated only if a certain number of member interface links is also in the up state. If you enable this setting, the uplink LAG bundle is brought up only when the specified minimum number of links are up and the LAG bundle is moved to the down state when the number of active links in the LAG becomes less than the specified number of interfaces. By default, the uplink LAG 128 interface is activated when at least one member interface is up.
Preserving LAG and Port Channel Settings in Nonvolatile Storage Use the write memory command on an I/O Aggregator, which operates in either standalone or PMUX modes, to save the LAG port channel configuration parameters. This behavior enables the port channels to be brought up because the configured interface attributes are available in the system database during the booting of the device.
Table 11.
92 64-byte pkts, 0 over 64-byte pkts, 90 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 182 Multicasts, 0 Broadcasts 0 runts, 0 giants, 0 throttles 0 CRC, 0 overrun, 0 discarded Output Statistics: 2999 packets, 383916 bytes, 0 underruns 5 64-byte pkts, 214 over 64-byte pkts, 2727 over 127-byte pkts 53 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 2904 Multicasts, 95 Broadcasts, 0 Unicasts 0 throttles, 0 discarded, 0 collisions, 0 wreddrops Rate info (i
Actor Admin: State ADEHJLMP Key 128 Priority 32768 Oper: State ADEHJLMP Key 128 Priority 32768 Partner is not present Port Te 0/47 is disabled, LACP is disabled and mode is lacp Port State: Bundle Actor Admin: State ADEHJLMP Key 128 Priority 32768 Oper: State ADEHJLMP Key 128 Priority 32768 Partner is not present Port Te 0/48 is disabled, LACP is disabled and mode is lacp Port State: Bundle Actor Admin: State ADEHJLMP Key 128 Priority 32768 Oper: State ADEHJLMP Key 128 Priority 32768 Partner is not presen
Hardware address is 00:01:e8:e1:e1:c1, Current address is 00:01:e8:e1:e1:c1 Interface index is 1107755009 Minimum number of links to bring Port-channel up is 1 Internet address is not set Mode of IP Address Assignment : NONE DHCP Client-ID :lag10001e8e1e1c1 MTU 12000 bytes, IP MTU 11982 bytes LineSpeed 10000 Mbit Members in this channel: Te 0/12(U) ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show interface" counters 00:12:41 Queueing strategy: fifo Input Statistics: 112 packets, 18161 bytes 0 64-
Multiple Uplink LAGs with 10G Member Ports The following sample commands configure multiple dynamic uplink LAGs with 10G member ports based on LACP. 1. Bring up all the ports. Dell#configure Dell(conf)#int range tengigabitethernet 0/1 - 56 Dell(conf-if-range-te-0/1-56)#no shutdown 2. Associate the member ports into LAG-10 and 11.
1001    Active    T Po11(Te 0/6)
Dell#
5. Show LAG member ports utilization.
Dell(conf)#int fortygige 0/49 Dell(conf-if-fo-0/49)#port-channel-protocol lacp Dell(conf-if-fo-0/49-lacp)#port-channel 21 mode active Dell(conf-if-fo-0/49-lacp)# Dell(conf-if-fo-0/49)#no shut 4. Configure the port mode, VLAN, and so forth on the port-channel.
12 Layer 2 The Aggregator supports CLI commands to manage the MAC address table: • Clearing the MAC Address Entries • Displaying the MAC Address Table The Aggregator auto-configures with support for Network Interface Controller (NIC) Teaming. NOTE: On an Aggregator, all ports are configured by default as members of all (4094) VLANs, including the default VLAN. All VLANs operate in Layer 2 mode.
• vlan: deletes all entries for the specified VLAN. Displaying the MAC Address Table To display the MAC address table, use the following command. • Display the contents of the MAC address table. EXEC Privilege mode NOTE: This command is available only in PMUX mode. show mac-address-table [address | aging-time [vlan vlan-id]| count | dynamic | interface | static | vlan] – address: displays the specified entry. – aging-time: displays the configured aging-time.
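As examples of the keyword options listed above (PMUX mode; the VLAN ID is illustrative):

Dell#show mac-address-table count
Dell#show mac-address-table vlan 10

The first form reports a summary count of learned entries; the second restricts the table display to entries on VLAN 10.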
Figure 18. Redundant NOCs with NIC Teaming
MAC Address Station Move
When you use NIC teaming, consider that the server MAC address is originally learned on Port 0/1 of the switch (see the following figure). If the NIC fails, the same MAC address is learned on Port 0/5 of the switch. The MAC address is disassociated from one port and reassociated with another in the ARP table; in other words, the ARP entry is “moved”. The Aggregator is auto-configured to support MAC address station moves. Figure 19.
MAC Move Optimization Station-move detection takes 5000ms because this is the interval at which the detection algorithm runs.
13 Link Layer Discovery Protocol (LLDP)
Link layer discovery protocol (LLDP) advertises connectivity and management from the local station to the adjacent stations on an IEEE 802 LAN. LLDP facilitates multi-vendor interoperability by using standard management tools to discover and make available a physical topology for network management. The Dell Networking operating software implementation of LLDP is based on IEEE standard 802.1AB.
There are five types of TLVs (as shown in the below table). All types are mandatory in the construction of an LLDPDU except Optional TLVs. You can configure the inclusion of individual Optional TLVs. Table 12. Type, Length, Value (TLV) Types Type TLV Description 0 End of LLDPDU Marks the end of an LLDPDU. 1 Chassis ID The Chassis ID TLV is a mandatory TLV that identifies the chassis containing the IEEE 802 LAN station associated with the transmitting LLDP agent.
CONFIGURATION versus INTERFACE Configurations All LLDP configuration commands are available in PROTOCOL LLDP mode, which is a sub-mode of the CONFIGURATION mode and INTERFACE mode. • Configurations made at the CONFIGURATION level are global; that is, they affect all interfaces on the system. • Configurations made at the INTERFACE level affect only the specific interface; they override CONFIGURATION level configurations.
To undo an LLDP configuration, precede the relevant command with the keyword no. Advertising TLVs You can configure the system to advertise TLVs out of all interfaces or out of specific interfaces. • If you configure the system globally, all interfaces send LLDPDUs with the specified TLVs. • If you configure an interface, only the interface sends LLDPDUs with the specified TLVs.
Figure 22. Configuring LLDP Optional TLVs The Dell Networking Operating System (OS) supports the following optional TLVs: Management TLVs, IEEE 802.1 and 802.3 organizationally specific TLVs, and TIA-1057 organizationally specific TLVs. Management TLVs A management TLV is an optional TLVs sub-type. This kind of TLV contains essential management information about the sender. Organizationally Specific TLVs A professional organization or a vendor can define organizationally specific TLVs.
Type TLV Description 5 System name A user-defined alphanumeric string that identifies the system. 6 System description A user-defined alphanumeric string that identifies the system. 7 System capabilities Identifies the chassis as one or more of the following: repeater, bridge, WLAN Access Point, Router, Telephone, DOCSIS cable device, end station only, or other. 8 Management address Indicates the network address of the management interface.
LLDP-MED Capabilities TLV
The LLDP-MED capabilities TLV communicates the types of TLVs that the endpoint device and the network connectivity device support. LLDP-MED network connectivity devices must transmit the Network Policies TLV.
• The value of the LLDP-MED capabilities field in the TLV is a 2-octet bitmap; each bit represents an LLDP-MED capability (as shown in the following table).
• The possible values of the LLDP-MED device type are shown in the following table.
• DSCP value
An integer represents the application type (the Type integer shown in the following table), which indicates a device function for which a unique network policy is defined. An individual LLDP-MED network policy TLV is generated for each application type that you specify with the CLI (refer to Advertising TLVs).
• Power Source — there are two possible power sources: primary and backup. The Dell Networking system is a primary power source, which corresponds to a value of 1, based on the TIA-1057 specification. • Power Priority — there are three possible priorities: Low, High, and Critical. On Dell Networking systems, the default power priority is High, which corresponds to a value of 2 based on the TIA-1057 specification. You can configure a different power priority through the CLI.
no shutdown
R1(conf-if-te-0/3)#protocol lldp
R1(conf-if-te-0/3-lldp)#show config
!
protocol lldp
R1(conf-if-te-0/3-lldp)#
Viewing Information Advertised by Adjacent LLDP Agents
To view brief information about adjacent devices or to view all the information that neighbors are advertising, use the following commands.
• Display brief information about adjacent devices.
show lldp neighbors
• Display all of the information that neighbors are advertising.
Next packet will be sent after 4 seconds The neighbors are given below: ----------------------------------------------------------------------Remote Chassis ID Subtype: Mac address (4) Remote Chassis ID: 00:00:c9:ad:f6:12 Remote Port Subtype: Mac address (3) Remote Port ID: 00:00:c9:ad:f6:12 Local Port ID: TenGigabitEthernet 0/3 Configuring LLDPDU Intervals LLDPDUs are transmitted periodically; the default interval is 30 seconds. To configure LLDPDU intervals, use the following command.
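A sketch of changing the transmit interval from the default 30 seconds follows. The hello keyword is assumed from the PROTOCOL LLDP command set used elsewhere in this chapter; confirm the keyword and its range in the command reference for your release:

R1(conf)#protocol lldp
R1(conf-lldp)#hello 10

After this change, the agent transmits LLDPDUs every 10 seconds instead of every 30.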
no disable R1(conf-lldp)#multiplier ? <2-10> Multiplier (default=4) R1(conf-lldp)#multiplier 5 R1(conf-lldp)#show config ! protocol lldp advertise dot1-tlv port-protocol-vlan-id port-vlan-id advertise dot3-tlv max-frame-size advertise management-tlv system-capabilities system-description multiplier 5 no disable R1(conf-lldp)#no multiplier R1(conf-lldp)#show config ! protocol lldp advertise dot1-tlv port-protocol-vlan-id port-vlan-id advertise dot3-tlv max-frame-size advertise management-tlv system-capabilit
Figure 27. The debug lldp detail Command — LLDPDU Packet Dissection
Relevant Management Objects
Dell Networking OS supports all IEEE 802.1AB MIB objects. The following tables list the objects associated with:
• Received and transmitted TLVs
• LLDP configuration on the local agent
• IEEE 802.1AB Organizationally Specific TLVs
• Received and transmitted LLDP-MED TLVs
Table 17.
MIB Object Category LLDP Statistics LLDP Variable LLDP MIB Object Description mibMgmtAddrInstanceTxEnable lldpManAddrPortsTxEnable The management addresses defined for the system and the ports through which they are enabled for transmission. statsAgeoutsTotal lldpStatsRxPortAgeoutsTotal Total number of times that a neighbor’s information is deleted on the local system due to an rxInfoTTL timer expiration.
TLV Type TLV Name TLV Variable management address length management address subtype management address interface numbering subtype interface number OID System LLDP MIB Object Remote lldpRemSysCapEnabled Local lldpLocManAddrLen Remote lldpRemManAddrLen Local lldpLocManAddrSubtype Remote lldpRemManAddrSubtype Local lldpLocManAddr Remote lldpRemManAddr Local lldpLocManAddrIfSubtype Remote lldpRemManAddrIfSubtyp e Local lldpLocManAddrIfId Remote lldpRemManAddrIfId Local lldpLoc
Table 20.
TLV Sub-Type TLV Name TLV Variable System LLDP-MED MIB Object 4 Extended Power via MDI Power Device Type Local lldpXMedLocXPoEDevice Type Remote lldpXMedRemXPoEDevice Type Local lldpXMedLocXPoEPSEPo werSource Power Source lldpXMedLocXPoEPDPow erSource Remote lldpXMedRemXPoEPSEP owerSource lldpXMedRemXPoEPDPo werSource Power Priority Local lldpXMedLocXPoEPDPow erPriority lldpXMedLocXPoEPSEPor tPDPriority Remote lldpXMedRemXPoEPSEP owerPriority lldpXMedRemXPoEPDPo werPriority Power Value
14 Object Tracking
IPv4 or IPv6 object tracking is available on Dell Networking OS. Object tracking allows Dell Networking OS client processes, such as virtual router redundancy protocol (VRRP), to monitor tracked objects (for example, interface or link status) and take appropriate action when the state of an object changes.
NOTE: In Dell Networking OS release version 9.7(0.0), object tracking is supported only on VRRP.
Figure 28. Object Tracking Example When you configure a tracked object, such as an IPv4/IPv6 route or interface, you specify an object number to identify the object. Optionally, you can also specify: • UP and DOWN thresholds used to report changes in a route metric. • A time delay before changes in a tracked object’s state are reported to a client. Track Layer 2 Interfaces You can create an object to track the line-protocol state of a Layer 2 interface.
A tracked route matches a route in the routing table only if the exact address and prefix length match an entry in the routing table. For example, when configured as a tracked route, 10.0.0.0/24 does not match the routing table entry 10.0.0.0/8. If no route-table entry has the exact address and prefix length, the tracked route is considered to be DOWN.
To remove object tracking on a Layer 2 interface, use the no track object-id command. To configure object tracking on the status of a Layer 2 interface, use the following commands. 1. Configure object tracking on the line-protocol state of a Layer 2 interface. CONFIGURATION mode track object-id interface interface line-protocol Valid object IDs are from 1 to 65535. 2. (Optional) Configure the time delay used before communicating a change in the status of a tracked interface.
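The two steps above can be sketched as follows; the object ID, interface name, and delay values here are placeholders, not taken from this guide:

```
Dell(conf)#track 1 interface tengigabitethernet 0/3 line-protocol
Dell(conf-track-1)#delay up 20 down 10
```

With this configuration, the tracked object reports UP 20 seconds after the interface line protocol comes up and DOWN 10 seconds after it goes down, damping flap notifications to client processes such as VRRP.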
• The status of an IPv6 interface is UP only if the Layer 2 status of the interface is UP and the interface has a valid IPv6 address. • The Layer 3 status of an IPv6 interface goes DOWN when its Layer 2 status goes down (for a Layer 3 VLAN, all VLAN ports must be down) or the IPv6 address is removed from the routing table. To remove object tracking on a Layer 3 IPv4/IPv6 interface, use the no track object-id command.
Track an IPv4/IPv6 Route You can create an object that tracks the reachability or metric of an IPv4 or IPv6 route. You specify the route to be tracked by its address and prefix-length values. Optionally, for an IPv4 route, you can enter a VRF instance name if the route is part of a VPN routing and forwarding (VRF) table. The next-hop address is not part of the definition of a tracked IPv4/IPv6 route.
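A minimal sketch of a tracked route, reusing the 10.0.0.0/24 prefix from the earlier example (the object ID and delay values are illustrative):

```
Dell(conf)#track 2 ip route 10.0.0.0/24 reachability
Dell(conf-track-2)#delay up 5 down 5
```

The object is UP only while a routing-table entry exactly matches the configured address and prefix length.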
• Display the configuration and status of currently tracked Layer 2 or Layer 3 interfaces, and IPv4 or IPv6 routes. show track [object-id [brief] | interface [brief] | ip route [brief] | resolution | brief] • Use the show running-config track command to display the tracking configuration of a specified object or all objects that are currently configured on the router. show running-config track [object-id] Examples of Viewing Tracked Objects Dell#show track Track 1 IP route 23.0.0.
15 Port Monitoring The Aggregator supports user-configured port monitoring. See Configuring Port Monitoring for the configuration commands to use. Port monitoring copies all incoming or outgoing packets on one port and forwards (mirrors) them to another port. The source port is the monitored port (MD) and the destination port is the monitoring port (MG). Supported Modes Standalone, PMUX, VLT, Stacking Configuring Port Monitoring To configure port monitoring, use the following commands. 1.
In the following example, the host and server are exchanging traffic which passes through the uplink interface 1/1. Port 1/1 is the monitored port and port 1/42 is the destination port, which is configured to only monitor traffic received on tengigabitethernet 1/1 (host-originated traffic). Figure 29. Port Monitoring Example Important Points to Remember • Port monitoring is supported on physical ports only; virtual local area network (VLAN) and port-channel interfaces do not support port monitoring.
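A monitoring session matching the scenario in Figure 29 might be configured as follows (the session number is chosen arbitrarily):

```
Dell(conf)#monitor session 0
Dell(conf-mon-sess-0)#source tengigabitethernet 1/1 destination tengigabitethernet 1/42 direction rx
```

The direction rx keyword limits mirroring to traffic received on the monitored port, as described above.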
NOTE: There is no limit to the number of monitoring sessions per system, provided that there are only four destination ports per port-pipe. If each monitoring session has a unique destination port, the maximum number of sessions is four per port-pipe. The Aggregator supports multiple source-destination statements in a monitor session, but there may only be one destination port in a monitoring session.
In the example below, 0/25 and 0/26 belong to Port-pipe 1. This port-pipe has the same restriction of only four destination ports, new or used.
16 Security The Aggregator provides many security features. This chapter describes several ways to provide access security to the Dell Networking system. For details about all the commands described in this chapter, see the Security chapter in the Dell PowerEdge Command Line Reference Guide for the M I/O Aggregator. Supported Modes Standalone, PMUX, VLT, Stacking NOTE: You can also perform some of the configurations using the Web GUI - Dell Blade IO Manager.
AAA Authentication Dell Networking OS supports a distributed client/server system implemented through authentication, authorization, and accounting (AAA) to help secure networks against unauthorized access.
• none: no authentication. • radius: use the RADIUS servers configured with the radius-server host command. • tacacs+: use the TACACS+ servers configured with the tacacs-server host command. 2. Enter LINE mode. CONFIGURATION mode line {aux 0 | console 0 | vty number [... end-number]} 3. Assign a method-list-name or the default list to the terminal line.
Example of Enabling Authentication from the RADIUS Server Dell(config)# aaa authentication enable default radius tacacs The RADIUS and TACACS+ servers must be properly set up for this. Dell(config)# radius-server host x.x.x.x key Dell(config)# tacacs-server host x.x.x.x key To use local authentication for enable secret on the console, while using remote authentication on VTY lines, issue the following commands.
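One possible sketch of that split setup, assuming a named method list applied only to the VTY lines (the list name is a placeholder):

```
Dell(config)#aaa authentication enable mymethodlist radius tacacs+
Dell(config)#line vty 0 9
Dell(config-line-vty)#enable authentication mymethodlist
```

Because no method list is applied to the console line, the console continues to use the default method (the locally configured enable secret), while VTY sessions authenticate against the remote servers.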
You can configure passwords to control access to the box and assign different privilege levels to users. The Dell Networking OS supports the use of passwords when you log in to the system and when you enter the enable command. If you move between privilege levels, you are prompted for a password if you move to a higher privilege level. Configuration Task List for Privilege Levels The following list has the configuration tasks for privilege levels and passwords.
– password: Enter a string. To change only the password for the enable command, configure only the password parameter. To view the configuration for the enable secret command, use the show running-config command in EXEC Privilege mode. In custom-configured privilege levels, the enable command is always available. Regardless of your current privilege level, you can enter the enable 15 command to access and configure all CLIs.
• reset: return the command to its default privilege mode. To view the configuration, use the show running-config command in EXEC Privilege mode. The following example shows a configuration to allow a user john to view only EXEC mode commands and all snmp-server commands. Because the snmp-server commands are enable level commands and, by default, found in CONFIGURATION mode, also assign the launch command for CONFIGURATION mode, configure, to the same privilege level as the snmp-server commands.
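A sketch of that configuration for the user john (the level number, password, and exact keyword order are illustrative; verify them against the command reference):

```
Dell(conf)#privilege exec level 8 configure
Dell(conf)#privilege configure level 8 snmp-server
Dell(conf)#username john password 0 john-secret privilege 8
```

At level 8, john can enter EXEC mode commands, launch CONFIGURATION mode with configure, and run the snmp-server commands, but no other configuration commands.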
Specifying LINE Mode Password and Privilege You can specify a password authentication of all users on different terminal lines. The user’s privilege level is the same as the privilege level assigned to the terminal line, unless a more specific privilege level is assigned to the user. To specify a password for the terminal line, use the following commands. • Configure a custom privilege level for the terminal lines. LINE mode privilege level level – level level: The range is from 0 to 15.
Transactions between the RADIUS server and the client are encrypted (the users’ passwords are not sent in plain text). RADIUS uses UDP as the transport protocol between the RADIUS server host and the client. For more information about RADIUS, refer to RFC 2865, Remote Authentication Dial-in User Service.
• Set a privilege level. privilege level Configuration Task List for RADIUS To authenticate users using RADIUS, you must specify at least one RADIUS server so that the system can communicate with it, and configure RADIUS as one of your authentication methods. The following list includes the configuration tasks for RADIUS.
• This procedure is mandatory if you are not using default lists. To use the method list: CONFIGURATION mode authorization exec methodlist Specifying a RADIUS Server Host When configuring a RADIUS server host, you can set different communication parameters, such as the UDP port, the key password, the number of retries, and the timeout. To specify a RADIUS server host and configure its communication parameters, use the following command. • Enter the host name or IP address of the RADIUS server host.
– encryption-type: enter 7 to encrypt the password. Enter 0 to keep the password as plain text. – key: enter a string. The key can be up to 42 characters long. You cannot use spaces in the key. • Configure the number of times Dell Networking OS retransmits RADIUS requests. CONFIGURATION mode radius-server retransmit retries – retries: the range is from 0 to 100. The default is 3 retries. • Configure the time interval the system waits for a RADIUS server host response.
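Combining these parameters in a single statement might look like the following; the server address, port, and key are placeholders:

```
Dell(conf)#radius-server host 192.168.10.5 auth-port 1812 retransmit 5 timeout 10 key 7 myradiuskey
```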
Use this command multiple times to configure multiple TACACS+ server hosts. 2. Enter a text string (up to 16 characters long) as the name of the method list you wish to use with the TACACS+ authentication method. CONFIGURATION mode aaa authentication login {method-list-name | default} tacacs+ [...method3] The TACACS+ method must not be the last method specified. 3. Enter LINE mode. CONFIGURATION mode line {aux 0 | console 0 | vty number [end-number]} 4. Assign the method-list to the terminal line.
TACACS+ Remote Authentication The system takes the access class from the TACACS+ server. Access class is the class of service that restricts Telnet access and packet sizes. If you have configured remote authorization, the system ignores the access class you have configured for the VTY line and gets this access class information from the TACACS+ server. The system must know the username and password of the incoming user before it can fetch the access class from the server.
Enabling SCP and SSH Secure shell (SSH) is a protocol for secure remote login and other secure network services over an insecure network. Dell Networking OS is compatible with SSH versions 1.5 and 2, in both the client and server modes. SSH sessions are encrypted and use authentication. SSH is enabled by default. For details about the command syntax, refer to the Security chapter in the Dell Networking OS Command Line Interface Reference Guide.
ip ssh server port number 2. On Switch 1, enable SSH. CONFIGURATION MODE ip ssh server enable 3. On Switch 2, invoke SCP. CONFIGURATION MODE copy scp: flash: 4. On Switch 2, in response to prompts, enter the path to the desired file and enter the port number specified in Step 1. EXEC Privilege Mode 5. On the chassis, invoke SCP.
• Enable SSH password authentication. CONFIGURATION mode ip ssh password-authentication enable Example of Enabling SSH Password Authentication To view your SSH configuration, use the show ip ssh command from EXEC Privilege mode. Dell(conf)#ip ssh server enable Dell(conf)#ip ssh password-authentication enable Dell# show ip ssh SSH server : enabled. SSH server version : v1 and v2. SSH server vrf : default. SSH server ciphers : 3des-cbc,aes128-cbc,aes192-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr.
Configuring Host-Based SSH Authentication Authenticate a particular host. This method uses SSH version 2. To configure host-based authentication, use the following commands. 1. Configure RSA Authentication. Refer to Using RSA Authentication of SSH. 2. Create shosts by copying the public RSA key to the file shosts in the directory .ssh, and write the IP address of the host to the file. cp /etc/ssh/ssh_host_rsa_key.pub /.ssh/shosts Refer to the first example. 3.
Using Client-Based SSH Authentication To SSH from the chassis to the SSH client, use the following command. This method uses SSH version 1 or version 2. If the SSH port is a non-default value, use the ip ssh server port number command to change the default port number. You may only change the port number when SSH is disabled. Then use the -p option with the ssh command. • SSH from the chassis to the SSH client. ssh ip_address Example of Client-Based SSH Authentication Dell#ssh 10.16.127.
Dell Networking OS provides several ways to configure access classes for VTY lines, including: • VTY Line Local Authentication and Authorization • VTY Line Remote Authentication and Authorization VTY Line Local Authentication and Authorization Dell Networking OS retrieves the access class from the local database. To use this feature: 1. Create a username. 2. Enter a password. 3. Assign an access class. 4. Enter a privilege level.
Dell(config-line-vty)#access-class deny10 Dell(config-line-vty)#end (same applies for radius and line authentication) VTY MAC-SA Filter Support Dell Networking OS supports MAC access lists which permit or deny users based on their source MAC address. With this approach, you can implement a security policy based on the source MAC address. To apply a MAC ACL on a VTY line, use the same access-class command as IP ACLs. The following example shows how to deny incoming connections from subnet 10.0.0.
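A hedged sketch of a MAC ACL applied to VTY lines; the ACL name and MAC address are hypothetical, not from this guide:

```
Dell(conf)#mac access-list standard sourcemac
Dell(conf-std-macl)#permit 00:00:5e:00:01:01
Dell(conf-std-macl)#deny any
Dell(conf-std-macl)#exit
Dell(conf)#line vty 0 8
Dell(config-line-vty)#access-class sourcemac
```

Only Telnet/SSH clients whose source MAC address matches a permit entry can open a VTY session.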
17 Simple Network Management Protocol (SNMP) Network management stations use SNMP to retrieve or alter management data from network elements. A datum of management information is called a managed object; the value of a managed object can be static or variable. Network elements store managed objects in a database called a management information base (MIB).
agents and managers that are allowed to interact. Communities are necessary to secure communication between SNMP managers and agents; SNMP agents do not respond to requests from management stations that are not part of the community. The Dell Networking OS enables SNMP automatically when you create an SNMP community and displays the following message. You must specify whether members of the community may retrieve values in Read-Only mode. Read-write access is not supported.
• Configure an SNMPv3 view. CONFIGURATION mode snmp-server view view-name 3 noauth {included | excluded} NOTE: To give a user read and write privileges, repeat this step for each privilege type. • Configure an SNMP group (with password or privacy privileges). CONFIGURATION mode snmp-server group group-name {oid-tree} priv read name write name • Configure the user with a secure authorization password and privacy password.
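Putting the three steps together might look like the following; the view, group, and user names, OID subtree, and passwords are placeholders, and the exact keyword order should be verified in the command reference:

```
Dell(conf)#snmp-server view rview 1.3.6.1 included
Dell(conf)#snmp-server group rgroup 3 priv read rview write rview
Dell(conf)#snmp-server user snmpuser rgroup 3 auth md5 authpasswd priv des56 privpasswd
```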
Enable all Dell Networking enterprise-specific and RFC-defined traps using the snmp-server enable traps command from CONFIGURATION mode. Enable all of the RFC-defined traps using the snmp-server enable traps snmp command from CONFIGURATION mode. 3. Specify the interfaces out of which Dell Networking OS sends SNMP traps.
MAJOR_PS: Major alarm: insufficient power %s MAJOR_PS_CLR: major alarm cleared: sufficient power MINOR_PS: Minor alarm: power supply non-redundant MINOR_PS_CLR: Minor alarm cleared: power supply redundant envmon temperature MINOR_TEMP: Minor alarm: chassis temperature MINOR_TEMP_CLR: Minor alarm cleared: chassis temperature normal (%s %d temperature is within threshold of %dC) MAJOR_TEMP: Major alarm: chassis temperature high (%s temperature reaches or exceeds threshold of %dC) MAJOR_TEMP_CLR: Major alarm c
%RPM0-P:CP %SNMP-4-RMON_FALLING_THRESHOLD: STACKUNIT0 falling threshold alarm from SNMP OID %RPM0-P:CP %SNMP-4-RMON_HC_RISING_THRESHOLD: STACKUNIT0 high-capacity rising threshold alarm from SNMP OID Reading Managed Object Values You may only retrieve (read) managed object values if your management station is a member of the same community as the SNMP agent.
Address is 00:01:e8:cc:cc:ce, Current address is 00:01:e8:cc:cc:ce Interface index is 1107787786 Internet address is not set MTU 1554 bytes, IP MTU 1500 bytes LineSpeed auto ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show interface" counters 00:12:42 Queueing strategy: fifo Time since last interface status change: 00:12:42 To display the ports in a VLAN, send an snmpget request for the object dot1qStaticEgressPorts using the interface index as the instance number, as shown in the following examp
Fetching Dynamic MAC Entries using SNMP The Aggregator supports the RFC 1493 dot1d table for the default VLAN and the dot1q table for all other VLANs. NOTE: The table contains none of the other information provided by the show vlan command, such as port speed or whether the ports are tagged or untagged. NOTE: The 802.1q Q-BRIDGE MIB defines VLANs with regard to 802.1d, as 802.1d itself does not define them.
In the following example, TenGigabitEthernet 0/7 is moved to VLAN 1000, a non-default VLAN. To fetch the MAC addresses learned on non-default VLANs, use the object dot1qTpFdbTable. The instance number is the VLAN number concatenated with the decimal conversion of the MAC address.
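As a worked example of the instance-number construction, suppose the MAC address 00:01:e8:06:95:ac is learned on VLAN 1000 (the address, community string, and agent IP here are hypothetical). Each MAC byte converts to decimal (00→0, 01→1, e8→232, 06→6, 95→149, ac→172), giving the instance 1000.0.1.232.6.149.172, which is appended to the dot1qTpFdbPort OID (.1.3.6.1.2.1.17.7.1.2.2.1.2, assuming the standard Q-BRIDGE MIB layout):

```
> snmpget -v 2c -c public 10.16.130.148 .1.3.6.1.2.1.17.7.1.2.2.1.2.1000.0.1.232.6.149.172
```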
Hardware is Dell Force10Eth, address is 00:01:e8:0d:b7:4e Current address is 00:01:e8:0d:b7:4e Interface index is 72925242 [output omitted] Monitor Port-Channels To check the status of a Layer 2 port-channel, use f10LinkAggMib (.1.3.6.1.4.1.6027.3.2). In the following example, Po 1 is a switchport and Po 2 is in Layer 3 mode. NOTE: The interface index does not change if the interface reloads or fails over. If the unit is renumbered (for any reason) the interface index changes during a reload.
SNMPv2-MIB::sysUpTime.0 = Timeticks: (8500932) 23:36:49.32 SNMPv2-MIB::snmpTrapOID.0 = OID: IF-MIB::linkUp IF-MIB::ifIndex.33865785 = INTEGER: 33865785 SNMPv2-SMI::enterprises. 6027.3.1.1.4.1.2 = STRING: "OSTATE_UP: Changed interface state to up: Tengig 0/1" 2010-02-10 14:22:40 10.16.130.4 [10.16.130.4]: SNMPv2-MIB::sysUpTime.0 = Timeticks: (8500934) 23:36:49.34 SNMPv2-MIB::snmpTrapOID.0 = OID: IF-MIB::linkUp IF-MIB::ifIndex.1107755009 = INTEGER: 1107755009 SNMPv2-SMI::enterprises.6027.3.1.1.4.1.
SNMPv2-SMI::mib-2.47.1.1.1.1.2.30 = STRING: "Unit: 0 Port 27 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.31 = STRING: "Unit: 0 Port 28 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.32 = STRING: "Unit: 0 Port 29 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.33 = STRING: "Unit: 0 Port 30 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.34 = STRING: "Unit: 0 Port 31 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.35 = STRING: "Unit: 0 Port 32 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.36 = STRING: "40G QSFP+ port" SNMPv2-SMI::mib-2.
Standard VLAN MIB When the Aggregator is in Standalone mode, where all the 4000 VLANs are part of all server side interfaces as well as the single uplink LAG, it takes a long time (30 seconds or more) for external management entities to discover the entire VLAN membership table of all the ports. Support for current status OID in the standard VLAN MIB is expected to simplify and speed up this process.
In standalone mode, there are 4000 VLANs, by default. The SNMP output displays entries for all 4000 VLANs. To view a particular VLAN, issue the snmp query with the VLAN interface ID. Dell#show interface vlan 1010 | grep "Interface index" Interface index is 1107526642 Use the output of the above command in the snmp query. snmpwalk -Os -c public -v 1 10.16.151.151 1.3.6.1.2.1.17.7.1.4.2.1.4.0.1107526642 mib-2.17.7.1.4.2.1.4.0.
MIB Object OID Description chSysCoresTimeCreated 1.3.6.1.4.1.6027.3.19.1.2.9.1.3 Contains the time at which core files are created. chSysCoresStackUnitNumber 1.3.6.1.4.1.6027.3.19.1.2.9.1.4 Contains information that includes which stack unit or processor the core file was originated from. chSysCoresProcess 1.3.6.1.4.1.6027.3.19.1.2.9.1.5 Contains information that includes the process names that generated each core file.
18 Stacking An Aggregator auto-configures to operate in standalone mode. To use an Aggregator in a stack, you must manually configure it using the CLI to operate in stacking mode. Stacking is supported only on the 40GbE ports on the base module. Stacking is limited to six Aggregators in the same or different M1000e chassis. To configure a stack, you must use the CLI. Stacking provides a single point of management for high availability and higher throughput.
Figure 30. A Two-Aggregator Stack Stack Management Roles The stack elects the management units for the stack management. • Stack master — primary management unit • Standby — secondary management unit The master holds the control plane and the other units maintain a local copy of the forwarding databases.
• Switch removal If the master switch goes off line, the standby replaces it as the new master. NOTE: For the Aggregator, the entire stack has only one management IP address. Stack Master Election The stack elects a master and standby unit at bootup time based on MAC address. The unit with the higher MAC value becomes master. To view which switch is the stack master, enter the show system command. The following example shows sample output from an established stack.
Stacking LAG When you use multiple links between stack units, Dell Networking Operating System automatically bundles them in a stacking link aggregation group (LAG) to provide aggregated throughput and redundancy. The stacking LAG is established automatically and transparently by operating system (without user configuration) after peering is detected and behaves as follows: • The stacking LAG dynamically aggregates; it can lose link members or gain new links.
Stacking Port Numbers By default, each Aggregator in Standalone mode is numbered stack-unit 0. Stack-unit numbers are assigned to member switches when the stack comes up. The following example shows the numbers of the 40GbE stacking ports on an Aggregator. Figure 31.
Stacking in PMUX Mode PMUX stacking allows the stacking of two or more IOA units. This allows grouping of multiple units for high availability. IOA supports a maximum of six stacking units. NOTE: Prior to configuring the stack-group, ensure the stacking ports are connected and in 40G native mode. 1. Configure stack groups on all stack units.
• A maximum of four stack groups (40GbE ports) is supported on a stacked Aggregator. • Interconnect the stack units by following the instructions in Cabling Stacked Switches. • You cannot stack a Standalone IOA and a PMUX. Master Selection Criteria A Master is elected or re-elected based on the following considerations, in order: 1. The switch with the highest priority at boot time. 2. The switch with the highest MAC address at boot time. 3.
3. Continue to run the stack-unit 0 stack-group <0-3> command to add additional stack ports to the switch, using the stack-group mapping. Cabling Stacked Switches Before you configure MXL switches in a stack, connect the 40G direct attach or QSFP cables and transceivers to connect 40GbE ports on two Aggregators in the same or different chassis.
3. Configure the Aggregator to operate in stacking mode. CONFIGURATION mode stack-unit 0 iom-mode stack 4. Repeat Steps 1 to 3 on the second Aggregator in the stack. 5. Log on to the CLI and reboot each switch, one after another, in as short a time as possible. EXEC PRIVILEGE mode reload NOTE: If the stacked switches all reboot at approximately the same time, the switch with the highest MAC address is automatically elected as the master switch.
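The full sequence on each switch might look like the following sketch; the stack-group numbers depend on which 40GbE base-module ports you cabled, and the save step is included as a precaution:

```
Dell#configure
Dell(conf)#stack-unit 0 stack-group 0
Dell(conf)#stack-unit 0 stack-group 1
Dell(conf)#stack-unit 0 iom-mode stack
Dell(conf)#end
Dell#write memory
Dell#reload
```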
Resetting a Unit on a Stack Use the following reset commands to reload any of the member units or the standby in a stack. If you try to reset the stack master, the following error message is displayed: % Error: Reset of master unit is not allowed. To reset a unit on a stack, use the following commands: • Reload a stack-unit from the master switch. EXEC Privilege mode reset stack-unit unit-number • Reset a stack-unit when the unit is in a problem state.
After you restart the Aggregator, the 4-Port 10 GbE Ethernet modules or the 40GbE QSFP+ port that is split into four 10GbE SFP+ ports cannot be configured to be part of the same uplink LAG bundle that is set up with the uplink speed of 40 GbE. In such a condition, you can perform a hot-swap of the 4-port 10 GbE Flex IO modules with a 2-port 40 GbE Flex IO module, which causes the module to become a part of the LAG bundle that is set up with 40 GbE as the uplink speed without another reboot.
3. Completely cable the stacking connections, making sure the redundant link is also in place. Two operational stacks can also be merged by reconnecting stack cables without powering down units in either stack. Connecting a powered-up standalone unit to an existing stack leads to the same behavior as when merging two operational stacks. In such cases, Manager re-election is done and the Manager with the higher MAC address wins the election. The losing stack manager resets itself and all its member units.
Status : online Next Boot : online Required Type : I/O-Aggregator - 34-port GE/TE (XL) Current Type : I/O-Aggregator - 34-port GE/TE (XL) Master priority : 0 Hardware Rev : Num Ports : 56 Up Time : 2 hr, 41 min FTOS Version : 8-3-17-46 Jumbo Capable : yes POE Capable : no Burned In MAC : 00:1e:c9:f1:00:9b No Of MACs : 3 -- Unit 1 -Unit Type : Standby Unit Status : online Next Boot : online Required Type : I/O-Aggregator - 34-port GE/TE (XL) Current Type : I/O-Aggregator - 34-port GE/TE (XL) Master priority
Example of the show system stack-ports (ring) Command Dell# show system stack-ports Topology: Ring Interface Connection Link Speed (Gb/s) 0/33 1/33 40 0/37 1/37 40 1/33 0/33 40 1/37 0/37 40 Admin Status up up up up Link Trunk Status Group up up up up Example of the show system stack-ports (daisy chain) Command Dell# show system stack-ports Topology: Daisy Chain Interface Connection Link Speed (Gb/s) 0/33 40 0/37 1/37 40 1/33 40 1/37 0/37 40 Admin Status up up up up Link Trunk Status Group down up down
Stack-unit SW Version: Link to Peer: E8-3-16-46 Up -- PEER Stack-unit Status --------------------------------------------------------Stack-unit State: Standby (Indicates Standby Unit.) Peer stack-unit ID: 1 Stack-unit SW Version: E8-3-16-46 -- Stack-unit Redundancy Configuration ---------------------------------------------------------Primary Stack-unit: mgmt-id 0 Auto Data Sync: Full Hot (Failover Failover type with redundancy.
The following syslog messages are generated when a member unit fails: Dell#May 31 01:46:17: %STKUNIT3-M:CP %IPC-2-STATUS: target stack unit 4 not responding May 31 01:46:17: %STKUNIT3-M:CP %CHMGR-2-STACKUNIT_DOWN: Major alarm: Stack unit 4 down IPC timeout Dell#May 31 01:46:17: %STKUNIT3-M:CP %IFMGR-1-DEL_PORT: Removed port: Te 4/1-32,41-48, Fo 4/ 49,53 Dell#May 31 01:46:18: %STKUNIT5-S:CP %IFMGR-1-DEL_PORT: Removed port: Te 4/1-32,41-48, Fo 4/ 49,53 Unplugged Stacking Cable • Problem: A stacking cable is
Stack Unit in Card-Problem State Due to Incorrect Dell Networking OS Version • • Problem: A stack unit enters a Card-Problem state because the switch has a different Dell Networking OS version than the master unit. The switch does not come online as a stack unit. Resolution: To restore a stack unit with an incorrect Dell Networking OS version as a member unit, disconnect the stacking cables on the switch and install the correct Dell Networking OS version.
4. Save the configuration. EXEC Privilege write memory 5. Reload the stack unit to activate the new Dell Networking OS version. CONFIGURATION mode reload Example of Upgrading all Stacked Switches The following example shows how to upgrade all switches in a stack, including the master switch. Dell# upgrade system ftp: A: Address or name of remote host []: 10.11.200.241 Source file name []: //FTOS-XL-8.3.17.0.
EXEC Privilege mode power-cycle stack-unit unit-number Example of Upgrading a Single Stack Unit The following example shows how to upgrade an individual stack unit.
19 Storm Control The storm control feature allows you to control unknown-unicast, multicast, and broadcast control traffic on Layer 2 and Layer 3 physical interfaces. Dell Networking OS Behavior: The Dell Networking OS supports broadcast control (the storm-control broadcast command) for Layer 2 and Layer 3 traffic. The minimum number of packets per second (PPS) that storm control can limit is two.
CONFIGURATION mode storm-control multicast packets_per_second in • Configure the packets per second of unknown-unicast traffic allowed in or out of the network. CONFIGURATION mode storm-control unknown-unicast [packets_per_second in] Configuring Storm Control from INTERFACE Mode To configure storm control, use the following command. You can only configure storm control for ingress traffic in INTERFACE mode.
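For example, to limit ingress broadcast traffic on a single port (the interface and PPS value are illustrative):

```
Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#storm-control broadcast 1000 in
```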
20 Broadcast Storm Control On the Aggregator, the broadcast storm control feature is enabled by default on all ports, and disabled on a port when an iSCSI storage device is detected. Broadcast storm control is re-enabled as soon as the connection with an iSCSI device ends. Broadcast traffic on Layer 2 interfaces is limited or suppressed during a broadcast storm. You can view the status of a broadcast storm control operation by using the show io-aggregator broadcast storm-control status command.
21 System Time and Date The Aggregator auto-configures the hardware and software clocks with the current time and date. If necessary, you can manually set and maintain the system time and date using the CLI commands described in this chapter.
– timezone-name: Enter the name of the timezone. Do not use spaces. – offset: Enter one of the following: * a number from 1 to 23 as the number of hours in addition to UTC for the timezone. * a minus sign (-) then a number from 1 to 23 as the number of hours. Example of the clock timezone Command Dell#conf Dell(conf)#clock timezone Pacific -8 Dell# Setting Daylight Savings Time Dell Networking OS supports setting the system to daylight savings time once or on a recurring basis every year.
clock summer-time time-zone recurring start-week start-day start-month start-time endweek end-day end-month end-time [offset] – time-zone: Enter the three-letter name for the time zone. This name displays in the show clock output. – start-week: (OPTIONAL) Enter one of the following as the week that daylight saving begins and then enter values for start-day through end-time: * week-number: Enter a number from 1 to 4 as the number of the week in the month to start daylight saving time.
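A recurring rule for U.S. daylight saving time (second Sunday in March through first Sunday in November) might be entered as follows; the zone name is a placeholder:

```
Dell(conf)#clock summer-time PDT recurring 2 Sun Mar 2:00 1 Sun Nov 2:00
```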
22 Uplink Failure Detection (UFD) Supported Modes Standalone, PMUX, VLT, Stacking Feature Description UFD provides detection of the loss of upstream connectivity and, if used with network interface controller (NIC) teaming, automatic recovery from a failed link. A switch provides upstream connectivity for devices, such as servers. If a switch loses its upstream connectivity, downstream devices also lose their connectivity.
How Uplink Failure Detection Works UFD creates an association between upstream and downstream interfaces. The association of uplink and downlink interfaces is called an uplink-state group. An interface in an uplink-state group can be a physical interface or a port-channel (LAG) aggregation of physical interfaces. An enabled uplink-state group tracks the state of all assigned upstream interfaces.
By default, if all upstream interfaces in an uplink-state group go down, all downstream interfaces in the same uplink-state group are put into a Link-Down state. Using UFD, you can configure the automatic recovery of downstream ports in an uplink-state group when the link status of an upstream port changes. The tracking of upstream link status does not have a major impact on central processing unit (CPU) usage.
1. View the Uplink status group. EXEC Privilege mode show uplink-state-group Dell#show uplink-state-group Uplink State Group: 1 Status: Enabled, Down 2. Enable the uplink group tracking. UPLINK-STATE-GROUP mode enable Dell(conf)#uplink-state-group 1 Dell(conf-uplink-state-group-1)#enable To disable the uplink group tracking, use the no enable command. 3. Change the default timer.
• A comma is required to separate each port and port-range entry. To delete an interface from the group, use the no {upstream | downstream} interface command. 3. (Optional) Configure the number of downstream links in the uplink-state group that will be disabled (Oper Down state) if one upstream link in the group goes down. UPLINK-STATE-GROUP mode downstream disable links {number | all} • number: specifies the number of downstream links to be brought down. The range is from 1 to 1024.
clear ufd-disable {interface interface | uplink-state-group group-id} For interface, enter one of the following interface types: – 10 Gigabit Ethernet: enter tengigabitethernet {slot/port | slot/port-range} – 40 Gigabit Ethernet: enter fortygigabitethernet {slot/port | slot/port-range} – Port channel: enter port-channel {1-512 | port-channel-range} * Where port-range and port-channel-range specify a range of ports separated by a dash (-) and/or individual ports/port channels in any order; for example: tengi
• – detail: displays additional status information on the upstream and downstream interfaces in each group. Display the current status of a port or port-channel interface assigned to an uplink-state group. EXEC mode show interfaces interface interface specifies one of the following interface types: – 10 Gigabit Ethernet: enter tengigabitethernet slot/port. – 40 Gigabit Ethernet: enter fortygigabitethernet slot/port. – Port channel: enter port-channel {1-512}.
Example of Viewing Interface Status with UFD Information (S50)
Dell#show interfaces tengigabitethernet 7/45
TenGigabitEthernet 7/45 is up, line protocol is down (error-disabled[UFD])
Hardware is Force10Eth, address is 00:01:e8:32:7a:47
Current address is 00:01:e8:32:7a:47
Interface index is 280544512
Internet address is not set
MTU 1554 bytes, IP MTU 1500 bytes
LineSpeed 1000 Mbit, Mode auto
Flowcontrol rx off tx off
ARP type: ARPA, ARP Timeout 04:00:00
Last clearing of "show interface" counters 00:25:46
Queue
00:23:52: %STKUNIT0-M:CP %IFMGR-5-ASTATE_UP: Changed uplink state group Admin state to up: Group 3 Dell(conf-uplink-state-group-3)#downstream tengigabitethernet 0/1-2,5,9,11-12 Dell(conf-uplink-state-group-3)#downstream disable links 2 Dell(conf-uplink-state-group-3)#upstream tengigabitethernet 0/3-4 Dell(conf-uplink-state-group-3)#description Testing UFD feature Dell(conf-uplink-state-group-3)#show config ! uplink-state-group 3 description Testing UFD feature downstream disable links 2 downstream TenGigabi
23 PMUX Mode of the IO Aggregator This chapter provides an overview of the PMUX mode. I/O Aggregator (IOA) Programmable MUX (PMUX) Mode IOA PMUX is a mode that provides flexibility of operation with added configurability. This involves creating multiple LAGs, configuring VLANs on uplinks and the server side, configuring data center bridging (DCB) parameters, and so forth. By default, IOA starts up in IOA Standalone mode.
Configuring the Commands without a Separate User Account Starting with Dell Networking OS version 9.3(0.0), you can configure the PMUX mode CLI commands without having to configure a new, separate user profile. The user profile you defined to access and log in to the switch is sufficient to configure the PMUX mode commands. The IOA PMUX Mode CLI Commands section lists the PMUX mode CLI commands that you can now configure without a separate user account.
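As a hedged sketch (assuming the iom-mode command form used on the Aggregator; verify the exact syntax against your release), changing from Standalone to PMUX mode might look like the following:

Dell(conf)#stack-unit 0 iom-mode programmable-mux
Dell(conf)#exit
Dell#reload

A reload is required before the new operational mode takes effect.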
• Assures high availability. CAUTION: Dell Networking does not recommend enabling Stacking and VLT simultaneously. If you enable both features at the same time, unexpected behavior occurs. As shown in the following example, VLT presents a single logical Layer 2 domain from the perspective of attached devices that have a virtual link trunk terminating on separate chassis in the VLT domain. However, the two VLT chassis are independent Layer2/Layer3 (L2/L3) switches for devices in the upstream network.
EXEC mode
Dell# show interfaces port-channel brief
Codes: L - LACP Port-channel
       O - OpenFlow Controller Port-channel
  LAG  Mode  Status  Uptime    Ports
L 127  L2    up      00:18:22  Fo 0/33
                               Fo 0/37
  128  L2    up      00:00:00  Fo 0/41 (Up)<<<<<<<
Delay-Restore Abort Threshold  : 60 seconds
Peer-Routing                   : Disabled
Peer-Routing-Timeout timer     : 0 seconds
Multicast peer-routing timeout : 150 seconds
Dell#
5. Configure the secondary VLT.
NOTE: Repeat steps 1 through 4 on the secondary VLT, ensuring you use a different backup destination and unit-id.
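As a hedged sketch of the preceding steps, a primary peer's VLT domain configuration might look like the following; the domain ID, port-channel number, IP address, and unit-id are placeholders, and the secondary peer would mirror this with its own backup destination and unit-id.

Dell(conf)#vlt domain 1
Dell(conf-vlt-domain)#peer-link port-channel 127
Dell(conf-vlt-domain)#back-up destination 10.1.1.2
Dell(conf-vlt-domain)#unit-id 0
Dell(conf-vlt-domain)#exit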
13  Active    T Te 0/1
              T Po128(Te 0/41-42)
14  Active    T Te 0/1
              T Po128(Te 0/41-42)
15  Active    T Te 0/1
              T Po128(Te 0/41-42)
20  Active    U Te 0/1
              T Po128(Te 0/41-42)
Dell#
You can remove the inactive VLANs that have no member ports using the following commands:
Dell#configure
Dell(conf)#no interface vlan vlan-id
where vlan-id is an inactive VLAN with no member ports.
You can remove the tagged VLANs using the no vlan tagged command.
– A VLT domain consists of the two core chassis, the interconnect trunk, backup link, and the LAG members connected to attached devices. – Each VLT domain has a unique MAC address that you create or VLT creates automatically. – ARP tables are synchronized between the VLT peer nodes. – VLT peer switches operate as separate chassis with independent control and data planes for devices attached on non-VLT ports.
– If the link between VLT peer switches is established, any change to the VLT system MAC address or unit-id fails if the changes made create a mismatch by causing the VLT unit-ID to be the same on both peers and/or the VLT system MAC address does not match on both peers. – If you replace a VLT peer node, preconfigure the switch with the VLT system MAC address, unit-id, and other VLT parameters before connecting it to the existing VLT peer switch using the VLTi connection.
secondary switch disables its VLT port channels. If keepalive messages from the peer are not being received, the peer continues to forward traffic, assuming that it is the last device available in the network. In either case, after recovery of the peer link or reestablishment of message forwarding across the interconnect trunk, the two VLT peers resynchronize any MAC addresses learned while communication was interrupted and the VLT system continues normal data forwarding.
If you enable IGMP snooping, IGMP queries are also sent out on the VLT ports at this time allowing any receivers to respond to the queries and update the multicast table on the new node. This delay in bringing up the VLT ports also applies when the VLTi link recovers from a failure that caused the VLT ports on the secondary VLT peer node to be disabled. VLT Routing VLT routing is supported on the Aggregator. Layer 2 protocols from the ToR to the server are intra-rack and inter-rack.
EXEC mode show vlt statistics • Display the current status of a port or port-channel interface used in the VLT domain. EXEC mode show interfaces interface – interface: specify one of the following interface types: * Fast Ethernet: enter fastethernet slot/port. * 1-Gigabit Ethernet: enter gigabitethernet slot/port. * 10-Gigabit Ethernet: enter tengigabitethernet slot/port. * Port channel: enter port-channel {1-128}.
Example of the show vlt detail Command
Dell_VLTpeer1# show vlt detail
Local LAG Id  Peer LAG Id  Local Status  Peer Status  Active VLANs
------------  -----------  ------------  -----------  ------------
100           100          UP            UP           10, 20, 30
127           2            UP            UP           20, 30

Dell_VLTpeer2# show vlt detail
Local LAG Id  Peer LAG Id  Local Status  Peer Status  Active VLANs
------------  -----------  ------------  -----------  ------------
2             127          UP            UP           20, 30
100           100          UP            UP           10, 20, 30

Example of the show vlt role Command
Dell_VLTpeer1# show vlt role
VLT Role
--
---------------
HeartBeat Messages Sent:     994
HeartBeat Messages Received: 978
ICL Hello's Sent:            89
ICL Hello's Received:        89
Additional VLT Sample Configurations
To configure VLT, create a VLT domain, configure a backup link and interconnect trunk, and connect the peer switches in the VLT domain to an attached access device (switch or server). Review the following examples of VLT configurations.
Description                          Behavior at Peer Up                    Behavior During Run Time
                                     ...above the 80% threshold and when   ...the VLTi bandwidth usage goes
                                     it drops below 80%.                   above its threshold.
                                     The VLT peer does not boot up.        A syslog error message and an
                                     The VLTi is forced to a down state.   SNMP trap are generated.
                                     The VLT peer does not boot up.        A syslog error message and an
                                     The VLTi is forced to a down state.   SNMP trap are generated.
Dell Networking OS Version mismatch  A syslog error message is generated.
24 FC Flex IO Modules
This part provides a generic, broad-level description of the operations, capabilities, and configuration commands of the Fibre Channel (FC) Flex IO module.
FC Flex IO Modules
In a typical Fibre Channel storage network topology, separate network interface cards (NICs) and host bus adapters (HBAs) on each server (two each for redundancy purposes) are connected to LAN and SAN networks respectively. These deployments typically include a ToR SAN switch in addition to a ToR LAN switch. By employing converged network adapters (CNAs) that the FC Flex IO module supports, CNAs are used to transmit FCoE traffic from the server instead of separate NIC and HBA devices.
• If the switch contains FC Flex modules, you cannot create a stack, and a log message states that stacking is not supported unless the switches contain only FC Flex modules. Guidelines for Working with FC Flex IO Modules The following guidelines apply to the FC Flex IO module: • All the ports of FC Flex IO modules operate in FC mode, and do not support Ethernet mode. • FC Flex IO modules are not supported in the chassis management controller (CMC) GUI.
• With FC Flex IO modules on an IOA, the following DCB map is applied on all of the ENode-facing ports:
dcb-map: SAN_DCB_MAP
priority-group 0 bandwidth 30 pfc off
priority-group 1 bandwidth 30 pfc off
priority-group 2 bandwidth 40 pfc on
priority-pgid 0 0 0 2 1 0 0 0
• On I/O Aggregators, uplink failure detection (UFD) is disabled if an FC Flex IO module is present, to allow server ports to communicate with the FC fabric even when the Ethernet upstream ports are not operationally up.
Operation of the FIP Application
The NPIV proxy gateway terminates the FIP sessions and responds to FIP messages. The FIP packets are intercepted by the FC Flex IO module and sent to the Dell Networking OS for further analysis. The FIP application responds to the FIP VLAN discovery request from the host based on the configured FCoE VLANs. For every ENode and VN_Port that is logged in, the FIP application responds to keepalive messages for the virtual channel.
Figure 35.
To see if a switch is running the latest Dell Networking OS version, use the show version command. To download a Dell Networking OS version, go to http://support.dell.com. Installation Site Preparation Before installing the switch or switches, make sure that the chosen installation location meets the following site requirements: • Clearance — There is adequate front and rear clearance for operator access. Allow clearance for cabling, power connections, and ventilation.
• Configure the NPIV-related commands on I/O Aggregator. After you perform the preceding procedure, the following operations take place: • A physical link is established between the FC Flex I/O module and the Cisco MDS switch. • The FC Flex I/O module sends a proxy FLOGI request to the upstream F_Port of the FC switch or the MDS switch. The F_port accepts the proxy FLOGI request for the FC Flex IO virtual N_Port. The converged network adapters (CNAs) are brought online and the FIP application is run.
Figure 37. Case 2: Deployment Scenario of Configuring FC Flex IO Modules Fibre Channel over Ethernet for FC Flex IO Modules FCoE provides a converged Ethernet network that allows the combination of storage-area network (SAN) and LAN traffic on a Layer 2 link by encapsulating Fibre Channel data into Ethernet frames. The Fibre Channel (FC) Flex IO module is supported on Dell Networking Operating System (OS) I/O Aggregator (IOA).
25 FC FLEXIO FPORT
FC FlexIO FPort is now supported on the Dell Networking OS.
FC FLEXIO FPORT
The switch is a blade switch that plugs into the Dell M1000e blade server chassis. The blade module contains two slots for pluggable flexible modules. With a single FC Flex IO module, four ports are supported; with both FC Flex IO modules, eight ports are supported. Each port can operate at 2G, 4G, or 8G Fibre Channel speeds. In this topology, the FC Flex IOM is directly connected to FC storage.
Configuring Switch Mode to FCF Port Mode
To configure the switch mode to Fabric services, use the following commands.
1. Configure the switch mode to FCF Port.
CONFIGURATION mode
feature fc fport domain id 2
NOTE: Enable the remote-fault-signaling rx off command in FCF FPort mode on interfaces connected to the Compellent and MDF storage devices.
2. Create an FCoE map with the parameters used in the communication between servers and a SAN fabric.
CONFIGURATION mode
fcoe-map map-name
3.
• The N_Port sends a Port Login (PLOGI) to inform the Fabric Name Server of its personality and capabilities; this includes the WWNN and WWPN.
• The N_Port sends a PLOGI to address 0xFFFFFC to register this address with the name server.
Command                  Description
show fc ns switch        Displays all the devices in the name server database of the switch.
show fc ns switch brief  Displays the local name server entries (brief version).
The values for the FCoE VLAN, fabric ID, and FC-MAP must be unique. Apply an FCoE map on downstream server-facing Ethernet ports and upstream fabric-facing Fibre Channel ports. 1. Create an FCoE map which contains parameters used in the communication between servers and a SAN fabric. CONFIGURATION mode fcoe-map map-name 2. Configure the association between the dedicated VLAN and the fabric where the desired storage arrays are installed.
The default is 8 seconds. Zoning The zoning configurations are supported for Fabric FCF Port mode operation on the MXL. In FCF Port mode, the fcoe-map fabric map-name has the default Zone mode set to deny. This setting denies all the fabric connections unless included in an active zoneset. To change this setting, use the default-zone-allow command. Changing this setting to all allows all the fabric connections without zoning.
Example of Creating a Zone Alias and Adding Members
Dell(conf)#fc alias al1
Dell(conf-fc-alias-al1)#member 030303
Dell(conf-fc-alias-al1)#exit
Dell(conf)#fc zone z1
Dell(conf-fc-zone-z1)#member al1
Dell(conf-fc-zone-z1)#exit
Creating Zonesets
A zoneset is a grouping or configuration of zones. To create a zoneset and add zones into the zoneset, use the following steps.
1. Create a zoneset.
CONFIGURATION mode
fc zoneset zoneset_name
2. Add zones into a zoneset.
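Continuing the zone example above, zone z1 can then be grouped into a zoneset; the zoneset name zs1 and the prompt format are illustrative.

Dell(conf)#fc zoneset zs1
Dell(conf-fc-zoneset-zs1)#member z1
Dell(conf-fc-zoneset-zs1)#exit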
Command                  Description
show config              Displays the fabric parameters.
show fcoe-map            Displays the FCoE map.
show fc ns switch        Displays all the devices in the name server database of the switch.
show fc ns switch brief  Displays all the devices in the name server database of the switch (brief version).
show fc zoneset          Displays the zoneset.
show fc zoneset active   Displays the active zoneset.
show fc zone             Displays the configured zone.
show fc alias            Displays the configured alias.
Switch Name         28:4e:55:4c:4c:29:00:00
Domain Id           2
Switch Port         4
FC-Id               02:04:03
Port Name           20:01:d4:ae:52:44:37:b2
Node Name           20:00:d4:ae:52:44:37:b2
Class of Service    8
Symbolic Port Name  Broadcom Port0 pWWN 20:01:d4:ae:52:44:37:b2
Symbolic Node Name  Broadcom BCM57810 FCoE 7.6.3.0 7.6.59.
Port Type
Switch WWN : 10:00:aa:00:00:00:00:ac
Dell(conf)#
26 NPIV Proxy Gateway The N-port identifier virtualization (NPIV) Proxy Gateway (NPG) feature provides FCoE-FC bridging capability on the Aggregator, allowing server CNAs to communicate with SAN fabrics over the Aggregator.
Servers use CNA ports to connect over FCoE to an Ethernet port in ENode mode on the NPIV proxy gateway. FCoE transit with FIP snooping is automatically enabled and configured on the FX2 gateway to prevent unauthorized access and data transmission to the SAN network. FIP is used by server CNAs to discover an FCoE switch operating as an FCoE forwarder (FCF).
Term Description N port Port mode of an Aggregator with the FC port that connects to an F port on an FC switch in a SAN fabric. On an Aggregator with the NPIV proxy gateway, an N port also functions as a proxy for multiple server CNA-port connections. ENode port Port mode of a server-facing Aggregator with the Ethernet port that provides access to FCF functionality on a fabric. CNA port N-port functionality on an FCoE-enabled server port.
An FCoE map applies the following parameters on server-facing Ethernet and fabric-facing FC ports on the Aggregator: • The dedicated FCoE VLAN used to transport FCoE storage traffic. • The FC-MAP value used to generate a fabric-provided MAC address. • The association between the FCoE VLAN ID and FC fabric ID where the desired storage arrays are installed. Each Fibre Channel fabric serves as an isolated SAN topology within the same physical network.
--------------------
PG:0  TSA:ETS  BW:30  PFC:OFF
      Priorities:0 1 2 5 6 7
PG:1  TSA:ETS  BW:30  PFC:OFF
      Priorities:4
PG:2  TSA:ETS  BW:40  PFC:ON
      Priorities:3
Default FCoE map
Dell(conf)#do show fcoe-map
Fabric Name       SAN_FABRIC
Fabric Id         1002
Vlan Id           1002
Vlan priority     3
FC-MAP            0efc00
FKA-ADV-Period    8
Fcf Priority      128
Config-State      ACTIVE
Oper-State        UP
Members
Fc 0/9
Te 0/4
Dell(conf)#do show qos dcb-map DCB_MAP_PFC_OFF
----------------------
State   :In-Progress
PfcMode :OFF
--------------------
Dell(
2. Configure the PFC setting (on or off) and the ETS bandwidth percentage allocated to traffic in each priority group. Configure whether the priority group traffic should be handled with strict-priority scheduling. The sum of all allocated bandwidth percentages must be 100 percent. Strict-priority traffic is serviced first. Afterward, bandwidth allocated to other priority groups is made available and allocated according to the specified percentages.
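The PFC and ETS settings described above can be sketched as a DCB map; the map name, group numbers, and bandwidth percentages below are illustrative (note they sum to 100), and the prompt format follows the style shown elsewhere in this guide.

Dell(conf)#dcb-map SAN_DCB_MAP
Dell(conf-dcbx-SAN_DCB_MAP)#priority-group 0 bandwidth 60 pfc off
Dell(conf-dcbx-SAN_DCB_MAP)#priority-group 1 bandwidth 40 pfc on
Dell(conf-dcbx-SAN_DCB_MAP)#priority-pgid 0 0 0 1 0 0 0 0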
INTERFACE mode dcb-map name Dell# interface tengigabitEthernet 0/0 Dell(config-if-te-0/0)# dcb-map SAN_DCB1 Creating an FCoE VLAN Create a dedicated VLAN to send and receive Fibre Channel traffic over FCoE links between servers and a fabric over an NPG. The NPG receives FCoE traffic and forwards decapsulated FC frames over FC links to SAN switches in a specified fabric. 1. Create the dedicated VLAN for FCoE traffic. Range: 2–4094. VLAN 1002 is commonly used to transmit FCoE traffic.
4. Specify the FC-MAP value used to generate a fabric-provided MAC address, which is required to send FCoE traffic from a server on the FCoE VLAN to the FC fabric specified in Step 2. Enter a unique MAC address prefix as the FC-MAP value for each fabric. Range: 0EFC00–0EFCFF. Default: None. FCoE MAP mode fc-map fc-map-value 5. Configure the priority used by a server CNA to select the FCF for a fabric login (FLOGI). Range: 1–255. Default: 128. FCoE MAP mode fcf-priority priority 6.
no shutdown
Applying an FCoE Map on Fabric-facing FC Ports
The FC ports on the Aggregator are configured by default to operate in N port mode to connect to an F port on an FC switch in a fabric. You can apply only one FCoE map on an FC port. When you apply an FCoE map on a fabric-facing FC port, the FC port becomes part of the FCoE fabric, whose settings in the FCoE map are configured on the port and exported to downstream server CNA ports.
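A hedged sketch of applying an FCoE map on a fabric-facing FC port follows; the port number and map name are placeholders, and the fabric command and prompt format are assumptions based on typical NPIV proxy gateway configurations.

Dell(conf)#interface fibrechannel 0/9
Dell(conf-if-fc-0/9)#fabric SAN_FABRIC
Dell(conf-if-fc-0/9)#no shutdown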
Dell(conf-dcbx-name)# priority-pgid 0 0 0 1 2 4 4 4 2. Apply the DCB map on a downstream (server-facing) Ethernet port: Dell(config-if-te-0/0)#dcb-map SAN_DCB_MAP 3. Create the dedicated VLAN to be used for FCoE traffic: Dell(conf)#interface vlan 1002 4.
Command                    Description
show npiv devices [brief]  Displays information on FCoE and FC devices currently logged in to the NPG.
show fc switch             Displays the FC mode of operation and worldwide node (WWN) name of an Aggregator.
Fabric Id         1003
Vlan Id           1003
Vlan priority     3
FC-MAP            0efc03
FKA-ADV-Period    8
Fcf Priority      128
Config-State      ACTIVE
Oper-State        UP
Members
Fc 0/9
Te 0/11
Te 0/12
Table 30. show fcoe-map Field Descriptions
Field        Description
Fabric-Name  Name of a SAN fabric.
Fabric ID    The ID number of the SAN fabric to which FC traffic is forwarded.
VLAN ID      The dedicated VLAN used to transport FCoE storage traffic between servers and a fabric over the NPG. The configured VLAN ID must be the same as the fabric ID.
Table 31. show qos dcb-map Field Descriptions
Field     Description
State     Complete: All mandatory DCB parameters are correctly configured. In progress: The DCB map configuration is not complete. Some mandatory parameters are not configured.
PFC Mode  PFC configuration in the DCB map: On (enabled) or Off.
PG        Priority group configured in the DCB map.
TSA       Transmission scheduling algorithm used in the DCB map: Enhanced Transmission Selection (ETS).
Field   Description
Status  Operational status of the link between a server CNA port and a SAN fabric:
        Logged In: Server has logged in to the fabric and is able to transmit FCoE traffic.
Field        Description
Enode WWNN   Worldwide node name of the server CNA.
FCoE MAC     Fabric-provided MAC address (FPMA). The FPMA consists of the FC-MAP value in the FCoE map and the FC-ID provided by the fabric after a successful FLOGI. In the FPMA, the most significant bytes are the FC-MAP; the least significant bytes are the FC-ID.
FC-ID        FC port ID provided by the fabric.
LoginMethod  Method used by the server CNA to log in to the fabric; for example, FLOGI or FDISC.
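As a worked example of FPMA construction (this particular pairing of values is illustrative): if the FC-MAP in the FCoE map is 0efc00 and the fabric assigns FC-ID 02:04:03 at FLOGI, the resulting FPMA is 0e:fc:00:02:04:03, with the upper three bytes taken from the FC-MAP and the lower three bytes from the FC-ID.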
27 Upgrade Procedures To find the upgrade procedures, go to the Dell Networking OS Release Notes for your system type to see all the requirements needed to upgrade to the desired Dell Networking OS version. To upgrade your system type, follow the procedures in the Dell Networking OS Release Notes. Get Help with Upgrades Direct any questions or concerns about the Dell Networking OS upgrade procedures to the Dell Technical Support Center. You can reach Technical Support: • On the web: http://support.dell.
28 Debugging and Diagnostics
This chapter contains the following sections:
• Debugging Aggregator Operation
• Software Show Commands
• Offline Diagnostics
• Trace Logs
• Show Hardware Commands
Supported Modes: Standalone, PMUX, VLT
Debugging Aggregator Operation
This section describes common troubleshooting procedures to use for error conditions that may arise during Aggregator operation.
Te 0/11(Dwn) Te 0/12(Dwn) Te 0/13(Up)  Te 0/14(Dwn)
Te 0/15(Up)  Te 0/16(Up)  Te 0/17(Dwn) Te 0/18(Dwn)
Te 0/19(Dwn) Te 0/20(Dwn) Te 0/21(Dwn) Te 0/22(Dwn)
Te 0/23(Dwn) Te 0/24(Dwn) Te 0/25(Dwn) Te 0/26(Dwn)
Te 0/27(Dwn) Te 0/28(Dwn) Te 0/29(Dwn) Te 0/30(Dwn)
Te 0/31(Dwn) Te 0/32(Dwn)
2. Verify that the downstream port channel in the top-of-rack switch that connects to the Aggregator is configured correctly.
2. Assign the port to a specified group of VLANs (vlan tagged command) and re-display the port mode status.
Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#vlan tagged 2-5,100,4010
Dell(conf-if-te-0/1)#
Dell#show interfaces switchport tengigabitethernet 0/1
Codes: U - Untagged, T - Tagged
       x - Dot1x untagged, X - Dot1x tagged
       G - GVRP tagged, M - Trunk, H - VSN tagged
       i - Internal untagged, I - Internal tagged, v - VLT untagged, V - VLT tagged
Name: TenGigabitEthernet 0/1
802.
Hardware Rev   :
Num Ports      : 56
Up Time        : 17 hr, 8 min
FTOS Version   : 8-3-17-15
Jumbo Capable  : yes
POE Capable    : no
Boot Flash     : A: 4.0.1.0 [booted]  B: 4.0.1.0bt
Boot Selector  : 4.0.0.
Running Offline Diagnostics To run offline diagnostics, use the following commands. For more information, refer to the examples following the steps. 1. Place the unit in the offline state. EXEC Privilege mode offline stack-unit You cannot enter this command on a MASTER or Standby stack unit. NOTE: The system reboots when the offline diagnostics complete. This is an automatic process.
Auto Save on Crash or Rollover
Exception information for MASTER or standby units is stored in the flash:/TRACE_LOG_DIR directory. This directory contains files that save trace information when there has been a task crash or timeout.
• On a MASTER unit, you can reach the TRACE_LOG_DIR files by FTP or by using the show file command from the flash://TRACE_LOG_DIR directory.
• View input and output statistics on the party bus, which carries inter-process communication traffic between CPUs.
EXEC Privilege mode
show hardware stack-unit {0-5} cpu party-bus statistics
• View the ingress and egress internal packet-drop counters, MAC counter drops, and FP packet drops for the stack unit on a per-port basis.
EXEC Privilege mode
show hardware stack-unit {0-5} drops unit {0-0} port {33–56}
This view helps identify the stack unit, port pipe, and port that may experience internal drops.
SFP 49 Serial Base ID fields
SFP 49 Id                  = 0x03
SFP 49 Ext Id              = 0x04
SFP 49 Connector           = 0x07
SFP 49 Transceiver Code    = 0x00 0x00 0x00 0x01 0x20 0x40 0x0c 0x01
SFP 49 Encoding            = 0x01
SFP 49 BR Nominal          = 0x0c
SFP 49 Length(9um) Km      = 0x00
SFP 49 Length(9um) 100m    = 0x00
SFP 49 Length(50um) 10m    = 0x37
SFP 49 Length(62.
When the system detects a genuine over-temperature condition, it powers off the card. To recognize this condition, look for the following system messages: CHMGR-2-MAJOR_TEMP: Major alarm: chassis temperature high (temperature reaches or exceeds threshold of [value]C) CHMGR-2-TEMP_SHUTDOWN_WARN: WARNING! temperature is [value]C; approaching shutdown threshold of [value]C To view the programmed alarm thresholds levels, including the shutdown value, use the show alarms threshold command.
Troubleshoot an Under-Voltage Condition To troubleshoot an under-voltage condition, check that the correct number of power supplies are installed and their Status light emitting diodes (LEDs) are lit. The following table lists information for SNMP traps and OIDs, which provide information about environmental monitoring hardware and hardware components. Table 36. SNMP Traps and OIDs OID String OID Name Description chSysPortXfpRecvPower OID displays the receiving power of the connected optics.
Physical memory is organized into cells of 128 bytes. The cells are organized into two buffer pools — the dedicated buffer and the dynamic buffer. • • Dedicated buffer — this pool is reserved memory that other interfaces cannot use on the same ASIC or by other queues on the same interface. This buffer is always allocated, and no dynamic re-carving takes place based on changes in interface status. Dedicated buffers introduce a trade-off.
• Reduce the dedicated buffer on all queues/interfaces. • Increase the dynamic buffer on all interfaces. • Increase the cell pointers on a queue that you are expecting will receive the largest number of packets. To define, change, and apply buffers, use the following commands. • Define a buffer profile for the FP queues. CONFIGURATION mode buffer-profile fp fsqueue • Define a buffer profile for the CSF queues.
To display the default buffer profile, use the show buffer-profile {summary | detail} command from EXEC Privilege mode.
Example of Viewing the Default Buffer Profile
Dell#show buffer-profile detail interface tengigabitethernet 0/1
Interface tengig 0/1
Buffer-profile
Dynamic buffer 194.88 (Kilobytes)
Queue#  Dedicated Buffer (Kilobytes)  Buffer Packets
0       2.50                          256
1       2.50                          256
2       2.50                          256
3       2.50                          256
4       9.38                          256
5       9.38                          256
6       9.38                          256
7       9.
Using a Pre-Defined Buffer Profile
The Dell Networking OS provides two pre-defined buffer profiles, one for single-queue (for example, non-quality-of-service [QoS]) applications, and one for four-queue (for example, QoS) applications. You must reload the system for the global buffer profile to take effect; a message similar to the following displays:
% Info: For the global pre-defined buffer profile to take effect, please save the config and reload the system.
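A hedged sketch of enabling the pre-defined single-queue profile follows (the buffer-profile global 1Q command form is an assumption; verify the exact syntax for your release). As the message above notes, save the configuration and reload afterward.

Dell(conf)#buffer-profile global 1Q
Dell(conf)#exit
Dell#copy running-config startup-config
Dell#reload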
Troubleshooting Packet Loss The show hardware stack-unit command is intended primarily to troubleshoot packet loss. To troubleshoot packet loss, use the following commands.
3 0 0 0 0 0
4 0 0 0 0 0
5 0 0 0 0 0
6 0 0 0 0 0
7 0 0 0 0 0
8 0 0 0 0 0
Example of show hardware drops interface interface
Dell#show hardware drops interface tengigabitethernet 2/1
Drops in Interface Te 2/1:
--- Ingress Drops ---
Ingress Drops
IBP CBP Full Drops
PortSTPnotFwd Drops
IPv4 L3 Discards
Policy Discards
Packets dropped by FP
(L2+L3) Drops
Port bitmap zero Drops
Rx VLAN Drops
--- Ingress MAC counters---
Ingress FCSDrops
Ingress MTUExceeds
--- MMU Drops ---
Ingress MMU Drops
HOL DROPS(TOTAL)
HOL DR
Dataplane Statistics The show hardware stack-unit cpu data-plane statistics command provides insight into the packet types coming to the CPU. The command output in the following example has been augmented, providing detailed RX/ TX packet statistics on a per-queue basis. The objective is to see whether CPU-bound traffic is internal (so-called party bus or IPC traffic) or network control traffic, which the CPU must process.
Displaying Stack Port Statistics The show hardware stack-unit stack-port command displays input and output statistics for a stack-port interface.
--------------------------------------MCAST 3 0 Unit 1 unit: 3 port: 5 (interface Fo 1/148) --------------------------------------Q# TYPE Q# TOTAL BUFFERED CELLS --------------------------------------MCAST 3 0 Unit 1 unit: 3 port: 9 (interface Fo 1/152) --------------------------------------Q# TYPE Q# TOTAL BUFFERED CELLS --------------------------------------MCAST 3 0 Unit 1 unit: 3 port: 13 (interface Fo 1/156) --------------------------------------Q# TYPE Q# TOTAL BUFFERED CELLS -------------------------
UCAST 1  0
UCAST 2  0
UCAST 3  0
UCAST 4  0
UCAST 5  0
UCAST 6  0
UCAST 7  0
UCAST 8  0
UCAST 9  0
UCAST 10 0
UCAST 11 0
MCAST 0  0
MCAST 1  0
MCAST 2  0
MCAST 3  0
MCAST 4  0
MCAST 5  0
MCAST 6  0
MCAST 7  0
MCAST 8  0
Restoring the Factory Default Settings
Restoring factory defaults deletes the existing NVRAM settings, startup configuration, and all configured settings, such as stacking or fanout.
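As a hedged sketch (the exact keyword set varies by release; verify with the CLI help before use), restoring a standalone unit to factory defaults might look like the following. The unit reloads with factory settings after you confirm the operation.

Dell#restore factory-defaults stack-unit 0 clear-all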
29 Standards Compliance
This chapter describes standards compliance for Dell Networking products.
NOTE: Unless noted, when a standard cited here is listed as supported by the Dell Networking Operating System (OS), the system also supports predecessor standards. One way to search for predecessor standards is to use the http://tools.ietf.org/ website. Click "Browse and search IETF documents," enter an RFC number, and inspect the top of the resulting document for obsolescence citations to related RFCs.
General Internet Protocols The following table lists the Dell Networking OS support per platform for general internet protocols. Table 37.
RFC# Full Name 2131 Dynamic Host Configuration Protocol 2338 Virtual Router Redundancy Protocol (VRRP) 3021 Using 31-Bit Prefixes on IPv4 Point-to-Point Links 3046 DHCP Relay Agent Information Option 3069 VLAN Aggregation for Efficient IP Address Allocation 3128 Protection Against a Variant of the Tiny Fragment Attack Network Management The following table lists the Dell Networking OS support per platform for network management protocol. Table 39.
RFC#  Full Name
2576  Coexistence Between Version 1, Version 2, and Version 3 of the Internet-standard Network Management Framework
2578  Structure of Management Information Version 2 (SMIv2)
2579  Textual Conventions for SMIv2
2580  Conformance Statements for SMIv2
2618  RADIUS Authentication Client MIB, except the following four counters: radiusAuthClientInvalidServerAddresses, radiusAuthClientMalformedAccessResponses, radiusAuthClientUnknownTypes, radiusAuthClientPacketsDropped
3635  Definitions of Man
RFC# Full Name sFlow.org sFlow Version 5 sFlow.