Dell PowerEdge Configuration Guide for the M I/O Aggregator 9.8(0.0)
Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your computer.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

Copyright © 2015 Dell Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws.
Contents

1 About this Guide
    Audience
    Conventions
    Information Symbols

4 Data Center Bridging (DCB)
    Ethernet Enhancements in Data Center Bridging
    Priority-Based Flow Control
    Enhanced Transmission Selection
    Data Center Bridging Exchange Protocol (DCBx)

6 FIP Snooping
    Supported Modes
    Fibre Channel over Ethernet
    Ensuring Robustness in a Converged Ethernet Network

    Configuring a Static Route for a Management Interface
    VLAN Membership
    Default VLAN
    Port-Based VLANs

10 Isolated Networks for Aggregators
    Configuring and Verifying Isolated Network Settings

11 Link Aggregation
    Supported Modes

13 Link Layer Discovery Protocol (LLDP)
    Supported Modes
    Protocol Data Units
    Configure LLDP

    RADIUS Authentication
    Configuration Task List for RADIUS
    TACACS+
    Configuration Task List for TACACS+

    Stack Master Election
    Failover Roles
    MAC Addressing
    Stacking LAG

    Supported Modes
    Feature Description
    How Uplink Failure Detection Works
    UFD and NIC Teaming

    Zoning
    Creating Zone and Adding Members
    Creating Zone Alias and Adding Members
    Creating Zonesets

    Offline Diagnostics
    Important Points to Remember
    Running Offline Diagnostics
    Trace Logs
1 About this Guide
This guide describes the supported protocols and software features, and provides configuration instructions and examples, for the Dell Networking M I/O Aggregator running Dell Networking OS version 9.7(0.0). The M I/O Aggregator is installed in a Dell PowerEdge M1000e Enclosure. For information about how to install and perform the initial switch configuration, refer to the Getting Started Guides on the Dell Support website at http://www.dell.
Information Symbols This book uses the following information symbols. NOTE: The Note icon signals important operational information. CAUTION: The Caution icon signals information about situations that could result in equipment damage or loss of data. WARNING: The Warning icon signals information about hardware handling that could result in injury. * (Exception). This symbol is a note associated with additional text on the page that is marked with an asterisk.
2 Before You Start
To install the Aggregator in a Dell PowerEdge M1000e Enclosure, use the instructions in the Dell PowerEdge M I/O Aggregator Getting Started Guide that is shipped with the product. The I/O Aggregator (also known as the Aggregator) installs with zero-touch configuration. After you power it on, an Aggregator boots up with default settings and auto-configures with software features enabled.
Dell(conf)#stack-unit 0 iom-mode programmable-mux Select this mode to configure PMUX mode CLI commands. For more information on the PMUX mode, refer to PMUX Mode of the IO Aggregator. Stacking mode stack-unit unit iom-mode stack CONFIGURATION mode Dell(conf)#stack-unit 0 iom-mode stack Select this mode to configure Stacking mode CLI commands. For more information on the Stacking mode, refer to Stacking.
– The base-module ports operate in standalone 4x10GbE mode. You can configure these ports to operate in 40GbE stacking mode. When configured for stacking, you cannot use 40GbE base-module ports for uplinks.
– Ports on the 2-Port 40-GbE QSFP+ module operate only in 4x10GbE mode. You cannot use them for stacking.
– Ports on the 4-Port 10-GbE SFP+ and 4-Port 10GBASE-T modules operate only in 10GbE mode.
For more information about how ports are numbered, refer to Port Numbering.
On an Aggregator, the internal ports support FCoE connectivity and connect to the converged network adapters (CNAs) in servers. FCoE allows Fibre Channel to use 10-Gigabit Ethernet networks while preserving the Fibre Channel protocol. The Aggregator also provides zero-touch configuration for FCoE connectivity: it auto-configures to match the FCoE settings used in the switches to which it connects through its uplink ports. FIP snooping is automatically configured on an Aggregator.
Configuring VLANs
By default, in Standalone mode, all Aggregator ports belong to all 4094 VLANs and are members of untagged VLAN 1. To configure only the required VLANs on a port, use the CLI or CMC interface. You can configure VLANs only on server ports; the uplink LAG automatically inherits the VLANs based on the server ports' VLAN configuration.
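As a sketch of the CLI approach, the following session tags a server port into additional VLANs. The port and VLAN IDs are illustrative, and the vlan tagged / vlan untagged interface commands are assumed from this platform's CLI; verify the syntax against your release.

```
Dell#configure
Dell(conf)#interface tengigabitethernet 0/3
Dell(conf-if-te-0/3)#vlan tagged 10,20
Dell(conf-if-te-0/3)#vlan untagged 30
Dell(conf-if-te-0/3)#end
Dell#show vlan
```

After this change, the port carries only the listed VLANs, and the uplink LAG picks up the same VLAN membership automatically.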
For detailed information about how to reconfigure specific software settings, refer to the appropriate chapter.
3 Configuration Fundamentals The Dell Networking Operating System (OS) command line interface (CLI) is a text-based interface you can use to configure interfaces and protocols. The CLI is structured in modes for security and management purposes. Different sets of commands are available in each mode, and you can limit user access to modes using privilege levels. In Dell Networking OS, after you enable a command, it is entered into the running configuration file.
• EXEC mode is the default mode and has a privilege level of 1, which is the most restricted level. Only a limited selection of commands is available, notably the show commands, which allow you to view system information. • EXEC Privilege mode has commands to view configurations, clear counters, manage configuration files, run diagnostics, and enable or disable debug operations. The privilege level is 15, which is unrestricted. You can configure a password for this mode.
Table 1. Dell Command Modes
CLI Command Mode   Prompt        Access Command
EXEC               Dell>         Access the router through the console or Telnet.
EXEC Privilege     Dell#         • From EXEC mode, enter the enable command.
                                 • From any other mode, use the end command.
CONFIGURATION      Dell(conf)#   • From EXEC Privilege mode, enter the configure command.
                                 • From every mode except EXEC and EXEC Privilege, enter the exit command.
NOTE: Access all of the following modes from CONFIGURATION mode.
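A typical session that walks through the modes in Table 1 looks like the following; the interface chosen is illustrative.

```
Dell>enable
Dell#configure
Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#exit
Dell(conf)#end
Dell#
```

Note that exit moves up one mode at a time, while end returns directly to EXEC Privilege mode from any configuration mode.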
---------------------------------------------------------------------------------------
0  Member      not present
1  Management  online        I/O-Aggregator   I/O-Aggregator   8-3-17-38   56
2  Member      not present
3  Member      not present
4  Member      not present
5  Member      not present
Dell(conf)#
Undoing Commands
When you enter a command, the command line is added to the running configuration file (running-config). To disable a command and remove it from the running-config, enter the no command, then the original command.
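For example, assuming an interface that was administratively shut down, entering the no form re-enables the interface and removes the shutdown line from the running-config:

```
Dell(conf)#interface tengigabitethernet 0/2
Dell(conf-if-te-0/2)#shutdown
Dell(conf-if-te-0/2)#no shutdown
Dell(conf-if-te-0/2)#
```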
clear        Reset functions
clock        Manage the system clock
configure    Configuring from terminal
copy         Copy from one file to another
--More--
• Entering ? after a partial keyword lists all of the keywords that begin with the specified letters.
Dell(conf)#cl?
clock
Dell(conf)#cl
• Entering [space]? after a keyword lists all of the keywords that can follow the specified keyword.
Short-Cut Key Combination Action CNTL-U Deletes the line. CNTL-W Deletes the previous word. CNTL-X Deletes the line. CNTL-Z Ends continuous scrolling of command outputs. Esc B Moves the cursor back one word. Esc F Moves the cursor forward one word. Esc D Deletes all characters from the cursor to the end of the word. Command History Dell Networking OS maintains a history of previously-entered commands for each mode.
0 Pause Tx pkts, 0 Pause Rx pkts 0 Pause Tx pkts, 0 Pause Rx pkts NOTE: Dell accepts a space or no space before and after the pipe. To filter a phrase with spaces, underscores, or ranges, enclose the phrase with double quotation marks. The except keyword displays text that does not match the specified text. The following example shows this command used in combination with the show linecard all command.
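The show linecard example is not reproduced here; as two other hedged illustrations of the filters described above (output omitted, commands chosen for illustration):

```
Dell#show running-config | grep vlan
Dell#show interfaces | except down
```

The first displays only the running-config lines that contain "vlan"; the second suppresses every output line that contains "down".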
• On the system that telnets into the switch, this message appears: % Warning: The following users are currently configuring the system: User "" on line console0 • On the system that is connected over the console, this message appears: % Warning: User "" on line vty0 "10.11.130.
4 Data Center Bridging (DCB) On an I/O Aggregator, data center bridging (DCB) features are auto-configured in standalone mode. You can display information on DCB operation by using show commands. NOTE: DCB features are not supported on an Aggregator in stacking mode. Supported Modes Standalone, Stacking, PMUX, VLT Ethernet Enhancements in Data Center Bridging The following section describes DCB.
• • Storage traffic based on Fibre Channel media uses the SCSI protocol for data transfer. This traffic typically consists of large data packets with a payload of 2K bytes that cannot recover from frame loss. To successfully transport storage traffic, data center Ethernet must provide no-drop service with lossless links. Servers use InterProcess Communication (IPC) traffic within high-performance computing clusters to share information. Server traffic is extremely sensitive to latency requirements.
• PFC is supported on specified 802.1p priority traffic (dot1p 0 to 7) and is configured per interface. However, only two lossless queues are supported on an interface: one for Fibre Channel over Ethernet (FCoE) converged traffic and one for Internet Small Computer System Interface (iSCSI) storage traffic. Configure the same lossless queues on all ports.
The following figure shows how ETS allows you to allocate bandwidth when different traffic types are classed according to 802.1p priority and mapped to priority groups. Figure 2. Enhanced Transmission Selection The following table lists the traffic groupings ETS uses to select multiprotocol traffic for transmission. Table 2. ETS Traffic Groupings Traffic Groupings Description Priority group A group of 802.1p priorities used for bandwidth allocation and queue scheduling. All 802.
– Strict priority shaping
– ETS shaping
– (Credit-based shaping is not supported)
• ETS uses the DCB MIB IEEE 802.1azd2.5.
Data Center Bridging Exchange Protocol (DCBx)
The data center bridging exchange (DCBx) protocol is enabled by default on any switch on which PFC or ETS are enabled. DCBx allows a switch to automatically discover DCB-enabled peers and exchange configuration information. PFC and ETS use DCBx to exchange and negotiate parameters with peer devices.
Data Center Bridging in a Traffic Flow The following figure shows how DCB handles a traffic flow on an interface. Figure 3. DCB PFC and ETS Traffic Handling Enabling Data Center Bridging DCB is automatically configured when you configure FCoE or iSCSI optimization. Data center bridging supports converged enhanced Ethernet (CEE) in a data center network. DCB is disabled by default. It must be enabled to support CEE.
To enable DCB with PFC buffers on a switch, enter the following commands, save the configuration, and reboot the system to allow the changes to take effect. 1. Enable DCB. CONFIGURATION mode dcb enable 2. Set PFC buffering on the DCB stack unit. CONFIGURATION mode dcb stack-unit all pfc-buffering pfc-ports 64 pfc-queues 2 NOTE: To save the pfc buffering configuration changes, save the configuration and reboot the system.
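The complete sequence, including the save and reload that the NOTE requires, might look like the following; the reload confirmation prompt is omitted.

```
Dell#configure
Dell(conf)#dcb enable
Dell(conf)#dcb stack-unit all pfc-buffering pfc-ports 64 pfc-queues 2
Dell(conf)#end
Dell#copy running-config startup-config
Dell#reload
```

The PFC buffering change takes effect only after the system comes back up from the reload.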
Step Task Command Command Mode priority-pgid dot1p0_group_num dot1p1_group_num dot1p2_group_num dot1p3_group_num dot1p4_group_num dot1p5_group_num dot1p6_group_num dot1p7_group_num DCB MAP priority-group 2 bandwidth 20 pfc on priority-group 4 strict-priority pfc off Repeat this step to configure PFC and ETS traffic handling for each priority group. Specify the dot1p priority-to-priority group mapping for each priority. Priority-group range: 0 to 7.
Step Task Command Command Mode fortygigabitEthernet slot/port} 2 Apply the DCB map on the Ethernet port to configure it with the PFC and ETS settings in the map; for example: INTERFACE dcb-map name Dell# interface tengigabitEthernet 0/0 Dell(config-if-te-0/0)# dcb-map SAN_A_dcb_map1 Repeat Steps 1 and 2 to apply a DCB map to more than one port.
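Putting the dcb-map tasks together, the following sketch builds a map with a lossless priority group for dot1p 3 (the typical FCoE priority) and a lossy group for the remaining priorities, then applies it to a port. The map name, group numbers, bandwidths, and the DCB MAP mode prompt are illustrative.

```
Dell(conf)#dcb-map SAN_A_dcb_map1
Dell(conf-dcbmap-SAN_A_dcb_map1)#priority-group 0 bandwidth 60 pfc on
Dell(conf-dcbmap-SAN_A_dcb_map1)#priority-group 1 bandwidth 40 pfc off
Dell(conf-dcbmap-SAN_A_dcb_map1)#priority-pgid 1 1 1 0 1 1 1 1
Dell(conf-dcbmap-SAN_A_dcb_map1)#exit
Dell(conf)#interface tengigabitethernet 0/0
Dell(conf-if-te-0/0)#dcb-map SAN_A_dcb_map1
```

In the priority-pgid line, the fourth value maps dot1p priority 3 to priority group 0 (lossless); all other priorities map to group 1.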
Configuring Lossless Queues DCB also supports the manual configuration of lossless queues on an interface after you disable PFC mode in a DCB map and apply the map on the interface. The configuration of no-drop queues provides flexibility for ports on which PFC is not needed, but lossless traffic should egress from the interface. Lossless traffic egresses out the no-drop queues. Ingress 802.1p traffic from PFC-enabled peers is automatically mapped to the no-drop egress queues.
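A sketch of the manual lossless-queue configuration described above: PFC is turned off in the DCB map, and the no-drop queues are then set directly on the interface. The pfc no-drop queues syntax and the queue numbers are assumptions that should be verified against your release.

```
Dell(conf)#dcb-map pfc_off_map
Dell(conf-dcbmap-pfc_off_map)#priority-group 0 bandwidth 100 pfc off
Dell(conf-dcbmap-pfc_off_map)#priority-pgid 0 0 0 0 0 0 0 0
Dell(conf-dcbmap-pfc_off_map)#exit
Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#dcb-map pfc_off_map
Dell(conf-if-te-0/1)#pfc no-drop queues 1,2
```

With this configuration, the port sends no PFC pause frames, but traffic placed in the listed queues still egresses losslessly.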
Data Center Bridging: Default Configuration
Before you configure PFC and ETS on a switch, take the following default settings into account:
• DCB is enabled.
• PFC and ETS are globally enabled by default.
• The default dot1p priority-queue assignments are applied as follows:
Dell(conf)#do show qos dot1p-queue-mapping
Dot1p Priority : 0 1 2 3 4 5 6 7
Queue          : 0 0 0 1 2 3 3 3
Dell(conf)#
• PFC is not applied on specific dot1p priorities.
• Flow control enabled on input interfaces. • A DCB-MAP policy is applied with PFC disabled. The following example shows a default interface configuration with DCB disabled and link-level flow control enabled.
Enabling DCB on Next Reload To configure the Aggregator so that all interfaces come up with DCB enabled and flow control disabled, use the dcb enable on-next-reload command. Internal PFC buffers are automatically configured. Task Command Command Mode Globally enable DCB on all interfaces after next switch reload.
To configure PFC and apply a PFC input policy to an interface, follow these steps. 1. Create a DCB input policy to apply pause or flow control for specified priorities using a configured delay time. CONFIGURATION mode dcb-input policy-name The maximum is 32 alphanumeric characters. 2. Configure the link delay used to pause specified priority traffic. DCB INPUT POLICY mode pfc link-delay value One quantum is equal to a 512-bit transmission. The range (in quanta) is from 712 to 65535.
CONFIGURATION mode
interface type slot/port
8. Apply the input policy with the PFC configuration to an ingress interface.
INTERFACE mode
dcb-policy input policy-name
9. Repeat Steps 1 to 8 on all PFC-enabled peer interfaces to ensure lossless traffic service.
Dell Networking OS Behavior: As soon as you apply a DCB policy with PFC enabled on an interface, DCBx starts exchanging information with PFC-enabled peers. The IEEE 802.1Qbb, CEE, and CIN versions of PFC Type, Length, Value (TLV) are supported.
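Steps 1 through 9 can be compressed into one session. The pfc priority command shown for selecting the no-drop priorities is an assumption standing in for the intermediate steps not reproduced above, and the policy name, priorities, and port are placeholders.

```
Dell(conf)#dcb-input pfc_storage
Dell(conf-in-dcb-pfc_storage)#pfc link-delay 45556
Dell(conf-in-dcb-pfc_storage)#pfc priority 3,4
Dell(conf-in-dcb-pfc_storage)#exit
Dell(conf)#interface tengigabitethernet 0/3
Dell(conf-if-te-0/3)#dcb-policy input pfc_storage
```

Apply the same input policy on every PFC-enabled peer interface so pause behavior matches end to end.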
Traffic may be interrupted when you reconfigure PFC no-drop priorities in an input policy or reapply the policy to an interface. How Priority-Based Flow Control is Implemented Priority-based flow control provides a flow control mechanism based on the 802.1p priorities in converged Ethernet traffic received on an interface and is enabled by default.
2. Create a priority group of 802.1p traffic classes. 3. Configure a DCB output policy in which you associate a priority group with a QoS ETS output policy. 4. Apply the DCB output policy to an interface. How Enhanced Transmission Selection is Implemented Enhanced transmission selection (ETS) provides a way to optimize bandwidth allocation to outbound 802.1p classes of converged Ethernet traffic. Different traffic types have different service needs. Using ETS, groups within an 802.
– ETS is enabled by default with the default ETS configuration applied (all dot1p priorities in the same group with equal bandwidth allocation). ETS Operation with DCBx In DCBx negotiation with peer ETS devices, ETS configuration is handled as follows: • ETS TLVs are supported in DCBx versions CIN, CEE, and IEEE2.5. • ETS operational parameters are determined by the DCBX port-role configurations. • ETS configurations received from TLVs from a peer are validated.
Strict-priority groups: If two priority groups have strict-priority scheduling, traffic assigned from the priority group with the higher priority-queue number is scheduled first. However, when three priority groups are used and two groups have strict-priority scheduling (such as groups 1 and 3 in the example), the strict priority group whose traffic is mapped to one queue takes precedence over the strict priority group whose traffic is mapped to two queues.
propagates the configuration to other auto-upstream and auto-downstream ports. A port that receives an internally propagated configuration overwrites its local configuration with the new parameter values.
NOTE: On a DCBx port, application priority TLV advertisements are handled as follows: • The application priority TLV is transmitted only if the priorities in the advertisement match the configured PFC priorities on the port. • On auto-upstream and auto-downstream ports: – If a configuration source is elected, the ports send an application priority TLV based on the application priority TLV received on the configuration-source port.
– The port role is auto-upstream. – The port is enabled with link up and DCBx enabled. – The port has performed a DCBx exchange with a DCBx peer. – The switch is capable of supporting the received DCB configuration values through either a symmetric or asymmetric parameter exchange. A newly elected configuration source propagates configuration changes received from a peer to the other auto-configuration ports.
DCBx Example The following figure shows how DCBx is used on an Aggregator installed in a Dell PowerEdge M I/O Aggregator chassis in which servers are also installed. The external 40GbE ports on the base module (ports 33 and 37) of two switches are used for uplinks configured as DCBx auto-upstream ports. The Aggregator is connected to third-party, top-of-rack (ToR) switches through 40GbE uplinks. The ToR switches are part of a Fibre Channel storage network.
Figure 4. DCBx Sample Topology DCBx Prerequisites and Restrictions The following prerequisites and restrictions apply when you configure DCBx operation on a port: • DCBx requires LLDP in both send (TX) and receive (RX) modes to be enabled on a port interface. If multiple DCBx peer ports are detected on a local DCBx interface, LLDP is shut down.
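Because DCBx depends on LLDP being enabled in both send and receive modes on the port, the following sketch re-enables LLDP globally and on an interface, then checks the peer. LLDP is typically enabled by default, and the interface-level LLDP prompt shown is an assumption.

```
Dell(conf)#protocol lldp
Dell(conf-lldp)#no disable
Dell(conf-lldp)#exit
Dell(conf)#interface tengigabitethernet 0/4
Dell(conf-if-te-0/4)#protocol lldp
Dell(conf-if-te-0/4-lldp)#no disable
Dell(conf-if-te-0/4-lldp)#end
Dell#show lldp neighbors
```

If show lldp neighbors lists more than one peer on a DCBx port, expect LLDP to be shut down on that interface as described above.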
DCBx Error Messages The following syslog messages appear when an error in DCBx operation occurs. LLDP_MULTIPLE_PEER_DETECTED: DCBx is operationally disabled after detecting more than one DCBx peer on the port interface. LLDP_PEER_AGE_OUT: DCBx is disabled as a result of LLDP timing out on a DCBx peer interface. DSM_DCBx_PEER_VERSION_CONFLICT: A local port expected to receive the IEEE, CIN, or CEE version in a DCBx TLV from a remote peer but received a different, conflicting DCBx version.
Verifying the DCB Configuration
To display DCB configurations, use the following show commands.
Table 3. Displaying DCB Configurations
Command                              Output
show qos dot1p-queue-mapping         Displays the current 802.1p priority-queue mapping.
show qos dcb-map map-name            Displays the DCB parameters configured in a specified DCB map.
show dcb [stack-unit unit-number]    Displays the data center bridging status, number of PFC-enabled ports, and number of PFC-enabled queues.
Example of the show dcb Command
Dell# show dcb
stack-unit 0 port-set 0
DCB Status                             : Enabled
PFC Queue Count                        : 2
Total Buffer[lossy + lossless] (in KB) : 3822
PFC Total Buffer (in KB)               : 1912
PFC Shared Buffer (in KB)              : 832
PFC Available Buffer (in KB)           : 1080

Example of the show interface pfc statistics Command
Dell#show interfaces tengigabitethernet 0/3 pfc statistics
Interface TenGigabitEthernet 0/3
Priority   Rx XOFF Frames   Rx Total Frames   Tx Total Frames
---------------------------------------------------
PFC Link Delay 45556 pause quanta Application Priority TLV Parameters : -------------------------------------FCOE TLV Tx Status is disabled ISCSI TLV Tx Status is disabled Local FCOE PriorityMap is 0x8 Local ISCSI PriorityMap is 0x10 Remote FCOE PriorityMap is 0x8 Remote ISCSI PriorityMap is 0x8 0 Input TLV pkts, 1 Output TLV pkts, 0 Error pkts, 0 Pause Tx pkts, 0 Pause Rx pkts 2 Input Appln Priority TLV pkts, 0 Output Appln Priority TLV pkts, 0 Error Appln Priority TLV Pkts The following table describes th
Fields Description • Symmetric: for an IEEE version TLV Tx Status Status of PFC TLV advertisements: enabled or disabled. PFC Link Delay Link delay (in quanta) used to pause specified priority traffic. Application Priority TLV: FCOE TLV Tx Status Status of FCoE advertisements in application priority TLVs from local DCBx port: enabled or disabled. Application Priority TLV: ISCSI TLV Tx Status Status of ISCSI advertisements in application priority TLVs from local DCBx port: enabled or disabled.
TC-grp  Priority#         Bandwidth  TSA
0       0,1,2,3,4,5,6,7   100%       ETS
1                         0%         ETS
2                         0%         ETS
3                         0%         ETS
4                         0%         ETS
5                         0%         ETS
6                         0%         ETS
7                         0%         ETS

Priority#  Bandwidth  TSA
0          13%        ETS
1          13%        ETS
2          13%        ETS
3          13%        ETS
4          12%        ETS
5          12%        ETS
6          12%        ETS
7          12%        ETS

Remote Parameters:
-------------------
Remote is disabled

Local Parameters :
------------------
Local is enabled
TC-grp  Priority#         Bandwidth  TSA
0       0,1,2,3,4,5,6,7   100%       ETS
1                         0%         ETS
2                         0%         ETS
3                         0%         ETS
4                         0%         ETS
5                         0%         ETS
6                         0%         ETS
7                         0%         ETS

Priority#  Bandwidth  TSA
0          13%        ETS
1          13%        ETS
2          13%        ETS
3          13%        ETS
4          12%        ETS
5          12%        ETS
6          12%        ETS
7          12%        ETS

Remote Parameters:
-------------------
Remote is disabled

Local Parameters :
------------------
Local is enabled
PG-grp  Priority#         Bandwidth  TSA
0       0,1,2,3,4,5,6,7   100%       ETS
1                         0%         ETS
2                         0%         ETS
3                         0%         ETS
4                         0%         ETS
5                         0%         ETS
6                         0%         ETS
7                         0%         ETS

Oper status is init
ETS DCBX Oper status is Down
State Machine Type is Asymmetric
Conf TLV Tx Status is enabled
Reco TLV Tx Status is enabled
0 Input Conf TLV Pkts, 0 Output Conf TLV Pkts, 0 Error Conf TLV Pkts
0
Field Description Admin mode is enabled on the remote port for DCBx exchange, the Willing bit received in ETS TLVs from the remote peer is included. Local Parameters ETS configuration on local port, including Admin mode (enabled when a valid TLV is received from a peer), priority groups, assigned dot1p priorities, and bandwidth allocation. Operational status (local port) Port state for current operational ETS configuration: • • • Init: Local ETS configuration parameters were exchanged with peer.
Admin mode is On
Admin is enabled, Priority list is 4-5
Local is enabled, Priority list is 4-5
Link Delay 45556 pause quantum
0 Pause Tx pkts, 0 Pause Rx pkts

Example of the show stack-unit all stack-ports all ets details Command
Dell# show stack-unit all stack-ports all ets details
Stack unit 0 stack port all
Max Supported TC Groups is 4
Number of Traffic Classes is 1
Admin mode is on
Admin Parameters:
--------------------
Admin is enabled
TC-grp Priority# Bandwidth TSA
---------------------------------------
-Interface TenGigabitEthernet 0/4 Remote Mac Address 00:00:00:00:00:11 Port Role is Auto-Upstream DCBX Operational Status is Enabled Is Configuration Source? TRUE Local DCBX Compatibility mode is CEE Local DCBX Configured mode is CEE Peer Operating version is CEE Local DCBX TLVs Transmitted: ErPfi Local DCBX Status ----------------DCBX Operational Version is 0 DCBX Max Version Supported is 0 Sequence Number: 2 Acknowledgment Number: 2 Protocol State: In-Sync Peer DCBX Status: ---------------DCBX Operational
Field                            Description
Local DCBx Compatibility mode    DCBx version accepted in a DCB configuration as compatible. In auto-upstream mode, a port can only receive a DCBx version supported on the remote peer.
Local DCBx Configured mode       DCBx version configured on the port: CEE, CIN, IEEE v2.5, or Auto (port auto-configures to use the DCBx version received from a peer).
Peer Operating version           DCBx version that the peer uses to exchange DCB parameters.
Field Description PG TLV Statistics: Output PG TLV Pkts Number of PG TLVs transmitted. PG TLV Statistics: Error PG TLV Pkts Number of PG error packets received. Application Priority TLV Statistics: Input Appln Priority TLV pkts Number of Application TLVs received. Application Priority TLV Statistics: Output Appln Priority TLV pkts Number of Application TLVs transmitted. Application Priority TLV Statistics: Error Appln Priority TLV Pkts Number of Application TLV error packets received.
Troubleshooting PFC, ETS, and DCBx Operation In the show interfaces pfc | ets | dcbx output, the DCBx operational status may be down for any of the reasons described in the following table. When DCBx is down, the following values display in the show output field for DCBx Oper status: • PFC DCBx Oper status: Down • ETS DCBx Oper status: Down • DCBx Oper status: Disabled. Reason Description Port Shutdown Port is shut down. All other reasons for DCBx inoperation, if any, are ignored.
Reason                Description
                      • New dot1p-to-queue mapping violates the allowed system limit for PFC Enable status per priority
ETS is down (show     One of the following ETS-specific errors occurred in ETS validation:
interfaces ets        • Unsupported PGID
output)               • A priority group exceeds the maximum number of supported priorities.
                      • COSQ is mapped to more than one priority group.
                      • Invalid or unsupported transmission selection algorithm (TSA).
5 Dynamic Host Configuration Protocol (DHCP)
The Aggregator is auto-configured to operate as a dynamic host configuration protocol (DHCP) client. The DHCP server, DHCP relay agent, and secure DHCP features are not supported. DHCP is an application layer protocol that dynamically assigns IP addresses and other configuration parameters to network end-stations (hosts) based on configuration policies determined by network administrators.
4. After receiving a DHCPREQUEST, the server binds the client's unique identifier (the hardware address plus IP address) to the accepted configuration parameters and stores the data in a database called a binding table. The server then broadcasts a DHCPACK message, which signals to the client that it may begin using the assigned parameters. There are additional messages that are used in case the DHCP negotiation deviates from the process previously described and shown in the illustration below.
Figure 5. Assigning Network Parameters using DHCP Dell Networking OS Behavior: DHCP is implemented in Dell Networking OS based on RFC 2131 and 3046. Debugging DHCP Client Operation To enable debug messages for DHCP client operation, enter the following debug commands: • Enable the display of log messages for all DHCP packets sent and received on DHCP client interfaces.
[no] debug ip dhcp client events [interface type slot/port] The following example shows the packet- and event-level debug messages displayed for the packet transmissions and state transitions on a DHCP client interface.
Ma 0/0 :Transitioned to state STOPPED 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :DHCP IP RELEASED CMD sent to FTOS in state STOPPED Dell# renew dhcp int Ma 0/0 Dell#1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :DHCP RENEW CMD Received in state STOPPED 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_DBG_EVT: Interface Ma 0/0 :Transitioned to state SELECTING 1w2d23h: %STKUNIT0-M:CP %DHCLIENT-5-DHCLIENT-LOG: DHCLIENT_
DHCP Client An Aggregator is auto-configured to operate as a DHCP client. The DHCP client functionality is enabled only on the default VLAN and the management interface. A DHCP client is a network device that requests an IP address and configuration parameters from a DHCP server.
Important: To verify the currently configured dynamic IP address on an interface, enter the show ip dhcp lease command. The show running-configuration command output only displays ip address dhcp; the currently assigned dynamic IP address is not displayed. DHCP Client on a Management Interface These conditions apply when you enable a management interface to operate as a DHCP client. • The management default route is added with the gateway as the router IP address received in the DHCP ACK packet.
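A minimal sketch of enabling the DHCP client on the management interface and checking the lease. Per the note above, the assigned address appears in show ip dhcp lease rather than in the show running-configuration output.

```
Dell#configure
Dell(conf)#interface managementethernet 0/0
Dell(conf-if-ma-0/0)#ip address dhcp
Dell(conf-if-ma-0/0)#end
Dell#show ip dhcp lease
```

Once the DHCP ACK is received, the management default route is installed with the router IP address from the ACK as its gateway.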
DHCP Packet Format and Options DHCP uses the user datagram protocol (UDP) as its transport protocol. The server listens on port 67 and transmits to port 68; the client listens on port 68 and transmits to port 67. The configuration parameters are carried as options in the DHCP packet in Type, Length, Value (TLV) format; many options are specified in RFC 2132.
Option Number and Description • 4: DHCPDECLINE • 5: DHCPACK • 6: DHCPNACK • 7: DHCPRELEASE • 8: DHCPINFORM Parameter Request Option 55 List Clients use this option to tell the server which parameters it requires. It is a series of octets where each octet is DHCP option code. Renewal Time Option 58 Specifies the amount of time after the IP address is granted that the client attempts to renew its lease with the original server.
The relay agent strips Option 82 from DHCP responses before forwarding them to the client. To insert Option 82 into DHCP packets, follow this step. • Insert Option 82 into DHCP packets. CONFIGURATION mode int ma 0/0 ip add dhcp relay information-option remote-id For routers between the relay agent and the DHCP server, enter the trust-downstream option.
Example of the show ip dhcp client statistics Command Dell#show ip dhcp client statistics interface managementethernet 0/0 Interface Name Ma 0/0 Message Received DHCPOFFER 0 DHCPACK 0 DHCPNAK 0 Message Sent DHCPDISCOVER 1626 DHCPREQUEST 0 DHCPDECLINE 0 DHCPRELEASE 0 DHCPREBIND 0 DHCPRENEW 0 DHCPINFORM 0 Dell# Example of the show ip dhcp lease Command Dell# show ip dhcp Interface Lease-IP Def-Router ServerId State Lease Obtnd At Lease Expires At ========= ======== ========= ======== ===== ============== ====
FIP Snooping 6 This chapter describes about the FIP snooping concepts and configuration procedures. Supported Modes Standalone, PMUX, VLT Fibre Channel over Ethernet Fibre Channel over Ethernet (FCoE) provides a converged Ethernet network that allows the combination of storage-area network (SAN) and LAN traffic on a Layer 2 link by encapsulating Fibre Channel data into Ethernet frames.
FIP enables FCoE devices to discover one another, initialize and maintain virtual links over an Ethernet network, and access storage devices in a storage area network. FIP satisfies the Fibre Channel requirement for point-to-point connections by creating a unique virtual link for each connection between an FCoE end-device and an FCF via a transit switch. FIP provides a functionality for discovering and logging in to an FCF.
Figure 7. FIP discovery and login between an ENode and an FCF FIP Snooping on Ethernet Bridges In a converged Ethernet network, intermediate Ethernet bridges can snoop on FIP packets during the login process on an FCF. Then, using ACLs, a transit bridge can permit only authorized FCoE traffic to be transmitted between an FCoE end-device and an FCF. An Ethernet bridge that provides these functions is called a FIP snooping bridge (FSB).
• Global ACLs are applied on server-facing ENode ports. • Port-based ACLs are applied on ports directly connected to an FCF and on server-facing ENode ports. • Port-based ACLs take precedence over global ACLs. • FCoE-generated ACLs take precedence over user-configured ACLs. A user-configured ACL entry cannot deny FCoE and FIP snooping frames. The below illustration depicts an Aggregator used as a FIP snooping bridge in a converged Ethernet network. The ToR switch operates as an FCF for FCoE traffic.
Figure 8. FIP Snooping on an Aggregator The following sections describes how to configure the FIP snooping feature on a switch that functions as a FIP snooping bridge so that it can perform the following functions: • Performs FIP snooping (allowing and parsing FIP frames) globally on all VLANs or on a per-VLAN basis. • Set the FCoE MAC address prefix (FC-MAP) value used by an FCF to assign a MAC address to an ECoE end-device (server ENode or storage device) after a server successfully logs in.
• Process FIP VLAN discovery requests and responses, advertisements, solicitations, FLOGI/FDISC requests and responses, FLOGO requests and responses, keep-alive packets, and clear virtual-link messages. How FIP Snooping is Implemented As soon as the Aggregator is activated in an M1000e chassis as a switch-bridge, existing VLAN-specific and FIP snooping auto-configurations are applied. The Aggregator snoops FIP packets on VLANs enabled for FIP snooping and allows legitimate sessions.
• MTU auto-configuration: MTU size is set to mini-jumbo (2500 bytes) when a port is in Switchport mode, the FIP snooping feature is enabled on the switch, and the FIP snooping is enabled on all or individual VLANs. • Link aggregation group (LAG): FIP snooping is supported on port channels on ports on which PFC mode is on (PFC is operationally up).
To enable FCoE transit on the switch and configure the FCoE transit parameters on ports, follow these steps. 1. Enable the FCoE transit feature on a switch. CONFIGURATION mode. feature fip-snooping 2. Enable FIP snooping on all VLANs or on a specified VLAN. CONFIGURATION mode or VLAN INTERFACE mode. fip-snooping enable By default, FIP snooping is disabled on all VLANs. 3. Configure the FC-MAP value used by FIP snooping on all VLANs.
number (FC-ID), worldwide node name (WWNN) and the worldwide port name (WWPN). Information on NPIV sessions is also displayed. show fip-snooping config Displays the FIP snooping status and configured FC-MAP values. show fip-snooping enode [enodemac-address] Displays information on the ENodes in FIP-snooped sessions, including the ENode interface and MAC address, FCF MAC address, VLAN ID and FC-ID.
show fip-snooping sessions Command Description Field Description ENode MAC MAC address of the ENode. ENode Interface Slot/ port number of the interface connected to the ENode. FCF MAC MAC address of the FCF. FCF Interface Slot/ port number of the interface to which the FCF is connected. VLAN VLAN ID number used by the session. FCoE MAC MAC address of the FCoE session assigned by the FCF. FC-ID Fibre Channel ID assigned by the FCF. Port WWPN Worldwide port name of the CNA port.
FC-ID Fibre Channel session ID assigned by the FCF. show fip-snooping fcf Command Example Dell# show fip-snooping fcf FCF MAC FCF Interface No. of Enodes ------------------------------54:7f:ee:37:34:40 Po 22 2 VLAN FC-MAP FKA_ADV_PERIOD ---- ------ -------------- 100 0e:fc:00 4000 show fip-snooping fcf Command Description Field Description FCF MAC MAC address of the FCF. FCF Interface Slot/port number of the interface to which the FCF is connected. VLAN VLAN ID number used by the session.
Number of Session failures due to Hardware Config Dell(conf)# :0 Dell# show fip-snooping statistics int tengigabitethernet 0/11 Number of Vlan Requests :1 Number of Vlan Notifications :0 Number of Multicast Discovery Solicits :1 Number of Unicast Discovery Solicits :0 Number of FLOGI :1 Number of FDISC :16 Number of FLOGO :0 Number of Enode Keep Alive :4416 Number of VN Port Keep Alive :3136 Number of Multicast Discovery Advertisement :0 Number of Unicast Discovery Advertisement :0 Number of FLOGI Accepts
Number of Multicast Discovery Solicits Number of FIP-snooped multicast discovery solicit frames received on the interface. Number of Unicast Discovery Solicits Number of FIP-snooped unicast discovery solicit frames received on the interface. Number of FLOGI Number of FIP-snooped FLOGI request frames received on the interface. Number of FDISC Number of FIP-snooped FDISC request frames received on the interface. Number of FLOGO Number of FIP-snooped FLOGO frames received on the interface.
FCFs Enodes Sessions : 1 : 2 : 17 NOTE: NPIV sessions are included in the number of FIP-snooped sessions displayed. show fip-snooping vlan Command Example Dell# show fip-snooping vlan * = Default VLAN VLAN ---*1 100 FC-MAP -----0X0EFC00 FCFs ---1 Enodes -----2 Sessions -------17 NOTE: NPIV sessions are included in the number of FIP-snooped sessions displayed.
FIP Snooping Example The below illustration shows an Aggregator used as a FIP snooping bridge for FCoE traffic between an ENode (server blade) and an FCF (ToR switch). The ToR switch operates as an FCF and FCoE gateway. Figure 9. FIP Snooping on an Aggregator In tbe above figure, DCBX and PFC are enabled on the Aggregator (FIP snooping bridge) and on the FCF ToR switch. On the FIP snooping bridge, DCBX is configured as follows: • A server-facing port is configured for DCBX in an auto-downstream role.
The DCBX configuration on the FCF-facing port is detected by the server-facing port and the DCB PFC configuration on both ports is synchronized. For more information about how to configure DCBX and PFC on a port, refer to FIP Snooping After FIP packets are exchanged between the ENode and the switch, a FIP snooping session is established. ACLS are dynamically generated for FIP snooping on the FIP snooping bridge/switch.
Internet Group Management Protocol (IGMP) 7 On an Aggregator, IGMP snooping is auto-configured. You can display information on IGMP by using show ip igmp command. Multicast is based on identifying many hosts by a single destination IP address. Hosts represented by the same IP address are a multicast group. The internet group management protocol (IGMP) is a Layer 3 multicast protocol that hosts use to join or leave a multicast group.
Figure 10. IGMP Version 2 Packet Format Joining a Multicast Group There are two ways that a host may join a multicast group: it may respond to a general query from its querier, or it may send an unsolicited report to its querier. • Responding to an IGMP Query. – One router on a subnet is elected as the querier. The querier periodically multicasts (to allmulticast-systems address 224.0.0.1) a general query to all hosts on the subnet.
IGMP Version 3 Conceptually, IGMP version 3 behaves the same as version 2. However, there are differences: • Version 3 adds the ability to filter by multicast source, which helps the multicast routing protocols avoid forwarding traffic to subnets where there are no interested receivers. • To enable filtering, routers must keep track of more state information, that is, the list of sources that must be filtered.
Joining and Filtering Groups and Sources The below illustration shows how multicast routers maintain the group and source information from unsolicited reports. • The first unsolicited report from the host indicates that it wants to receive traffic for group 224.1.1.1. • The host’s second report indicates that it is only interested in traffic from group 224.1.1.1, source 10.11.1.1.
• The querier, before making any state changes, sends a group-and-source query to see if any other host is interested in these two sources; queries for state-changes are retransmitted multiple times. If any are interested, they respond with their current state information and the querier refreshes the relevant state information. • Separately in the below figure, the querier sends a general query to 224.0.0.1.
• Reports and Leaves are flooded by default to the uplink LAG irrespective of whether it is an mrouter port or not. Disabling Multicast Flooding If the switch receives a multicast packet that has an IP address of a group it has not learned (unregistered frame), the switch floods that packet out of all ports on the VLAN. To disable multicast flooding on all VLAN ports, enter the no ip igmp snooping flood command in global configuration mode.
Uptime Expires Router mode Last reporter Last reporter mode Last report received Group source list Source address 1.1.1.2 Member Ports: Po 1 00:00:21 Never INCLUDE 1.1.1.2 INCLUDE IS_INCL Interface Group Uptime Expires Router mode Last reporter Last reporter mode Last report received Group source list Source address 1.1.1.2 Member Ports: Po 1 Dell# Vlan 1600 226.0.0.1 00:00:04 Never INCLUDE 1.1.1.
8 Interfaces This chapter describes 100/1000/10000 Mbps Ethernet, 10 Gigabit Ethernet, and 40 Gigabit Ethernet interface types, both physical and logical, and how to configure them with the Dell Networking Operating Software (OS).
• • – The tagged Virtual Local Area Network (VLAN) membership of the uplink LAG is automatically configured based on the VLAN configuration of all server-facing ports (ports 1 to 32). The untagged VLAN used for the uplink LAG is always the default VLAN 1. – The tagged VLAN membership of a server-facing LAG is automatically configured based on the server-facing ports that are members of the LAG.
Server Port AdminState is Up Pluggable media not present Interface index is 71635713 Internet address is not set Mode of IP Address Assignment : NONE DHCP Client-ID :tenG2730001e800ab01 MTU 12000 bytes, IP MTU 11982 bytes LineSpeed 1000 Mbit Flowcontrol rx off tx off ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show interface" counters 11:04:02 Queueing strategy: fifo Input Statistics: 0 packets, 0 bytes 0 64-byte pkts, 0 over 64-byte pkts, 0 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byt
Disabling and Re-enabling a Physical Interface By default, all port interfaces on an Aggregator are operationally enabled (no shutdown) to send and receive Layer 2 traffic. You can reconfigure a physical interface to shut it down by entering the shutdown command. To re-enable the interface, enter the no shutdown command. Step Command Syntax Command Mode Purpose 1. interface interface CONFIGURATION Enter the keyword interface followed by the type of interface and slot/port information: 2.
auto vlan ! protocol lldp advertise management-tlv system-name dcbx port-role auto-downstream no shutdown Dell(conf-if-te-0/1)# To view the interfaces in Layer 2 mode, use the show interfaces switchport command in EXEC mode. Management Interfaces An Aggregator auto-configures with a DHCP-based IP address for in-band management on VLAN 1 and remote out-of-band (OOB) management. The Aggregator management interface has both a public IP and private IP address on the internal Fabric D interface.
Command Syntax Command Mode Purpose interface Managementethernet interface CONFIGURATION Enter the slot and the port (0). Slot range: 0-0 To configure an IP address on a management interface, use either of the following commands in MANAGEMENT INTERFACE mode: Command Syntax Command Mode Purpose ip address ip-address mask INTERFACE Configure an IP address and mask on the interface. • ip address dhcp INTERFACE ip-address mask: enter an address in dotted-decimal format (A.B.C.
172.31.1.0/24 ManagementEthernet 1/0 Connected Dell# VLAN Membership A virtual LAN (VLANs) is a logical broadcast domain or logical grouping of interfaces in a LAN in which all data received is kept locally and broadcast to all members of the group. In Layer 2 mode, VLANs move traffic at wire speed and can span multiple devices. Dell Networking OS supports up to 4093 port-based VLANs and one default VLAN, as specified in IEEE 802.1Q.
Interfaces within a port-based VLAN must be in Layer 2 mode and can be tagged or untagged in the VLAN ID. VLANs and Port Tagging To add an interface to a VLAN, it must be in Layer 2 mode. After you place an interface in Layer 2 mode, it is automatically placed in the default VLAN. Dell Networking OS supports IEEE 802.1Q tagging at the interface level to filter traffic. When you enable tagging, a tag header is added to the frame after the destination and source MAC addresses.
Command Syntax Command Mode Purpose vlan tagged {vlan-id | vlanrange} INTERFACE Add the interface as a tagged member of one or more VLANs, where: vlan-id specifies a tagged VLAN number. Range: 2-4094 vlan-range specifies a range of tagged VLANs. Separate VLAN IDs with a comma; specify a VLAN range with a dash; for example, vlan tagged 3,5-7.
i - Internal untagged, I - Internal tagged, v - VLT untagged, V - VLT tagged * NUM 1 20 1002 Dell# Status Inactive Active Description Q Ports U Po32() U Te 0/3,5,13,53-56 T Te 0/3,13,55-56 Active NOTE: A VLAN is active only if the VLAN contains interfaces and those interfaces are operationally up. In the above example, VLAN 1 is inactive because it does not contain any interfaces. The other VLANs listed contain enabled interfaces and are active.
Dell(conf)# exit Dell#00:23:49: %STKUNIT0-M:CP %SYS-5-CONFIG_I: Configured from console Dell# show vlan id 4 Codes: * - Default VLAN, G - GVRP VLANs, R - Remote Port Mirroring VLANs, P Primary, C - Community, I - Isolated Q: U - Untagged, T - Tagged x - Dot1x untagged, X - Dot1x tagged G - GVRP tagged, M - Vlan-stack, H - VSN tagged i - Internal untagged, I - Internal tagged, v - VLT untagged, V - VLT tagged, C - CMC tagged NUM 4 Status Active Description Dell# Q U T T Ports Po1(Te 0/16) Po128(Te 0/3
Dell(conf-if-po-128)#portmode hybrid Dell(conf-if-po-128)#switchport 5. Configure the tagged VLANs 10 through 15 and untagged VLAN 20 on this port-channel. Dell(conf-if-po-128)#vlan tagged 10-15 Dell(conf-if-po-128)# Dell(conf-if-po-128)#vlan untagged 20 6. Show the running configurations on this port-channel. Dell(conf-if-po-128)#show config ! interface Port-channel 128 portmode hybrid switchport vlan tagged 10-15 vlan untagged 20 shutdown Dell(conf-if-po-128)#end Dell# 7.
Port Channel Interfaces On an Aggregator, port channels are auto-configured as follows: • • All 10GbE uplink interfaces (ports 33 to 56) are auto-configured to belong to the same 10GbE port channel (LAG 128). Server-facing interfaces (ports 1 to 32) auto-configure in LAGs (1 to 127) according to the NIC teaming configuration on the connected servers. Port channel interfaces support link aggregation, as described in IEEE Standard 802.3ad. .
Member ports of a LAG are added and programmed into hardware in a predictable order based on the port ID, instead of in the order in which the ports come up. With this implementation, load balancing yields predictable results across switch resets and chassis reloads. A physical interface can belong to only one port channel at a time. Each port channel must contain interfaces of the same interface type/speed. Port channels can contain a mix of 1000 or 10000 Mbps Ethernet interfaces .
Server-Facing Port Channel: VLAN Membership The tagged VLAN membership of a server-facing LAG is automatically configured based on the serverfacing ports that are members of the LAG. The untagged VLAN of a server-facing LAG is auto-configured based on the untagged VLAN to which the lowest numbered server-facing port in the LAG belongs.
Rate info (interval 299 seconds): Input 00.00 Mbits/sec, 1 packets/sec, 0.00% of line-rate Output 34.00 Mbits/sec, 12318 packets/sec, 0.
To display the running configuration only for interfaces that are part of interface range, use the show configuration command in Interface Range mode.
Dell(conf-if-range-te-0/1-23)# no shutdown Dell(conf-if-range-te-0/1-23)# Monitor and Maintain Interfaces You can display interface statistics with the monitor interface command. This command displays an ongoing list of the interface status (up/down), number of packets, traffic statistics, etc. Command Syntax Command Mode Purpose monitor interface interface EXEC Privilege View interface statistics.
Input IP checksum: Input overrun: Output underruns: Output throttles: m l T q - 0 0 0 0 0 0 0 0 Change mode Page up Increase refresh interval Quit pps pps pps pps 0 0 0 0 c - Clear screen a - Page down t - Decrease refresh interval Maintenance Using TDR The time domain reflectometer (TDR) is supported on all Dell Networking switch/routers. TDR is an assistance tool to resolve link issues that helps detect obvious open or short conditions within any of the four copper pairs.
• When DCB is disabled on an interface, PFC, ETS, and DCBX are also disabled. • When DCBX protocol packets are received, interfaces automatically enable DCB and disable link level flow control. • DCB is required for PFC, ETS, DCBX, and FCoE initialization protocol (FIP) snooping to operate. Link-level flow control uses Ethernet pause frames to signal the other end of the connection to pause data transmission for a certain amount of time as specified in the frame.
– rx off: enter the keywords rx off to ignore the received flow control frames on this port. – tx on: enter the keywords tx on to send control frames from this port to the connected device when a higher rate of traffic is received. – tx off: enter the keywords tx off so that flow control frames are not sent from this port to the connected device when a higher rate of traffic is received. – negotiate: enable pause-negotiation with the egress port of the peer device.
VLANs: • All members of a VLAN must have the same IP MTU value. • Members can have different link MTU values. Tagged members must have a link MTU 4 bytes higher than untagged members to account for the packet tag. • The VLAN link MTU and IP MTU must be less than or equal to the link MTU and IP MTU values configured on the VLAN members. For example, the VLAN contains tagged members with a link MTU of 1522 and an IP MTU of 1500 and untagged members with a link MTU of 1518 and an IP MTU of 1500.
7. Disable auto-negotiation on the port. If the speed is set to 1000, you do not need to disable autonegotiation no negotiation auto INTERFACE 8. Verify configuration changes. show config INTERFACE NOTE: The show interfaces status command displays link status, but not administrative status. For link and administrative status, use the show ip interface [interface | brief] [configuration] command.
Auto-Negotiation, Speed, and Duplex Settings on Different Optics: Command Mode 10GbaseT 10G SFP+ module optics 1G SFP optics Copper SFP Comments - 1000baseT speed 100 interfaceconfig mode Supported Not supported( Error message is thrown) (% Error: Speed 100 not supported on this interface, config ignored Te 0/49) Not supported(Error message is thrown) (% Error: Speed 100 not supported on this interface, config ignored Te 0/49) % Error: Speed 100 not supported on this interface, speed auto interfa
Setting Auto-Negotiation Options: Dell(conf)# int tengig 0/1 Dell(conf-if-te-0/1)#neg auto Dell(conf-if-autoneg)# ? end Exit from configuration mode exit Exit from autoneg configuration mode mode Specify autoneg mode no Negate a command or set its defaults show Show autoneg configuration information Dell(conf-if-autoneg)#mode ? forced-master Force port to master mode forced-slave Force port to slave mode Dell(conf-if-autoneg)# Viewing Interface Information Displaying Non-Default Configurations.
Name: TenGigabitEthernet 13/3 802.1QTagged: True Vlan membership: Vlan 2 --More-- Clearing Interface Counters The counters in the show interfaces command are reset by the clear counters command. This command does not clear the counters captured by any SNMP program.
running-configuration command to verify that this TLV is advertised on all the configured interfaces and the show lldp neighbors detail command to view the value of this TLV. Enhanced Validation of Interface Ranges You can avoid specifying spaces between the range of interfaces, separated by commas, that you configure by using the interface range command. For example, if you enter a list of interface ranges, such as interface range fo 2/0-1,te 10/0,gi 3/0,fa 0/0, this configuration is considered valid.
iSCSI Optimization 9 An Aggregator enables internet small computer system interface (iSCSI) optimization with default iSCSI parameter settings(Default iSCSI Optimization Values) and is auto-provisioned to support: iSCSI Optimization: Operation To display information on iSCSI configuration and sessions, use show commands. iSCSI optimization enables quality-of-service (QoS) treatment for iSCSI traffic.
• • • • If you configured flow-control, iSCSI uses the current configuration. If you did not configure flowcontrol, iSCSI auto-configures flow control. iSCSI monitoring sessions — the switch monitors and tracks active iSCSI sessions in connections on the switch, including port information and iSCSI session information. iSCSI QoS — A user-configured iSCSI class of service (CoS) profile is applied to all iSCSI traffic.
You can configure the switch to monitor traffic for additional port numbers or a combination of port number and target IP address, and you can remove the well-known port numbers from monitoring.
iSCSI Optimization: Operation When the Aggregator auto-configures with iSCSI enabled, the following occurs: • Link-level flow control is enabled on PFC disabled interfaces. • iSCSI session snooping is enabled. • iSCSI LLDP monitoring starts to automatically detect EqualLogic arrays. iSCSI optimization requires LLDP to be enabled. LLDP is enabled by default when an Aggregator autoconfigures.
3260 860 show iscsi sessions Command Example Dell# show iscsi sessions Session 0: ---------------------------------------------------------------------------------------Target: iqn.2001-05.com.equallogic:0-8a0906-0e70c2002-10a0018426a48c94-iom010 Initiator: iqn.1991-05.com.microsoft:win-x9l8v27yajg ISID: 400001370000 Session 1: ---------------------------------------------------------------------------------------Target: iqn.2001-05.com.equallogic:0-8a0906-0f60c2002-0360018428d48c94-iom011 Initiator: iqn.
Isolated Networks for Aggregators 10 An Isolated Network is an environment in which servers can only communicate with the uplink interfaces and not with each other even though they are part of same VLAN. If the servers in the same chassis need to communicate with each other, it requires a non-isolated network connectivity between them or it needs to be routed in the TOR. Isolated Networks can be enabled on per VLAN basis.
Link Aggregation 11 Unlike IOA Automated modes (Standalone and VLT modes), the IOA Programmable MUX (PMUX) can support multiple uplink LAGs. You can provision multiple uplink LAGs. The I/O Aggregator autoconfigures with link aggregation groups (LAGs) as follows: • All uplink ports are automatically configured in a single port channel (LAG 128).
NOTE: In Standalone, VLT, and Stacking modes, you can configure a maximum of 16 members in port-channel 128. In PMUX mode, you can have multiple port-channels with up to 16 members per channel. Uplink LAG When the Aggregator power is on, all uplink ports are configured in a single LAG (LAG 128). Server-Facing LAGs Server-facing ports are configured as individual ports by default.
Auto-Configured LACP Timeout LACP PDUs are exchanged between port channel (LAG) interfaces to maintain LACP sessions. LACP PDUs are transmitted at a slow or fast transmission rate, depending on the LACP timeout value configured on the partner system. The timeout value is the amount of time that a LAG interface waits for a PDU from the partner system before bringing the LACP session down. The default timeout is long-timeout (30 seconds) and is not user-configurable on the Aggregator.
LACP Example The below illustration shows how the LACP operates in an Aggregator stack by auto-configuring the uplink LAG 128 for the connection to a top of rack (ToR) switch and a server-facing LAG for the connection to an installed server that you configured for LACP-based NIC teaming. Figure 17.
Link Aggregation Control Protocol (LACP) The commands for Dell Networks’s implementation of the link aggregation control protocol (LACP) for creating dynamic link aggregation groups (LAGs) — known as port-channels in the Dell Networking OS — are provided in the following sections. NOTE: For static LAG commands, refer to the Interfaces chapter), based on the standards specified in the IEEE 802.3 Carrier sense multiple access with collision detection (CSMA/CD) access method and physical layer specifications.
Adding a Physical Interface to a Port Channel The physical interfaces in a port channel can be on any line card in the chassis, but must be the same physical type. NOTE: Port channels can contain a mix of Gigabit Ethernet and 10/100/1000 Ethernet interfaces, but Dell Networking OS disables the interfaces that are not the same speed of the first channel member in the port channel. You can add any physical interface to a port channel if the interface configuration is minimal.
Dell# Te 0/10 (Up) Te 0/11 (Up) The following example shows the port channel’s mode (L2 for Layer 2 and L3 for Layer 3 and L2L3 for a Layer 2-port channel assigned to a routed VLAN), the status, and the number of interfaces belonging to the port channel. Example of the show interface port-channel Command Dell>show interface port-channel 20 Port-channel 20 is up, line protocol is up Hardware address is 00:01:e8:01:46:fa Internet address is 1.1.120.
no shutdown link-bundle-monitor enable Dell(conf-if-po-128)# Reassigning an Interface to a New Port Channel An interface can be a member of only one port channel. If the interface is a member of a port channel, remove it from the first port channel and then add it to the second port channel. Each time you add or remove a channel member from a port channel, Dell Networking OS recalculates the hash algorithm for the port channel. To reassign an interface to a new port channel, use the following commands. 1.
• Enter the number of links in a LAG that must be in “oper up” status. INTERFACE mode minimum-links number The default is 1. Example of Configuring the Minimum Oper Up Links in a Port Channel Dell#config t Dell(conf)#int po 1 Dell(conf-if-po-1)#minimum-links 5 Dell(conf-if-po-1)# Configuring VLAN Tags for Member Interfaces To configure and verify VLAN tags for individual members of a port channel, perform the following: 1.
Deleting or Disabling a Port Channel To delete or disable a port channel, use the following commands. • Delete a port channel. CONFIGURATION mode • no interface portchannel channel-number Disable a port channel. shutdown When you disable a port channel, all interfaces within the port channel are operationally down also. Configuring Auto LAG You can enable or disable auto LAG on the server-facing interfaces. By default, auto LAG is enabled.
show io-aggregator auto-lag status Dell# show io-aggregator auto-lag status Auto LAG creation on server port(s) is enabled For the interface level auto LAG configurations, use the show interface command.
Dell(config-if-te-0/1)# show config ! interface TenGigabitEthernet 0/1 mtu 12000 portmode hybrid switchport no auto-lag enable ! protocol lldp advertise management-tlv management-address system-name dcbx port-role auto-downstream no shutdown Dell# Configuring the Minimum Number of Links to be Up for Uplink LAGs to be Active You can activate the LAG bundle for uplink interfaces or ports (the uplink port-channel is LAG 128) on the I/O Aggregator only when a minimum number of member interfaces of the LAG bund
Input Statistics: 0 packets, 0 bytes 0 64-byte pkts, 0 over 64-byte pkts, 0 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 0 Multicasts, 0 Broadcasts 0 runts, 0 giants, 0 throttles 0 CRC, 0 overrun, 0 discarded Output Statistics: 0 packets, 0 bytes, 0 underruns 0 64-byte pkts, 0 over 64-byte pkts, 0 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 0 Multicasts, 0 Broadcasts, 0 Unicasts 0 throttles, 0 discarded, 0 collisions, 0 wreddro
The following log message appears when LACP link fallback is enabled: Feb 26 15:53:32: %STKUNIT0-M:CP IFMGR-5-NO_LACP_PDU_RECEIVED_FROM_PEER: Connectivity to PEER is restricted because LACP PDU's are not received.
Table 8.
Mode of IP Address Assignment : NONE DHCP Client-ID :lag1280001e8e1e1c1 MTU 12000 bytes, IP MTU 11982 bytes LineSpeed 40000 Mbit Members in this channel: Te 0/41(U) Te 0/42(U) Te 0/43(U) Te 0/44(U) ARP type: ARPA, ARP Timeout 04:00:00 Last clearing of "show interface" counters 00:11:50 Queueing strategy: fifo Input Statistics: 182 packets, 17408 bytes 92 64-byte pkts, 0 over 64-byte pkts, 90 over 127-byte pkts 0 over 255-byte pkts, 0 over 511-byte pkts, 0 over 1023-byte pkts 182 Multicasts, 0 Broadcasts 0 r
Port Te 0/44 is enabled, LACP is enabled and mode is lacp Port State: Bundle Actor Admin: State ADEHJLMP Key 128 Priority 32768 Oper: State ADEGIKNP Key 128 Priority 32768 Partner Admin: State BDFHJLMP Key 0 Priority 0 Oper: State ACEGIKNP Key 128 Priority 32768 Port Te 0/45 is disabled, LACP is disabled and mode is lacp Port State: Bundle Actor Admin: State ADEHJLMP Key 128 Priority 32768 Oper: State ADEHJLMP Key 128 Priority 32768 Partner is not present Port Te 0/46 is disabled, LACP is disabled and mode
           Oper: State ADEHJLMP Key 128 Priority 32768
  Partner is not present

Port Te 0/55 is disabled, LACP is disabled and mode is lacp
  Port State: Bundle
  Actor   Admin: State ADEHJLMP Key 128 Priority 32768
           Oper: State ADEHJLMP Key 128 Priority 32768
  Partner is not present

Port Te 0/56 is disabled, LACP is disabled and mode is lacp
  Port State: Bundle
  Actor   Admin: State ADEHJLMP Key 128 Priority 32768
           Oper: State ADEHJLMP Key 128 Priority 32768
  Partner is not present

show interfaces port-channel 1 Command Example
Dell
E - Aggregatable Link, F - Individual Link, G - IN_SYNC, H - OUT_OF_SYNC
I - Collection enabled, J - Collection disabled, K - Distribution enabled
L - Distribution disabled, M - Partner Defaulted, N - Partner Non-defaulted
O - Receiver is in expired state, P - Receiver is not in expired state

Port Te 0/12 is enabled, LACP is enabled and mode is lacp
  Port State: Bundle
  Actor   Admin: State ADEHJLMP Key 1 Priority 32768
           Oper: State ADEGIKNP Key 1 Priority 32768
  Partner Admin: State BDFHJLMP Key 0 Priority 0
           Oper:
L  11  L3  up  00:00:01  Te 0/42 (Up)
                         Te 0/43 (Up)
Dell#
4. Configure the port mode, VLAN, and so forth on the port-channel.
The following sample commands configure multiple dynamic uplink LAGs with 40G member ports based on LACP.
1. Convert the quad mode (4x10G) ports to native 40G mode.
Dell#configure
Dell(conf)#no stack-unit 0 port 33 portmode quad
Disabling quad mode on stack-unit 0 port 33 will make interface configs of
Te 0/33 Te 0/34 Te 0/35 Te 0/36 obsolete after a save and reload.
[confirm yes/no]:yes
Please save and reset unit 0 for the changes to take effect.
Dell(conf-if-po-21)#switchport
Dell(conf-if-po-21)#no shut
Dell(conf-if-po-21)#end
Dell#
5. Show the port channel status.
12 Layer 2

The Aggregator supports CLI commands to manage the MAC address table:
• Clearing the MAC Address Entries
• Displaying the MAC Address Table
The Aggregator auto-configures with support for Network Interface Controller (NIC) Teaming.
NOTE: On an Aggregator, all ports are configured by default as members of all (4094) VLANs, including the default VLAN. All VLANs operate in Layer 2 mode.
clear mac-address-table dynamic {address | all | interface | vlan}
• address: deletes the specified entry.
• all: deletes all dynamic entries.
• interface: deletes all entries for the specified interface.
• vlan: deletes all entries for the specified VLAN.

Displaying the MAC Address Table
To display the MAC address table, use the following command.
• Display the contents of the MAC address table.
  EXEC Privilege mode
NOTE: This command is available only in PMUX mode.
Figure 18. Redundant NOCs with NIC Teaming MAC Address Station Move When you use NIC teaming, consider that the server MAC address is originally learned on Port 0/1 of the switch (see figure below). If the NIC fails, the same MAC address is learned on Port 0/5 of the switch. The MAC address is disassociated with one port and re-associated with another in the ARP table; in other words, the ARP entry is “moved”. The Aggregator is auto-configured to support MAC Address station moves.
Figure 19. MAC Address Station Move MAC Move Optimization Station-move detection takes 5000ms because this is the interval at which the detection algorithm runs.
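The station-move behavior described above can be modeled as a forwarding table keyed by MAC address, where relearning a known address on a different port replaces the old association. The following is an illustrative sketch only (not Dell OS source code); the class and MAC/port values are invented for the example.

```python
# Hypothetical model of MAC station moves: a Layer 2 table keyed by MAC,
# where learning an already-known MAC on a new port "moves" the station.
class MacTable:
    def __init__(self):
        self.entries = {}  # mac -> port

    def learn(self, mac, port):
        """Learn a source MAC; return True if this is a station move."""
        moved = mac in self.entries and self.entries[mac] != port
        self.entries[mac] = port
        return moved

table = MacTable()
table.learn("00:1e:c9:f1:00:01", "Te 0/1")            # NIC team primary learned
moved = table.learn("00:1e:c9:f1:00:01", "Te 0/5")    # NIC failover: same MAC, new port
print(moved, table.entries["00:1e:c9:f1:00:01"])      # True Te 0/5
```

On a real switch the relearning happens in hardware as frames arrive on the new port; this sketch only illustrates why the old port association disappears.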
13 Link Layer Discovery Protocol (LLDP)

Link layer discovery protocol (LLDP) advertises connectivity and management information about the local station to adjacent stations on an IEEE 802 LAN. LLDP facilitates multi-vendor interoperability by using standard management tools to discover and make available a physical topology for network management. The Dell Networking operating software implementation of LLDP is based on the IEEE 802.1AB standard.
Figure 20. Type, Length, Value (TLV) Segment

TLVs are encapsulated in a frame called an LLDP data unit (LLDPDU), which is transmitted from one LLDP-enabled device to its LLDP-enabled neighbors. LLDP is a one-way protocol. LLDP-enabled devices (LLDP agents) can transmit and/or receive advertisements, but they cannot solicit and do not respond to advertisements. There are five types of TLVs (as shown in the table below). All types are mandatory in the construction of an LLDPDU except Optional TLVs.
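Per IEEE 802.1AB, each TLV carries a 16-bit header in which the upper 7 bits are the type and the lower 9 bits are the value length, followed by the value itself. The sketch below encodes and decodes that header; it is an illustration of the wire format, not code from any LLDP implementation.

```python
import struct

def encode_tlv(tlv_type: int, value: bytes) -> bytes:
    """Pack an LLDP TLV: a 7-bit type and 9-bit length share one 16-bit header."""
    assert 0 <= tlv_type < 128 and len(value) < 512
    header = (tlv_type << 9) | len(value)
    return struct.pack("!H", header) + value   # big-endian header, then value

def decode_tlv(data: bytes):
    """Unpack the 16-bit header and return (type, value)."""
    (header,) = struct.unpack("!H", data[:2])
    tlv_type, length = header >> 9, header & 0x1FF
    return tlv_type, data[2:2 + length]

tlv = encode_tlv(5, b"Dell")   # type 5 = System Name TLV
print(decode_tlv(tlv))         # (5, b'Dell')
```

An LLDPDU is simply a sequence of such TLVs terminated by the End-of-LLDPDU TLV (type 0, length 0).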
Figure 21. LLDPDU Frame

Configure LLDP
Configuring LLDP is a two-step process.
1. Enable LLDP globally.
2. Advertise TLVs out of an interface.
Related Configuration Tasks
• Viewing the LLDP Configuration
• Viewing Information Advertised by Adjacent LLDP Agents
• Configuring LLDPDU Intervals
• Configuring a Time to Live
• Debugging LLDP
Important Points to Remember
• LLDP is enabled by default.
• Dell Networking systems support up to eight neighbors per interface.
advertise     Advertise TLVs
disable       Disable LLDP protocol globally
end           Exit from configuration mode
exit          Exit from LLDP configuration mode
hello         LLDP hello configuration
mode          LLDP mode configuration (default = rx and tx)
multiplier    LLDP multiplier configuration
no            Negate a command or set its defaults
show          Show LLDP configuration
Dell(conf-lldp)#exit
Dell(conf)#interface tengigabitethernet 0/3
Dell(conf-if-te-0/3)#protocol lldp
Dell(conf-if-te-0/3-lldp)#?
advertise     Advertise TLVs
disable       Disable LLDP protocol
Advertising TLVs
You can configure the system to advertise TLVs out of all interfaces or out of specific interfaces.
• If you configure the system globally, all interfaces send LLDPDUs with the specified TLVs.
• If you configure an interface, only the interface sends LLDPDUs with the specified TLVs.
• If you configure LLDP both globally and at interface level, the interface-level configuration overrides the global configuration.
To advertise TLVs, use the following commands.
1. Enter LLDP mode.
Figure 22. Configuring LLDP

Optional TLVs
The Dell Networking Operating System (OS) supports the following optional TLVs: Management TLVs, IEEE 802.1 and 802.3 organizationally specific TLVs, and TIA-1057 organizationally specific TLVs.
Management TLVs
A management TLV is an optional TLV subtype. This kind of TLV contains essential management information about the sender.
Organizationally Specific TLVs
A professional organization or a vendor can define organizationally specific TLVs.
IEEE Organizationally Specific TLVs
Eight TLV types have been defined by the IEEE 802.1 and 802.3 working groups as a basic part of LLDP; the IEEE OUI is 00-80-C2. You can configure the Dell Networking system to advertise any or all of these TLVs.

Table 9. Optional TLV Types
Type  TLV               Description
4     Port description  A user-defined alphanumeric string that describes the port. The Dell Networking OS does not currently support this TLV.
...Networking OS does not currently support this TLV.

IEEE 802.3 Organizationally Specific TLVs
Type  TLV                           Description
127   MAC/PHY Configuration/Status  Indicates the capability and current setting of the duplex status and bit rate, and whether the current settings are the result of auto-negotiation. This TLV is not available in the Dell Networking OS implementation of LLDP, but is available and mandatory (non-configurable) in the LLDP-MED implementation.
Figure 24. LLDP-MED Capabilities TLV

Table 10. Dell Networking OS LLDP-MED Capabilities
Bit Position  TLV                         Dell Networking OS Support
0             LLDP-MED Capabilities       Yes
1             Network Policy              Yes
2             Location Identification     Yes
3             Extended Power via MDI-PSE  Yes
4             Extended Power via MDI-PD   No
5             Inventory                   No
6–15          reserved                    No

Table 11.
NOTE: As shown in the following table, signaling is a series of control packets that are exchanged between an endpoint device and a network connectivity device to establish and maintain a connection. These signal packets might require a different network policy than the media packets for which a connection is made. In this case, configure the signaling application. Table 12.
Extended Power via MDI TLV The extended power via MDI TLV enables advanced PoE management between LLDP-MED endpoints and network connectivity devices. Advertise the extended power via MDI on all ports that are connected to an 802.3af powered, LLDP-MED endpoint device. • Power Type — there are two possible power types: power source entity (PSE) or power device (PD). The Dell Networking system is a PSE, which corresponds to a value of 0, based on the TIA-1057 specification.
Viewing the LLDP Configuration To view the LLDP configuration, use the following command. • Display the LLDP configuration.
------------------------------------------------------------------------
Te 0/2    00:00:c9:b1:3b:82    00:00:c9:b1:3b:82
Te 0/3    00:00:c9:ad:f6:12    00:00:c9:ad:f6:12
Dell#show lldp neighbors detail
========================================================================
Local Interface Te 0/2 has 1 neighbor
  Total Frames Out: 16843
  Total Frames In: 17464
  Total Neighbor information Age outs: 0
  Total Multiple Neighbors Detected: 0
  Total Frames Discarded: 0
  Total In Error Frames: 0
  Total Unrecognized TLVs: 0
  Total TLVs
CONFIGURATION mode or INTERFACE mode
hello

Example of Viewing LLDPDU Intervals
Dell#conf
Dell(conf)#protocol lldp
Dell(conf-lldp)#show config
!
protocol lldp
Dell(conf-lldp)#hello ?
<5-180>    Hello interval in seconds (default=30)
Dell(conf-lldp)#hello 10
Dell(conf-lldp)#show config
!
protocol lldp
 hello 10
Dell(conf-lldp)#
Dell(conf-lldp)#no hello
Dell(conf-lldp)#show config
!
protocol lldp
Dell(conf-lldp)#

Configuring a Time to Live
The information received from a neighbor expires after a specific amount of time.
advertise dot3-tlv max-frame-size
advertise management-tlv system-capabilities system-description
multiplier 5
no disable
R1(conf-lldp)#no multiplier
R1(conf-lldp)#show config
!
protocol lldp
 advertise dot1-tlv port-protocol-vlan-id port-vlan-id
 advertise dot3-tlv max-frame-size
 advertise management-tlv system-capabilities system-description
 no disable
R1(conf-lldp)#

Clearing LLDP Counters
You can clear LLDP statistics that are maintained on an Aggregator for LLDP counters for frames transmitted to and received
Figure 27. The debug lldp detail Command — LLDPDU Packet Dissection

Relevant Management Objects
Dell Networking OS supports all IEEE 802.1AB MIB objects. The following tables list the objects associated with:
• received and transmitted TLVs
• the LLDP configuration on the local agent
• IEEE 802.
Table 13. LLDP Configuration MIB Objects
MIB Object Category  LLDP Variable  LLDP MIB Object              Description
LLDP Configuration   adminStatus    lldpPortConfigAdminStatus    Whether you enable the local LLDP agent for transmit, receive, or both.
                     msgTxHold      lldpMessageTxHoldMultiplier  Multiplier value.
                     msgTxInterval  lldpMessageTxInterval        Transmit Interval value.
                     rxInfoTTL      lldpRxInfoTTL                Time to live for received TLVs.
                     txInfoTTL      lldpTxInfoTTL                Time to live for transmitted TLVs.
Basic TLV Selection
                     statsTLVsUnrecognizedTotal  lldpStatsRxPortTLVsUnrecognizedTotal  Total number of all TLVs the local agent does not recognize.

Table 14.
TLV Type: management address
TLV Variable                 System   LLDP MIB Object
management address           Remote   lldpRemManAddrSubtype
                             Local    lldpLocManAddr
                             Remote   lldpRemManAddr
interface numbering subtype  Local    lldpLocManAddrIfSubtype
                             Remote   lldpRemManAddrIfSubtype
interface number             Local    lldpLocManAddrIfId
                             Remote   lldpRemManAddrIfId
OID                          Local    lldpLocManAddrOID
                             Remote   lldpRemManAddrOID

Table 15. LLDP 802.
TLV Type: VLAN name
TLV Variable  System   LLDP MIB Object
VLAN name     Remote   lldpXdot1RemVlanName
              Local    lldpXdot1LocVlanName
              Remote   lldpXdot1RemVlanName

Table 16.
TLV Sub-Type  TLV Name                 TLV Variable  System   LLDP-MED MIB Object
              L2 Priority                            Local    lldpXMedLocMediaPolicyPriority
                                                     Remote   lldpXMedRemMediaPolicyPriority
              DSCP Value                             Local    lldpXMedLocMediaPolicyDscp
                                                     Remote   lldpXMedRemMediaPolicyDscp
3             Location Identification                Local    lldpXMedLocLocationSubtype
                                                     Remote   lldpXMedRemLocationSubtype
                                                     Local    lldpXMedLocLocationInfo
                                                     Remote   lldpXMedRemLocationInfo
                                                     Local    lldpXMedLocXPoEDeviceType
                                                     Remote   lldpXMedRemXPoEDeviceType
                                                     Local    lldpXMedLocXPoEPSEPowerSource
TLV Sub-Type  TLV Name     TLV Variable  System   LLDP-MED MIB Object
              Power Value                Local    lldpXMedLocXPoEPSEPortPowerAv
                                                  lldpXMedLocXPoEPDPowerReq
                                         Remote   lldpXMedRemXPoEPSEPowerAv
                                                  lldpXMedRemXPoEPDPowerReq
14 Port Monitoring

The Aggregator supports user-configured port monitoring. See Configuring Port Monitoring for the configuration commands to use.
Port monitoring copies all incoming or outgoing packets on one port and forwards (mirrors) them to another port. The source port is the monitored port (MD) and the destination port is the monitoring port (MG).
Supported Modes
Standalone, PMUX, VLT, Stacking
Configuring Port Monitoring
To configure port monitoring, use the following commands.
1.
Dell(conf-mon-sess-0)#exit
Dell(conf)# do show monitor session 0
SessionID  Source      Destination  Direction  Mode       Type
---------  ----------  -----------  ---------  ---------  ----------
0          TenGig 1/1  TenGig 1/42  rx         interface  Port-based
Dell(conf)#

In the following example, the host and server are exchanging traffic which passes through the uplink interface 1/1. Port 1/1 is the monitored port and port 1/42 is the destination port, which is configured to only monitor traffic received on tengigabitethernet 1/1 (host-originated traffic).
• A monitoring port may not be a member of a VLAN.
• There may only be one destination port in a monitoring session.
• A source port (MD) can only be monitored by one destination port (MG).
Dell(conf-mon-sess-300)# source tengig 0/17 destination tengig 0/33 direction tx
Dell(conf-mon-sess-300)# do show monitor session
SessionID  Source       Destination  Direction  Mode       Type
---------  -----------  -----------  ---------  ---------  ----------
0          TenGig 0/13  TenGig 0/33  rx         interface  Port-based
10         TenGig 0/14  TenGig 0/34  rx         interface  Port-based
20         TenGig 0/15  TenGig 0/35  rx         interface  Port-based
30         TenGig 0/16  TenGig 0/37  rx         interface  Port-based
300        TenGig 0/17  TenGig 0/33  tx         interface  Port-based
Dell(conf-mon-sess-300)#

The follow
15 Security

The Aggregator provides many security features. This chapter describes several ways to provide access security to the Dell Networking system.
For details about all the commands described in this chapter, refer to the Security chapter in the Dell PowerEdge Command Line Reference Guide for the M I/O Aggregator.
Supported Modes
Standalone, PMUX, VLT, Stacking
Understanding Banner Settings
This functionality is supported on the Aggregator.
the no version of this command to reactivate the Telnet or SSH session capability for the device. Use the show restrict-access command to view whether the access to a device using Telnet or SSH is disabled or not. AAA Authentication Dell Networking OS supports a distributed client/server system implemented through authentication, authorization, and accounting (AAA) to help secure networks against unauthorized access.
way, and does so to ensure that users are not locked out of the system if a network-wide issue prevents access to these servers.
1. Define an authentication method-list (method-list-name) or specify the default.
   CONFIGURATION mode
   aaa authentication login {method-list-name | default} method1 [... method4]
   The default method-list is applied to all terminal lines. Possible methods are:
   • enable: use the password you defined using the enable secret or enable password command in CONFIGURATION mode.
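The fallback behavior described above, where the system moves to the next configured method only when a server does not respond, so a rejection is final but an unreachable server never locks users out, can be sketched as follows. This is an illustrative model only, not Dell OS source; the method functions and credentials are invented for the example.

```python
# Illustrative sketch of AAA method-list fallback. A method returns
# "accept" or "reject", or raises ConnectionError when its server is
# unreachable; only unreachability falls through to the next method.
def authenticate(methods, username, password):
    for method in methods:
        try:
            return method(username, password) == "accept"
        except ConnectionError:
            continue          # server down: try the next configured method
    return False              # no configured method was reachable

def radius_down(user, pw):
    raise ConnectionError("RADIUS server unreachable")   # simulated outage

def local_db(user, pw):
    return "accept" if (user, pw) == ("admin", "s3cret") else "reject"

# RADIUS is down, so the local method answers instead of locking the user out.
print(authenticate([radius_down, local_db], "admin", "s3cret"))  # True
```

Note that a "reject" from a reachable server returns immediately; the loop never retries a failed password against a later method.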
Enabling AAA Authentication — RADIUS To enable authentication from the RADIUS server, and use TACACS as a backup, use the following commands. 1. Enable RADIUS and set up TACACS as backup. CONFIGURATION mode aaa authentication enable default radius tacacs 2. Establish a host address and password. CONFIGURATION mode radius-server host x.x.x.x key some-password 3. Establish a host address and password. CONFIGURATION mode tacacs-server host x.x.x.
AAA Authorization The Dell Networking OS enables AAA new-model by default. You can set authorization to be either local or remote. Different combinations of authentication and authorization yield different results. By default, the system sets both to local. Privilege Levels Overview Limiting access to the system is one method of protecting the system and your network. However, at times, you might need to allow others access to the router and you can limit that access to a subset of commands.
• Configuring Custom Privilege Levels (mandatory) • Specifying LINE Mode Password and Privilege (optional) • Enabling and Disabling Privilege Levels (optional) For a complete listing of all commands related to privilege levels and passwords, refer to the Security chapter in the Dell Networking OS Command Reference Guide. Configuring a Username and Password In the Dell Networking OS, you can assign a specific username to limit user access to the system.
To view the configuration for the enable secret command, use the show running-config command in EXEC Privilege mode. In custom-configured privilege levels, the enable command is always available. No matter what privilege level you entered, you can enter the enable 15 command to access and configure all CLIs. Configuring Custom Privilege Levels In addition to assigning privilege levels to the user, you can configure the privilege levels of commands so that they are visible in different privilege levels.
• level level: the range is from 0 to 15. Levels 0, 1, and 15 are pre-configured. Levels 2 to 14 are available for custom configuration.
• command: a Dell CLI keyword (up to five keywords allowed).
• reset: return the command to its default privilege mode.
To view the configuration, use the show running-config command in EXEC Privilege mode.
The following example shows a configuration to allow a user john to view only EXEC mode commands and all snmp-server commands.
exit        Exit from the EXEC
no          Negate a command
show        Show running system information
terminal    Set terminal line parameters
traceroute  Trace route to destination
Dell#confi
Dell(conf)#?
end         Exit from Configuration mode

Specifying LINE Mode Password and Privilege
You can specify a password authentication of all users on different terminal lines. The user’s privilege level is the same as the privilege level assigned to the terminal line, unless a more specific privilege level is assigned to the user.
RADIUS Remote authentication dial-in user service (RADIUS) is a distributed client/server protocol. This protocol transmits authentication, authorization, and configuration information between a central RADIUS server and a RADIUS client (the Dell Networking system). The system sends user information to the RADIUS server and requests authentication of the user and password. The RADIUS server returns one of the following responses: • Access-Accept — the RADIUS server authenticates the user.
For a complete listing of all Dell Networking OS commands related to RADIUS, refer to the Security chapter in the Dell Networking OS Command Reference Guide. NOTE: RADIUS authentication and authorization are done in a single step. Hence, authorization cannot be used independent of authentication. However, if you have configured RADIUS authorization and have not configured authentication, a message is logged stating this.
Specifying a RADIUS Server Host When configuring a RADIUS server host, you can set different communication parameters, such as the UDP port, the key password, the number of retries, and the timeout. To specify a RADIUS server host and configure its communication parameters, use the following command. • Enter the host name or IP address of the RADIUS server host.
  radius-server deadtime seconds
  – seconds: the range is from 0 to 2147483647. The default is 0 seconds.
• Configure a key for all RADIUS communications between the system and RADIUS server hosts.
  CONFIGURATION mode
  radius-server key [encryption-type] key
  – encryption-type: enter 7 to encrypt the password. Enter 0 to keep the password as plain text.
  – key: enter a string. The key can be up to 42 characters long. You cannot use spaces in the key.
For a complete listing of all commands related to TACACS+, refer to the Security chapter in the Dell Networking OS Command Reference Guide. Choosing TACACS+ as the Authentication Method One of the login authentication methods available is TACACS+ and the user’s name and password are sent for authentication to the TACACS hosts specified. To use TACACS+ to authenticate users, specify at least one TACACS+ server for the system to communicate with and configure TACACS+ as one of your authentication methods.
aaa authentication login default tacacs+ local aaa authentication login LOCAL local tacacs+ aaa authorization exec default tacacs+ none aaa authorization commands 1 default tacacs+ none aaa authorization commands 15 default tacacs+ none aaa accounting exec default start-stop tacacs+ aaa accounting commands 1 default start-stop tacacs+ aaa accounting commands 15 default start-stop tacacs+ Dell(conf)# Dell(conf)#do show run tacacs+ ! tacacs-server key 7 d05206c308f4d35b tacacs-server host 10.10.10.
CONFIGURATION mode tacacs-server host {hostname | ip-address} [port port-number] [timeout seconds] [key key] Configure the optional communication parameters for the specific host: – port port-number: the range is from 0 to 65535. Enter a TCP port number. The default is 49. – timeout seconds: the range is from 0 to 1000. Default is 10 seconds. – key key: enter a string for the key. The key can be up to 42 characters long. This key must match a key configured on the TACACS+ server host.
ssh {hostname} [-l username | -p port-number | -v {1 | 2} | -c encryption-cipher | -m HMAC-algorithm]
• hostname is the IP address or host name of the remote device. Enter an IPv4 or IPv6 address in dotted decimal format (A.B.C.D).
• SSH v2 is enabled by default on all the modes.
• Display SSH connection information.
Secure Shell Authentication Secure Shell (SSH) is enabled by default using the SSH Password Authentication method. Enabling SSH Authentication by Password Authenticate an SSH client by prompting for a password when attempting to connect to the Dell Networking system. This setup is the simplest method of authentication and uses SSH version 1. To enable SSH password authentication, use the following command. • Enable SSH password authentication.
ip ssh rsa-authentication my-authorized-keys flash://public_key Example of Generating RSA Keys admin@Unix_client#ssh-keygen -t rsa Generating public/private rsa key pair. Enter file in which to save the key (/home/admin/.ssh/id_rsa): /home/admin/.ssh/id_rsa already exists. Overwrite (y/n)? y Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/admin/.ssh/id_rsa. Your public key has been saved in /home/admin/.ssh/id_rsa.pub.
AyWhVgJDQh39k8v3e8eQvLnHBIsqIL8jVy1QHhUeb7GaDlJVEDAMz30myqQbJgXBBRTWgBpLWwL/ doyUXFufjiL9YmoVTkbKcFmxJEMkE3JyHanEi7hg34LChjk9hL1by8cYZP2kYS2lnSyQWk= admin@Unix_client# ls id_rsa id_rsa.pub shosts admin@Unix_client# cat shosts 10.16.127.201, ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA8K7jLZRVfjgHJzUOmXxuIbZx/AyW hVgJDQh39k8v3e8eQvLnHBIsqIL8jVy1QHhUeb7GaDlJVEDAMz30myqQbJgXBBRTWgBpLWwL/ doyUXFufjiL9YmoVTkbKcFmxJEMkE3JyHanEi7hg34LChjk9hL1by8cYZP2kYS2lnSyQWk= The following example shows creating rhosts.
Telnet To use Telnet with SSH, first enable SSH, as previously described. By default, the Telnet daemon is enabled. If you want to disable the Telnet daemon, use the following command, or disable Telnet in the startup config. To enable or disable the Telnet daemon, use the [no] ip telnet server enable command.
You can assign line authentication on a per-VTY basis; it is a simple password authentication, using an access-class as authorization. Configure local authentication globally and configure access classes on a per-user basis. Dell Networking OS can assign different access classes to different users by username. Until users attempt to log in, Dell Networking OS does not know if they will be assigned a VTY line.
Dell(config-line-vty)#access-class deny10 Dell(config-line-vty)#end (same applies for radius and line authentication) VTY MAC-SA Filter Support Dell Networking OS supports MAC access lists which permit or deny users based on their source MAC address. With this approach, you can implement a security policy based on the source MAC address. To apply a MAC ACL on a VTY line, use the same access-class command as IP ACLs. The following example shows how to deny incoming connections from subnet 10.0.0.
16 Simple Network Management Protocol (SNMP) Network management stations use SNMP to retrieve or alter management data from network elements. A datum of management information is called a managed object; the value of a managed object can be static or variable. Network elements store managed objects in a database called a management information base (MIB).
values, increase the timeout value to greater than 3 seconds, and increase the retry value to greater than 2 seconds on your SNMP server.
Setting up SNMP
Dell Networking OS supports SNMP versions 1 and 2, which are community-based security models. The primary difference between the two versions is that version 2 supports two additional protocol operations (informs operation and snmpgetbulk query) and one additional object (counter64 object).
• Read the value of a single managed object.
  snmpget -v version -c community agent-ip {identifier.instance | descriptor.instance}
• Read the value of the managed object directly below the specified object.
  snmpgetnext -v version -c community agent-ip {identifier.instance | descriptor.instance}
• Read the value of many objects at once.
  snmpwalk -v version -c community agent-ip {identifier.instance | descriptor.instance}
LineSpeed auto
ARP type: ARPA, ARP Timeout 04:00:00
Last clearing of "show interface" counters 00:12:42
Queueing strategy: fifo
Time since last interface status change: 00:12:42

To display the ports in a VLAN, send an snmpget request for the object dot1qStaticEgressPorts using the interface index as the instance number, as shown in the following example.

Example of Viewing the Ports in a VLAN in SNMP
> snmpget -v2c -c mycommunity 10.11.131.185 .1.3.6.1.2.1.17.7.1.4.3.1.2.1107787786
SNMPv2-SMI::mib-2.17.7.
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 The value 40 is in the first set of 7 hex pairs, indicating that these ports are in Stack Unit 0. The hex value 40 is 0100 0000 in binary. As described, the left-most position in the string represents Port 1. The next position from the left represents Port 2 and has a value of 1, indicating that Port 0/2 is in VLAN 10. The remaining positions are 0, so those ports are not in the VLAN.
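Decoding the port bit map by hand follows directly from the rule above: within each octet the most significant bit is the lowest-numbered port. The helper below is a hypothetical illustration (not a Dell tool) of how a management script might turn the returned octet string into port numbers.

```python
# Hypothetical helper: decode a dot1qStaticEgressPorts-style port bit map.
# The left-most (most significant) bit of the first octet is Port 1, the
# next bit Port 2, and so on, as described in the text above.
def ports_in_vlan(hex_octets: str):
    ports = []
    for i, pair in enumerate(hex_octets.split()):
        byte = int(pair, 16)
        for bit in range(8):
            if byte & (0x80 >> bit):          # test bits MSB-first
                ports.append(i * 8 + bit + 1)  # 1-based port numbering
    return ports

# Hex 40 = 0100 0000: Port 1 clear, Port 2 set.
print(ports_in_vlan("40 00 00 00"))  # [2]
```

Running this over the full octet string returned by snmpget lists every port in the VLAN without counting bit positions manually.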
Example of Fetching Dynamic MAC Addresses on the Default VLAN
------------------------------ MAC Addresses on Dell Networking System ------------------------------
Dell#show mac-address-table
VlanId  Mac Address        Type     Interface   State
1       00:01:e8:06:95:ac  Dynamic  Tengig 0/7  Active
---------------- Query from Management Station ----------------
>snmpwalk -v 2c -c techpubs 10.11.131.162 .1.3.6.1.2.1.17.4.3.1
SNMPv2-SMI::mib-2.17.4.3.1.1.0.1.232.6.149.
• the next 4 bits represent the interface type
• the next 7 bits represent the port number
• the next 5 bits represent the slot number
• the next 1 bit is 0 for a physical interface and 1 for a logical interface
• the next 1 bit is unused
For example, the index 44634369 is 10101010010001000100000001 in binary. This is the binary interface index for TenGigabitEthernet 0/41 of an Aggregator. Notice that the physical/logical bit and the final, unused bit are not given.
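The binary expansion in the example can be reproduced directly; this is a minimal sketch for checking such conversions, and it deliberately stops short of slicing out the individual fields, since the exact bit positions of each field in the full index are not spelled out completely in the excerpt above.

```python
# Sketch: reproduce the binary form of the interface index from the example.
# Field extraction is omitted because the complete bit layout (including the
# leading bits) would be an assumption beyond what the text states.
def index_bits(if_index: int) -> str:
    """Return the interface index as a binary string without the 0b prefix."""
    return bin(if_index)[2:]

bits = index_bits(44634369)
print(bits)        # 10101010010001000100000001
print(len(bits))   # 26
```

The 26 bits shown match the text; the two trailing bits (physical/logical and unused) are the ones the text notes are not given.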
SNMPv2-SMI::enterprises.6027.3.2.1.1.6.1.2.1107755009.1 = INTEGER: 1
dot3aCommonAggFdbTagConfig
SNMPv2-SMI::enterprises.6027.3.2.1.1.6.1.3.1107755009.1 = INTEGER: 2 (1 = Tagged, 2 = Untagged)
dot3aCommonAggFdbStatus
SNMPv2-SMI::enterprises.6027.3.2.1.1.6.1.4.1107755009.1 = INTEGER: 1 (1 = status active, 2 = status inactive)
If you learn the MAC address for the LAG, the LAG status also displays.
dot3aCurAggVlanId
SNMPv2-SMI::enterprises.6027.3.2.1.1.4.1.1.1.0.0.0.0.0.1.
• Containment Tree: Each physical component may be modeled as contained within another physical component. A containment-tree is the conceptual sequence of entPhysicalIndex values that uniquely specifies the exact physical location of a physical component within the managed system. It is generated by following and recording each entPhysicalContainedIn instance up the tree towards the root, until a value of zero indicating no further containment is found.
SNMPv2-SMI::mib-2.47.1.1.1.1.2.61 = STRING: "Unit: 0 Port 52 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.66 = STRING: "PowerConnect I/O-Aggregator" SNMPv2-SMI::mib-2.47.1.1.1.1.2.67 = STRING: "Module 0" SNMPv2-SMI::mib-2.47.1.1.1.1.2.68 = STRING: "Unit: 1 Port 1 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.69 = STRING: "Unit: 1 Port 2 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.70 = STRING: "Unit: 1 Port 3 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.1.2.71 = STRING: "Unit: 1 Port 4 10G Level" SNMPv2-SMI::mib-2.47.1.1.1.
status OID in the standard VLAN MIB is expected to simplify and speed up this process. This OID provides 4000 VLAN entries with a port membership bit map for each VLAN, and reduces the scan from (4000 × number of ports) entries to 4000.
Enhancements
1. The dot1qVlanCurrentEgressPorts MIB attribute has been enhanced to support logical LAG interfaces.
2. The current status OID in the standard VLAN MIB is accessible over SNMP.
3.
In standalone mode, there are 4000 VLANs by default. The SNMP output will display for all 4000 VLANs. To view a particular VLAN, issue the snmp query with the VLAN interface ID.
Dell#show interface vlan 1010 | grep "Interface index"
Interface index is 1107526642
Use the output of the above command in the snmp query.
snmpwalk -Os -c public -v 1 10.16.151.151 1.3.6.1.2.1.17.7.1.4.2.1.4.0.1107526642
mib-2.17.7.1.4.2.1.4.0.
MIB Support to Display the Software Core Files Generated by the System
Dell Networking provides MIB objects to display the software core files generated by the system. The chSysSwCoresTable contains the list of software core files generated by the system. The following table lists the related MIB objects.

Table 20. MIB Objects for Displaying the Software Core Files Generated by the System
MIB Object         OID                          Description
chSysSwCoresTable  1.3.6.1.4.1.6027.3.19.1.2.
enterprises.6027.3.10.1.2.10.1.3.1.1 enterprises.6027.3.10.1.2.10.1.3.1.2 enterprises.6027.3.10.1.2.10.1.3.1.3 enterprises.6027.3.10.1.2.10.1.3.2.1 enterprises.6027.3.10.1.2.10.1.4.1.1 enterprises.6027.3.10.1.2.10.1.4.1.2 enterprises.6027.3.10.1.2.10.1.4.1.3 enterprises.6027.3.10.1.2.10.1.4.2.1 enterprises.6027.3.10.1.2.10.1.5.1.1 enterprises.6027.3.10.1.2.10.1.5.1.2 enterprises.6027.3.10.1.2.10.1.5.1.3 enterprises.6027.3.10.1.2.10.1.5.2.
17 Stacking

An Aggregator auto-configures to operate in standalone mode. To use an Aggregator in a stack, you must manually configure it using the CLI to operate in stacking mode.
Stacking is supported only on the 40GbE ports on the base module. Stacking is limited to six Aggregators in the same or different M1000e chassis. To configure a stack, you must use the CLI.
Stacking provides a single point of management for high availability and higher throughput.
Figure 29. A Two-Aggregator Stack Stack Management Roles The stack elects the management units for the stack management. • • Stack master — primary management unit Standby — secondary management unit The master holds the control plane and the other units maintain a local copy of the forwarding databases.
• Switch failure
• Inter-switch stacking link failure
• Switch insertion
• Switch removal
If the master switch goes off line, the standby replaces it as the new master.
NOTE: For the Aggregator, the entire stack has only one management IP address.
Stack Master Election
The stack elects a master and standby unit at bootup time based on MAC address. The unit with the higher MAC value becomes master. To view which switch is the stack master, enter the show system command.
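The MAC-based election described above can be modeled as a simple comparison across units. This is an illustrative sketch only (not Dell OS logic); the unit IDs and MAC values are invented, and real elections also involve factors such as configured priority that this model omits.

```python
# Illustrative model of stack master election: the unit whose management
# MAC address has the highest numeric value wins the master role, and the
# runner-up becomes standby.
def elect_master(units):
    """units: dict mapping unit-id -> MAC string. Returns (master, standby)."""
    def mac_value(item):
        return int(item[1].replace(":", ""), 16)   # compare MACs numerically
    ranked = sorted(units.items(), key=mac_value, reverse=True)
    master = ranked[0][0]
    standby = ranked[1][0] if len(ranked) > 1 else None
    return master, standby

stack = {0: "00:1e:c9:00:00:05", 1: "00:1e:c9:00:00:0a"}
print(elect_master(stack))  # (1, 0): unit 1 has the higher MAC
```

After a failover the remaining units would re-run the comparison among themselves to fill the vacated standby role, mirroring the behavior described in the text.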
from standby to master. The lack of a standby unit triggers an election within the remaining units for a standby role. After the former master switch recovers, despite having a higher priority or MAC address, it does not recover its master role but instead takes the next available role. MAC Addressing All port interfaces in the stack use the MAC address of the management interface on the master switch.
Stacking Port Numbers By default, each Aggregator in Standalone mode is numbered stack-unit 0. Stack-unit numbers are assigned to member switches when the stack comes up. The following example shows the numbers of the 40GbE stacking ports on an Aggregator.
Figure 30.
Stacking in PMUX Mode PMUX stacking allows the stacking of two or more IOA units. This allows grouping of multiple units for high availability. IOA supports a maximum of six stacking units. NOTE: Prior to configuring the stack-group, ensure the stacking ports are connected and in 40G native mode. 1. Configure stack groups on all stack units.
2. Configure each Aggregator to operate in stacking mode. 3. Reload each Aggregator, one after the other in quick succession. Stacking Prerequisites Before you cable and configure a stack of MXL 10/40GbE switches, review the following prerequisites. • All Aggregators in the stack must be powered up with the initial or startup configuration before you attach the cables. • All stacked Aggregators must run the same Dell Networking OS version. The minimum Dell networking OS version required is 8.3.17.0.
Setting the priority determines which switch becomes the management (master) switch. The switch with the highest priority number is elected master. The default priority is 0.
NOTE: It is best practice to assign priority values to all switches before stacking them in order to acquire and retain complete control over each unit's role in the stack.
2. Configure the stack-group for each stack-unit.
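The priority and stack-group assignments described above can be sketched as the following CLI sequence. This is an illustrative fragment, not a complete procedure; the unit number, priority value, and stack-group IDs are hypothetical and must match your own stack layout.

```
Dell#configure
Dell(conf)#stack-unit 0 priority 14     ! highest priority, so unit 0 is elected master
Dell(conf)#stack-unit 0 stack-group 0   ! enable the first 40GbE stacking port group
Dell(conf)#stack-unit 0 stack-group 1
Dell(conf)#end
Dell#write memory
```

Repeat the equivalent commands on each member unit (with a lower priority value), then reload the units for stacking mode to take effect.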
2. Connect a 40GbE base port on the second Aggregator to a 40GbE port on the first Aggregator. The resulting ring topology allows the entire stack to function as a single switch with resilient fail-over capabilities. If you do not connect the last switch to the first switch (Step 4), the stack operates in a daisy chain topology with less resiliency. Any failure in a non-edge stack unit causes a split stack.
To verify the stack-unit number assigned to each switch in the stack, use the show system brief command. Adding a Stack Unit You can add a new unit to an existing stack both when the unit has no stacking ports (stack groups) configured and when the unit already has stacking ports configured. If the units to be added to the stack have been previously used, they are assigned the smallest available unit ID in the stack. To add a standalone Aggregator to a stack, follow these steps: 1. Power on the switch.
To reset a unit in a stack, use the following commands:
• Reload a stack-unit from the master switch.
EXEC Privilege mode
reset stack-unit unit-number
• Reset a stack-unit when the unit is in a problem state.
EXEC Privilege mode
reset stack-unit unit-number hard

Removing an Aggregator from a Stack and Restoring Quad Mode
To remove an Aggregator from a stack and return the 40GbE stacking ports to 4x10GbE quad mode, follow these steps: 1. Disconnect the stacking cables from the unit.
This functionality to set the uplink speed is available from the CLI or the CMC interface when the Aggregator functions as a simple MUX or a VLT node, with all of the uplink interfaces configured to be member links in the same LAG bundle. You cannot configure the uplink speed to be set as 40 GbE if the Aggregator functions in programmable MUX mode with multiple uplink LAG interfaces or in stacking mode.
Depending on the uplink speed configured, the fan-out setting is applied accordingly during the booting of the switch. The following example displays the output of the show system stack-unit unit-number iom-uplink-speed command with the Boot-speed field contained in it:
Dell# show system stack-unit 0 iom-uplink-speed
Unit  Boot-speed  Next-Boot
------------------------------------------------
0     10G         40G
Merging Two Operational Stacks
The recommended procedure for merging two operational stacks is as follows: 1.
show inventory optional-module
• Displays the stack groups allocated on a stacked switch. The range is from 0 to 5.
show system stack-unit unit-number stack-group configured
• Displays the port numbers that correspond to the stack groups on a switch. The valid stack-unit numbers are from 0 to 5.
show system stack-unit unit-number stack-group
• Displays the type of stack topology (ring or daisy chain) with a list of all stacked ports, port status, link speed, and peer stack-unit connection.
No Of MACs : 3 -- Unit 2 -Unit Type : Member Unit Status : not present Required Type : Example of the show inventory optional-module Command Dell# show inventory optional-module Unit Slot Expected Inserted Next Boot Power -----------------------------------------0 0 SFP+ SFP+ AUTO Good 0 1 QSFP+ QSFP+ AUTO Good * - Mismatch Example of the show system stack-unit stack-group configured Command Dell# show system stack-unit 1 stack-group configured Configured stack groups in stack-unit 1 -----------------------
0/37 1/33 1/37 1/37 0/37 40 40 40 up up up up down up Troubleshooting a Switch Stack To perform troubleshooting operations on a switch stack, use the following commands on the master switch. 1. Displays the status of stacked ports on stack units. show system stack-ports 2. Displays the master standby unit status, failover configuration, and result of the last master-standby synchronization; allows you to verify the readiness for a stack failover. show redundancy 3.
---------------------------------------------------------
Primary Stack-unit: mgmt-id 0
Auto Data Sync: Full
Failover Type: Hot Failover (failover type with redundancy)
The following syslog messages are generated when a member unit fails:
Dell#May 31 01:46:17: %STKUNIT3-M:CP %IPC-2-STATUS: target stack unit 4 not responding
May 31 01:46:17: %STKUNIT3-M:CP %CHMGR-2-STACKUNIT_DOWN: Major alarm: Stack unit 4 down - IPC timeout
Dell#May 31 01:46:17: %STKUNIT3-M:CP %IFMGR-1-DEL_PORT: Removed port: Te 4/1-32,41-48, Fo 4/49,53
Dell#May 31 01:46:18: %STKUNIT5-S:CP %IFMGR-1-DEL_PORT: Removed port: Te 4/1-32,41-48, Fo 4/49,53
Unplugged Stacking Cable
• Problem: A stacking cable
within 10 seconds. Shutting down this stack port now.
10:55:18: %STKUNIT1-M:CP %KERN-2-INT: Error: Please check the stack cable/module and power-cycle the stack.
Master Switch Recovers from Failure
• Problem: The master switch recovers from a failure after a reboot and rejoins the stack as the standby unit or member unit. Protocol and control plane recovery requires time before the switch is fully online.
Upgrading a Switch Stack To upgrade all switches in a stack with the same Dell Networking OS version, follow these steps. 1. Copy the new Dell Networking OS image to a network server. 2. Download the Dell Networking OS image by accessing an interactive CLI that requests the server IP address and image filename, and prompts you to upgrade all member stack units.
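As a sketch of step 2 above, the interactive image download is typically started with the upgrade system command. The server address and image filename below are placeholders; substitute the actual TFTP/FTP server and the Dell Networking OS image for your platform.

```
Dell#upgrade system tftp://10.11.8.12/FTOS-image.bin A:
! Downloads the image to partition A: on all stack units when you
! confirm the prompt to upgrade all member stack units.
```

After the image is copied, set the primary boot partition, save the configuration, and reload, as the subsequent example in this section shows.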
! Image upgraded to all Dell# configure Dell(conf)# boot system stack-unit all primary system: A: Dell(conf)# end Dell# write memory Jan 3 14:01:48: %STKUNIT0-M:CP %FILEMGR-5-FILESAVED: Copied running-config to startup-config in flash by default Synchronizing data to peer Stack-unit !!!! Dell# reload Proceed with reload [confirm yes/no]: yes Upgrading a Single Stack Unit Upgrading a single stacked switch is necessary when the unit was disabled due to an incorrect Dell Networking OS version.
Dell#Jan 3 14:27:00: %STKUNIT0-M:CP %SYS-5-CONFIG_I: Configured from console Dell# write memory Jan 3 14:27:10: %STKUNIT0-M:CP %FILEMGR-5-FILESAVED: Copied running-config to startup-config in flash by default Synchronizing data to peer Stack-unit !!!! ....
Broadcast Storm Control 18 On the Aggregator, the broadcast storm control feature is enabled by default on all ports, and disabled on a port when an iSCSI storage device is detected. Broadcast storm control is re-enabled as soon as the connection with an iSCSI device ends. Broadcast traffic on Layer 2 interfaces is limited or suppressed during a broadcast storm. You can view the status of a broadcast-storm control operation by using the show io-aggregator broadcast storm-control status command.
3. To configure the percentage of unknown-unicast traffic allowed on an interface, use the storm-control unknown-unicast [packets_per_second in] command from INTERFACE mode.
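For example, applying the command from INTERFACE mode might look like the following. The interface and the packets-per-second rate are hypothetical values chosen for illustration.

```
Dell(conf)#interface tengigabitethernet 0/1
Dell(conf-if-te-0/1)#storm-control unknown-unicast 1000 in   ! limit ingress unknown-unicast to 1000 pps
```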
System Time and Date 19 The Aggregator auto-configures the hardware and software clocks with the current time and date. If necessary, you can manually set and maintain the system time and date using the CLI commands described in this chapter.
Setting the Timezone
Coordinated universal time (UTC) is the time standard based on the International Atomic Time standard, commonly known as Greenwich Mean Time. When determining system time, you must include the offset between UTC and your local timezone. For example, San Jose, CA is in the Pacific timezone with a UTC offset of -8. To set the clock timezone, use the following command.
• Set the clock to the appropriate timezone.
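For example, to apply the Pacific timezone offset mentioned above (the timezone name is an arbitrary label of your choosing):

```
Dell(conf)#clock timezone PST -8   ! name "PST" is a label; -8 is the UTC offset in hours
```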
– start-time: enter the time in hours:minutes. For the hour variable, use the 24-hour format; for example, 17:15 is 5:15 p.m.
– end-month: enter the name of one of the 12 months in English. You can enter the name of a day to change the order of the display to time day month year.
– end-day: enter the number of the day. The range is from 1 to 31. You can enter the name of a month to change the order of the display to time day month year.
– end-year: enter a four-digit number as the year.
* first: Enter the keyword first to end daylight saving time in the first week of the month. * last: Enter the keyword last to end daylight saving time in the last week of the month. – end-month: Enter the name of one of the 12 months in English. You can enter the name of a day to change the order of the display to time day month year. – end-day: Enter the number of the day. The range is from 1 to 31. You can enter the name of a month to change the order of the display to time day month year.
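Putting the start and end parameters described above together, a recurring daylight saving configuration might look like the following sketch. The zone label, weeks, days, months, and times are illustrative values only.

```
Dell(conf)#clock summer-time PDT recurring first Sun Apr 2:00 last Sun Oct 2:00
! "PDT" is an arbitrary zone label; daylight saving starts the first Sunday
! in April at 02:00 and ends the last Sunday in October at 02:00
```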
Uplink Failure Detection (UFD) 20 Supported Modes Standalone, PMUX, VLT, Stacking Feature Description UFD provides detection of the loss of upstream connectivity and, if used with network interface controller (NIC) teaming, automatic recovery from a failed link. A switch provides upstream connectivity for devices, such as servers. If a switch loses its upstream connectivity, downstream devices also lose their connectivity.
Figure 31. Uplink Failure Detection How Uplink Failure Detection Works UFD creates an association between upstream and downstream interfaces. The association of uplink and downlink interfaces is called an uplink-state group. An interface in an uplink-state group can be a physical interface or a port-channel (LAG) aggregation of physical interfaces. An enabled uplink-state group tracks the state of all assigned upstream interfaces.
Figure 32. Uplink Failure Detection Example
If only one of the upstream interfaces in an uplink-state group goes down, a specified number of downstream ports associated with the upstream interface are put into a Link-Down state. You can configure this number; otherwise, it is calculated as the ratio of the upstream port bandwidth to the downstream port bandwidth in the same uplink-state group.
UFD and NIC Teaming
To implement a rapid failover solution, you can use uplink failure detection on a switch with network adapter teaming on a server. For more information, refer to Network Interface Controller (NIC) Teaming. For example, as shown previously, the switch/router with UFD detects the uplink failure and automatically disables the associated downstream link port to the server.
– For an example of debug log message, refer to Clearing a UFD-Disabled Interface. Uplink Failure Detection (SMUX mode) In Standalone or VLT modes, by default, all the server-facing ports are tracked by the operational status of the uplink LAG. If the uplink LAG goes down, the aggregator loses its connectivity and is no longer operational. All the server-facing ports are brought down after the specified defer-timer interval, which is 10 seconds by default.
To delete an uplink-state group, use the no uplink-state-group group-id command. 2. Assign a port or port-channel to the uplink-state group as an upstream or downstream interface.
6. (Optional) Enter a text description of the uplink-state group.
UPLINK-STATE-GROUP mode
description text
The maximum length is 80 alphanumeric characters.
7. (Optional) Disable upstream-link tracking without deleting the uplink-state group.
UPLINK-STATE-GROUP mode
no enable
By default, upstream-link tracking is automatically enabled in an uplink-state group. To re-enable upstream-link tracking, use the enable command.
00:10:12: %STKUNIT0-M:CP %IFMGR-5-ASTATE_DN: Changed interface Admin state to down: Te 0/2
00:10:12: %STKUNIT0-M:CP %IFMGR-5-ASTATE_DN: Changed interface Admin state to down: Te 0/3
00:10:12: %STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Changed interface state to down: Te 0/1
00:10:12: %STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Changed interface state to down: Te 0/2
00:10:12: %STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Changed interface state to down: Te 0/3
00:10:13: %STKUNIT0-M:CP %IFMGR-5-G_DN: Changed uplink state group state to down: Group 3
00:10:13: %STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Downstream interface set to UFD error-disabled: Te 0/4
00:10:13: %STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Downstream interface set to UFD error-disabled: Te 0/5
00:10:13: %STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Downstream interface set to UFD error-disabled: Te 0/6
00:10:13: %STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Changed interface state to down: Te 0/4
00:10:13: %STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Changed interface state to down: Te 0/5
00:10:13: %STKUNIT0-M:CP %IFMGR-5-OSTATE_DN: Changed interface state to down: Te 0/6
• – detail: displays additional status information on the upstream and downstream interfaces in each group. Display the current status of a port or port-channel interface assigned to an uplink-state group. EXEC mode show interfaces interface interface specifies one of the following interface types: – 10 Gigabit Ethernet: enter tengigabitethernet slot/port. – 40 Gigabit Ethernet: enter fortygigabitethernet slot/port. – Port channel: enter port-channel {1-512}.
Upstream Interfaces :
Downstream Interfaces :
Uplink State Group : 7 Status: Enabled, Up
Upstream Interfaces :
Downstream Interfaces :
Uplink State Group : 16 Status: Disabled, Up
Upstream Interfaces : Tengig 0/41(Dwn) Po 8(Dwn)
Downstream Interfaces : Tengig 0/40(Dwn)
Dell#show interfaces tengigabitethernet 7/45
TenGigabitEthernet 7/45 is up, line protocol is down (error-disabled[UFD])
Hardware is Force10Eth, address is 00:01:e8:32:7a:47
Current address is 00:01:e8:32:7a:47
Interface index is 280544512
Intern
Sample Configuration: Uplink Failure Detection
The following example shows a sample configuration of UFD on a switch/router, configured as follows.
• Configure uplink-state group 3.
• Add downstream links TenGigabitEthernet 0/1, 0/2, 0/5, 0/9, 0/11, and 0/12.
• Configure two downstream links to be disabled if an upstream link fails.
• Add upstream links TenGigabitEthernet 0/3 and 0/4.
• Add a text description for the group.
• Verify the configuration with various show commands.
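A minimal configuration matching the bullets above might look like the following sketch. The interface ranges follow the example's numbering; verify them against your own port layout before applying.

```
Dell(conf)#uplink-state-group 3
Dell(conf-uplink-state-group-3)#downstream tengigabitethernet 0/1-2,5,9,11-12
Dell(conf-uplink-state-group-3)#downstream disable links 2   ! disable two downstream links on upstream failure
Dell(conf-uplink-state-group-3)#upstream tengigabitethernet 0/3-4
Dell(conf-uplink-state-group-3)#description Testing UFD
Dell(conf-uplink-state-group-3)#end
```

Use show uplink-state-group 3 detail afterward to confirm the group state.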
(Up): Interface up  (Dwn): Interface down  (Dis): Interface disabled
Uplink State Group : 3 Status: Enabled, Up
Upstream Interfaces : Te 0/3(Dwn) Te 0/4(Up)
Downstream Interfaces : Te 0/1(Dis) Te 0/2(Dis) Te 0/5(Up) Te 0/9(Up) Te 0/11(Up) Te 0/12(Up)
PMUX Mode of the IO Aggregator 21 This chapter provides an overview of the PMUX mode. I/O Aggregator (IOA) Programmable MUX (PMUX) Mode IOA PMUX is a mode that provides flexibility of operation with added configurability. This involves creating multiple LAGs, configuring VLANs on uplinks and the server side, configuring data center bridging (DCB) parameters, and so forth. By default, IOA starts up in IOA Standalone mode.
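Switching from Standalone mode to PMUX mode is typically done with the iom-mode command followed by a reload, as in the following sketch. Save your configuration before reloading; the stack-unit number assumes a standalone unit numbered 0.

```
Dell#configure
Dell(conf)#stack-unit 0 iom-mode programmable-mux   ! change the operating mode for the next boot
Dell(conf)#exit
Dell#reload
```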
After the system is up, you can see the PMUX mode status:
Dell#sh system stack-unit 0 iom-mode
Unit  Boot-Mode         Next-Boot
------------------------------------------------------
0     programmable-mux  programmable-mux
Dell#
The IOA is now ready for PMUX operations.
Configuring the Commands without a Separate User Account
Starting with Dell Networking OS version 9.3(0.0), you can configure the PMUX mode CLI commands without having to configure a new, separate user profile.
Overview
VLT allows physical links between two chassis to appear as a single virtual link to the network core or other switches such as Edge, Access, or top-of-rack (ToR). VLT reduces the role of spanning tree protocols (STPs) by allowing link aggregation group (LAG) terminations on two separate distribution or core switches, and by supporting a loop-free topology. (To prevent the initial loop that may occur prior to VLT being established, use a spanning tree protocol.)
Setting up VLT The following figure shows the sample VLT topology. Figure 33. Sample VLT Topology Ports 33 and 37 are used as ICL links and these two 40G ports are connected back to back between the two Aggregators. In PMUX VLT, you can choose any uplink ports for configuring VLT. NOTE: Ensure the connectivity to ToR from each Aggregator. To enable VLT and verify the configuration, follow these steps. 1. Enable VLT in node 1 and 2.
LAG  Mode  Status  Uptime    Ports
127  L2    up      00:18:22  Fo 0/33  Fo 0/37
128  L2    up      00:00:00  Fo 0/41 (Up)<<<<<<<
Remote System MAC address : 00:01:e8:8a:e9:76
Remote system version : 6(3)
Delay-Restore timer : 90 seconds
Delay-Restore Abort Threshold : 60 seconds
Peer-Routing : Disabled
Peer-Routing-Timeout timer : 0 seconds
Multicast peer-routing timeout : 150 seconds
Dell#
5. Configure the secondary VLT.
NOTE: Repeat steps from 1 through 4 on the secondary VLT, ensuring you use the different backup destination and unit-id.
o - OpenFlow untagged, O - OpenFlow tagged
G - GVRP tagged, M - Vlan-stack, H - VSN tagged
i - Internal untagged, I - Internal tagged, v - VLT untagged, V - VLT tagged

NUM  Status  Description  Q  Ports
*1   Active               U  Te 0/33
10   Active               T  Po128(Te 0/41-42)
                          T  Te 0/1
11   Active               T  Po128(Te 0/41-42)
                          T  Te 0/1
12   Active               T  Po128(Te 0/41-42)
                          T  Te 0/1
13   Active               T  Po128(Te 0/41-42)
                          T  Te 0/1
14   Active               T  Po128(Te 0/41-42)
                          T  Te 0/1
15   Active               T  Po128(Te 0/41-42)
                          T  Te 0/1
20   Active               U  Po128(Te 0/41-42)
                          U  Te 0/1
Dell#
You can remove
Configure Virtual Link Trunking VLT requires that you enable the feature and then configure the same VLT domain, backup link, and VLT interconnect on both peer switches. Important Points to Remember • VLT port channel interfaces must be switch ports. • Dell Networking strongly recommends that the VLTi (VLT interconnect) be a static LAG and that you disable LACP on the VLTi. • If the lacp-ungroup feature is not supported on the ToR, reboot the VLT peers one at a time.
– Port-channel link aggregation (LAG) across the ports in the VLT interconnect is required; individual ports are not supported. Dell Networking strongly recommends configuring a static LAG for VLTi. – The VLT interconnect synchronizes L2 and L3 control-plane information across the two chassis. – The VLT interconnect is used for data traffic only when there is a link failure that requires using VLTi in order for data packets to reach their final destination.
number for port channels on VLT peers that connects to the device. The discovery protocol requires that an attached device always runs LACP over the port-channel interface. – VLT provides a loop-free topology for port channels with endpoints on different chassis in the VLT domain. – VLT uses shortest path routing so that traffic destined to hosts via directly attached links on a chassis does not traverse the chassis-interconnect link.
Primary and Secondary VLT Peers
Primary and Secondary VLT Peers are supported on the Aggregator. To prevent issues when connectivity between peers is lost, you can designate Primary and Secondary roles for VLT peers. You can elect or configure the Primary Peer. By default, the peer with the lowest MAC address is selected as the Primary Peer. If the VLTi link fails, the status of the remote VLT Primary Peer is checked using the backup link.
VLT Port Delayed Restoration When a VLT node boots up, if the VLT ports have been previously saved in the start-up configuration, they are not immediately enabled. To ensure MAC and ARP entries from the VLT per node are downloaded to the newly enabled VLT node, the system allows time for the VLT ports on the new node to be enabled and begin receiving traffic. The delay-restore feature waits for all saved configurations to be applied, then starts a configurable timer.
Verifying a VLT Configuration To monitor the operation or verify the configuration of a VLT domain, use any of the following show commands on the primary and secondary VLT switches. • Display information on backup link operation. EXEC mode • show vlt backup-link Display general status information about VLT domains currently configured on the switch.
Peer HeartBeat status: Up
HeartBeat Timer Interval: 1
HeartBeat Timeout: 3
UDP Port: 34998
HeartBeat Messages Sent: 1026
HeartBeat Messages Received: 1025

Dell_VLTpeer2# show vlt backup-link
VLT Backup Link
----------------
Destination: 10.11.200.
Peer HeartBeat status:
HeartBeat Timer Interval:
HeartBeat Timeout:
UDP Port:
HeartBeat Messages Sent:
HeartBeat Messages Received:
Example of the show vlt role Command
Dell_VLTpeer1# show vlt role
VLT Role
----------
VLT Role: Primary
System MAC address: 00:01:e8:8a:df:bc
System Role Priority: 32768
Local System MAC address: 00:01:e8:8a:df:bc
Local System Role Priority: 32768

Dell_VLTpeer2# show vlt role
VLT Role
----------
VLT Role: Secondary
System MAC address: 00:01:e8:8a:df:bc
System Role Priority: 32768
Local System MAC address: 00:01:e8:8a:df:e6
Local System Role Priority: 32768

Example of the show running-config vlt Command
Dell
Additional VLT Sample Configurations To configure VLT, configure a backup link and interconnect trunk, create a VLT domain, configure a backup link and interconnect trunk, and connect the peer switches in a VLT domain to an attached access device (switch or server). Review the following examples of VLT configurations. Configuring Virtual Link Trunking (VLT Peer 1) Enable VLT and create a VLT domain with a backup-link and interconnect trunk (VLTi).
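A skeleton of the VLT Peer 1 configuration described above might look like the following. The domain ID, port-channel number, backup destination IP address, and unit ID are placeholders; Peer 2 uses the same domain with its own backup destination and the other unit ID.

```
Dell_VLTpeer1(conf)#vlt domain 999
Dell_VLTpeer1(conf-vlt-domain)#peer-link port-channel 100      ! static LAG used as the VLTi
Dell_VLTpeer1(conf-vlt-domain)#back-up destination 10.11.206.35
Dell_VLTpeer1(conf-vlt-domain)#unit-id 0
Dell_VLTpeer1(conf-vlt-domain)#exit
```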
Troubleshooting VLT
To help troubleshoot different VLT issues that may occur, use the following information.
NOTE: For information on VLT Failure mode timing and its impact, contact your Dell Networking representative.
Table 22. Troubleshooting VLT
Description Behavior at Peer Up Behavior During Run Time Action to Take
Bandwidth monitoring: A syslog error message and an SNMP trap are generated when the VLTi bandwidth usage goes above the 80% threshold and when it drops below 80%.
Peer 2 unit ID must be '1'.
Version ID mismatch: A syslog error message and an SNMP trap are generated at peer up. A syslog error message and an SNMP trap are generated during run time. Verify that the Dell Networking OS software versions on the VLT peers are compatible. For more information, refer to the Release Notes for this release.
VLT LAG ID is not configured on one VLT peer: A syslog error message is generated.
FC Flex IO Modules 22
This part provides a generic, broad-level description of the operations, capabilities, and configuration commands of the Fibre Channel (FC) Flex IO module.
switch to operate as NPIV proxy gateways. The MXL 10/40GbE Switch or the I/O Aggregator can function in NPIV proxy gateway mode when an FC Flex IO module is present or in the FIP snooping bridge (FSB) mode when all the ports are Ethernet ports. The FC Flex IO module uses the same baseboard hardware of the MXL 10/40GbE Switch or the Aggregator and the M1000 chassis. You can insert the FC Flex IO module into any of the optional module slots of the MXL 10/40GbE Switch and it provides four FC ports per module.
• Two 40GbE, four 10GBASE-T, and four 8GB FC ports FC Flex IO Module Capabilities and Operations The FC Flex IO module has the following characteristics: • You can install one or two FC Flex IO modules on the MXL 10/40GbE Switch or I/O Aggregator. Each module supports four FC ports. • Each port can operate in 2Gbps, 4Gbps, or 8Gbps of Fibre Channel speed.
• The FC Flex IO does not have persistent storage for any runtime configuration. All the persistent storage for runtime configuration is on the MXL and IOA baseboard. • With both FC Flex IO modules present in the MXL or I/O Aggregator switches, the power supply requirement and maximum thermal output are the same as these parameters needed for the M1000 chassis. • Each port on the FC Flex IO module contains status indicators to denote the link status and transmission activity.
• priority-group 2 bandwidth 40 pfc on • priority-pgid 0 0 0 2 1 0 0 0 • On I/O Aggregators, uplink failure detection (UFD) is disabled if FC Flex IO module is present to allow server ports to communicate with the FC fabric even when the Ethernet upstream ports are not operationally up. • Ensure that the NPIV functionality is enabled on the upstream switches that operate as FC switches or FCoE forwarders (FCF) before you connect the FC port of the MXL or I/O Aggregator to these upstream switches.
Processing of Data Traffic The Dell Networking OS determines the module type that is plugged into the slot. Based on the module type, the software performs the appropriate tasks. The FC Flex IO module encapsulates and decapsulates the FCoE frames. The module directly switches any non-FCoE or non-FIP traffic, and only FCoE frames are processed and transmitted out of the Ethernet network.
Installing and Configuring the Switch After you unpack the MXL 10/40GbE Switch, refer to the flow chart in the following figure for an overview of the steps you must follow to install the blade and perform the initial configuration.
Installing and Configuring Flowchart for FC Flex IO Modules
To see if a switch is running the latest Dell Networking OS version, use the show version command. To download a Dell Networking OS version, go to http://support.dell.com. Installation Site Preparation Before installing the switch or switches, make sure that the chosen installation location meets the following site requirements: • Clearance — There is adequate front and rear clearance for operator access. Allow clearance for cabling, power connections, and ventilation.
Interconnectivity of FC Flex IO Modules with Cisco MDS Switches In a network topology that contains Cisco MDS switches, FC Flex IO modules that are plugged into the MXL and I/O Aggregator switches enable interoperation for a robust, effective deployment of the NPIV proxy gateway and FCoE-FC bridging behavior.
Figure 34. Case 1: Deployment Scenario of Configuring FC Flex IO Modules Figure 35. Case 2: Deployment Scenario of Configuring FC Flex IO Modules Fibre Channel over Ethernet for FC Flex IO Modules FCoE provides a converged Ethernet network that allows the combination of storage-area network (SAN) and LAN traffic on a Layer 2 link by encapsulating Fibre Channel data into Ethernet frames.
Ethernet local area network (LAN) (IP cloud) for data — as well as FC links to one or more storage area network (SAN) fabrics. FCoE works with the Ethernet enhancements provided in Data Center Bridging (DCB) to support lossless (no-drop) SAN and LAN traffic. In addition, DCB provides flexible bandwidth sharing for different traffic types, such as LAN and SAN, according to 802.1p priority classes of service. DCBx should be enabled on the system before the FIP snooping feature is enabled.
FC FLEXIO FPORT 23
FC FlexIO FPort is now supported on the MXL switch platform.
FC FLEXIO FPORT
The M I/O Aggregator blade switch is a Trident+ based switch that plugs into the Dell M1000 Blade server chassis. The blade module contains two slots for pluggable flexible modules. The goal is to provide support for direct connectivity to FC equipment through the Fibre Channel ports of the optional FC Flex IO module.
INTERFACE mode fcoe-map {tengigabitEthernet slot/port | fortygigabitEthernet slot/port} The FCoE map contains FCoE and FC parameter settings (refer to FCoE Maps). Manually apply the fcoe-map to any Ethernet ports used for FCoE. Name Server Each participant in the FC environment has a unique ID, which is called the World Wide Name (WWN). This WWN is a 64-bit address. A Fibre Channel fabric uses another addressing scheme to address the ports in the switched fabric.
FCoE Maps To identify the SAN fabric to which FCoE storage traffic is sent, use an FCoE map. Using an FCoE map, an NPG operates as an FCoE-FC bridge between an FC SAN and FCoE network by providing FCoE-enabled servers and switches with the necessary parameters to log in to a SAN fabric. An FCoE map applies the following parameters on server-facing Ethernet and fabric-facing FC ports: • The dedicated FCoE VLAN used to transport FCoE storage traffic.
The values for the FCoE VLAN, fabric ID, and FC-MAP must be unique. Apply an FCoE map on downstream server-facing Ethernet ports and upstream fabric-facing Fibre Channel ports. 1. Create an FCoE map which contains parameters used in the communication between servers and a SAN fabric. CONFIGURATION mode fcoe-map map-name 2. Configure the association between the dedicated VLAN and the fabric where the desired storage arrays are installed.
7. Configure the time interval (in seconds) used to transmit FIP keepalive advertisements. FCoE MAP mode fka-adv-period seconds The range is from 8 to 90 seconds. The default is 8 seconds. Zoning The zoning configurations are supported for Fabric FCF Port mode operation on the MXL. In FCF Port mode, the fcoe-map fabric map-name has the default Zone mode set to deny. This setting denies all the fabric connections unless included in an active zoneset.
Dell(conf-fc-zone-z1)#member 020202 Dell(conf-fc-zone-z1)#exit Creating Zone Alias and Adding Members To create a zone alias and add devices to the alias, follow these steps. 1. Create a zone alias name. CONFIGURATION mode fc alias ZoneAliasName 2. Add devices to an alias. ALIAS CONFIGURATION mode member word The member can be WWPN (00:00:00:00:00:00:00:00), port ID (000000), or alias name (word).
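The two alias steps above can be sketched as follows. The alias name and member WWPN are hypothetical; the sub-mode prompt is illustrative and may differ slightly on your system.

```
Dell(conf)#fc alias servers_a
Dell(conf-fc-alias-servers_a)#member 10:00:8c:7c:ff:17:f8:01   ! add a device by WWPN
Dell(conf-fc-alias-servers_a)#exit
```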
Activating a Zoneset
Activating a zoneset makes the zones within it effective. On a switch, only one zoneset can be active. Any changes in an activated zoneset do not take effect until it is re-activated. By default, the fcoe-map fabric map-name does not have any active zonesets.
1. Enter the fc-fabric command in fcoe-map to activate or deactivate the zoneset.
Example of the show config Command
Dell(conf-fcoe-SAN_FABRIC)#show config
!
fcoe-map SAN_FABRIC
description SAN_FABRIC
fc-map 0efc00
fabric-id 1002 vlan 1002
!
fc-fabric
default-zone-allow all
Dell(conf-fcoe-SAN_FABRIC)#
Example of the show fcoe-map Command
Dell(conf)#do show fcoe-map
Fabric Name       map
Fabric Type       Fport
Fabric Id         1002
Vlan Id           1002
Vlan priority     3
FC-MAP            0efc00
FKA-ADV-Period    8
Fcf Priority      128
Config-State      ACTIVE
Oper-State        UP
=======================================================
Switch C
Intf# Domain FC-ID Enode-WWPN Enode-WWNN Fc 0/3 1 01:35:00 10:00:8c:7c:ff:17:f8:01 20:00:8c:7c:ff:17:f8:01 Dell# Example of the show fc zoneset Command Dell#show fc zoneset ZoneSetName ZoneName ZoneMember ======================================== fcoe_srv_fc_tgt brcd_sanb brcd_cna1_wwpn1 sanb_p2tgt1_wwpn Active Zoneset: fcoe_srv_fc_tgt ZoneName ZoneMember ======================================== brcd_sanb 10:00:8c:7c:ff:21:5f:8d 20:02:00:11:0d:03:00:00 Dell# Example of the show fc zoneset active Command Dell
NPIV Proxy Gateway 24
The N-port identifier virtualization (NPIV) Proxy Gateway (NPG) feature provides FCoE-FC bridging capability on the Aggregator, allowing server CNAs to communicate with SAN fabrics over the Aggregator.
NPIV Proxy Gateway Operation Consider a sample scenario of NPG operation. An FX2 server chassis configured as an NPG does not join a SAN fabric, but functions as an FCoE-FC bridge that forwards storage traffic between servers and core SAN switches. The core switches forward SAN traffic to and from FC storage arrays. An FX2 chassis FC port is configured as an N (node) port that logs in to an F (fabric) port on the upstream FC core switch and creates a channel for N-port identifier virtualization.
received over the Aggregator with the ENode ports, are converted into FDISCs addressed to the upstream F ports on core switches. NPIV Proxy Gateway Functionality The Aggregator with the NPG provides the following functionality in a storage area network: • FIP Snooping bridge that provides security for FCoE traffic using ACLs. • FCoE gateway that provides FCoE-to-FC bridging. N-port virtualization using FCoE maps exposes upstream F ports as FCF ports to downstream server-facing ENode ports on the NPG.
Term - Description

FC-MAP - FCoE MAC-address prefix: the unique 24-bit MAC address prefix in FCoE packets used to generate a fabric-provided MAC address (FPMA). The FPMA is required to send FCoE packets from a server to a SAN fabric.

FCoE map - Template used to configure FCoE and FC parameters on Ethernet and FC ports in a converged fabric.

FCoE VLAN - VLAN dedicated to carrying only FCoE traffic between server CNA ports and a SAN fabric. (FCoE traffic must travel in a VLAN.)
NOTE: In each FCoE map, the fabric ID, FC-MAP value, and FCoE VLAN must be unique. Use one FCoE map to access one SAN fabric. You cannot use the same FCoE map to access different fabrics. When you configure an Aggregator with the NPG, FCoE transit with FIP snooping is automatically enabled and configured using the parameters in the FCoE map applied to server-facing Ethernet and fabric-facing FC interfaces.
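The uniqueness rule in the note above can be illustrated with a sketch: each SAN fabric gets its own FCoE map with a distinct fabric ID, FCoE VLAN, and FC-MAP value. All map names and values below are illustrative only, not taken from a reference configuration:

```
Dell(conf)#fcoe-map SAN_FABRIC_A
Dell(conf-fcoe-SAN_FABRIC_A)#fabric-id 1002 vlan 1002
Dell(conf-fcoe-SAN_FABRIC_A)#fc-map 0efc00
Dell(conf-fcoe-SAN_FABRIC_A)#exit
Dell(conf)#fcoe-map SAN_FABRIC_B
Dell(conf-fcoe-SAN_FABRIC_B)#fabric-id 1003 vlan 1003
Dell(conf-fcoe-SAN_FABRIC_B)#fc-map 0efc01
```

Because the FCoE VLAN carries only FCoE traffic, reusing a VLAN or FC-MAP across maps would make the two fabrics indistinguishable to the gateway.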
PfcMode:ON
--------------------
PG:0 TSA:ETS BW:30 PFC:OFF
Priorities:0 1 2 5 6 7

PG:1 TSA:ETS BW:30 PFC:OFF
Priorities:4

PG:2 TSA:ETS BW:40 PFC:ON
Priorities:3

Default FCoE map

Dell(conf)#do show fcoe-map
Fabric Name        SAN_FABRIC
Fabric Id          1002
Vlan Id            1002
Vlan priority      3
FC-MAP             0efc00
FKA-ADV-Period     8
Fcf Priority       128
Config-State       ACTIVE
Oper-State         UP
Members            Fc 0/9 Te 0/4

DCB_MAP_PFC_OFF

Dell(conf)#do show qos dcb-map DCB_MAP_PFC_OFF
-----------------------
State   :In-Progress
PfcMode:OFF
-------------
Step 1 - Create a DCB map to specify PFC and ETS settings for groups of dot1p priorities.
Command: dcb-map name
Command Mode: CONFIGURATION

Step 2 - Configure the PFC setting (on or off) and the ETS bandwidth percentage allocated to traffic in each priority group. Configure whether the priority group traffic should be handled with strict-priority scheduling. The sum of all allocated bandwidth percentages must be 100 percent. Strict-priority traffic is serviced first.
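The steps above can be sketched end to end. The priority-group syntax shown for step 2 is an assumption based on common Dell Networking OS DCB map configuration and should be verified against your release; note that the bandwidth percentages sum to 100, as required:

```
Dell(conf)#dcb-map SAN_DCB_MAP
Dell(conf-dcbmap-SAN_DCB_MAP)#priority-group 0 bandwidth 60 pfc off
Dell(conf-dcbmap-SAN_DCB_MAP)#priority-group 1 bandwidth 40 pfc on
Dell(conf-dcbmap-SAN_DCB_MAP)#priority-pgid 0 0 0 1 0 0 0 0
```

Here dot1p priority 3 (typically FCoE) maps to priority group 1 with PFC on, while all other priorities share priority group 0 with PFC off.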
• If you delete the dot1p priority-to-priority group mapping (no priority pgid command) before you apply the new DCB map, the default PFC and ETS parameters are applied on the interfaces. This change may create a DCB mismatch with peer DCB devices and interrupt the network operation. Applying a DCB Map on Server-facing Ethernet Ports You can apply a DCB map only on a physical Ethernet interface and can apply only one DCB map per interface.
Creating an FCoE Map An FCoE map consists of: • An association between the dedicated VLAN, used to carry FCoE traffic, and the SAN fabric where the storage arrays are installed. Use a separate FCoE VLAN for each fabric to which the FCoE traffic is forwarded. Any non-FCoE traffic sent on a dedicated FCoE VLAN is dropped. • The FC-MAP value, used to generate the fabric-provided MAC address (FPMA). The FPMA is used by servers to transmit FCoE traffic to the fabric.
Step 7 - Configure the time interval (in seconds) used to transmit FIP keepalive advertisements.
Command: fka-adv-period seconds (Range: 8-90 seconds; Default: 8 seconds)
Command Mode: FCoE MAP

Applying an FCoE Map on Server-facing Ethernet Ports
You can apply multiple FCoE maps on an Ethernet port or port channel. When you apply an FCoE map on a server-facing port or port channel:
• The port is configured to operate in hybrid mode (accept both tagged and untagged VLAN frames).
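Applying a map on a server-facing port can be sketched as follows; the interface numbers and map name are illustrative, and the fcoe-map interface command should be verified against your release:

```
Dell(conf)#interface tengigabitethernet 0/4
Dell(conf-if-te-0/4)#fcoe-map SAN_FABRIC
Dell(conf-if-te-0/4)#exit
Dell(conf)#interface port-channel 3
Dell(conf-if-po-3)#fcoe-map SAN_FABRIC
```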
When you apply an FCoE map on a fabric-facing FC port, the FC port becomes part of the FCoE fabric, whose settings in the FCoE map are configured on the port and exported to downstream server CNA ports. Each Aggregator, with the FC port, is associated with an Ethernet MAC address (FCF MAC address). When you enable a fabric-facing FC port, the FCoE map applied to the port starts sending FIP multicast advertisements using the parameters in the FCoE map over server-facing Ethernet ports.
Dell(conf-dcbx-name)# priority-pgid 0 0 0 1 2 4 4 4
2. Apply the DCB map on a downstream (server-facing) Ethernet port:
Dell(config)# interface tengigabitethernet 1/0
Dell(config-if-te-1/0)#dcb-map SAN_DCB_MAP
3. Create the dedicated VLAN to be used for FCoE traffic:
Dell(conf)#interface vlan 1002
4.
Command Description

NOTE: Although the show interface status command displays Fibre Channel (FC) interfaces with the abbreviated label 'Fc' in the output, if you attempt to specify an FC interface by using the interface fc command in the CLI, an error message is displayed. You must configure FC interfaces by using the interface fi command in CONFIGURATION mode.

show fcoe-map [brief | mapname] - Displays the Fibre Channel and FCoE configuration parameters in FCoE maps.
Ethernet ports - up (transmitting FCoE and LAN storage traffic) or down (not transmitting traffic).

Fibre Channel ports - up (link is up and transmitting FC traffic), down (link is down and not transmitting FC traffic), link-wait (link is up and waiting for FLOGI to complete on the peer SW port), or removed (port has been shut down).

Speed - Transmission speed (in megabits per second) of Ethernet and FC ports, including auto-negotiated speed (Auto).
FC-MAP - FCoE MAC-address prefix value: the unique 24-bit MAC address prefix that identifies a fabric.

FKA-ADV-period - Time interval (in seconds) used to transmit FIP keepalive advertisements.

FCF Priority - The priority used by a server to select an upstream FCoE forwarder.
Field - Description

Priorities - 802.1p priorities configured in the priority group.
ENode Intf   : Te 0/11
FCF MAC      : 5c:f9:dd:ef:10:c8
Fabric Intf  : Fc 0/9
FCoE Vlan    : 1003
Fabric Map   : fid_1003
ENode WWPN   : 20:01:00:10:18:f1:94:20
ENode WWNN   : 20:00:00:10:18:f1:94:21
FCoE MAC     : 0e:fc:03:01:02:01
FC-ID        : 01:02:01
LoginMethod  : FLOGI
Secs         : 5593
Status       : LOGGED_IN

ENode[1]:
ENode MAC    : 00:10:18:f1:94:22
ENode Intf   : Te 0/12
FCF MAC      : 5c:f9:dd:ef:10:c9
Fabric Intf  : Fc 0/10
FCoE Vlan    : 1003
Fabric Map   : fid_1003
ENode WWPN   : 10
Field - Description

FCoE MAC - Fabric-provided MAC address (FPMA). The FPMA consists of the FC-MAP value in the FCoE map and the FC-ID provided by the fabric after a successful FLOGI. In the FPMA, the most significant bytes are the FC-MAP; the least significant bytes are the FC-ID.

FC-ID - FC port ID provided by the fabric.

LoginMethod - Method used by the server CNA to log in to the fabric; for example, FLOGI or FDISC.

Secs - Number of seconds that the fabric connection is up.
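The FPMA composition described above (FC-MAP prefix in the three most significant octets, FC-ID in the three least significant) can be checked with a short sketch. The helper below is illustrative only, not part of any Dell tool:

```python
def fpma(fc_map: str, fc_id: str) -> str:
    """Build a fabric-provided MAC address (FPMA): the 24-bit FC-MAP
    prefix forms the upper three octets, and the 24-bit FC-ID assigned
    by the fabric at FLOGI forms the lower three octets."""
    prefix = fc_map.lower()
    # Split the 6-hex-digit FC-MAP into three octets, then append the FC-ID octets.
    octets = [prefix[i:i + 2] for i in range(0, 6, 2)]
    octets += fc_id.lower().split(":")
    return ":".join(octets)

# Matches the show npiv devices example above: FC-MAP 0efc03 + FC-ID 01:02:01
print(fpma("0efc03", "01:02:01"))  # 0e:fc:03:01:02:01
```

This reproduces the FCoE MAC shown in the show npiv devices output for the fid_1003 fabric map.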
25 Upgrade Procedures To find the upgrade procedures, go to the Dell Networking OS Release Notes for your system type to see all the requirements needed to upgrade to the desired Dell Networking OS version. To upgrade your system type, follow the procedures in the Dell Networking OS Release Notes. Get Help with Upgrades Direct any questions or concerns about the Dell Networking OS upgrade procedures to the Dell Technical Support Center. You can reach Technical Support: • On the web: http://support.dell.
26 Debugging and Diagnostics
This chapter contains the following sections:
• Debugging Aggregator Operation
• Software Show Commands
• Offline Diagnostics
• Trace Logs
• Show Hardware Commands

Supported Modes
Standalone, PMUX, VLT

Debugging Aggregator Operation
This section describes common troubleshooting procedures for error conditions that may arise during Aggregator operation.
Dell#show uplink-state-group 1 detail
(Up): Interface up   (Dwn): Interface down
Uplink State Group
Defer Timer
Upstream Interfaces   : 0/5(Up) 0/10(Up)
Downstream Interfaces : Te 0/15(Up) Te 0/20(Dwn) Te 0/25(Dwn) Te 0/30(Dwn)
2.
Flooded packets on all VLANs are received on a server Symptom: All packets flooded on all VLANs on an Aggregator are received on a server, even if the server is configured as a member of only a subset of VLANs. This behavior happens because all Aggregator ports are, by default, members of all (4094) VLANs. Resolution: Configure a port that is connected to the server with restricted VLAN membership. Steps to Take: 1.
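The resolution above can be sketched as follows; the port and VLAN IDs are illustrative, and the vlan tagged interface command should be verified for your release:

```
Dell(conf)#interface tengigabitethernet 0/3
Dell(conf-if-te-0/3)#vlan tagged 10,20
```

Restricting the server-facing port to only the VLANs it needs stops the port from receiving flood traffic for the other 4000-plus VLANs it was a default member of.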
Software show Commands
Use the show version and show system stack-unit 0 commands as part of troubleshooting an Aggregator's software configuration in a standalone or stacking scenario.

Table 31. Software show Commands

Command                    Description
show version               Display the current version of Dell Networking OS software running on an Aggregator.
show system stack-unit 0   Display the software configuration on an Aggregator in stacking mode.
Product Name     : I/O Aggregator
Mfg By           : DELL
Mfg Date         : 2012-05-01
Serial Number    : TW282921F00038
Part Number      : 0NVH81
Piece Part ID    : TW-0NVH81-28292-1F0-0038
PPID Revision    :
Service Tag      : N/A
Expr Svc Code    : N/A
PSOC FW Rev      : 0xb
ICT Test Date    : 0-0-0
ICT Test Info    : 0x0
Max Power Req    : 31488
Fabric Type      : 0x3
Fabric Maj Ver   : 0x1
Fabric Min Ver   : 0x0
SW Manageability : 0x4
HW Manageability : 0x1
Max Boot Time    : 3 minutes
Link Tuning      : unsupported
Auto Reboot      : enabled
Burned In MAC    : 00:1e:c9:f1:03:42
No Of
Running Offline Diagnostics
To run offline diagnostics, use the following commands. For more information, refer to the examples following the steps.
1. Place the unit in the offline state.
   EXEC Privilege mode
   offline stack-unit
   You cannot enter this command on a MASTER or Standby stack unit.
   NOTE: The system reboots when the offline diagnostics complete. This is an automatic process.
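As a sketch of the sequence, the unit is first taken offline and the diagnostics are then started. The diag stack-unit command name is an assumption based on typical Dell Networking OS offline-diagnostics workflows; verify it for your release:

```
Dell#offline stack-unit 0
Dell#diag stack-unit 0
```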
Trace Logs In addition to the syslog buffer, the Dell Networking OS buffers trace messages which are continuously written by various software tasks to report hardware and software events and status information. Each trace message provides the date, time, and name of the Dell Networking OS process. All messages are stored in a ring buffer. You can save the messages to a file either manually or automatically after failover.
• This view provides insight into the packet types entering the CPU to see whether CPU-bound traffic is internal (IPC traffic) or network control traffic, which the CPU must process.

View the modular packet buffer details per stack unit and the mode of allocation.
EXEC Privilege mode
• show hardware stack-unit {0-5} buffer total-buffer

View the modular packet buffer details per unit and the mode of allocation.
• show hardware stack-unit {0-5} unit {0-0} ipmc-replication

View the internal statistics for each port-pipe (unit) on a per-port basis.
EXEC Privilege mode
• show hardware stack-unit {0-5} unit {0-0} port-stats [detail]

View the stack-unit internal registers for each port-pipe.
EXEC Privilege mode
• show hardware stack-unit {0-5} unit {0-0} register

View the tables from the bShell through the CLI without going into the bShell.
SFP 49 TX Power High Alarm threshold
SFP 49 RX Power High Alarm threshold
SFP 49 Temp Low Alarm threshold
SFP 49 Voltage Low Alarm threshold
SFP 49 Bias Low Alarm threshold
SFP 49 TX Power Low Alarm threshold
SFP 49 RX Power Low Alarm threshold
===================================
SFP 49 Temp High Warning threshold
SFP 49 Voltage High Warning threshold
SFP 49 Bias High Warning threshold
SFP 49 TX Power High Warning threshold
SFP 49 RX Power High Warning threshold
SFP 49 Temp Low Warning threshold
SFP 49 Volt
Unit0    58  61  84  86  90
Dell#

Troubleshoot an Over-Temperature Condition
To troubleshoot an over-temperature condition, use the following information.
1. Use the show environment commands to monitor the temperature levels.
2. Check the airflow through the system. Ensure that the air ducts are clean and that all fans are working correctly.
3. After the software has determined that the temperature levels are within normal limits, you can repower the card safely.
Table 32. SNMP Traps and OIDs

OID Name: chSysPortXfpRecvPower (Receiving Power)
OID String: .1.3.6.1.4.1.6027.3.10.1.2.5.1.6
Description: Displays the receiving power of the connected optics.

OID Name: chSysPortXfpTxPower (Transmitting Power)
OID String: .1.3.6.1.4.1.6027.3.10.1.2.5.1.8
Description: Displays the transmitting power of the connected optics.

OID Name: chSysPortXfpRecvTemp (Temperature)
OID String: .1.3.6.1.4.1.6027.3.10.1.2.5.1.
Description: Displays the temperature of the connected optics.
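The OIDs in the table can be polled with any SNMP client. A hedged sketch using Net-SNMP follows; the community string, management address, and trailing interface index are placeholders, not values from this guide:

```
snmpget -v 2c -c public 10.10.10.10 .1.3.6.1.4.1.6027.3.10.1.2.5.1.6.1
```

This requests the receive power of the optic at the given index; substitute the other OID strings from the table to read transmit power or temperature.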
3. Front-End Link - Output queues going from the FP to the front-end PHY. All ports support eight queues, four for data traffic and four for control traffic. All eight queues are tunable.

Physical memory is organized into cells of 128 bytes. The cells are organized into two buffer pools: the dedicated buffer and the dynamic buffer.
• Dedicated buffer - this pool is reserved memory that cannot be used by other interfaces on the same ASIC or by other queues on the same interface.
Figure 36. Buffer Tuning Points Deciding to Tune Buffers Dell Networking recommends exercising caution when configuring any non-default buffer settings, as tuning can significantly affect system performance. The default values work for most cases. As a guideline, consider tuning buffers if traffic is bursty (and coming from several interfaces). In this case: • Reduce the dedicated buffer on all queues/interfaces. • Increase the dynamic buffer on all interfaces.
• Change the dedicated buffers on a physical 1G interface.
  BUFFER PROFILE mode
  buffer dedicated
• Change the maximum number of dynamic buffers an interface can request.
  BUFFER PROFILE mode
  buffer dynamic
• Change the number of packet-pointers per queue.
  BUFFER PROFILE mode
  buffer packet-pointers
• Apply the buffer profile to a CSF to FP link.
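A hedged sketch tying the commands above together follows. The profile name and every argument value are illustrative only, and the exact argument forms of the buffer commands should be verified for your release:

```
Dell(conf)#buffer-profile fp 1g-queues
Dell(conf-buffer-profile)#buffer dedicated queue0 2 queue1 2
Dell(conf-buffer-profile)#buffer dynamic 1000
Dell(conf-buffer-profile)#buffer packet-pointers 256
```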
Dynamic buffer 194.88 (Kilobytes)

Queue#   Dedicated Buffer   Buffer Packets
         (Kilobytes)
0        2.50               256
1        2.50               256
2        2.50               256
3        2.50               256
4        9.38               256
5        9.38               256
6        9.38               256
7        9.
Using a Pre-Defined Buffer Profile
The Dell Networking OS provides two pre-defined buffer profiles: one for single-queue (for example, non-quality-of-service [QoS]) applications, and one for four-queue (for example, QoS) applications. You must reload the system for the global buffer profile to take effect. A message similar to the following displays:
% Info: For the global pre-defined buffer profile to take effect, please save the config and reload the system.
!
buffer fp-uplink stack-unit 0 port-set 0 buffer-policy fsqueue-hig
buffer fp-uplink stack-unit 0 port-set 1 buffer-policy fsqueue-hig
!
interface range gi 0/1 - 48
 buffer-policy fsqueue-fp

Dell#show run int Tengig 0/10
!
interface TenGigabitEthernet 0/10

Troubleshooting Packet Loss
The show hardware stack-unit command is intended primarily for troubleshooting packet loss. To troubleshoot packet loss, use the following commands.
Example of the show hardware stack-unit Command to View Drop Counters Statistics

Dell#show hardware stack-unit 0 drops
UNIT No: 0
Total Ingress Drops : 0
Total IngMac Drops  : 0
Total Mmu Drops     : 0
Total EgMac Drops   : 0
Total Egress Drops  : 0
UNIT No: 1
Total Ingress Drops : 0
Total IngMac Drops  : 0
Total Mmu Drops     : 0
Total EgMac Drops   : 0
Total Egress Drops  : 0

Dell#show hardware stack-unit 0 drops unit 0
Port#  :Ingress Drops  :IngMac Drops  :Total Mmu Drops  :EgMac Drops  :Egress Drops
1       0               0              0                 0             0
2       0               0              0                 0             0
3       0               0              0                 0
HOL DROPS on COS14 : 0
HOL DROPS on COS15 : 0
HOL DROPS on COS16 : 0
HOL DROPS on COS17 : 0
TxPurge CellErr    : 0
Aged Drops         : 0
--- Egress MAC counters ---
Egress FCS Drops   : 0
--- Egress FORWARD PROCESSOR Drops ---
IPv4 L3UC Aged & Drops      : 0
TTL Threshold Drops         : 0
INVALID VLAN CNTR Drops     : 0
L2MC Drops                  : 0
PKT Drops of ANY Conditions : 0
Hg MacUnderflow             : 0
TX Err PKT Counter          : 0
--- Error counters ---
Internal Mac Transmit Errors : 0
Unknown Opcodes              : 0
Internal Mac Receive Errors  : 0

Dataplane Statistics
txReqTooLarge   :0
txInternalError :0
txDatapathErr   :0
txPkt(COS0)     :0
txPkt(COS1)     :0
txPkt(COS2)     :0
txPkt(COS3)     :0
txPkt(COS4)     :0
txPkt(COS5)     :0
txPkt(COS6)     :0
txPkt(COS7)     :0
txPkt(UNIT0)    :0

The show hardware stack-unit cpu party-bus statistics command displays input and output statistics on the party bus, which carries inter-process communication traffic between CPUs.

Example of Viewing Party Bus Statistics

Dell#show hardware stack-unit 2 cpu party-bus statistics
Input Statistics:
27550 packets, 2559298 byt
Enabling Buffer Statistics Tracking
You can enable tracking of buffer-space statistics at a global level. The buffer statistics tracking utility operates in max-use-count mode, which collects the maximum values of the counters. To configure the buffer statistics tracking utility, perform the following step:
1. Enable the buffer statistics tracking utility and enter Buffer Statistics Snapshot configuration mode.
MCAST     3    0

Unit 1 unit: 3 port: 13 (interface Fo 1/156)
---------------------------------------
Q# TYPE   Q#   TOTAL BUFFERED CELLS
---------------------------------------
MCAST     3    0

Unit 1 unit: 3 port: 17 (interface Fo 1/160)
---------------------------------------
Q# TYPE   Q#   TOTAL BUFFERED CELLS
---------------------------------------
MCAST     3    0

Unit 1 unit: 3 port: 21 (interface Fo 1/164)
---------------------------------------
Q# TYPE   Q#   TOTAL BUFFERED CELLS
---------------------------------------
MCAST     3    0

Unit 1 uni
UCAST     7    0
UCAST     8    0
UCAST     9    0
UCAST     10   0
UCAST     11   0
MCAST     0    0
MCAST     1    0
MCAST     2    0
MCAST     3    0
MCAST     4    0
MCAST     5    0
MCAST     6    0
MCAST     7    0
MCAST     8    0

Restoring the Factory Default Settings
Restoring factory defaults deletes the existing NVRAM settings, the startup configuration, and all configured settings, such as stacking or fanout.
To restore the factory default settings, use the restore factory-defaults stack-unit {0-5 | all} {clear-all | nvram} command in EXEC Privilege mode.
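The restore command described above can be used as follows; the stack-unit number is illustrative. The clear-all keyword removes both the NVRAM settings and the startup configuration, while the nvram keyword clears NVRAM settings only:

```
Dell#restore factory-defaults stack-unit 0 clear-all
```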
Standards Compliance 27 This chapter describes standards compliance for Dell Networking products. NOTE: Unless noted, when a standard cited here is listed as supported by the Dell Networking Operating System (OS), the system also supports predecessor standards. One way to search for predecessor standards is to use the http://tools.ietf.org/ website. Click “Browse and search IETF documents,” enter an RFC number, and inspect the top of the resulting document for obsolescence citations to related RFCs.
RFC and I-D Compliance The Dell Networking OS supports the following standards. The standards are grouped by related protocol. The columns showing support by platform indicate which version of Dell Networking OS first supports the standard. General Internet Protocols The following table lists the Dell Networking OS support per platform for general internet protocols. Table 33.
RFC# Full Name
1027 Using ARP to Implement Transparent Subnet Gateways
1035 DOMAIN NAMES - IMPLEMENTATION AND SPECIFICATION (client)
1042 A Standard for the Transmission of IP Datagrams over IEEE 802 Networks
1191 Path MTU Discovery
1305 Network Time Protocol (Version 3) Specification, Implementation and Analysis
1519 Classless Inter-Domain Routing (CIDR): an Address Assignment and Aggregation Strategy
1542 Clarifications and Extensions for the Bootstrap Protocol
1812 Requirements for IP Ve
RFC# Full Name
2012 SNMPv2 Management Information Base for the Transmission Control Protocol using SMIv2
2013 SNMPv2 Management Information Base for the User Datagram Protocol using SMIv2
2024 Definitions of Managed Objects for Data Link Switching using SMIv2
2096 IP Forwarding Table MIB
2570 Introduction and Applicability Statements for Internet Standard Management Framework
2571 An Architecture for Describing Simple Network Management Protocol (SNMP) Management Frameworks
2572 Message Proce
RFC# Full Name
Table, Ethernet History Table, Alarm Table, Event Table, Log Table
2863 The Interfaces Group MIB
2865 Remote Authentication Dial In User Service (RADIUS)
3273 Remote Network Monitoring Management Information Base for High Capacity Networks (64 bits): Ethernet Statistics High-Capacity Table, Ethernet History High-Capacity Table
3416 Version 2 of the Protocol Operations for the Simple Network Management Protocol (SNMP)
3418 Management Information Base (MIB) for the Simple Network Mana
RFC# Full Name
FORCE10-SYSTEM-COMPONENT-MIB Force10 System Component MIB (enables the user to view CAM usage information)
FORCE10-TC-MIB Force10 Textual Convention
FORCE10-TRAP-ALARM-MIB Force10 Trap Alarm MIB
FORCE10-FIP-SNOOPING-MIB Force10 FIP Snooping MIB (based on the T11-FCoE-MIB mentioned in FC-BB-5)
FORCE10-DCB-MIB Force10 DCB MIB
IEEE 802.1Qaz Management Information Base extension module for IEEE 802.1 organizationally defined discovery information (LLDP-EXT-DOT1-DCBX-MIB)
IEEE 802.