Dell EMC Networking FCoE Deployment with S4148U-ON in NPG Mode
Connecting server FCoE CNAs to Fibre Channel storage using two Dell EMC PowerSwitch S4148U-ON switches and Brocade Fibre Channel switches

Abstract
This document provides the deployment steps for configuring S4148U-ON switches in NPIV Proxy Gateway (NPG) mode connected to FC switches and storage.
Revisions
April 2019: Initial release

The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any software described in this publication requires an applicable software license. © 2019 Dell Inc. or its subsidiaries. All Rights Reserved.
1 Introduction Our vision at Dell EMC is to be the essential infrastructure company from the edge, to the core, and to the cloud. Dell EMC Networking ensures modernization for today’s applications and for the emerging cloud-native world. Dell EMC is committed to disrupting the fundamental economics of the market with an open strategy that gives you the freedom of choice for networking operating systems and top-tier merchant silicon.
1.1 Typographical conventions
The CLI and GUI examples in this document use the following conventions:
Monospace Text: CLI examples
Underlined Monospace Text: CLI examples that wrap the page
Italic Monospace Text: Variables in CLI examples
Bold Monospace Text: Commands entered at the CLI prompt, or to highlight information in CLI output
Bold text: UI elements and information entered in the GUI

1.2 Attachments
This document in .pdf format includes one or more file attachments.
2 Hardware Overview This section briefly describes the hardware that is used to validate the deployment examples in this document. Appendix A contains a complete listing of hardware and software validated for this guide.
2.2.1 Dell EMC Unity 500F storage array
The Unity 500F storage platform delivers all-flash storage with up to 8 PB raw capacity. It has concurrent support for NAS, iSCSI, and FC protocols. The Disk Processing Enclosure (DPE) has a 2-RU form factor, redundant Storage Processors (SPs), and supports up to twenty-five 2.5" drives. Additional 2-RU Disk Array Enclosures (DAEs) may be added, providing twenty-five additional drives each.
Dell EMC Unity 500F front view
3 Topology overview
Each S4148U-ON provides unified ports that can be configured as FC or Ethernet. The mix of FC and Ethernet ports allows the switches to connect directly to an FC SAN for storage traffic while simultaneously acting as a leaf in a leaf-spine network for production TCP/IP traffic.
Note: FC SAN traffic does not traverse the leaf-spine network or the VLT interconnect (VLTi).
A combined FC SAN and leaf-spine topology is shown in the diagram below.
FC SAN topology 3.2 OOB management network The out-of-band (OOB) management network is an isolated network for management traffic only. It is used by administrators to remotely configure and manage servers, switches, and storage devices. Production traffic initiated by the network end users does not traverse the management network. An S3048-ON management switch is installed at the top of each rack for OOB management connections as shown.
Four 10GbE SFP+ ports are available on each S3048-ON switch for use as uplinks to the OOB management network core. Downstream connections to servers, switches, and storage devices are 1GbE BASE-T. The dedicated OOB management port of each leaf and spine switch is used for these connections. Each PowerEdge R740xd server has a connection to the S3048-ON for the server’s iDRAC port. The Unity 500F storage array has two dedicated management ports: one for each Storage Processor (SP), SP A and SP B.
4 Deployment Overview This section provides high-level guidance for deploying the total solution including FC storage, networking, server resources, and virtualization. 4.1 Configuration strategy and sequence This document provides specific configuration examples for the S4148U-ON leaf pair. Note: A Dell EMC Unity 500F storage array, Brocade 6510 FC switches, and Dell EMC PowerEdge R740xd servers were used to validate the complete solution.
5 S4148U-ON switch configuration This section details steps to configure the S4148U-ON leaf switches running OS10EE. 5.1 Prepare switches 5.1.1 Factory default configuration The configuration commands in the sections that follow begin with S4148U-ON switches at their factory default settings.
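If a switch has an existing configuration, it can be returned to factory defaults before proceeding. The following is a minimal sketch under OS10EE; confirm the prompt when asked.

OS10# delete startup-configuration
OS10# reload

The S4148U-ON switch-port profile determines which unified ports operate as FC and which operate as Ethernet. The profile name below (profile-3) is an example only; select the profile that matches your port plan, save the configuration, and reload for the change to take effect. Exact profile names and prompts may vary by OS10EE release.

OS10(config)# switch-port-profile 1/1 profile-3
OS10(config)# exit
OS10# copy running-configuration startup-configuration
OS10# reload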
5.2 Configure switches After setting the switch port profile on both S4148U-ONs, the commands in the tables that follow are run to complete the FC configuration on both switches. Note: The commands in the tables below should be entered in the order shown. Switch running-configuration files are provided as attachments named S4148U-Leaf1.txt and S4148U-Leaf2.txt. Configure global switch settings Configure the hostname, OOB management IP address, and OOB management default gateway.
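For reference, a minimal sketch of these global settings for S4148U-Leaf1 is shown below. The hostname, management address, and gateway are placeholder values and should be replaced with values from your management addressing plan; S4148U-Leaf2 is configured the same way with its own hostname and address.

configure terminal
hostname S4148U-Leaf1
interface mgmt 1/1/1
 no ip address dhcp
 ip address 100.67.100.1/24
 no shutdown
management route 0.0.0.0/0 100.67.100.254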
Configure the FC VLAN and virtual fabrics
For each switch, define the FCoE VLAN and the virtual fabric (vfabric), and apply the vfabric to the appropriate FC and server-facing interfaces. The global command feature fc npg puts the switch in NPG mode.
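A minimal sketch of this configuration for one switch is shown below. The VLAN ID, vfabric ID, FC-MAP value, and interface numbers are example values only, the FC interface IDs available depend on the switch-port profile in use, and exact syntax may vary between OS10EE releases; see the attached running-configuration files for the validated settings.

feature fc npg
interface vlan 1000
 description FCoE_VLAN
 no shutdown
vfabric 100
 vlan 1000
 fcoe fcmap 0xEFC00
interface fibrechannel 1/1/29
 description "FC uplink to Brocade 6510"
 vfabric 100
 no shutdown
interface ethernet 1/1/1
 description "Server-facing CNA port"
 vfabric 100
 no shutdown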
Configure the compute VLAN interfaces
The VLANs shown in this example represent a generic converged or hyper-converged deployment; change them to match your deployment. A sketch of a server-facing port that carries these VLANs follows the table.

S4148U-Leaf1:
interface Vlan 1612
 ip address 172.16.12.1/24
 description "vMotion"
 no shutdown
interface Vlan 1613
 ip address 172.16.13.1/24
 description "vSAN"
 no shutdown

S4148U-Leaf2:
interface Vlan 1612
 ip address 172.16.12.2/24
 description "vMotion"
 no shutdown
interface Vlan 1613
 ip address 172.16.13.2/24
 description "vSAN"
 no shutdown
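As an illustration only, a hypothetical server-facing interface that trunks the compute VLANs (in addition to the vfabric assignment shown earlier) might look like the sketch below; the interface number and VLAN range are assumptions and should be adjusted to match the deployment.

interface ethernet 1/1/1
 description "Server-facing CNA port"
 switchport mode trunk
 switchport trunk allowed vlan 1612-1616
 no shutdown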
Configure VRRP on the compute VLAN interfaces. Both switches use the same virtual addresses.

S4148U-Leaf1:
interface vlan 1615
 vrrp-group 15
  virtual-address 172.16.15.254
interface vlan 1616
 vrrp-group 16
  virtual-address 172.16.16.254

S4148U-Leaf2:
interface vlan 1615
 vrrp-group 15
  virtual-address 172.16.15.254
interface vlan 1616
 vrrp-group 16
  virtual-address 172.16.16.254
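If one switch should deterministically become the VRRP master, a priority can be added under the group on that switch. The sketch below is an assumption for illustration and is not part of the attached validated configurations; the default priority is 100, and the higher value wins the election.

S4148U-Leaf1:
interface vlan 1615
 vrrp-group 15
  priority 150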
Configure upstream interfaces to spine switches

S4148U-Leaf1:
interface ethernet 1/1/25
 description "Z9264F-Spine1 eth 1/1/1"
 no switchport
 ip address 192.168.1.1/31
 no shutdown
interface ethernet 1/1/26
 description "Z9264F-Spine2 eth 1/1/1"
 no switchport
 ip address 192.168.2.1/31
 no shutdown

S4148U-Leaf2:
interface ethernet 1/1/25
 description "Z9264F-Spine1 eth 1/1/2"
 no switchport
 ip address 192.168.1.3/31
 no shutdown
interface ethernet 1/1/26
 description "Z9264F-Spine2 eth 1/1/2"
 no switchport
 ip address 192.168.2.3/31
 no shutdown
S4148U-Leaf1 (BGP neighbor configuration, continued):
 neighbor 192.168.1.0
  remote-as 64601
  inherit template spine-leaf
  no shutdown
 exit
 neighbor 192.168.2.0
  remote-as 64602
  inherit template spine-leaf
  no shutdown

S4148U-Leaf2 (BGP neighbor configuration, continued):
 neighbor 192.168.1.2
  remote-as 64601
  inherit template spine-leaf
  no shutdown
 exit
 neighbor 192.168.2.2
  remote-as 64602
  inherit template spine-leaf
  no shutdown

Configure uplink failure detection (UFD)
UFD is recommended on all server-facing interfaces.
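A minimal sketch of a UFD group under OS10EE is shown below; the group number and interface ranges are examples, and the exact range syntax may vary slightly between releases. When all upstream (spine-facing) interfaces in the group go down, the switch disables the downstream (server-facing) interfaces so that attached hosts fail over to the other leaf.

uplink-state-group 1
 downstream ethernet1/1/1-1/1/8
 upstream ethernet1/1/25-1/1/26
 enable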
Configure DCBX and QoS
The DCBX and QoS settings are identical on both switches.

S4148U-Leaf1 and S4148U-Leaf2:
dcbx enable
class-map type network-qos class_Dot1p_3
 match qos-group 3
class-map type queuing map_ETSQueue_0
 match queue 0
class-map type queuing map_ETSQueue_3
 match queue 3
trust dot1p-map map_Dot1pToGroups
 qos-group 0 dot1p 0-2,4-7
 qos-group 3 dot1p 3
qos-map traffic-class map_GroupsToQueues
 queue 0 qos-group 0
 queue 3 qos-group 3
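These class-maps and maps are then referenced by network-qos and queuing policy-maps and applied to the converged interfaces. The sketch below illustrates that pattern only; the policy names, ETS bandwidth split, and interface-level commands are assumptions and may differ from the validated configuration in the attached running-configuration files.

policy-map type network-qos policy_PFC
 class class_Dot1p_3
  pause
  pfc-cos 3
policy-map type queuing policy_ETS
 class map_ETSQueue_0
  bandwidth percent 50
 class map_ETSQueue_3
  bandwidth percent 50
interface ethernet 1/1/1
 priority-flow-control mode on
 ets mode on
 trust-map dot1p map_Dot1pToGroups
 qos-map traffic-class map_GroupsToQueues
 service-policy input type network-qos policy_PFC
 service-policy output type queuing policy_ETS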
6 S4148U-ON validation After configuring connected devices, many commands are available to validate the network configuration. This section provides a list of the most common commands and their output for this topology. Note: The commands and output shown below are for S4148U-Leaf1. The output for S4148U-Leaf2 is similar. For additional commands and output related to the leaf-spine portion of the topology, such as VLT, BGP, etc.
6.3 show fcoe sessions
The show fcoe sessions command shows all currently active FCoE sessions on the switch. In this example, four FCoE sessions are active on each switch.
6.5 show vfabric The show vfabric command output provides a variety of information including the default zone mode, the active zone set, and interfaces that are members of the vfabric.
A Validated components

Leaf switches: Qty 2, Dell EMC S4148U-ON, version 10.4.2.1
Management switches: Qty 2, Dell EMC S3048-ON, version 10.4.2.1
Spine switches: Qty 2, Dell EMC Z9264F-ON, version 10.4.2.1
Fibre Channel switches: Qty 2, Brocade 6510, version v8.1.0a
Storage: Qty 1, Dell EMC Unity 500F, version 4.3.0.1522077968
Servers: Qty 4, Dell EMC PowerEdge R740xd
 - BIOS 1.6.12
 - iDRAC 3.21.26.22
 - rNDC: Intel(R) Gigabit 4P X710/I350 rNDC, 18.8.
B PowerEdge server, Unity storage, and VMware setup
B.1 PowerEdge server configuration
This section details the configuration of the CNAs used to validate the network topology.
Note: Exact iDRAC steps in this section may vary depending on the hardware, software, and browser versions used. See the PowerEdge server documentation for steps to connect to the iDRAC.
CNA ports listed in iDRAC
4. Under Ports and Partitioned Ports, click the icon next to the first port to expand the details as shown:
WWPN for FCoE CNA port 1
5. Record the WWPN, outlined in red in Figure 9. A convenient method is to copy and paste it into a text file. The WWPN is used in the FC zone configuration.
6. Repeat steps 4 and 5 for CNA port 2.
7. Repeat steps 1-6 for the remaining servers.
The FC WWPNs used in this deployment example are shown in Table 7. The Switch column has been added for reference per the cable connections in the SAN topology diagram (Figure 6).
Server FCoE CNA port WWPNs

B.2 Dell EMC Unity 500F storage array configuration
Two WWNs are listed for each port. The World Wide Node Name (WWNN), outlined in black, identifies this Unity storage array (the node). It is not used in zone configuration. The WWPNs, outlined in blue, identify the individual ports and are used for FC zoning.
3. Record the WWPNs as shown in Table 8. The Switch column has been added based on the physical cable connections shown in Figure 6.
Storage array FC adapter WWPNs
4. A list of discovered ESXi hosts is displayed. Select the applicable hosts and click Next.
5. A VMware API for Storage Awareness (VASA) Provider is not used in this example. Click Next.
6. On the Summary page, review the ESXi Hosts to be added. Click Finish.
7. When the Overall status shows 100% Completed, click Close.
8. The vCenter server is displayed as shown in Figure 13.
vCenter server added to Unisphere
9. The list of added ESXi hosts is displayed on the ESXi Hosts tab, as shown in Figure 14.
LUN created
Create additional LUNs and grant access (map) to hosts as needed.
Note: To modify host access at any time, check the box next to the LUN to select it, click the edit icon, and select the Host Access tab.

B.3 VMware preparation
B.3.1 VMware ESXi download and installation
Install VMware ESXi 6.7 U1 or later on each PowerEdge server. Dell EMC recommends using the latest Dell EMC customized ESXi .iso image available on support.dell.com.
Datacenter and cluster created with ESXi hosts B.3.4 Configure storage on ESXi hosts The example LUN created on the storage array is used to create a datastore on an ESXi host. The datastore is used to create a virtual disk on a virtual machine (VM) residing on the ESXi host. This process may be repeated as needed for additional LUNs, hosts, and VMs. B.3.5 Rescan storage 1. In the vSphere Web Client, go to Home > Hosts and Clusters. 2.
6. Repeat for the host's second adapter, vmhba5 in this example. The LUN information on the Adapter Details > Devices tab is identical to that of the first adapter.
7. Select the first storage adapter, e.g., vmhba4, then select the Adapter Details > Paths tab as shown in Figure 18. The target, LUN number (e.g., LUN 0), and path status are shown. The target field includes the two active storage WWPNs connected to vmhba4. The status field is marked either Active or Active (I/O) for each path.
The datastore is now accessible by selecting the host in the Navigator pane. Select the Configure tab > Storage > Datastores as shown in Figure 20. Datastore configured The datastore is also accessible by going to Home > Storage. It is listed under the Datacenter object in the Navigator pane. B.3.7 Create a virtual disk In this example, the ESXi host with the datastore configured in the previous section contains a VM named VM1 that is running a Windows Server guest OS.
New hard disk configuration options
6. Next to New Hard disk, set the size in GB to a value less than or equal to the Maximum size shown on the line below. The size is set to 40 GB in this example.
7. Click OK to close the Edit Settings window and create the virtual disk.
B.3.8 Configure the virtual disk in Windows Server The following example is applicable for VMs running Windows Server 2008, 2012, or 2016. See the operating system documentation to configure virtual disks on other supported guest operating systems. 1. Power on the VM and log in to the Windows Server guest OS. 2. In Windows, go to Server Manager > Tools > Computer Management > Storage > Disk Management.
C Technical resources
Dell EMC Networking Guides
Dell EMC Networking Layer 3 Leaf-Spine Deployment and Best Practices with OS10EE
OS10 Enterprise Edition User Guide Release 10.4.2.
D Support and feedback
Contacting Technical Support
Web: http://www.dell.com/support
Telephone (USA): 1-800-945-3355
Feedback for this document
We encourage readers to provide feedback on the quality and usefulness of this publication by sending an email to Dell_Networking_Solutions@Dell.com.