Dell Networking S5000: Deployment of a Converged Infrastructure with FCoE
Deployment/Configuration Guide
Humair Ahmed
Dell Technical Marketing – Data Center Networking
August 2013
This document is for informational purposes only and may contain typographical errors. The content is provided as is, without express or implied warranties of any kind. © 2013 Dell Inc. All rights reserved. Dell and its affiliates cannot be responsible for errors or omissions in typography or photography. Dell, the Dell logo, and PowerEdge are trademarks of Dell Inc.
Contents
Overview
A: Converged Network Solution - Dell PowerEdge Server, Dell Compellent storage array, and Dell S5000 as NPIV Proxy Gateway
Overview
In the “Dell Networking S5000: The Building Blocks of Unified Fabric and LAN/SAN Convergence” whitepaper, we demonstrated and explained the movement from a traditional non-converged LAN/SAN network to a converged LAN/SAN infrastructure and how the Dell S5000 switch is an ideal solution for this transition.
Figure 1: Traditional LAN/SAN non-converged network
The Dell Compellent Storage Center controllers support various I/O adapters including FC, iSCSI, FCoE, and SAS. A Dell Compellent Storage Center consists of one or two controllers, FC switches, and one or more enclosures.
processes for FC and iSCSI connected storage devices. You can configure load balancing to use up to 32 independent paths from the connected storage devices. The MPIO framework uses Device Specific Modules (DSMs) to allow path configuration. For Windows Server 2008 and above, Microsoft provides a built-in generic Microsoft DSM (MSDSM), which should be used. For Windows Server 2003 only, Dell Compellent provides a DSM.
configuring the FC IO cards. See the “Storage Center 6.2 System Setup Guide” for more information on initial setup of the Dell Compellent Storage Center. In legacy mode, front-end IO ports (in this case FC ports) are broken into primary and reserve ports based on a fault domain. The reserve port is in a standby mode until a primary port fails over to the reserve port.
During initial configuration of the Compellent Storage Center, we created a disk pool labeled “Pool_1” consisting of seven 300 GB drives. The total disk space is 1.64 TB; this can be seen in the screen shot of the Storage Center System Manager GUI as shown below in Figure 5.
Figure 5: Storage Center System Manager GUI displays disk pool “Pool_1” with 1.64 TB of total disk space
Figure 7: Assigning ports on Compellent Storage to respective Fault Domains
If you get a warning that paths are not balanced, navigate to the left-hand pane, right click ‘Controllers’ and select ‘Rebalance Local Ports’. Next, a server object needs to be created and the respective FC ports have to be selected to be used by the server object.
Figure 9: Installing Windows Server 2008 R2 Enterprise Multipath I/O feature
Now navigate to ‘Start->Control Panel->MPIO’ and click the ‘Add’ button. When prompted for a ‘Device Hardware ID’, input “COMPELNTCompellent Vol” and click the ‘OK’ button. The system will need to be restarted for the changes to take effect.
Figure 10: Installing Windows Server 2008 R2 Enterprise Multipath I/O for Compellent array
Next, create a volume and map it to a server object so the respective server can write to the FC storage array. Simply right click ‘Volumes’ on the left-hand pane and select ‘Create Volume’ to get started.
Figure 11: Created 20 GB “Finance_Data_Compellent” volume on Compellent array
Figure 12: Confirming to keep the default value for ‘Replay Profiles’
The last step in configuring the Dell Compellent Storage Center array is mapping the newly created volume to the server. Once you create the volume, you will be asked if you want to map it to a server object. You can do it at this time or later.
Figure 13: Initialized and formatted virtual disk within Windows Server 2008 R2 Enterprise
Now the volume on the Compellent storage array displays in Windows just like a typical hard drive. Note, no special configuration was needed on the HBA.
To observe that the storage ports and HBA ports are logged into the fabric, you can use the ‘nsshow’ command on the Brocade FC switch as shown below in Figure 15. Note that since the command is run on the fabric A FC switch, only eight storage ports and one host FC HBA port are logged into the fabric, as expected.
Figure 15: Node logins on the fabric A FC switch
You can also see the node WWPN by looking at what is logged in on the physical port as shown in Figure 16 below.
Figure 16: Check WWPNs logged in on port 2 of fabric A FC switch
We can use the respective port WWPNs to create a specific zoning configuration such as that displayed below in Figure 17.
‘50:00:d3:10:00:ed:b2:3b’, and ‘50:00:d3:10:00:ed:b2:41’. This zoning configuration allows all four storage ports to communicate only with each other and the server FC HBA node. On the fabric B FC switch you can see the WWPN of the server HBA port is ‘10:00:8c:7c:ff:30:7d:29’ and the WWPNs of the storage ports are ‘50:00:d3:10:00:ed:b2:3c’, ‘50:00:d3:10:00:ed:b2:42’, ‘50:00:d3:10:00:ed:b2:3a’, and ‘50:00:d3:10:00:ed:b2:40’.
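The effect of such a zone can be sketched in a few lines of Python. This is an illustrative model only, not Brocade's enforcement logic: two WWPNs may communicate only if they are members of at least one common zone.

```python
# Fabric B zone from the example: four Compellent storage ports plus the
# server FC HBA port. Illustrative sketch of zoning semantics only.
fabric_b_zone = {
    "10:00:8c:7c:ff:30:7d:29",  # server FC HBA port (fabric B)
    "50:00:d3:10:00:ed:b2:3c",
    "50:00:d3:10:00:ed:b2:42",
    "50:00:d3:10:00:ed:b2:3a",
    "50:00:d3:10:00:ed:b2:40",
}

def can_communicate(wwpn_a: str, wwpn_b: str, zones) -> bool:
    """True if both WWPNs are members of at least one common zone."""
    return any(wwpn_a in zone and wwpn_b in zone for zone in zones)

# The server HBA can reach a zoned storage port...
print(can_communicate("10:00:8c:7c:ff:30:7d:29",
                      "50:00:d3:10:00:ed:b2:3c", [fabric_b_zone]))  # True
# ...but not an unzoned device on the same fabric.
print(can_communicate("10:00:8c:7c:ff:30:7d:29",
                      "10:00:00:00:00:00:00:01", [fabric_b_zone]))  # False
```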
Adding the Dell S5000 Converged Switch to the Topology
In Figure 19, you can see how the traditional non-converged topology has changed with the introduction of the Dell S5000 switch in a possible use case. Note how the Dell S4810 Ethernet switches have been replaced by Dell S5000 converged switches. Also, note how the separate Ethernet NIC and FC adapters on the server have been replaced by one converged network adapter (CNA).
As in the traditional non-converged setup, the LAN side keeps the usual design: an active/standby configuration up to separate ToR Dell S5000 switches, which have VLT employed up to the core Z9000 switches. The difference here is that the Ethernet ports connecting up to the ToR are virtual ports.
Configuration steps:
1. Create the LACP LAG up to the VLT.
2. Configure the port to the CNA as a hybrid port. Create a LAN VLAN and tag it to both the ‘tengigabitethernet 0/12’ interface going to the respective CNA and the port channel going up to the VLT.
3. Enable FC capability.
4. Create a DCB map and configure the priority-based flow control (PFC) and enhanced transmission selection (ETS) settings for LAN and SAN traffic.
Figure 22: Dell S5000 (fabric A) configuration

/* Enable RSTP (Enabled due to VLT config on upstream Z9000s) */
> enable
> config terminal
> protocol spanning-tree rstp
> no disable
> exit

/* Create LACP LAG */
> interface fortyGigE 0/48
> port-channel-protocol lacp
> port-channel 10 mode active
> no shut
> interface fortyGigE 0/60
> port-channel-protocol lacp
> port-channel 10 mode active
> no shut

/* Create LAN VLAN and tag interfaces */
/* Create FCoE VLAN */
> interface vlan 1002
> exit

/* Create FCoE MAP */
> fcoe-map SAN_FABRIC_A
> fabric-id 1002 vlan 1002
> fc-map 0efc00
> exit

/* Apply FCoE MAP to interface */
> interface fibrechannel 0/0
> fabric SAN_FABRIC_A
> no shutdown

/* Apply FCoE MAP and DCB MAP to interface */
> interface tengigabitethernet 0/12
> dcb-map SAN_DCB_MAP
> fcoe-map SAN_FABRIC_A

/* Make server port an edge-port */
> spanning-tree rstp edge-port
Figure 23: Dell S5000 (fabric B) configuration

/* Enable RSTP (Enabled due to VLT config on upstream Z9000s) */
> enable
> config terminal
> protocol spanning-tree rstp
> no disable
> exit

/* Create LACP LAG */
> interface fortyGigE 0/48
> port-channel-protocol lacp
> port-channel 11 mode active
> no shut
> interface fortyGigE 0/60
> port-channel-protocol lacp
> port-channel 11 mode active
> no shut

/* Create LAN VLAN and tag interfaces */
/* Create FCoE VLAN */
> interface vlan 1003
> exit

/* Create FCoE MAP */
> fcoe-map SAN_FABRIC_B
> fabric-id 1003 vlan 1003
> fc-map 0efc01
> exit

/* Apply FCoE MAP to interface */
> interface fibrechannel 0/0
> fabric SAN_FABRIC_B
> no shutdown

/* Apply FCoE MAP and DCB MAP to interface */
> interface tengigabitethernet 0/12
> dcb-map SAN_DCB_MAP
> fcoe-map SAN_FABRIC_B

/* Make server port an edge-port */
> spanning-tree rstp edge-port
In Figure 24 below, you can see the output of the ‘switchshow’ command on the fabric A FC switch. Notice that the port connected to the Dell S5000 switch (port 4) now states “F-Port 1 N Port + 1 NPIV public” similar to those connected to the Compellent array which is in virtual port mode.
Figure 25: Output of the ‘nsshow’ command on the fabric A FC switch
Since we swapped the FC HBA card for a Dell QLogic CNA card, we do have to update the HBA ‘server object’ mapping on the Compellent storage array.
Center System Manager GUI. On the left-hand side we navigate to ‘Storage Center->Servers->Finance_Server’, and then we click the ‘Add HBAs to Server’ button. In Figure 26 below you can see we have added the ports corresponding to the new Dell QLogic QLE8262 CNA adapter to the ‘server object’.
Figure 28: Zoning for fabric B FC switch

> zonecreate financeServer1_p2_test,"50:00:d3:10:00:ed:b2:3c;50:00:d3:10:00:ed:b2:42;50:00:d3:10:00:ed:b2:3a;50:00:d3:10:00:ed:b2:40;20:01:00:0e:1e:0f:2d:8f"
> cfgcreate zoneCfg_test,"financeServer1_p2_test"
> cfgenable zoneCfg_test
> cfgsave

Figure 29: Output of the ‘zoneshow’ command on the fabric A FC switch
You can see that our zoning configuration matches what is displayed in Figure 2
Figure 30: Output of the ‘portshow 4’ command on the fabric A FC switch
To see information on NPIV devices logged into the fabric, use the ‘show npiv devices’ command as shown below. Note the FCoE MAC is ‘0e:fc:00:01:04:01’ (the FCoE Map + FC_ID as expected).
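The fabric-provided MAC address (FPMA) seen here is simply the 24-bit FC-MAP from the fcoe-map concatenated with the 24-bit FC_ID assigned at fabric login. The arithmetic can be sketched as:

```python
def fpma(fc_map: int, fc_id: int) -> str:
    """Fabric-Provided MAC Address: 24-bit FC-MAP followed by 24-bit FC_ID."""
    mac = (fc_map << 24) | fc_id
    # Format the 48-bit value as six colon-separated hex octets.
    return ":".join(f"{(mac >> shift) & 0xFF:02x}" for shift in range(40, -1, -8))

# fc-map 0efc00 from the fcoe-map, FC_ID 01:04:01 assigned by the fabric
print(fpma(0x0EFC00, 0x010401))  # 0e:fc:00:01:04:01
```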
Figure 31: Check NPIV devices logged into fabric A
To see currently active FIP-snooping sessions, use the ‘show fip-snooping sessions’ command.
Figure 35: See more detailed information on fcoe-map ‘SAN_FABRIC_A’
B: Converged Network Solution - Dell PowerEdge Server, Dell PowerVault storage array, and Dell S5000 as NPIV Proxy Gateway
We will first demonstrate a non-converged setup and then add the Dell S5000 to the picture. This will allow us to see how the connections and configuration change from a traditional non-converged environment to a converged environment with the introduction of the Dell S5000 switch.
(MPIO). For Windows Server 2008 R2 Enterprise, three load balancing policy options are available. A load balance policy is used to determine which path is used to process I/O. PowerVault Load Balancing Policy Options: 1. Round-robin with subset — The round-robin with subset I/O load balance policy routes I/O requests, in rotation, to each available data path to the RAID controller module that owns the virtual disks.
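The round-robin-with-subset policy can be sketched roughly as follows. This is an illustrative model, not the Microsoft DSM implementation, and the path names and controller numbers are hypothetical: I/O rotates only across the paths that reach the controller owning the virtual disk.

```python
from collections import namedtuple
from itertools import cycle

Path = namedtuple("Path", ["name", "controller"])

class RoundRobinWithSubset:
    """Sketch of the policy: I/O rotates only across the subset of paths
    that reach the RAID controller module owning the virtual disk."""
    def __init__(self, paths, owning_controller):
        subset = [p for p in paths if p.controller == owning_controller]
        self._rotation = cycle(subset)

    def next_path(self):
        return next(self._rotation)

# Two fabrics, each with one path to controller 1 and one to controller 2.
paths = [Path("fabA-c1", 1), Path("fabA-c2", 2),
         Path("fabB-c1", 1), Path("fabB-c2", 2)]

# Virtual disk owned by controller 1: only the c1 paths carry I/O.
policy = RoundRobinWithSubset(paths, owning_controller=1)
print([policy.next_path().name for _ in range(4)])
# ['fabA-c1', 'fabB-c1', 'fabA-c1', 'fabB-c1']
```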
Figure 37: Windows load balancing policy set by default to “Least Queue Depth”
The two FC switches we are using are Brocade 6505s, and the zoning configuration is below. The WWPNs starting with ‘10’ are the FC HBA WWPNs and the other WWPNs are for the PowerVault storage array.
Figure 39: Zoning for fabric B FC switch

> zonecreate financeServer1_p2_test,"10:00:8c:7c:ff:30:7d:29;20:24:90:b1:1c:04:a4:84;20:25:90:b1:1c:04:a4:84;20:44:90:b1:1c:04:a4:84;20:45:90:b1:1c:04:a4:84"
> cfgcreate zoneCfg_test,"financeServer1_p2_test"
> cfgenable zoneCfg_test
> cfgsave

On the fabric A FC switch you can see the WWPN of the server HBA port is ‘10:00:8c:7c:ff:30:7d:28’ and the WWPNs of the storage ports are ‘20:14:90:
Figure 40: Virtual disk (Finance) created on PowerVault MD3660f storage array
You can see in Figure 41 below that the virtual disk ‘Finance’ was created on the PowerVault storage array and mapped to be accessible by the server ‘D2WK1TW1’. When you are creating the virtual disk, it will ask you if you would like to map the disk to a detected host.
Figure 41: Host Mapping on PowerVault MD3660f Storage Array
As soon as the HBA on the Windows server detects storage available to it, it will be detected in the Windows disk management administration tool after performing a disk scan. To perform a disk scan, right click ‘Disk Management’ on the left-hand pane and select ‘Rescan Disks’. You must right click the detected virtual disk and initialize it.
Now the virtual disk on the PowerVault storage array displays in Windows just like a typical hard drive. Note, no special configuration was needed on the HBA.
Figure 43: Remote storage on PowerVault as seen in Windows as drive ‘F:’
To observe that the storage ports and HBA ports are logged into the fabric, you can use the ‘nsshow’ command on the Brocade FC switch as shown below in Figure 44.
Figure 44: Node logins on the fabric A FC switch
Figure 45: Zoning configuration on the fabric A FC switch
You can see that our zoning configuration matches what is displayed in Figure 38. Another useful FC switch command to check what ports are connected to what WWPNs is ‘switchshow’.
Figure 46: ‘switchshow’ output displays the WWPNs connected to the respective FC ports
Note, both controllers on the PowerVault are active and each FC switch has two paths to controller 1 and two paths to controller 2. They are all logged into the fabric. However, we’re only using one disk group with one virtual disk on the PowerVault which is owned by one controller (primary controller 1).
Figure 47: Changing the primary controller for the virtual disk
Adding the Dell S5000 Converged Switch to the Topology
In Figure 48, you can see how the traditional non-converged topology has changed with the introduction of the Dell S5000 switch in a possible use case. Note how the Dell S4810 Ethernet switches have been replaced by Dell S5000 converged switches.
Figure 48: Dell S5000 acting as a NPIV Gateway and allowing for a converged infrastructure
As you can see, a Dell PowerEdge R720 server with a two-port CNA is used to connect to two Dell S5000 switches which are then each connected to a FC switch. The FC switches are connected to the Dell PowerVault MD3660f storage array.
The Dell PowerEdge R720 server has its virtual Ethernet NICs configured via NIC teaming and connected to two separate Dell S5000 switches. The virtual HBA ports connect to the same Dell S5000 switches but are logically separated from the Ethernet NICs, and the NIC teaming configuration is not taken into account.
Configuration steps:
1. Create the LACP LAG up to the VLT.
2. Configure the port to the CNA as a hybrid port. Create a LAN VLAN and tag it to both the ‘tengigabitethernet 0/12’ interface going to the respective CNA and the port channel going up to the VLT.
3. Enable FC capability.
4. Create a DCB map and configure the priority-based flow control (PFC) and enhanced transmission selection (ETS) settings for LAN and SAN traffic.
Figure 51: Dell S5000 (fabric A) configuration

/* Enable RSTP (Enabled due to VLT config on upstream Z9000s) */
> enable
> config terminal
> protocol spanning-tree rstp
> no disable
> exit

/* Create LACP LAG */
> interface fortyGigE 0/48
> port-channel-protocol lacp
> port-channel 10 mode active
> no shut
> interface fortyGigE 0/60
> port-channel-protocol lacp
> port-channel 10 mode active
> no shut

/* Create LAN VLAN and tag interfaces */
/* Create FCoE VLAN */
> interface vlan 1002
> exit

/* Create FCoE MAP */
> fcoe-map SAN_FABRIC_A
> fabric-id 1002 vlan 1002
> fc-map 0efc00
> exit

/* Apply FCoE MAP to interface */
> interface fibrechannel 0/0
> fabric SAN_FABRIC_A
> no shutdown

/* Apply FCoE MAP and DCB MAP to interface */
> interface tengigabitethernet 0/12
> dcb-map SAN_DCB_MAP
> fcoe-map SAN_FABRIC_A

/* Make server port an edge-port */
> spanning-tree rstp edge-port
Figure 52: Dell S5000 (fabric B) configuration

/* Enable RSTP (Enabled due to VLT config on upstream Z9000s) */
> enable
> config terminal
> protocol spanning-tree rstp
> no disable
> exit

/* Create LACP LAG */
> interface fortyGigE 0/48
> port-channel-protocol lacp
> port-channel 11 mode active
> no shut
> interface fortyGigE 0/60
> port-channel-protocol lacp
> port-channel 11 mode active
> no shut

/* Create LAN VLAN and tag interfaces */
/* Create FCoE VLAN */
> interface vlan 1003
> exit

/* Create FCoE MAP */
> fcoe-map SAN_FABRIC_B
> fabric-id 1003 vlan 1003
> fc-map 0efc01
> exit

/* Apply FCoE MAP to interface */
> interface fibrechannel 0/0
> fabric SAN_FABRIC_B
> no shutdown

/* Apply FCoE MAP and DCB MAP to interface */
> interface tengigabitethernet 0/12
> dcb-map SAN_DCB_MAP
> fcoe-map SAN_FABRIC_B

/* Make server port an edge-port */
> spanning-tree rstp edge-port
Figure 53: Output of the ‘switchshow’ command on the fabric A FC switch
The ‘nsshow’ command output below shows that both the Dell QLogic CNA port and Dell S5000 switch are logged into fabric A. Note here that the QLogic adapter WWPN is ‘20:01:00:0e:1e:0f:2d:8e’ and the Dell S5000 WWPN is ‘20:00:5c:f9:dd:ef:25:c0’. The four storage WWPNs are unchanged.
Figure 54: Output of the ‘nsshow’ command on the fabric A FC switch
Since we swapped the FC HBA card for a Dell QLogic CNA card, we need to update the zoning configuration and remove the FC HBA WWPN and add the Dell QLogic CNA WWPN to the respective zoning configurations on each switch.
the zoning configuration.
If we look at the details of what’s connected to port 4 of the fabric A Fibre Channel switch, we see the WWPNs of both the Dell S5000 switch and the Dell QLogic CNA.
Figure 58: Output of the ‘portshow 4’ command on the fabric A FC switch
To see information on NPIV devices logged into the fabric, use the ‘show npiv devices’ command as shown below. Note the FCoE MAC is ‘0e:fc:00:01:04:01’ (the FCoE Map + FC_ID as expected).
Figure 59: Check NPIV devices logged into fabric A
To see currently active FIP-snooping sessions, use the ‘show fip-snooping sessions’ command.
Figure 63: See more detailed information on fcoe-map ‘SAN_FABRIC_A’
C: Using Dell S4810 or Dell MXL/IOA Blade switch as a FIP-snooping Bridge
To stick to our original diagram from section A, our example setup has the Dell PowerEdge R720 server with a Dell QLogic QLE8262 CNA, a Dell S5000 switch as a NPIV Proxy Gateway, and a Dell Compellent storage array for FC storage.
Figure 65: Fabric A Dell S4810 (FSB) configuration

> enable
> config terminal
> dcb stack-unit 0 pfc-buffering pfc-ports 64 pfc-queues 2
> cam-acl l2acl 6 ipv4acl 2 ipv6acl 0 ipv4qos 2 l2qos 1 l2pt 0 ipmacacl 0 vman-qos 0 ecfmacl 0 fcoeacl 2 iscsioptacl 0
> exit
> write
> reload
> enable
> config terminal
> protocol spanning-tree rstp
> no disable
> exit
> dcb enable
> feature fip-snooping
> service-class dynamic dot1p
> interface te
> config terminal
> interface vlan 5
> tagged tengigabitethernet 0/42
> tagged port-channel 20
> fip-snooping fc-map 0efc01
> fip-snooping enable
> end
> write

Figure 66: N_Port WWPN logged into fabric A with S4810 as FSB
As mentioned prior, with the Dell PowerEdge M1000e chassis it’s more likely the S5000 switch will be at ToR going to all the storage at EoR.
Figure 67: Dell S5000 acting as a NPIV Proxy Gateway and Dell MXL as FSB
One thing to note in Figure 67 above is that a separate link is needed for FCoE from the MXL to the S5000; the reason for this is because FCoE is not supported over VLT.
Note, it is not necessary to use only the 40 GbE links to the downstream MXL/IOA as shown in the following diagrams. If multiple M1000e chassis will be connecting to the S5000, it would be preferable to use a 40 GbE QSFP+ to 4 x 10 GbE breakout cable from the MXL/IOA to the S5000.
Figure 69: Fabric A Dell MXL (FSB) configuration

> enable
> config terminal
> dcb enable
> feature fip-snooping
> service-class dynamic dot1p
> interface range fortyGigE 0/33 - 37
> port-channel-protocol lacp
> port-channel 12 mode active
> exit
> protocol lldp
> no advertise dcbx-tlv ets-reco
> dcbx port-role auto-upstream
> no shut
> exit
> interface port-channel 12
> portmode hybrid
> switchport
> fip-snooping port-mode fcf
> no shutdown
Figure 70: N_Port WWPN logged into fabric A with MXL as FSB
Figure 71: Dell S5000 acting as a NPIV Proxy Gateway and Dell IOA as FSB
The Dell PowerEdge M I/O Aggregator requires no configuration in terms of setting up the environment for FCoE. By default it is already preconfigured for FCoE transit. Also by default, all uplink ports are lagged together via LACP in ‘port-channel 128’.
preconfiguration already applied by default, once the Dell S5000 and server are configured properly, the IOA will automatically function as a FCoE transit switch. For the Dell MXL, in Figure 68, we manually applied much of the same configuration, such as uplink-failure detection.
D: FCoE CNA adapter configuration specifics
As mentioned prior, it’s important to note that as long as the appropriate drivers for both FC and Ethernet are installed, the operating system sees the two CNA ports as multiple Ethernet ports and FC HBA ports if NIC partitioning (NPAR) is employed. Note, in the following examples NPAR is used in conjunction with FCoE. It is also possible to deploy FCoE without the use of NPAR.
Figure 76: View of Broadcom BCM57810S in Broadcom Advanced Control Suite 4
In ‘Control Panel->Network and Internet->Network Connections’, we see eight virtual ports as shown in Figure 77.
Figure 77: Virtual adapter network connections as seen in Windows
By default each function is configured only as a NIC. You can see in Figure 76, for the virtual port highlighted, FCoE is disabled.
To keep things simple, and based on our requirements, we use one virtual port on each physical port and disable the rest. This can be done easily through Broadcom Advanced Control Suite 4 by selecting the virtual port in the left pane, expanding the ‘Resource Reservations’ item in the right pane, clicking the ‘Configure’ button, clicking the checkbox next to ‘Ethernet/Ndis’ to disable it, and confirming the request.
Figure 80: Windows view in Device Manager of one Broadcom BCM57810S CNA with NPAR and FCoE enabled
Creating a NIC Team
Since the NICs and HBAs are seen as separate ports, we can treat them as separate entities and create a NIC team with the virtual CNA NICs. To configure a NIC team on our two virtual NIC ports, click the ‘Filter’ drop-down box on the top left of the ‘Broadcom Advanced Control Suite 4’ GUI and select ‘TEAM VIEW’.
On the next dialog, we select the respective adapters to NIC team.
Figure 82: Selecting virtual NIC ports on Broadcom BCM57810S to NIC team
Next, we select the port to use for standby.
Figure 83: Additional configuration to create active/active NIC team on Broadcom BCM57810S
We also leave the Broadcom LiveLink option at the default setting.
Figure 84: We leave ‘LiveLink’ feature on Broadcom BCM57810S at the default setting
Next, we enter VLAN information. We have set up LAN traffic on VLAN 5 in our topology.
Figure 86: Select ‘Tagged’ for the VLAN configuration on Broadcom BCM57810S
Figure 87: We use VLAN 5 for our LAN traffic
Figure 88: We are not configuring additional VLANs
The final step is to confirm the changes.
Figure 89: Commit changes to create NIC team on Broadcom BCM57810S
Once the configuration is complete, we see the below NIC team setup with both virtual ports as members.
Figure 90: NIC team view in Broadcom Advanced Control Suite 4 of Broadcom BCM57810S
Now Windows Server 2008 R2 Enterprise sees a virtual adapter as shown in Figure 91 and Figure 92.
Dell QLogic QLE8262
QLogic offers CNAs in three form factors for Dell 12G servers: QLE8262 standard PCI Express, QME8262-k mezzanine for Dell blade servers, and QMD8262-k for the Dell Network Daughter Card. The Dell QLogic QLE8262 allows for up to four partitions per physical port and eight partitions total per 2-port adapter. A partition can be looked upon as a virtual port.
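The partition arithmetic above can be made concrete with a short sketch. The structure here is purely illustrative (the adapter's actual PCI function numbering and personalities depend on how NPAR is configured):

```python
# QLE8262 NPAR model (illustrative): two physical ports, up to four
# partitions per port, eight partitions total per adapter. Each partition
# appears to the OS as its own virtual port.
PORTS = 2
PARTITIONS_PER_PORT = 4

partitions = [
    {"port": port, "partition": part}
    for port in range(PORTS)
    for part in range(PARTITIONS_PER_PORT)
]

print(len(partitions))  # 8
```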
Figure 94: Dell QLogic QLE8262 CNA FCoE/NPAR Configuration
Creating a NIC Team
Since the NICs and HBAs are seen as virtual ports, we can treat them as separate entities and create a NIC team with the virtual CNA NIC ports. In Figure 95 and Figure 96, you can see we NIC team the two virtual NIC ports and use ‘Failsafe Team’. We use Windows Server 2008 R2 Enterprise in this example.
Figure 96: Dell QLogic QLE8262 adapter properties displaying the created NIC team
The NIC team will now show in Windows as a new virtual adapter as shown in Figure 97 and Figure 98.
Figure 99: Tagging the NIC team with VLAN 5