Installation Guide
Storage Networking (Unified Fabric Pilot)
QLogic 8100 Series Adapters, QLogic 5800 Series Switches, Cisco Nexus 5000 Series Switches, JDSU Xgig Platform
51031-00 A
Information furnished in this manual is believed to be accurate and reliable. However, QLogic Corporation assumes no responsibility for its use, nor for any infringements of patents or other rights of third parties which may result from its use. QLogic Corporation reserves the right to change product specifications at any time without notice. Applications described in this document for any of these products are for illustrative purposes only.
Preface

This document describes how to install and certify a pilot unified fabric configuration. This configuration demonstrates lossless Ethernet and data center bridging (DCB), which includes priority flow control (PFC), enhanced transmission selection (ETS), and data center bridging exchange protocol (DCBX), for a Fibre Channel and 10Gb Ethernet unified fabric.
The Ethernet Alliance has white papers that further describe Enhanced Ethernet:
http://www.ethernetalliance.org/library/ethernet_in_the_data_center/white_papers

Documentation Conventions

This guide uses the following documentation conventions:
NOTE: provides additional information.
CAUTION! indicates the presence of a hazard that has the potential of causing damage to data or equipment.
Text in italics indicates terms, emphasis, variables, or document titles. For example:
For a complete listing of license agreements, refer to the QLogic Software End User License Agreement.
What are shortcut keys?
To enter the date, type mm/dd/yyyy (where mm is the month, dd is the day, and yyyy is the year).
1 Overview

Martin (2010) defines a converged network as follows: "A unified data center fabric is a networking fabric that combines traditional LAN and storage area network (SAN) traffic on the same physical network with the aim of reducing architecture complexity and enhancing data flow and access. To make this work, the traditional Ethernet network must be upgraded to become lossless and provide additional data center networking features and functions."
2 Planning

Selecting a Test Architecture

When planning the pilot of a unified network, it is important to choose both Fibre Channel and traditional Ethernet-based traffic flows. Combining a test SAN infrastructure and a test LAN infrastructure is often the easiest and most available option for a pilot project. Alternatively, a critical business application test system can closely simulate a production environment.
Where and How to Deploy

Currently, a Converged Network Adapter must always be connected to a switch that has DCB. There are two types of switches that have DCB: a DCB switch and an FCoE switch. The DCB switch has enhanced Ethernet support, but does not have Fibre Channel forwarder (FCF) capabilities, and does not support the conversion of Fibre Channel frames to FCoE frames. A DCB switch supports converged-Ethernet-based protocols, but does not support Fibre Channel protocols.
3 Architecture

Approach

The test configuration described in this section was installed and validated at the QLogic NETtrack Developer Center (NDC) located in Shakopee, MN. The NDC provides QLogic customers and alliance partners with the test tracks and engineering staff to test interoperability and optimize performance with the latest server, networking, and storage technology.
Reference Architecture Description

Figure 3-1. Reference Architecture Diagram
Equipment Details

Table 3-1 lists the reference architecture equipment. Table 3-2 lists the JDSU Xgig equipment and testing software.

Table 3-1.
4 Installation

This section describes how to set up an FCoE environment. It assumes a general understanding of SAN administration concepts.

Determine Configuration

QLogic FCoE adapters are supported on multiple hardware platforms and operating systems. Generally, the following specifications apply, but you should always check the QLogic Web site for current information.
Equipment Installation and Configuration

This section focuses on the converged network installation and configuration. You do not have to change your current storage and network management practices.

Install the Converged Network Adapter Hardware

Begin by identifying a pilot server that meets Converged Network Adapter hardware requirements (PCI slot type, length, available slot) and install the adapters.

To install the adapter hardware:
1.
Install the Adapter Drivers

To install the FCoE and Ethernet drivers:
1. Go to the QLogic Driver Downloads/Documentation page (Figure 4-1) at http://driverdownloads.qlogic.com/QLogicDriverDownloads_UI/default.aspx.
Figure 4-1. Driver Download Page Model Selection
2. In the table at the bottom of the page, select Converged Network Adapters, the adapter model, your operating system, and then click Go.
3.
Install SANsurfer FC HBA Manager

To install SANsurfer® FC HBA Manager:
1. Go to the QLogic Driver Downloads/Documentation page at http://driverdownloads.qlogic.com/QLogicDriverDownloads_UI/default.aspx.
2. In the table at the bottom of the page (Figure 4-2), select Converged Network Adapters, the adapter model, your operating system, and then click Go.
Figure 4-2. Driver Download Page Driver Link
3.
Cabling

To connect the Fibre Channel and Ethernet cables:
1. Connect the Fibre Channel cables from the servers to the Cisco FCoE Nexus switch.
2. Connect the Fibre Channel cables from the storage to the Cisco FCoE Nexus switch.
3. Connect any necessary Ethernet cables for device management and iSCSI storage.
Configuring DCB on a Nexus Switch

NOTE: In this procedure, you may need to adjust some of the parameters to suit your environment, such as VLAN IDs, Ethernet interfaces, and virtual Fibre Channel interfaces. In this example, the Cisco FCF carries NIC traffic on priority 2 and VLAN 2, and FCoE traffic on priority 3 and VLAN 1002.

To enable PFC, ETS, and DCB functions on a Cisco Nexus 5000 series switch:
1. Open a terminal configuration session.
5. Configure the qos policy-maps.

   policy-map type qos policy1
     class type qos class-nic
       set qos-group 2

6. Configure the queuing policy-maps and assign network bandwidth. Divide the network bandwidth evenly between FCoE and NIC traffic.

   policy-map type queuing policy1
     class type queuing class-nic
       bandwidth percent 50
     class type queuing class-fcoe
       bandwidth percent 50
     class type queuing class-default
       bandwidth percent 0

7.
10. Configure Ethernet port 1/3 to enable VLAN-1002-tagged FCoE traffic and VLAN-2-tagged NIC traffic.

    interface Ethernet1/3
      switchport mode trunk
      switchport trunk allowed vlan 2,1002
      spanning-tree port type edge trunk

    interface vfc3
      bind interface Ethernet1/3
      no shutdown

11. Display the configuration to confirm that it is correct.
    bandwidth percent 50
    Class-map (queuing): class-fcoe (match-any)
      Match: qos-group 1
      bandwidth percent 50
    Class-map (queuing): class-default (match-any)
      Match: qos-group 0
      bandwidth percent 0

    Service-policy (queuing) output: policy1
      policy statistics status: disabled
      Class-map (queuing): class-nic (match-any)
        Match: qos-group 2
        bandwidth percent 50
      Class-map (queuing): class-fcoe (match-any)
        Match: qos-group 1
        bandwidth percent 50
      Class-map (
Verify Equipment Connectivity

To verify that all equipment is logged in and operating properly:
1. Verify LAN management capability on all devices through the associated device management application.
2. Verify that servers and Converged Network Adapters are logged into an FCoE switch under both Ethernet (eth1/16) and Fibre Channel (vfc116) protocols. Figure 4-4 shows the Device Manager interface for the Cisco Nexus 5000 FCoE switch.
4. When the LUNs have been created and all zoning is complete, use the management interface to add the Converged Network Adapter Fibre Channel WWNs and iSCSI initiators to your storage so that the servers can discover the LUNs. Figure 4-6 shows an example of the discovered FCoE Converged Network Adapters and the created initiator groups on a NetApp Filer after zoning is complete.
Figure 4-6. NetApp Zone Validation
5.
Figure 4-7. SANsurfer Management Validation Screen 1
Figure 4-8. SANsurfer Management Validation Screen 2
7. Use the operating system tools to create disk partitions and volumes on your servers using the assigned LUNs.
8. Proceed to validation in Section 5, Validation Methods for DCB and FCoE.
5 Validation Methods for DCB and FCoE

Validation Step 1—Verify PFC and ETS Parameters

Verify the interoperability of the unified fabric devices by capturing the DCBX exchanges between the QLogic initiator and each target. The key parameters to verify are:
- DCBX version used by each network component
- Priorities assigned to each protocol
- Priorities with PFC enabled
- Bandwidth assigned to each priority

Validation Process

1. Place the Analyzer into the two paths shown in Figure 5-1.
a. Select the trigger mode.
b. Set up the capture trigger condition to capture any LLDP frames.
c. Set the trigger fill to capture 90 percent after the trigger.
Figure 5-2. Analyzer TraceControl Configured to Capture LLDP Frames Only Between the Adapter, Switch, and Target
4.
5.
- pfc_enable value is the same for the adapter, switch, and targets.
- Application type-length-value (TLV) contains one entry for the FCoE protocol and one entry for the FCoE initialization protocol (FIP).
- The user_priority_map should be the same for the adapter, switch, and targets.
- Priority group TLV for the switch contains the ETS bandwidth allocation configured at the switch console.
Validation Results

Figure 5-3 shows one trace captured between the adapter and FCoE switch exchanging DCB parameters using the DCBX protocol. Both peers use version 1.01.

Figure 5-3.
From the trace, you can verify the following:
- DCBX Oper_Version is 0 for all devices.
- ER bit is OFF in all TLVs for all devices.
- EN bit is ON in all TLVs for all devices.
- W bit is ON for the adapter and target, and OFF for the switch.
- pfc_enable value is 0x08 for all devices, which means PFC is enabled for priority 3.
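The pfc_enable field is a bitmap with one bit per Ethernet priority (bit n set means PFC is enabled for priority n), so 0x08 decodes to priority 3 alone. A minimal sketch of that decoding (the function name is illustrative, not part of any QLogic or JDSU tool):

```python
def pfc_enabled_priorities(pfc_enable: int) -> list[int]:
    """Decode a PFC enable bitmap into the list of enabled priorities.

    Bit n of the byte corresponds to Ethernet priority n (0-7),
    as exchanged in the DCBX PFC TLV.
    """
    return [prio for prio in range(8) if pfc_enable & (1 << prio)]

# 0x08 = 0b00001000 -> only priority 3 (the FCoE priority) is lossless
print(pfc_enabled_priorities(0x08))  # [3]
print(pfc_enabled_priorities(0x18))  # [3, 4]
```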
Validation Step 2—FIP Validation

The next step is to verify the FCoE initialization protocol (FIP). FIP performs the FCoE link initialization and maintenance required to enable FCoE traffic in the unified fabric. FIP ensures that:
- FCoE switches advertise themselves in broadcast messages.
- End devices query for the VLAN ID to be used for FCoE.
- End devices query for other FCoE characteristics.
4. The switch responds with the FIP Discovery Advertisement/Response To Solicitation message. This message is enlarged to the maximum FCoE frame size. The delivery of this message proves that the network is capable of delivering maximum-sized FCoE frames. The message contains the Available For Login (A) bit, which must be ON so that end devices can log in.
Validation Results

Figure 5-6 shows the FIP test results between the QLogic QLE8152 adapter and the Cisco Nexus 5000 FCoE switch.

Figure 5-6. FIP Verification Results—Converged Network Adapter and FCoE Switch

From the captured trace, you can verify the following:
- FIP version used by both devices is 1.
- FCoE VLAN ID returned by the switch is 1002.
- The link is capable of delivering 2181-byte frames.
In summary, the FIP link initialization process is successful—the end device successfully connects to the switch, and the FCoE virtual link is set up. Similar results should be seen on the FCoE target side. When the FCoE link is up, the end device sends an FIP Keep Alive frame every eight seconds so that the switch knows that the device is present even when the link is idle.
Unlike end devices, FCoE switches do not send FIP Keep Alive messages. Instead, FCoE switches send FIP Discovery Advertisement messages every eight seconds, which notify end devices that the switches are still present. Figure 5-9 shows the time difference between two FIP Discovery Advertisements.
Figure 5-11 shows the time difference between the last FIP Keep Alive Sent message and the FIP Clear Virtual Link message from the switch.

Figure 5-11.
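The eight-second keep-alive cadence matters because an FCF declares the virtual link dead only after several missed keep-alives: FC-BB-5 expresses the timeout as a multiple of the FIP keep-alive advertisement period (2.5 × is a commonly used value, giving roughly a 20-second window with the 8-second period seen here). A rough sketch of that timeout arithmetic, with the multiplier as an assumption to check against your switch's FIP timer settings:

```python
def fip_link_timeout(fka_adv_period_s: float = 8.0,
                     missed_multiplier: float = 2.5) -> float:
    """Seconds with no FIP Keep Alive after which the FCF may clear the virtual link.

    fka_adv_period_s: keep-alive interval advertised by the FCF (8 s in this pilot).
    missed_multiplier: how many periods may elapse before the virtual link is
    declared dead (assumed 2.5 here).
    """
    return fka_adv_period_s * missed_multiplier

print(fip_link_timeout())  # 20.0
```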
Validation Step 3—FCoE Function and I/O Tests

Figure 5-12 shows the three test configurations that validate FCoE function and I/O in the FCoE fabric:
- Converged Network Adapter host > FCoE switch > FCoE storage
- Converged Network Adapter host > FCoE switch > direct-attached Fibre Channel storage
- Converged Network Adapter host > FCoE switch > Fibre Channel switch > Fibre Channel storage
Converged Network Adapter Host
3. Configure the test as follows:
- MLTT test pattern: mixed read/write operations, 100 percent read and write traffic
- Data size: 512K
- Queue depth: eight
4. Use MLTT to calculate IOPS and latency for the three test configurations, and verify data integrity.
5. Set up the Analyzer’s TraceControl to capture FCoE frames over the Ethernet VLAN by setting these frame types as trigger conditions.
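For a fixed-block workload like the one configured above, throughput, IOPS, and latency are tied together by simple queueing arithmetic: IOPS ≈ throughput ÷ I/O size, and average latency ≈ queue depth ÷ IOPS (Little's law for a closed loop at constant queue depth). A sketch of that back-of-envelope check (the numbers below are illustrative, not measured results from this pilot):

```python
def iops(throughput_mbps: float, io_size_kb: float) -> float:
    """I/Os per second implied by a throughput (MBps) and an I/O size (KB)."""
    return throughput_mbps * 1024.0 / io_size_kb

def avg_latency_ms(queue_depth: int, iops_value: float) -> float:
    """Little's law: mean outstanding I/Os = completion rate x mean latency."""
    return queue_depth / iops_value * 1000.0

# Example: 400 MBps of 512K I/O at queue depth 8
rate = iops(400.0, 512.0)
print(rate, avg_latency_ms(8, rate))  # 800.0 10.0
```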
Similar to the previous test results, the captured FCoE traces listed in Table 5-1 show no FCoE frame errors.

Table 5-1.
Validation Step 4—Validate PFC and Lossless Link

The critical factor for a successful FCoE deployment is the ability of the virtual Ethernet link to carry lossless FCoE traffic. Figure 5-14 shows the test configuration used to validate PFC function and to confirm that the link is lossless.

Figure 5-14.
3. To validate that PFC Pause frames are sent to reduce the traffic, configure TraceControl to capture a large buffer when it encounters a PFC Pause frame. To set up a capture:
a. Select the trigger mode.
b. Set up the trigger condition to capture on a PFC Pause frame and 80 percent of post-fill.
As described earlier, the Expert software analyzes all the pauses in a capture and reports critical errors, such as Frame Received while PFC Class Paused (Figure 5-16). These errors are serious—receiving such a frame is equivalent to sending a Fibre Channel frame on a link without waiting for a credit.
Table 5-2 shows some of the statistics reported by the Expert software for evaluating PFC function and performance. The sample data indicates that 33 frames were received during PFC pauses; as a result, the link may not guarantee lossless traffic for the FCoE application. Furthermore, the large variance between the maximum and minimum exchange completion times is caused by the PFC pauses.

Table 5-2.
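A "frame received while PFC class paused" check like Expert's can be modeled as a scan over time-ordered capture records. This toy sketch (the record format is invented for illustration; real Xgig traces carry far more fields) counts data frames on a priority that arrive inside that priority's pause window:

```python
def frames_during_pause(events, priority=3):
    """Count frames received on `priority` while that priority was paused.

    events: time-ordered (time_s, kind, prio, pause_dur_s) tuples.
    kind is "pause" (PFC pause for prio, lasting pause_dur_s seconds)
    or "frame" (data frame on prio). Simplified: a new pause replaces the old.
    """
    paused_until = 0.0
    violations = 0
    for time_s, kind, prio, dur in events:
        if prio != priority:
            continue
        if kind == "pause":
            paused_until = time_s + dur
        elif kind == "frame" and time_s < paused_until:
            violations += 1  # lossless guarantee at risk
    return violations

trace = [
    (0.000, "pause", 3, 0.010),  # pause priority 3 for 10 ms
    (0.004, "frame", 3, 0.0),    # violation: arrives while paused
    (0.020, "frame", 3, 0.0),    # OK: pause has expired
]
print(frames_during_pause(trace))  # 1
```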
2. Run all the tests with the Write configuration (Figure 5-18), and then repeat all the tests with the Read configuration (Figure 5-19).
Figure 5-18. Verifying ETS with Write Operations to the Storage
Figure 5-19. Verifying ETS with Read Operations from the Storage
3. Start the FCoE and other traffic (TCP, iSCSI) simultaneously in the MLTT application to saturate the link.
4.
7. Use Expert to analyze the capture, and verify the following:
- No errors occur on FCoE traffic.
- Link throughput per priority in MBps (Figure 5-20).
- Each traffic class can still generate I/O at its ETS setting when the link is saturated.
- Exchange completion time.
Figure 5-20. Throughput per Priority

Validation Results

Table 5-3 lists the port settings to verify the ETS setup.
Figure 5-21 shows that the FCoE traffic is read at the rate of 837MBps. The throughput numbers on ports (1,1,1) and (1,2,2) are equal, indicating that all traffic from the FCoE storage is going to the server. The throughput of 837MBps represents the maximum rate for FCoE in this configuration.
Figure 5-21. FCoE Traffic in a Read Operation from Storage
In Figure 5-22, TCP read traffic starts while maintaining the FCoE read operations.
Figure 5-23 shows how to enable cross-port analysis in Expert's Edit / Preferences dialog. In a cross-port analysis, Expert verifies that each frame makes its way across the switch, and reports frame losses and the latency through the switch. In addition, Expert reports aborted sequences and retransmitted FCoE, iSCSI, and TCP traffic.
The next step is to verify that the switch allows more TCP traffic when more bandwidth is available. Figure 5-25 shows that the TCP traffic increases to 741MBps after the FCoE traffic stops. Again, the switch is behaving properly.
Figure 5-25. LAN Traffic Only
Table 5-4 summarizes the throughput achieved for the previous three tests.

Table 5-4.
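The pattern in these three tests—a class taking the whole link when alone, and roughly its ETS share under contention—can be captured in a small check. The sketch below uses illustrative numbers and an assumed tolerance band (not measured values from Table 5-4) to flag a class whose share of a saturated link strays far from its configured ETS percentage:

```python
def ets_share_ok(class_mbps: float, total_mbps: float,
                 ets_percent: float, tol: float = 10.0) -> bool:
    """True if a traffic class's share of a saturated link is within
    `tol` percentage points of its configured ETS bandwidth percent."""
    share = class_mbps / total_mbps * 100.0
    return abs(share - ets_percent) <= tol

# Hypothetical saturated-link readings under a 50/50 FCoE/NIC ETS policy
fcoe, tcp = 460.0, 440.0
print(ets_share_ok(fcoe, fcoe + tcp, 50.0))  # True
print(ets_share_ok(800.0, 900.0, 50.0))      # False: the other class is starved
```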
Figure 5-26 shows the relationship between FCoE, TCP, and pause frames. Before the TCP traffic starts, server ingress equals the FCoE traffic. When the TCP traffic starts, server ingress is the sum of the FCoE and TCP traffic. The switch starts sending PFC pause requests to the storage when the TCP traffic starts, to slow the FCoE traffic going to the server.
Figure 5-26.
Exchange Completion Time (Latency Monitor)

Latency is a measurement of how fast commands complete: the time between command issue and status received, measured in milliseconds (ms). Figure 5-27 shows the Fibre Channel read-response time in relation to changes in traffic load.
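Exchange completion time, as defined above, is just status-received time minus command-issue time, collected per exchange. A minimal sketch of that pairing (assuming a simplified event stream keyed by exchange ID; real analyzer output carries far more detail):

```python
def completion_times_ms(events):
    """Map exchange ID -> completion time in ms.

    events: (time_ms, exchange_id, kind) tuples, where kind is
    "command" (command issued) or "status" (status received).
    """
    start, result = {}, {}
    for time_ms, xid, kind in events:
        if kind == "command":
            start[xid] = time_ms
        elif kind == "status" and xid in start:
            result[xid] = time_ms - start[xid]
    return result

events = [
    (0, 1, "command"),
    (2, 2, "command"),
    (10, 1, "status"),   # exchange 1 completed in 10 ms
    (52, 2, "status"),   # exchange 2 completed in 50 ms
]
print(completion_times_ms(events))  # {1: 10, 2: 50}
```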
Validation Step 6—iSCSI Function and I/O Test

This test verifies that FCoE and iSCSI (IP) traffic function well together within the unified fabric environment. The key measurement parameters are throughput, IOPS, and completed I/O exchange rate for each traffic type. Figure 5-28 shows the configuration for testing iSCSI and I/O functions.

Figure 5-28. iSCSI Traffic Performance Validation Setup
- Queue depth: eight.
9. Capture a trace for the iSCSI storage.
10. Use Expert to analyze the iSCSI storage trace and calculate the average, minimum, and maximum throughput.
11. Capture a trace of the same read application for Fibre Channel and iSCSI storage together through the unified fabric.
12.
Table 5-5 lists the average, minimum, and maximum data.

Table 5-5. Performance Comparison Between Separate and Combined Applications

Test Option | FCoE Only (MBps) | iSCSI Only (MBps) | FCoE (MBps) | iSCSI (MBps) | Combined (MBps)
Average | 390.670 | 365.706 | 380.240 | 351.356 | 740.507
Maximum | 397.937 | 420.04 | 405.570 | 460.579 | 862.970
Minimum | 275.835 | 0 | 169.261 | 63.266 | 454.

(The last three columns were measured with FCoE and iSCSI running together.)
Validation Step 7—Virtualization Verification

This validation step compares the performance of virtual machines on a single host with performance on multiple hosts. The test runs one iSCSI initiator in one virtual machine, and one FCoE initiator in another virtual machine, as shown in Figure 5-30.

Figure 5-30. Setup for Verifying Virtualization

Validation Process

1.
6. Configure the test as follows:
- Trigger: any PFC frame
- Post fill: 90 percent after the trigger
- Data size: 512K
- Traffic: 100 percent read on both virtual machines
7. Use the Analyzer to capture a trace.
8. Use Expert to analyze the trace, and calculate the average, minimum, and maximum throughput results per priority.
9. Compare I/O performance for the two virtual machines.
Table 5-6 lists the average, minimum, and maximum throughput values.

Table 5-6. FCoE and iSCSI Application Throughput when Running Separate Virtual Machines on One Physical Server

Test Option | FCoE (MBps) | iSCSI (MBps) | Combined (MBps)
Average | 835.819 | 166.289 | 999.367
Maximum | 1182.395 | 653.632 | 1182.395
Minimum | 367.888 | 0 | 371.
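A quick sanity check on per-class numbers like those in Table 5-6 is that the average combined throughput should roughly equal the sum of the per-class averages (maxima need not add, since peaks rarely coincide). A sketch of that check, with the tolerance as an assumption:

```python
def combined_consistent(class_avgs, combined_avg, tol_percent=5.0):
    """True if the combined average throughput is within tol_percent
    of the sum of the per-class average throughputs."""
    expected = sum(class_avgs)
    return abs(combined_avg - expected) / expected * 100.0 <= tol_percent

# Average MBps values from Table 5-6: FCoE, iSCSI, and combined
print(combined_consistent([835.819, 166.289], 999.367))  # True (about 0.3% apart)
```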
In summary, multiple factors may determine the behavior of virtual-server-based applications in the unified network environment. The internal application balancing algorithm (either among virtual servers or within the unified storage) plays the key role, and could interfere with DCB-based bandwidth management during congestion.
A Hardware and Software

Cisco Unified Fabric Switch

The Cisco Nexus 5000 Series switch enables a high-performance, standards-based, Ethernet unified fabric. The platform consolidates separate LAN, SAN, and server cluster environments into a single physical fabric while preserving existing operational models.
JDSU (formerly Finisar) Equipment and Software

JDSU Xgig

JDSU Xgig® is a unified, integrated platform employing a unique chassis and blade architecture to provide users with the utmost in scalability and flexibility. Various blades support a wide range of protocols and can be easily configured to act as a protocol Analyzer, Jammer, bit error rate tester (BERT), traffic generator, and Load Tester, all without changing hardware.
QLogic QLE8142 Converged Network Adapter

The QLogic QLE8142 Converged Network Adapter is a single-chip, fully offloaded FCoE initiator that operates in both virtual and non-virtual environments, running over an Enhanced Ethernet fabric. The QLE8142 adapter initiator boosts system performance with 10Gbps speed and full hardware offload for FCoE protocol processing.
B Data Center Bridging Technology

The following descriptions of Enhanced Ethernet were taken from Ethernet: The Converged Network Ethernet Alliance Demonstration at SC'09, published by the Ethernet Alliance, November 2009.

Data Center Bridging (DCB)

For Ethernet to carry LAN, SAN, and IPC traffic together and achieve network convergence, some enhancements are required.
The Enhanced Transmission Selection (ETS) protocol addresses bandwidth allocation among the various traffic classes to maximize bandwidth usage. The IEEE 802.1Qaz standard specifies the protocol to support allocation of bandwidth among priority groups. ETS allows each node to control bandwidth per priority group.
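The per-priority-group allocation that 802.1Qaz describes can be modeled as a weighted split of link bandwidth in which a group that needs less than its share returns the excess to the others. This is a simplified steady-state sketch of that idea (real ETS schedulers operate per-frame in hardware; nothing here is from the standard's text beyond the weighted-sharing principle):

```python
def ets_allocate(link_mbps, groups):
    """Steady-state bandwidth per priority group under ETS-style sharing.

    groups: dict name -> (weight_percent, demand_mbps).
    Each group is capped at its demand; leftover bandwidth is re-split
    among still-hungry groups by weight. Returns dict name -> Mbps.
    """
    alloc = {name: 0.0 for name in groups}
    remaining = float(link_mbps)
    active = dict(groups)
    while active and remaining > 1e-9:
        total_w = sum(w for w, _ in active.values())
        for name, (w, demand) in list(active.items()):
            fair = remaining * w / total_w
            alloc[name] += min(fair, demand - alloc[name])
        remaining = link_mbps - sum(alloc.values())
        active = {n: wd for n, wd in active.items() if alloc[n] < wd[1] - 1e-9}
    return alloc

# 10GbE link, 50/50 FCoE/NIC policy; the NIC group only wants 2000 Mbps,
# so FCoE absorbs the unused share
print(ets_allocate(10000, {"fcoe": (50, 9000), "nic": (50, 2000)}))
# {'fcoe': 8000.0, 'nic': 2000.0}
```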
Figure B-1. Priority Flow Control

Fibre Channel over Ethernet

FCoE is an ANSI T11 standard for the encapsulation of a complete Fibre Channel frame into an Ethernet frame. The resulting Ethernet frame is transported over Enhanced Ethernet networks, as shown in Figure B-2.
iSCSI

The Internet Small Computer Systems Interface (iSCSI) is a SCSI mass storage transport that operates between the Transmission Control Protocol (TCP) and SCSI protocol layers. The iSCSI protocol is defined in RFC 3720 [iSCSI], which was finalized by the Internet Engineering Task Force (IETF) in April 2004. A TCP/IP connection ties the iSCSI initiator and target session components together.
C References

Fibre Channel over Ethernet Design Guide, Cisco and QLogic (2010), QLogic adapters and Cisco Nexus 5000 Series switches, Cisco document number C11-569320-01. Downloaded from http://www.qlogic.com/SiteCollectionDocuments/Education_and_Resource/white papers/whitepaper2/QLogic_Cisco_FCoE_Design_Guide.pdf

Ethernet: The Converged Network Ethernet Alliance Demonstration at SC'09, Ethernet Alliance (2009). Retrieved from http://www.ethernetalliance.
Corporate Headquarters QLogic Corporation 26650 Aliso Viejo Parkway Aliso Viejo, CA 92656 949.389.6000 www.qlogic.com International Offices UK | Ireland | Germany | India | Japan | China | Hong Kong | Singapore | Taiwan © 2010 QLogic Corporation. Specifications are subject to change without notice. All rights reserved worldwide. QLogic, the QLogic logo, and SANsurfer are registered trademarks of QLogic Corporation. Cisco and Cisco Nexus are trademarks or registered trademarks of Cisco Systems, Inc.