ETHERNET ENHANCEMENTS FOR STORAGE
Data Center Bridging for Lossless Ethernet
To strengthen 10GbE as a unied data
center fabric, the IEEE has developed
and ratied standards for Ethernet
enhancements to support storage trac.
ese enhancements strengthen 10GbE as a
unied data center fabric for running FCoE
and iSCSI. Known collectively as “Data
Center Bridging” (DCB), these extensions
enable better trac prioritization over a
single interface and an advanced means for
shaping trac on the network to decrease
congestion. In short, DCB provides the
QoS that delivers a lossless Ethernet fabric
for storage trac. For more information,
see the DCB white paper from the Ethernet
Alliance: http://ethernetalliance.org/les/
static_page_les/83AD0BBC-C299-
B906-8F5985957E3327AA/Data Center
Bridging.pdf
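
As a rough illustration of how an application’s traffic can be steered into one of these priority classes, the Linux-only Python sketch below tags a socket with a priority value via the SO_PRIORITY socket option; the priority value, address, and port are assumptions for the example, not part of any DCB specification.

import socket

# Assumed priority value for storage traffic; the DCB configuration on the
# NIC and switch determines which priorities map to lossless (PFC) classes.
STORAGE_PRIORITY = 3

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# SO_PRIORITY sets the kernel's per-packet priority, which the queuing and
# VLAN layers can map onto an 802.1p priority and hence a DCB traffic class.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_PRIORITY, STORAGE_PRIORITY)

sock.connect(("192.0.2.10", 3260))      # hypothetical iSCSI target and port
sock.sendall(b"example storage payload")
sock.close()

In practice, the mapping from priorities to lossless classes is handled by DCB-capable NICs and switches, typically negotiated with the DCBX protocol, rather than by the application itself.
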
Fibre Channel over Ethernet: Extending Consolidation
FCoE is a logical extension of Ethernet that uses FC’s Network, Service, and Protocol layers to carry data packets over Ethernet’s physical and data link layers. Using FC’s upper layers smooths the transition to FCoE because existing SAN-based applications do not need to change in order to gain the performance and cost benefits of FCoE.
Many enterprises have extensive FC installations, and FCoE provides easy FC SAN access for any server with a capable 10GbE port. By using standard Ethernet fabrics, FCoE eliminates the need for dedicated FC host bus adapters (HBAs), reducing cabling and switch-port requirements while coexisting with existing FC hardware and software infrastructures. The result is a simplified data center infrastructure, lower equipment and power costs, and universal SAN connectivity across the data center over the trusted Ethernet fabric.
Contrasting FCoE Architectures
Today there are two approaches to enabling FCoE connectivity on servers, and they differ significantly in accessibility, ease of use, and cost of ownership.
e rst approach, Open FCoE, consists
of native FCoE initiators in OSs including
Linux*, Microsoft Windows, and VMware
ESX*, which enable FCoE in standard
10GbE adapters. is approach provides
a robust, scalable, and high-performance
server connectivity option without
expensive, proprietary hardware. As shown
in Figure 2, Open FCoE implements the
complete FC protocol in the OS kernel.
It provides libraries for dierent system-
level implementations, allowing vendors
to implement data plane functions of
the FCoE stack in hardware to deliver
optimum performance.
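
This split between a software protocol stack and an optional hardware data path can be pictured with the conceptual Python sketch below. It is not kernel code; the class and method names are illustrative assumptions that mirror the layering in Figure 2, where the FC protocol stays in the OS and the FCoE encapsulation step may be delegated to a vendor offload.

class EthernetMAC:
    """Stand-in for the adapter's Ethernet transmit path."""
    def transmit(self, frame: bytes) -> None:
        print(f"tx {len(frame)} bytes on the wire")

class FCoETransport:
    """Encapsulates FC frames in Ethernet; may be offloaded to the NIC."""
    def __init__(self, mac: EthernetMAC, hw_offload=None):
        self.mac = mac
        self.hw_offload = hw_offload  # vendor data-plane offload hook, if any

    def send_fc_frame(self, fc_frame: bytes) -> None:
        if self.hw_offload is not None:
            self.hw_offload(fc_frame)              # hardware builds the FCoE frame
        else:
            self.mac.transmit(b"FCoE" + fc_frame)  # software encapsulation

class FCProtocol:
    """Software FC layers (logins, exchanges, sequences) kept in the OS kernel."""
    def __init__(self, transport: FCoETransport):
        self.transport = transport

    def send_scsi_command(self, cdb: bytes) -> None:
        fc_frame = b"FC-header" + cdb  # stand-in for real FC framing
        self.transport.send_fc_frame(fc_frame)

# Software-only path; a vendor offload would be passed as hw_offload instead.
stack = FCProtocol(FCoETransport(EthernetMAC()))
stack.send_scsi_command(b"\x28" + b"\x00" * 9)  # placeholder SCSI READ(10) CDB

In the Linux implementation this division corresponds roughly to the libfc and libfcoe libraries, with vendor drivers plugging in hardware offloads beneath them.
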
In comparison, incumbent FC vendors have developed converged network adapters (CNAs) that offload FCoE functions in hardware. These CNAs leverage those vendors’ existing FC software, including drivers, APIs, and management applications.
Open FCoE: Momentum in the Linux* Community
Initiated by Intel, the Open FCoE project was accepted by the Linux community in November 2007 with the goal of accelerating development of a native FCoE initiator in the Linux kernel. The industry responded enthusiastically, and today there are over 190 active participants in the community who are contributing code, providing review comments, and testing the Open FCoE stack. To date, the Open FCoE Source Web site (www.open-fcoe.org) has received over 20,000 hits. Open industry standards and Open Source play a significant role in the modern data center, as they lower research and development costs and enable access to a multi-
Figure 2. Overview of Fibre Channel over Ethernet (FCoE) host bus adapters (converged network adapters) and Open FCoE initiator solutions. In the Open FCoE solution, application I/O, the SCSI storage interface, the Fibre Channel protocol, and the FCoE transport protocol run in the operating system and device driver, with FCoE data path offloads, the Ethernet MAC, and the Ethernet PHY on the adapter; in the FCoE HBA (CNA) solution, the Fibre Channel protocol, FCoE transport protocol, Ethernet MAC, and Ethernet PHY are implemented on the host bus adapter.