Logical Fit with Rest of Hardware Upgrade Lifecycle
Implementing FCoE in an IT environment does not require
changes to the core network. Support for FCoE traffic will typically
require an upgrade to one or more edge switches, such as
embedded blade and top-of-rack switches, and does not affect
the core switching equipment or topology. Moreover, this switch
upgrade is only an incremental addition to the upgrade to
10 Gigabit Ethernet from Gigabit Ethernet that many organizations
will be undertaking in the next year or two. More powerful
multicore servers support higher workload levels than previous
machines, which in turn require the greater network throughput of
10 Gigabit Ethernet.
The adoption of FCoE is a logical addition to mainstream network
design strategies. Because FCoE is compatible with existing Fibre
Channel topologies, new systems can be deployed using FCoE
side by side with the existing network environment. This strategy
means less disruption to the organization as a whole, as well as
easier integration into operations and budgets.
Preservation of Existing Management Infrastructure
Organizations that already have Fibre Channel-based SANs in
place can use their existing management infrastructure, protecting
their investment in management applications, expertise, and
training as well as simplifying implementation. Because FCoE
uses the same protocols as traditional Fibre Channel, the
management framework is the same, regardless of whether the
environment is based on traditional Fibre Channel, FCoE, or a
combination of the two. Only the lowest two layers of the five-layer
Fibre Channel model change.
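To make that point concrete, the following Python sketch lists the five layers of the Fibre Channel model and marks the two that FCoE replaces. The layer descriptions are simplified summaries for illustration, not text taken from the standard.

# A minimal sketch: the five Fibre Channel layers, with the two that
# FCoE replaces.  Layer descriptions are simplified for illustration.
FC_LAYERS = {
    "FC-4": "Upper-layer protocol mapping (e.g., SCSI)",
    "FC-3": "Common services",
    "FC-2": "Framing and flow control",
    "FC-1": "Transmission encoding",
    "FC-0": "Physical interface and media",
}

# In FCoE, only FC-0 and FC-1 are replaced by lossless Ethernet;
# FC-2 and above -- the layers management tools interact with -- stay intact.
FCOE_REPLACEMENTS = {
    "FC-1": "Ethernet MAC (with Data Center Bridging)",
    "FC-0": "10 Gigabit Ethernet physical layer",
}

for layer, role in FC_LAYERS.items():
    new_role = FCOE_REPLACEMENTS.get(layer)
    status = f"replaced by {new_role}" if new_role else "unchanged"
    print(f"{layer}: {role} -> {status}")

The output makes the management argument visible: FC-2 and above, the layers that management applications interact with, are identical whether the fabric is traditional Fibre Channel or FCoE.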
Reduced Power Consumption
The smaller number of network interfaces and switches used
with FCoE can reduce the power requirements in server rooms
substantially. For example, each Fibre Channel HBA may
consume about 12.5 watts, and a typical Fibre Channel switch
may consume about 70 watts. Cooling the server environment
requires additional energy equal to approximately 0.8X to 1.5X
the input power.² For a typical rack, the power savings may be
400 watts from removal of the two HBAs from each of 16 servers,
plus an additional 140 watts from elimination of the two switches.
Multiplying this number by approximately 2 to account for cooling
results in a savings of roughly 1080 watts per rack. For a medium-
sized or large enterprise, this combination of factors can represent
a significant power savings. In addition to the potential for cost
savings, every watt of power that an IT infrastructure conserves
has a positive net effect on the environment. As companies look
for new ways of making their operations more “green,” such
discoveries are welcome.
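For readers who want to verify the arithmetic behind the per-rack estimate, the short Python sketch below reproduces it using only the figures cited above; the variable names are ours for illustration.

# Per-rack power savings, using the figures cited in the text above.
HBA_WATTS = 12.5          # approximate draw of one Fibre Channel HBA
SWITCH_WATTS = 70         # approximate draw of one Fibre Channel switch
SERVERS_PER_RACK = 16
HBAS_REMOVED_PER_SERVER = 2
SWITCHES_REMOVED = 2
COOLING_MULTIPLIER = 2    # equipment power roughly doubled to include cooling

hba_savings = HBA_WATTS * HBAS_REMOVED_PER_SERVER * SERVERS_PER_RACK   # 400 W
switch_savings = SWITCH_WATTS * SWITCHES_REMOVED                       # 140 W
total_savings = (hba_savings + switch_savings) * COOLING_MULTIPLIER    # 1080 W

print(f"Equipment savings: {hba_savings + switch_savings:.0f} W per rack")
print(f"Including cooling: {total_savings:.0f} W per rack")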
No-Drop Data Center Bridging
Because conventional Ethernet is a best-effort transport, it
drops packets in response to traffic congestion, which makes it
unsuitable for use in storage environments. SANs typically use
Fibre Channel or iSCSI to overcome this limitation, adding cost
and complexity to the environment. FCoE relies on a set of
Ethernet enhancements that together provide no-drop behavior over
an Ethernet fabric, collectively called Data Center Bridging (DCB).
For additional information about how each of the mechanisms works,
see “Ethernet Enhancements Supporting I/O Consolidation” at
http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/ns783/white_paper_c11-462422.html.
The most basic way for FCoE to make Ethernet topologies
lossless is to enable congested ports to send PAUSE control
frames, which are specified in IEEE 802.3x. This technique
instructs the transmitting port to stop sending data temporarily
to avoid the need to drop packets. Using the PAUSE frame like
this is a simple way to make the transmission lossless, but it
can cause a ripple effect of congestion over the network, which
is an impediment to performance as well as scalability. DCB
extends the notion of quality of service (QoS) beyond a point-
to-point scenario, covering the entire data center cloud.
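The contrast between link-wide PAUSE and the per-priority flow control that DCB adds (standardized as IEEE 802.1Qbb) can be illustrated with a deliberately simplified simulation. The Python sketch below is a toy model rather than a protocol implementation; the queue sizes, drain rates, and traffic-class names are arbitrary choices made for illustration.

from collections import deque

BUFFER_LIMIT = 8       # frames each receive queue can hold (arbitrary)
PAUSE_THRESHOLD = 6    # occupancy above which the receiver asserts pause
SEND_RATE = 2          # frames offered per traffic class per step
DRAIN = {"lan": 2, "san": 1}   # the "san" class drains slowly, i.e. is congested


def run(per_priority_pause, steps=20):
    """Return frames delivered per class over one congested link."""
    rx = {"lan": deque(), "san": deque()}
    delivered = {"lan": 0, "san": 0}
    paused = {"lan": False, "san": False}

    for _ in range(steps):
        # 1. The receiver drains what it can this step.
        for cls in rx:
            for _ in range(min(DRAIN[cls], len(rx[cls]))):
                rx[cls].popleft()
                delivered[cls] += 1

        # 2. The receiver signals pause based on queue occupancy.
        for cls in rx:
            paused[cls] = len(rx[cls]) > PAUSE_THRESHOLD

        # 3. The transmitter sends.  Plain IEEE 802.3x PAUSE stops the
        #    whole link when any class is congested; per-priority pause
        #    stops only the congested class.
        link_paused = any(paused.values())
        for cls in rx:
            stop = paused[cls] if per_priority_pause else link_paused
            if not stop:
                for _ in range(SEND_RATE):
                    if len(rx[cls]) < BUFFER_LIMIT:
                        rx[cls].append("frame")

    return delivered


if __name__ == "__main__":
    print("link-wide PAUSE (802.3x):     ", run(per_priority_pause=False))
    print("per-priority pause (802.1Qbb):", run(per_priority_pause=True))

Running the sketch shows the LAN class delivering noticeably fewer frames under link-wide PAUSE than under per-priority pause, which is the congestion ripple effect described above.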