Driving 10 Gigabit Ethernet Adoption in the Data Center
Motivation Driving 10 Gigabit Ethernet
Today, most server, desktop, and laptop systems come standard
with Gigabit Ethernet connectivity. It has become natural for us to
just “plug into” a Gigabit Ethernet network. Although 10 Gigabit
Ethernet has been around since 2002, its adoption has been
limited to the core of data center networks. That is, until now.
Advances in 10 Gigabit Ethernet technology and falling prices
are now driving its adoption from the core to the edge of the
network. Let's look at the motivators behind 10 Gigabit Ethernet
adoption in modern data centers.
MULTI-CORE PROCESSORS
Keeping pace with the processing demands of multi-core,
multi-threaded processors requires robust I/O interfaces and
devices. Constraining the I/O paths produces an "hourglass"
effect: I/O requests are metered through a single, narrow
interface. As in an hourglass, requests queue behind slow
interfaces and devices, leaving processing resources
underutilized while they wait for I/O requests to complete.
Figure 1: Network data flow in a previous-generation platform
The interface to the network, and even the network itself, can be
one of the leading limiters of processing efficiency in multi-core
processor systems. Ten years ago it was typical to find single-core,
single-threaded processors served by a Gigabit Ethernet network.
Today, a multi-core, multi-threaded processor can run eight to
sixteen concurrent execution streams, and yesterday's Gigabit
Ethernet can no longer keep them fed. Achieving processing
efficiency with these advanced processors requires network
interfaces, and the network itself, to scale by an order of
magnitude: to 10 Gigabit Ethernet.
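To make the scale of the mismatch concrete, the short sketch below runs a back-of-envelope calculation. The thread count and per-stream demand are illustrative assumptions, not measurements from the source; the point is only the order of magnitude.

    # Rough check: can one Gigabit Ethernet link keep up with the concurrent
    # execution streams of a modern multi-core processor?
    GBE_CAPACITY_GBPS = 1.0        # usable rate of Gigabit Ethernet
    TEN_GBE_CAPACITY_GBPS = 10.0   # usable rate of 10 Gigabit Ethernet

    streams = 16                   # assumed hardware threads (e.g., 8 cores x 2 threads)
    demand_per_stream_gbps = 0.25  # assumed sustained network demand per stream

    aggregate_demand = streams * demand_per_stream_gbps
    print(f"Aggregate demand: {aggregate_demand:.1f} Gb/s")
    print(f"Gigabit Ethernet load: {aggregate_demand / GBE_CAPACITY_GBPS:.0%}")
    print(f"10 Gigabit Ethernet load: {aggregate_demand / TEN_GBE_CAPACITY_GBPS:.0%}")

With these assumed figures, the aggregate demand is 4 Gb/s: four times what a single Gigabit Ethernet link can carry, but well within a single 10 Gigabit Ethernet link.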
VIRTUALIZED ENVIRONMENTS
In today’s information technology vernacular, “server
virtualization” means the ability to run more than one
operating system (OS) image on a single physical server.
Server virtualization uses a software hypervisor running as
the kernel on a physical server to present multiple virtual
machine (VM) images to the guest OSs. In this model, the
OSs can be disparate; for example, Microsoft Windows*
Server and Linux* running on a VMware hypervisor on an
Intel® Xeon® processor-based server. Key benefits of server
virtualization are:
• Better utilization of physical server resources (specifically
with multi-core processors)
• Improved deployment of applications on virtual servers
(you don't need to order another physical server to deploy
an application)
• Ability to balance physical server workloads across many
virtualized servers
However, along with these benefits come challenges. It is easy
to overtax the physical I/O resources in a virtualized server
environment. Each virtualized OS behaves as if it has exclusive
use of the physical resources, when in reality those resources are
shared across all the virtualized OSs running concurrently on the
physical server. The result is oversubscription of those resources,
and on the network side it is the physical server's network adapter
that is oversubscribed. The network demand created by a multi-core
processor is only exacerbated in a virtualized environment. One
logical solution is to increase network bandwidth to meet this
increased virtualized server demand by deploying 10 Gigabit
Ethernet at the network edge.
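A minimal sketch of this oversubscription effect follows, assuming a hypothetical VM count and per-VM traffic level (neither figure comes from the source):

    # Illustrative oversubscription estimate for virtual machines sharing one
    # physical Gigabit Ethernet adapter on a consolidated host.
    physical_nic_gbps = 1.0     # one shared Gigabit Ethernet port
    vm_count = 12               # assumed guest OS images on the host
    demand_per_vm_gbps = 0.2    # assumed sustained demand per virtual machine

    aggregate = vm_count * demand_per_vm_gbps
    ratio = aggregate / physical_nic_gbps
    print(f"Aggregate VM demand: {aggregate:.1f} Gb/s "
          f"({ratio:.1f}x oversubscription of the shared adapter)")
    print(f"Same load on 10 GbE: {aggregate / 10.0:.0%} of one port")

Under these assumptions, a dozen modestly active VMs already ask for more than twice what the shared Gigabit Ethernet port can deliver, while the same load fits comfortably on a single 10 Gigabit Ethernet port.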