Increasing Throughput and Reducing Latency for Design Applications
For some specific silicon design workloads, we needed to build a small, very low-latency cluster of servers. The parallel applications running on these servers typically exchange very large packets. As shown in Table 2, we compared application response times using several network fabric options. The 10 GbE network provided acceptable performance at an acceptable price point. For messages 16 kilobytes (KB) and larger, the 10 GbE response time was about one-quarter of the 1 GbE response time, and was closer to the performance of InfiniBand, a more expensive low-latency switched fabric. In the table, "multi-hop" is defined as having to traverse more than one switch to get through the network.
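The "about one-quarter" figure can be checked directly against the Table 2 measurements. The short sketch below (Python) simply restates the multi-hop 1 GbE and multi-hop 10 GbE response times from the table and computes their ratio per packet size; the values are the table's own measurements, not new data, and the script itself is only an illustration.

```python
# Response times in microseconds, copied from Table 2
# (Intel internal measurements, June 2010).
# Columns: (multi-hop 1 GbE, multi-hop 10 GbE)
response_us = {
    8:      (69.78,   62.50),
    128:    (75.41,   62.55),
    1024:   (116.99,  62.56),
    4096:   (165.24,  65.28),
    16384:  (257.41,  62.47),
    32768:  (414.52,  129.48),
    65536:  (699.25,  162.30),
    131072: (1252.90, 302.15),
}

for size, (gbe1, gbe10) in response_us.items():
    ratio = gbe10 / gbe1
    print(f"{size:>7} bytes: 10 GbE response time is {ratio:.2f}x the 1 GbE time")
```

For the 16,384-byte row, the ratio is 62.47 / 257.41, roughly 0.24, which is the basis for the one-quarter comparison above.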
Choosing the Right Network Components
We surveyed the market to find products that met our technical requirements at the right price point. We discovered that not all switches and network cards perform identically; performance varies between products. Through extensive testing, we found that we could reduce the cost of a 10 GbE port by 65 percent by selecting the right architecture and supplier. For example, we had to decide where to place the network switch: at the top of each rack, which provides more flexibility but is more expensive, or in the center of the row. We chose center-of-the-row switches to reduce cost.
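As a rough illustration of the trade-off behind that placement decision (not our actual pricing), the sketch below compares per-port cost for top-of-rack versus center-of-row switches. Every price and port count in it is a hypothetical placeholder chosen only to show the shape of the calculation.

```python
# Hypothetical per-port cost comparison for switch placement.
# All prices and port counts below are illustrative assumptions,
# not figures from this paper.

def cost_per_port(switch_cost, ports_used, cabling_cost_per_port):
    """Approximate cost per connected server port."""
    return switch_cost / ports_used + cabling_cost_per_port

# Top-of-rack: a smaller switch per rack with short, cheap cables,
# but ports often sit unused in partially filled racks.
top_of_rack = cost_per_port(switch_cost=10_000, ports_used=24,
                            cabling_cost_per_port=30)

# Center-of-row: one larger shared switch with longer, costlier cable runs,
# but much higher port utilization across the row.
center_of_row = cost_per_port(switch_cost=60_000, ports_used=300,
                              cabling_cost_per_port=100)

print(f"top-of-rack:   ~${top_of_rack:,.0f} per port")
print(f"center-of-row: ~${center_of_row:,.0f} per port")
```

Under assumptions like these, the shared center-of-row switch amortizes its cost over far more active ports, which is the effect that drove our choice.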
Higher transmission speed requirements have
led to new cable technologies, which we are
deploying to optimize our implementation of
a 10 GbE infrastructure:
• Small form-factor pluggable (SFP+) direct-attach cables. These twinaxial cables support 10 GbE connections over short distances of up to 7 meters. Some suppliers are producing cables with a reach of up to 15 meters.
• Connectorized cabling. We are using this technology to simplify cabling and reduce installation cost because it is supported over SFP+ ports. One trunk cable that we use supports 10 GbE up to 90 meters and provides six individual connections, which reduces the space required to support comparable densities by 66 percent. The trunks terminate on a variety of options, providing a very flexible system. We also use Multi-fiber Push-On (MPO) cable, a connectorized fiber technology composed of multi-strand trunk bundles and cassettes. This technology supports 1 GbE and 10 GbE connections and can be upgraded easily to 40 GbE and 100 GbE parallel-optic connections by simply swapping a cassette. The current range for 10 GbE is 300 meters on Optical Multimode 3 (OM3) multi-mode fiber (MMF) and 10 kilometers on single-mode fiber (SMF).
To maximize the supportable distances for 10 GbE, and for 40 GbE/100 GbE when they arrive, we changed Intel's fiber standard to require a minimum of OM3 MMF, with OM4 where possible, and we favor the more energy-efficient SFP+ ports.
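For quick reference when planning cable runs, the reach figures cited above can be captured in a small lookup table. The sketch below (Python) only restates the distances from this section; the data structure and helper function are ours for illustration and are not part of any standard tooling.

```python
# Maximum 10 GbE reach per cabling option, as cited in this section.
# Treat these as planning guidance from the text above, not as a
# substitute for the relevant cabling standards.
MAX_REACH_METERS_10GBE = {
    "SFP+ direct-attach (twinax)":         7,       # some suppliers offer up to 15 m
    "Connectorized trunk (6 connections)": 90,
    "OM3 multi-mode fiber (MPO)":          300,
    "Single-mode fiber (MPO)":             10_000,  # 10 km
}

def media_options(run_length_m):
    """Return the cabling options from this section that cover a given run length."""
    return [media for media, reach in MAX_REACH_METERS_10GBE.items()
            if run_length_m <= reach]

print(media_options(60))   # a 60 m run: trunk cable, OM3 MMF, or SMF
```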
FUTURE PLANS FOR OFFICE AND ENTERPRISE I/O AND STORAGE CONSOLIDATION
Historically, Ethernet’s bandwidth
limitations have kept it from being the
fabric of choice for some application
areas, such as I/O, storage, and
interprocess communication (IPC).
Consequently, we have used other
fabrics to meet high-bandwidth,
Table 2. Application Response Times for Various Packet Sizes

                 Application Response Time in Microseconds
Packet Size      Multi-Hop 1 Gigabit       Multi-Hop     One-Hop      Multi-Hop
in Bytes         Ethernet (GbE),           10 GbE        1 GbE        InfiniBand*
                 Current Standard
8                       69.78                62.50          41.78        15.57
128                     75.41                62.55          44.77        17.85
1,024                  116.99                62.56          64.52        32.25
4,096                  165.24                65.28         103.15        60.15
16,384                 257.41                62.47         195.87       168.57
32,768                 414.52               129.48         348.55       271.95
65,536                 699.25               162.30         627.12       477.93
131,072              1,252.90               302.15       1,182.41       883.83

Note: Intel internal measurements, June 2010.