Intel I/OAT addresses all three server I/O bottlenecks (illustrated
in Figures 2 and 3) by providing fast, scalable, and reliable
networking. In addition, it provides network acceleration that
scales seamlessly across multiple Ethernet ports, and it is a safe
and flexible choice for IT managers due to its tight integration
into popular operating systems.
The system-wide network I/O acceleration technologies applied
by Intel I/OAT are shown in Figure 4 and include:
• Parallel Processing of TCP and Memory Functions. Lowers
system overhead and improves the efficiency of TCP stack
processing by using the CPU's ability to execute multiple
instructions per clock, prefetch TCP/IP header information into
cache, and perform other data movement operations in parallel
(see the prefetch sketch after this list).
• Affinitized Data Flows. Partitions network stack processing
dynamically across multiple physical or logical CPUs, allowing
CPU cycles to be allocated to the application for faster execution
(see the affinity sketch after this list).
• Asynchronous Low-Cost Copy. Intel® QuickData Technology
provides enhanced data movement, allowing payload data copies
from the NIC buffer in system memory to the application buffer
with far fewer CPU cycles, returning the saved CPU cycles to
productive application workloads (see the asynchronous copy
sketch after this list).
• Improved TCP/IP Processing with Optimized TCP/IP Stack.
Implements separate packet data and control paths so that the
packet header can be processed independently of the packet
payload. This and other stack-related enhancements reduce
protocol processing cycles (see the control/data path sketch
after this list).
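To make the first bullet concrete, the following is a minimal C sketch of header prefetching, one of the techniques behind parallel processing of TCP and memory functions. The packet layout and the process_header() routine are hypothetical stand-ins rather than Intel code; only __builtin_prefetch (a GCC/Clang builtin) is a real interface.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

struct pkt {
    uint8_t hdr[64];        /* TCP/IP headers (hypothetical layout) */
    uint8_t payload[1460];  /* application data                     */
};

/* Stand-in for per-packet protocol work (demux, checksum, ...). */
static uint32_t process_header(const struct pkt *p)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < sizeof p->hdr; i++)
        sum += p->hdr[i];
    return sum;
}

static uint32_t process_ring(struct pkt *ring[], size_t n)
{
    uint32_t total = 0;
    for (size_t i = 0; i < n; i++) {
        /* Pull the next header toward the cache while the current
         * one is processed, overlapping memory access with
         * protocol work. */
        if (i + 1 < n)
            __builtin_prefetch(ring[i + 1]->hdr, 0 /* read */, 3 /* keep in cache */);
        total += process_header(ring[i]);
    }
    return total;
}

int main(void)
{
    struct pkt a = {{0}}, b = {{0}};
    struct pkt *ring[] = { &a, &b };
    printf("%u\n", (unsigned)process_ring(ring, 2));
    return 0;
}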
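The affinitized-flows idea can be sketched in user space by pinning one worker per flow (or per receive queue) to its own CPU, so that all processing for a given flow stays on one core. This is an illustrative analogy assuming Linux and GNU pthreads (pthread_attr_setaffinity_np), not the mechanism Intel I/OAT and the operating system use internally.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define NUM_FLOWS 4

/* Stand-in for per-flow stack processing; a real implementation
 * would drain a per-queue packet ring here. */
static void *flow_worker(void *arg)
{
    long flow_id = (long)arg;
    printf("flow %ld handled on CPU %d\n", flow_id, sched_getcpu());
    return NULL;
}

int main(void)
{
    pthread_t workers[NUM_FLOWS];

    for (long i = 0; i < NUM_FLOWS; i++) {
        /* Pin each flow's worker to its own CPU before it starts, so
         * every packet of that flow is processed on the same core. */
        cpu_set_t cpus;
        CPU_ZERO(&cpus);
        CPU_SET((int)(i % 4), &cpus);        /* assumes at least 4 CPUs */

        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setaffinity_np(&attr, sizeof(cpus), &cpus);

        pthread_create(&workers[i], &attr, flow_worker, (void *)i);
        pthread_attr_destroy(&attr);
    }

    for (int i = 0; i < NUM_FLOWS; i++)
        pthread_join(workers[i], NULL);
    return 0;
}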
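The asynchronous low-cost copy can be illustrated by moving the payload copy off the application's critical path, so other work proceeds while the data lands in the application buffer. In the real platform this copy is performed by the Intel QuickData DMA engine under operating-system control; the pthread-based copier below is only a user-space stand-in.

#include <pthread.h>
#include <string.h>
#include <stdio.h>

#define PAYLOAD 2048

struct copy_job {
    const char *src;   /* NIC receive buffer in system memory */
    char       *dst;   /* application buffer                  */
    size_t      len;
};

/* Stand-in for the DMA engine: performs the copy off the
 * application's critical path. */
static void *async_copy(void *arg)
{
    struct copy_job *job = arg;
    memcpy(job->dst, job->src, job->len);
    return NULL;
}

int main(void)
{
    static char nic_buf[PAYLOAD] = "payload from the wire";
    static char app_buf[PAYLOAD];

    struct copy_job job = { nic_buf, app_buf, sizeof nic_buf };
    pthread_t copier;
    pthread_create(&copier, NULL, async_copy, &job);

    /* ... the application (or the TCP stack) keeps doing useful
     * work here instead of spinning in memcpy() ... */

    pthread_join(copier, NULL);   /* completion: data is in app_buf */
    printf("%s\n", app_buf);
    return 0;
}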
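Finally, the split between control and data paths in the optimized stack amounts to making protocol decisions from the header alone and treating payload movement as pure data movement that a copy engine can take off the CPU. The structure and function names below are hypothetical; the sketch only shows the separation of concerns.

#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdio.h>

struct rx_pkt {
    uint8_t hdr[64];        /* protocol headers: control path */
    uint8_t payload[1460];  /* application bytes: data path   */
    size_t  payload_len;
};

/* Control path: protocol decisions based only on the header. */
static int control_path(const uint8_t *hdr)
{
    return hdr[0] != 0;     /* stand-in for "connection found" */
}

/* Data path: pure data movement into the application buffer,
 * the part a QuickData-style DMA engine can handle. */
static void data_path(const uint8_t *payload, size_t len, uint8_t *app_buf)
{
    memcpy(app_buf, payload, len);
}

static void receive(const struct rx_pkt *p, uint8_t *app_buf)
{
    if (control_path(p->hdr))                            /* header-only work */
        data_path(p->payload, p->payload_len, app_buf);  /* payload-only work */
}

int main(void)
{
    struct rx_pkt p = { {1}, "hello", 6 };
    uint8_t app_buf[1460];
    receive(&p, app_buf);
    printf("%s\n", (char *)app_buf);
    return 0;
}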
Because Intel I/OAT enhances performance while keeping all process-
ing of the operating system’s TCP stack on the Intel Xeon processor
and all TCP connection state within the server’s system memory,
the technology is said to be “stateless,” in contrast to stateful
offload technologies such as TOE. As a stateless technology, Intel
I/OAT retains the system processors and the operating system’s
protocols as the principal engines for handling network traffic.
Additionally, Intel I/OAT is used throughout the platform to increase
CPU efficiency by reducing bottlenecks across most application I/O
sizes. Because Intel I/OAT is tightly integrated into popular operating
systems, it ensures full compatibility with critical network configura-
tions such as adapter teaming and link aggregation. As a result, Intel
I/OAT provides a fast, scalable, and reliable network acceleration
solution with significant performance advantages over prior system
implementations and technologies.
[Figure 4 graphic: “Intel® I/OAT—The System-Wide Solution.” A server with Intel® I/O Acceleration Technology handles the network data stream using enhanced direct memory access (DMA) for more efficient memory copies, an optimized TCP/IP stack, and affinitized network data flow for balanced computing across multiple CPUs.]
Figure 4. Intel® I/OAT performance enhancements. Intel® I/OAT implements server-wide performance enhancements in all
three major server components to ensure that data gets to and from applications consistently faster and with greater reliability.