Accelerating High-Speed Networking with Intel® I/O Acceleration Technology White Paper
[Figure 2 contents: System Overhead (interrupt handling; buffer management; operating system transitions) · TCP/IP Processing (TCP stack code processing) · Memory Access (packet and data moves; CPU stalls)]
Figure 2. Network I/O processing tasks. Server network I/O
processing tasks fall into three major overhead categories, each
varying as a percentage of total overhead according to TCP/IP
packet size.
Intel research and development teams found the real I/O bottlenecks
when they examined the entire flow of a data packet as it is
received, processed, and sent out by the server. Figure 1 illustrates
this flow. The following numbered descriptions correspond to the
circled numbers in the illustration:
1. A client sends a request in the form of TCP/IP data packets that
the server receives through the network interface card (NIC). The
data packet contains TCP header information that includes packet
identification and routing information as well as the actual data
payload relating to the client request.
2. The server processes the TCP/IP packets and routes the payload
to the designated application. This processing includes protocol
computations involving the TCP/IP stack, multiple server memory
accesses for packet descriptors and payload moves, and various
other system overhead activities (for example, interrupt handling
and buffer management).
3. The application acknowledges the client request and recognizes
that it needs data from storage to respond to the request.
4. The application accesses storage to obtain the necessary data
to satisfy the client request.
5. Storage returns the requested data to the application.
6. The application completes processing of the client request
using the additional data received from storage.
7. The server routes the response back through the network
connection to be sent as TCP/IP packets to the client.
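The seven steps above can be sketched in code. The following is a toy illustration, not Intel's implementation: the framing (a length-prefixed key), the in-memory STORAGE dict standing in for the storage subsystem, and the handle_request name are all invented for this example.

```python
# Toy sketch of the request-response path (steps 1-7). A request is a
# length-prefixed key; an in-memory dict stands in for storage access.
import struct

STORAGE = {b"item-42": b"payload for item 42"}  # stand-in for steps 4-5

def handle_request(packet: bytes) -> bytes:
    # Steps 1-2: by the time the application sees this buffer, the TCP/IP
    # stack has already handled interrupts, descriptors, and payload moves;
    # here we only parse the toy application header.
    (key_len,) = struct.unpack("!H", packet[:2])
    key = packet[2 : 2 + key_len]
    # Steps 3-5: the application recognizes it needs stored data and fetches it.
    data = STORAGE.get(key, b"")
    # Steps 6-7: build the response, which the stack will again segment
    # into TCP/IP packets on its way back to the client.
    return struct.pack("!H", len(data)) + data

request = struct.pack("!H", len(b"item-42")) + b"item-42"
response = handle_request(request)
```

Note that every trip through this path incurs the stack traversal, memory copies, and system overhead described above, on both the receive and transmit sides.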
The above packet data flow has remained largely unchanged for
more than a decade. Measured against today's networking requirements,
it produces significant system overhead, excessive memory accesses,
and inefficient TCP/IP processing.
Figure 2 represents the entire network I/O overhead picture. It
is important to note, however, that the system overhead, TCP/IP
processing, and memory access categories of overhead are not
proportional. In fact, as discussed later, the amount of CPU usage
for each category varies according to application I/O packet size.
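This size dependence can be illustrated with a simple cost model. The constants below are assumptions chosen for illustration, not Intel measurements: each packet pays a fixed per-packet cost (interrupt handling, descriptor fetches, stack traversal) plus a per-byte cost (copies, checksums).

```python
# Illustrative model (assumed constants, not measured data): fixed
# per-packet overhead dominates for small packets, while per-byte
# memory-access costs dominate for large ones.
PER_PACKET_CYCLES = 5000.0  # hypothetical fixed cost per packet
PER_BYTE_CYCLES = 1.0       # hypothetical copy/checksum cost per byte

def fixed_overhead_fraction(payload_bytes: int) -> float:
    """Fraction of total cycles spent on fixed per-packet overhead."""
    total = PER_PACKET_CYCLES + PER_BYTE_CYCLES * payload_bytes
    return PER_PACKET_CYCLES / total

small = fixed_overhead_fraction(64)     # ~0.99: fixed costs dominate
large = fixed_overhead_fraction(65536)  # ~0.07: per-byte costs dominate
```

Whatever the exact constants, the shape of the curve is the point: the mix of system overhead, TCP/IP processing, and memory access shifts with application I/O packet size, so no single optimization addresses all three.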
Finding the Real I/O Bottlenecks
[Figure 1 diagram: Client ↔ Network data stream ↔ Server, with Application and Storage; circled numbers 1–7 mark the steps described above.]
Figure 1. Data paths to and from application. Server overhead and
response latency occur throughout the request-response data path.
These overheads and latencies include processing incoming request
TCP/IP packets, routing packet payload data to the application, fetching
stored information, and reprocessing responses into TCP/IP packets for
routing back to the requesting client.
Source: Intel research on the Linux* operating system