User Guide

Determining Active Queue Location
Users of these performance options will want to determine the affinity of FCoE queues to CPUs in order to
verify their actual effect on queue allocation. This is easily done by running a small-packet workload with an I/O
application such as IoMeter. IoMeter monitors the CPU utilization of each CPU using the performance
monitor built into the operating system. The CPUs supporting the queue activity should stand out. They
should be the first non-hyperthreaded CPUs available on the processor unless the allocation is specifically
directed to be shifted via the performance options discussed above.
To make the locality of the FCoE queues even more obvious, the application affinity can be assigned to an
isolated set of CPUs on the same or another processor socket. For example, the IoMeter application can be
set to run only on a small number of hyperthreaded CPUs on any processor. If the performance options have
been set to direct queue allocation to a specific NUMA node, the application affinity can be set to a different
NUMA node. The FCoE queues should not move, and their activity should remain on the original CPUs even
though the application's CPU activity moves to the other selected CPUs.
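Affinity tools on Windows (for example `start /affinity` or the Task Manager affinity dialog) express a CPU set as a bitmask in which bit *n* corresponds to logical CPU *n*. A minimal sketch of building such a mask; the function name and the example CPU numbers are illustrative, not part of the product:

```python
def affinity_mask(cpus):
    """Build a CPU-affinity bitmask: bit n is set if logical CPU n is in the set."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return mask

# Isolate the application on logical CPUs 4-7 (an illustrative isolated set):
print(hex(affinity_mask([4, 5, 6, 7])))  # → 0xf0
```

Passing a mask like `0xf0` to `start /affinity f0 iometer.exe` would confine IoMeter to those CPUs, making it easy to see that the FCoE queue activity remains on its own CPUs.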
SR-IOV (Single Root I/O Virtualization)
SR-IOV lets a single network port appear as several virtual functions in a virtualized environment. If you
have an SR-IOV capable NIC, each port on that NIC can assign a virtual function to several guest partitions.
The virtual functions bypass the Virtual Machine Manager (VMM), allowing packet data to move directly to a
guest partition's memory, resulting in higher throughput and lower CPU utilization. SR-IOV support was
added in Microsoft Windows Server 2012. See your operating system documentation for system requirements.
For devices that support it, SR-IOV is enabled in the host partition on the adapter's Device Manager property
sheet, under Virtualization on the Advanced tab. Some devices may need to have SR-IOV enabled in a
preboot environment.
NOTES:
• Configuring SR-IOV for improved network security: In a virtualized environment, on Intel® Server
  Adapters that support SR-IOV, the virtual function (VF) may be subject to malicious behavior.
  Software-generated frames are not expected and can throttle traffic between the host and the virtual
  switch, reducing performance. To resolve this issue, configure all SR-IOV enabled ports for VLAN
  tagging. This configuration allows unexpected, and potentially malicious, frames to be dropped.
• You must enable VMQ for SR-IOV to function.
• SR-IOV is not supported with ANS teams.
• VMWare ESXi does not support SR-IOV on 1GbE ports.
TCP Checksum Offload (IPv4 and IPv6)
Allows the adapter to verify the TCP checksum of incoming packets and compute the TCP checksum of
outgoing packets. This feature enhances receive and transmit performance and reduces CPU utilization.
With offloading off, the operating system verifies the TCP checksum.
With offloading on, the adapter performs the verification for the operating system.
Default: RX & TX Enabled
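The checksum the adapter takes over is the standard Internet one's-complement checksum (RFC 1071): 16-bit words are summed, carries are folded back in, and the result is complemented. A minimal sketch of that computation, shown only to illustrate the per-packet work that offloading removes from the CPU:

```python
def internet_checksum(data: bytes) -> int:
    """Compute the 16-bit one's-complement Internet checksum (RFC 1071)."""
    if len(data) % 2:          # pad odd-length data with a trailing zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]   # sum 16-bit big-endian words
    while total >> 16:                          # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF                      # one's complement of the sum

# Worked example from RFC 1071:
data = bytes([0x00, 0x01, 0xF2, 0x03, 0xF4, 0xF5, 0xF6, 0xF7])
print(hex(internet_checksum(data)))  # → 0x220d
```

A packet whose stored checksum field matches this value sums to all ones on receive, which is the verification the adapter performs in hardware when offloading is enabled.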