port platform configurations. Since all ports share the same default installation directives (the .inf file, etc.),
the FCoE queues for every port will be associated with the same set of NUMA CPUs, which may result in
CPU contention.
The software that exports these tuning options defines a NUMA node as equivalent to an individual processor
(socket). Platform ACPI information presented by the BIOS to the operating system helps define the relationship
of PCI devices to individual processors. However, this detail is not reliably provided on all platforms, so
using the tuning options may produce unexpected results. Consistent or predictable results when using the
performance options cannot be guaranteed.
The performance tuning options are listed in the LAN RSS Configuration section.
Example 1: A platform with two physical sockets, each socket providing 8 CPU cores (16 logical CPUs when
hyper-threading is enabled), and a dual-port Intel adapter with FCoE enabled.
By default, 8 FCoE queues are allocated per NIC port. Also by default, the first (non-hyper-threaded) CPU
cores of the first processor are assigned affinity to these queues, resulting in the allocation model pictured
below. In this scenario, both ports would compete for CPU cycles from the same set of CPUs on socket 0.
Figure: Socket Queue to CPU Allocation
Using the performance tuning options, the FCoE queues for the second port can be associated with a different,
non-competing set of CPU cores. The following settings would direct the software to use CPUs on the other
processor socket:
- FCoE NUMA Node Count = 1: Assign queues to cores from a single NUMA node (or processor
socket).
- FCoE Starting NUMA Node = 1: Use CPU cores from the second NUMA node (or processor socket) in
the system.
- FCoE Starting Core Offset = 0: The software will start at the first CPU core of the NUMA node (or processor
socket).
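The combined effect of these three options can be modeled informally. The Python sketch below is not Intel
driver code: the TOPOLOGY table, the fcoe_queue_affinity helper, and its round-robin allocation policy are
assumptions used only to illustrate how the settings above would place the second port's eight queues on the
cores of socket 1 instead of the contended cores on socket 0.

# Hypothetical two-socket topology: NUMA node -> physical (non-hyper-threaded) core IDs,
# matching Example 1 (8 cores per socket). Illustration only, not the driver's actual logic.
TOPOLOGY = {
    0: list(range(0, 8)),    # socket 0: cores 0-7
    1: list(range(8, 16)),   # socket 1: cores 8-15
}

def fcoe_queue_affinity(node_count, starting_node, core_offset, queues=8):
    """Return (queue, node, core) triples implied by the three tuning options.

    Queues are spread round-robin across the selected NUMA nodes, starting at
    core_offset within each node -- an assumed allocation policy for illustration.
    """
    nodes = [(starting_node + i) % len(TOPOLOGY) for i in range(node_count)]
    pairs = []
    for q in range(queues):
        node = nodes[q % len(nodes)]                          # round-robin over selected nodes
        cores = TOPOLOGY[node]
        core = cores[(core_offset + q // len(nodes)) % len(cores)]
        pairs.append((q, node, core))
    return pairs

# Default allocation: both ports start at node 0, offset 0, so their queues
# compete for cores 0-7 on socket 0 (the contention shown in the figure above).
print(fcoe_queue_affinity(node_count=1, starting_node=0, core_offset=0))

# Second port redirected per the settings listed above (Node Count = 1,
# Starting NUMA Node = 1, Starting Core Offset = 0): queues land on cores 8-15.
print(fcoe_queue_affinity(node_count=1, starting_node=1, core_offset=0))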
The following settings would direct the software to use a different set of CPUs on the same processor socket.
This assumes a processor that supports 16 non-hyper-threaded cores:
- FCoE NUMA Node Count = 1
- FCoE Starting NUMA Node = 0
- FCoE Starting Core Offset = 8
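In terms of the sketch above (with its TOPOLOGY widened to 16 cores per socket, per the 16-core assumption),
these settings would place the port's eight queues on cores 8 through 15 of socket 0, leaving the default
cores 0 through 7 to the other port.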
Example 2: Using one or more ports with queues allocated across multiple NUMA nodes. In this case, for
each NIC port the FCoE NUMA Node Count is set to the number of NUMA nodes to use. By default, the queues
will be allocated evenly from each NUMA node:
- FCoE NUMA Node Count = 2
- FCoE Starting NUMA Node = 0
- FCoE Starting Core Offset = 0
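In the hypothetical two-socket topology sketched in Example 1, these settings would spread a port's eight
queues evenly: four queues on cores of NUMA node 0 and four on cores of NUMA node 1.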
Example 3: The display shows that the FCoE Port NUMA Node setting is 2 for a given adapter port. This is a
read-only indication from the software that the optimal nearest NUMA node to the PCI device is the third
logical NUMA node in the system. By default, the software has allocated that port's queues to NUMA node 0.
The following settings would direct the software to use CPUs on the optimal processor socket: