• FCoE NUMA Node Count = 1
• FCoE Starting NUMA Node = 2
• FCoE Starting Core Offset = 0
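To make the effect of these settings concrete, the following is a minimal sketch of how the three values above could map to a set of logical CPUs. The function name and the topology parameters are illustrative assumptions, not part of the driver's interface; a real driver queries the operating system for the actual topology.

    # Illustrative sketch only: map the FCoE settings above onto logical CPUs,
    # assuming a flat topology of numa_nodes nodes with cores_per_node each.
    def fcoe_cpu_set(numa_nodes, cores_per_node,
                     node_count=1, starting_node=2, core_offset=0):
        """Return the logical CPUs the FCoE queues would be confined to."""
        cpus = []
        for n in range(starting_node, starting_node + node_count):
            node = n % numa_nodes               # wrap if the count runs past the end
            first = node * cores_per_node + core_offset
            last = (node + 1) * cores_per_node  # exclusive end of this node's cores
            cpus.extend(range(first, last))
        return cpus

    # With 4 nodes of 8 cores each, the settings above confine FCoE to node 2:
    print(fcoe_cpu_set(numa_nodes=4, cores_per_node=8))  # [16, 17, ..., 23]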
This example highlights the fact that platform architectures can vary in the number of PCI buses and where
they are attached. The figures below show two simplified platform architectures. The first is the older, common
FSB-style architecture, in which multiple CPUs share access to a single MCH and/or ESB that provides PCI
bus and memory connectivity. The second is a more recent architecture, in which multiple processors are
interconnected via QPI and each processor directly integrates its own memory controller and PCI connectivity.
There is a perceived advantage in keeping the allocation of port objects, such as queues, as close as possible
to the NUMA node or collection of CPUs where they are most likely to be accessed. If the port queues are using
CPUs and memory from one socket while the PCI device is actually attached to another socket, the result
may be undesirable consumption of QPI processor-to-processor bus bandwidth. It is important to understand
the platform architecture when using these performance options.
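As a sketch of how one might verify this on a Linux system, the standard sysfs attribute numa_node reports the node a PCI device is attached to. The bus/device/function address below is a placeholder; substitute the address of the port being tuned, and note that a value of -1 means the platform firmware did not report an affinity.

    # Sketch: report the NUMA node a PCI device hangs off of (Linux sysfs).
    from pathlib import Path

    def pci_numa_node(bdf="0000:03:00.0"):  # placeholder bus/device/function
        text = (Path("/sys/bus/pci/devices") / bdf / "numa_node").read_text()
        return int(text.strip())            # -1 if firmware reported no affinity

    print(pci_numa_node())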
Figure: Shared Single Root PCI/Memory Architecture
Figure: Distributed Multi-Root PCI/Memory Architecture
Example 4: The number of available NUMA node CPUs is not sufficient for queue allocation. If your platform
has a processor that does not support a power-of-2 number of CPUs (for example, it supports 6 cores), then
during queue allocation, if the software runs out of CPUs on one socket, it will by default reduce the number of
queues to a power of 2 until allocation succeeds. For example, if a 6-core processor is used, the software will
allocate only 4 FCoE queues if there is only a single NUMA node. If there are multiple NUMA nodes, the NUMA
node count can be changed to a value greater than or equal to 2 in order to have all 8 queues created.
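The fallback described above amounts to halving the requested queue count until it fits on the available CPUs. Below is a minimal sketch of that logic; the function name and the assumed default request of 8 queues are illustrative.

    # Minimal sketch of the power-of-2 fallback described in Example 4,
    # assuming queues are spread over node_count nodes of cores_per_node CPUs.
    def allocated_queues(requested, cores_per_node, node_count=1):
        available = cores_per_node * node_count
        queues = requested
        while queues > available:  # halve the request until it fits
            queues //= 2
        return queues

    print(allocated_queues(8, cores_per_node=6, node_count=1))  # -> 4
    print(allocated_queues(8, cores_per_node=6, node_count=2))  # -> 8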