Specifications
True Scale Fabric OFED+ Host Software
February 2014 RN 7.2.2.0.8
Order Number: H31512002US
Appendix A Performance Gain Conditions Test
The following example shows how to determine whether conditions 1 and 2, described in the first bullet of “Release 7.1.1 Enhancements” on page 6, hold:
$ numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7
...
node 1 cpus: 8 9 10 11 12 13 14 15
...
If numactl --hardware reports more than one NUMA node, then your OS supports NUMA.
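The same check can be made directly against sysfs. The following is a minimal sketch, assuming the standard /sys/devices/system/node layout exposed by the Linux kernel; a count greater than one means the OS sees multiple NUMA nodes, matching the numactl --hardware output above:

```shell
# Count the NUMA nodes the kernel exposes via sysfs.
# Assumption: each node appears as a nodeN directory under
# /sys/devices/system/node (standard Linux sysfs layout).
count_numa_nodes() {
    # The root directory is a parameter so the function can also be
    # pointed at a test tree; it defaults to the real sysfs path.
    ls -d "${1:-/sys/devices/system/node}"/node[0-9]* 2>/dev/null | wc -l
}
```

On the two-node system shown above, count_numa_nodes prints 2.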
To see whether your system supports NUMA node to I/O device binding, and whether your HCAs connect to different NUMA nodes, look at the files in the /sys directories to see if the numa_node field is populated correctly. The following steps indicate how to do this.
1. Change directory to /sys/class/infiniband:
$ cd /sys/class/infiniband
2. List all files in the /sys/class/infiniband directory in long format:
$ ls -la
This lists the symbolic links to the HCA devices, including the PCI bus, slot, and function numbers (in the following example, 6:00.0 and 82:00.0):
lrwxrwxrwx 1 root root 0 Jul 9 11:24 qib0 -> ../../devices/pci0000:00/
0000:00:03.0/0000:06:00.0/infiniband/qib0/
lrwxrwxrwx 1 root root 0 Jul 9 11:24 qib1 -> ../../devices/pci0000:80/
0000:80:02.0/0000:82:00.0/infiniband/qib1/
3. Print the NUMA node ID for the respective devices:
[infiniband]$ cat ../../devices/pci0000:00/0000:00:03.0/0000:06:00.0/numa_node
0
[infiniband]$ cat ../../devices/pci0000:80/0000:80:02.0/0000:82:00.0/numa_node
1
The HCAs are bound to the two NUMA nodes shown at the start of this appendix in the numactl --hardware output: 0 and 1.
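Steps 1 through 3 can be automated. The following is a hedged sketch, not part of the original procedure: it walks every device under /sys/class/infiniband, resolves the symbolic link to the PCI device directory, and prints the numa_node value two levels up, mirroring the manual cat commands above. The sysfs root is a parameter so the same logic can be exercised against a test tree.

```shell
# Print "<hca>: numa_node <id>" for each HCA under the given sysfs
# class directory (defaults to /sys/class/infiniband).
print_hca_numa_nodes() {
    local root="${1:-/sys/class/infiniband}"
    local hca dev
    for hca in "$root"/*; do
        [ -e "$hca" ] || continue
        # Each entry is a symlink ending in .../<pci-dev>/infiniband/<hca>,
        # so the device's numa_node file sits two directories up.
        dev="$(readlink -f "$hca")/../.."
        if [ -r "$dev/numa_node" ]; then
            printf '%s: numa_node %s\n' \
                "$(basename "$hca")" "$(cat "$dev/numa_node")"
        fi
    done
}
```

On the example system above, this would report qib0 on node 0 and qib1 on node 1.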