Performance Issues
The exact symptoms can vary with BIOS, amount of memory, etc. When the driver
starts, you may see these errors:
ib_ipath 0000:04:01.0: infinipath0: Performance problem: bandwidth
to PIO buffers is only 273 MiB/sec
infinipath: mtrr_add(feb00000,0x100000,WC,0) failed (-22)
infinipath: probe of 0000:04:01.0 failed with error -22
If you do not see any of these messages on your console, but suspect this
problem, check the /var/log/messages file. Some systems suppress driver
load messages but still output them to the log file.
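For example, you can search the log for driver messages (a minimal check; the log location, rotation policy, and required privileges vary by distribution):
$ grep -i infinipath /var/log/messages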
To check the bandwidth, type:
$ ipath_pkt_test -B
When configured correctly, the QLE7140 and QLE7240 report in the range
of 1150–1500 MBps, while the QLE7280 reports in the range
of 1950–3000 MBps. The QHT7040/7140 adapters normally report in the range
of 2300–2650 MBps.
You can also use ipath_checkout to check for MTRR problems (see
“ipath_checkout” on page F-7).
The dmesg program (“dmesg” on page F-3) can also be used for diagnostics.
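For example, you can filter the kernel ring buffer for driver messages (this works only while the messages are still in the buffer):
$ dmesg | grep -i infinipath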
Details on both the PAT and MTRR mechanisms, and how the options should be
set, can be found in “Write Combining” on page E-1.
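As a quick check of the MTRR setup (available when the kernel is built with MTRR support), you can list the registered ranges and their caching types; the adapter's PIO buffer region should be covered by a write-combining entry:
$ cat /proc/mtrr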
Large Message Receive Side Bandwidth Varies with Socket Affinity on Opteron Systems
On Opteron systems, when using the QLE7240 or QLE7280 in DDR mode, there
is a receive-side bandwidth bottleneck for CPUs that are not adjacent to the PCI
Express root complex. This can cause performance to vary with process placement.
The bottleneck is most pronounced when SendDMA is used with large messages
and the processes run on the sockets farthest from the root complex. The best
case for SendDMA is when both the sender and the receiver run on the sockets
closest to the root complex. Overall performance for PIO (and for smaller
messages) is better than with SendDMA.
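As an illustration only (0000:04:01.0 is the device address from the earlier example, the numa_node attribute is present only on NUMA-aware kernels, and ./bw_test is a placeholder for your own benchmark), you can look up which node the adapter is attached to and bind a test process to that node with numactl:
$ cat /sys/bus/pci/devices/0000:04:01.0/numa_node
$ numactl --cpunodebind=<node> --membind=<node> ./bw_test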
MVAPICH Performance Issues
At the time of publication, performance tuning for MVAPICH running over
OpenFabrics on InfiniPath has not been done. However, if MVAPICH on InfiniPath is
configured to use PSM, performance comparable to that of QLogic MPI can be obtained.
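As a sketch only (this assumes MVAPICH2, where the PSM channel is selected at configure time; consult the MVAPICH documentation for your version for the exact procedure):
$ ./configure --with-device=ch3:psm
$ make
$ make install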