Mellanox ConnectX EN 10Gbit Ethernet Adapter <X> device detected that the link connected to port <Y> is up, and has initiated normal operation.
Mellanox ConnectX EN 10Gbit Ethernet Adapter <X> device detected that the link connected to port <Y> is down. This can occur if the physical link is disconnected or damaged, or if the other end-port is down.
A mismatch in the configuration between the two ports may affect performance. When using MSI-X, both ports should use the same RSS mode. To fix the problem, configure the RSS mode of both ports to be the same in the driver GUI.
Mellanox ConnectX EN 10Gbit Ethernet Adapter <X> device failed to create enough MSI-X vectors. The network interface will not use MSI-X interrupts. This may affect performance. To fix the problem, configure the number of MSI-X vectors in the registry to be at least <Y>.
10.3 Performance Troubleshooting
Issue 1. Windows Settings
Suggestion 1: In Windows 2012 and above, when a kernel debugger is configured (not necessarily physically connected), flow control is disabled unless the following registry key is set (reboot required after setting):
Registry Path: HKLM\SYSTEM\CurrentControlSet\Services\NDIS\Parameters
Type: REG_DWORD
Key name: AllowFlowControlUnderDebugger
Value: 1
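For example (a suggested shortcut, not part of the original procedure), the key can be added from an elevated command prompt using the built-in reg utility; a reboot is still required afterwards:
reg add "HKLM\SYSTEM\CurrentControlSet\Services\NDIS\Parameters" /v AllowFlowControlUnderDebugger /t REG_DWORD /d 1 /f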
Suggestion 2: Go to "Power Options" in the "Control Panel". Make sure "Maximum Perfor-
mance" is set as the power scheme, reboot is needed.
Issue 2. General Diagnostic
Suggestion 1: Go to "Device Manager", locate the Mellanox adapter that you are debugging,
right-click and go to "Information":
PCI Gen 2: should appear as "PCI-E 5.0 Gbps x8"
PCI Gen 3: should appear as "PCI-E 8.0 Gbps x8"
Link Speed: 40.0Gbps/10.0Gbps
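The negotiated Ethernet link rate (though not the PCIe rate) can also be checked from the command line; this is an optional alternative to the GUI check, not part of the original manual:
powershell -Command "Get-NetAdapter | Format-Table Name, InterfaceDescription, LinkSpeed"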
Suggestion 2: To determine whether the Mellanox NIC and the PCI bus can achieve their maximum speed, it is best to run a bandwidth test (ibv_write_bw in the steps below) in loopback on the same machine:
1. Run "start /b /affinity 0x1 ibv_write_bw"
2. Run "start /b /affinity 0x2 ibv_write_bw 127.0.0.1"
3. Repeat for port 2 with the additional -p2 switch (see the example after this list), and for other cards if necessary.
4. On PCI Gen3 the expected result is around 5700 MB/s; on PCI Gen2, it is around 3300 MB/s.
Any significantly lower number points to a bad configuration or to installation in the wrong PCI slot. Malfunctioning QoS or Flow Control settings can also be the cause.
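For example, testing the second port of the same adapter might look as follows (a sketch based on the -p2 switch mentioned in step 3; adjust the affinity masks to free cores on your system):
1. Run "start /b /affinity 0x1 ibv_write_bw -p2"
2. Run "start /b /affinity 0x2 ibv_write_bw -p2 127.0.0.1"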
Suggestion 3: To determine the maximum speed between the two sides with the most basic
test:
1. Run "ib_send_bw" on machine 1
2. Run "ib_send_bw <host1>" on machine 2 where <host1> is the hostname for
machine 1.