To set all C-States to 0 when there is no BIOS support:
1. Add the following kernel boot option (one way to do this is sketched after these steps):
processor.max_cstate=0
2. Reboot the system.
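The mechanism for adding a kernel boot option depends on the boot loader and distribution. As a minimal sketch, assuming a GRUB-based system where the grubby utility is available, the option can be appended to every installed kernel:
grubby --update-kernel=ALL --args="processor.max_cstate=0"
Alternatively, append processor.max_cstate=0 to the kernel line in the boot loader configuration file (for example, /boot/grub/grub.conf on legacy GRUB systems) and then reboot.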
If the node uses a single-port HCA and is not part of a parallel file system
cluster, no performance-tuning changes to a modprobe configuration file are
needed. The driver automatically sets the parameters conservatively and
appropriately for the node's Intel CPU.
For all Intel systems with Xeon 5500 Series (Nehalem) or newer CPUs, the
following setting is the default:
pcie_caps=0x51
On Intel systems with Xeon 5500 Series (Nehalem) or newer CPUs, the lspci
output will read:
MaxPayload 256 bytes, MaxReadReq 4096 bytes
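Assuming the standard ib_qib encoding of pcie_caps (maximum payload size code in the low four bits, maximum read request size code in the next four bits), 0x51 corresponds to a payload code of 1 (256 bytes) and a read request code of 5 (4096 bytes), which matches this output. As a minimal check on a running node (run as root; restricting lspci to the HCA's bus address gives less output), the negotiated values can be displayed with:
lspci -vv | grep -i MaxReadReq
The matching DevCtl line for the HCA should show MaxPayload 256 bytes, MaxReadReq 4096 bytes.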
If you run a script, such as the following:
for x in /sys/module/ib_qib/parameters/*; do echo $(basename $x) $(cat $x); done
Then in the list of qib parameters, you should see the following for the two
parameters being discussed:
. . .
rcvhdrcnt 0
. . .
pcie_caps 0
A value of 0 means that the driver sets these parameters automatically. Therefore,
neither the user nor the ipath_perf_tuning script should modify them.
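An individual parameter can also be read directly from sysfs, for example (assuming the ib_qib driver is loaded):
cat /sys/module/ib_qib/parameters/pcie_caps
This should print 0 on the systems described above, confirming that the driver is choosing the value itself.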
Intel Nehalem or Westmere CPU Systems (DIMM Configuration)
Compute node memory bandwidth is important for high-performance computing
(HPC) application performance and for storage node performance. On Intel CPUs
code-named Nehalem or Westmere (Xeon 5500 or 5600 series), it is important to
have an equal number of dual in-line memory modules (DIMMs) on each of the
three memory channels of each CPU. On common dual-CPU systems, use a
multiple of six DIMMs for best performance.
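One minimal way to verify the DIMM population (a sketch; requires root and the dmidecode utility) is to list the size and slot locator of every memory device and confirm that the channels are populated evenly:
dmidecode -t memory | grep -E 'Size|Locator'
Slots reported as "No Module Installed" are empty; on a dual-CPU Nehalem or Westmere system the populated slots should be spread evenly across the three channels of each CPU.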