options ip_conntrack hashsize=Nhash
to /etc/modprobe.conf, and adding:
net.ipv4.ip_conntrack_max=Nmax
to /etc/sysctl.conf.
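For illustration, a master node serving a mid-sized cluster might use values like the following. The specific numbers are assumptions, not tuning advice; they follow the common rule of thumb that hashsize is set to roughly conntrack_max divided by 8:

```
# /etc/modprobe.conf (illustrative values)
options ip_conntrack hashsize=65536

# /etc/sysctl.conf (illustrative values)
net.ipv4.ip_conntrack_max=524288
```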
Optionally reconfigure node names
You may declare site-specific alternative node names for cluster nodes by adding entries to /etc/beowulf/config. The
syntax for a node name entry is:
nodename format-string [IPv4offset] [netgroup]
For example,
nodename node%N
allows the user to refer to node 4 using the traditional .4 name, or alternatively using names like node4 or node004. See man
beowulf-config and the Administrator’s Guide for details.
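To make the %N expansion concrete, here is a minimal Python sketch of the alternative names a nodename entry generates for a given node number. The padding width of three is an assumption inferred from the node004 example above; see man beowulf-config for the actual matching rules.

```python
def candidate_names(fmt: str, n: int, pad_width: int = 3) -> list[str]:
    """Return the alternative names a 'nodename' entry like fmt yields
    for node n: the unpadded form and a zero-padded form.
    (Sketch only; pad_width=3 is an assumption, not documented behavior.)"""
    return [fmt.replace("%N", str(n)),
            fmt.replace("%N", str(n).zfill(pad_width))]

# Node 4 with the example entry from above:
print(candidate_names("node%N", 4))   # ['node4', 'node004']
```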
Post-Installation Configuration Issues For Large Clusters
Larger clusters have additional issues that may require post-installation adjustments.
Optionally increase the number of nfsd threads
The default count of eight nfsd daemons may be insufficient for large clusters. One symptom of an insufficient count is a syslog
message, most commonly seen when you boot all the cluster nodes:
nfsd: too many open TCP sockets, consider increasing the number of nfsd threads
Scyld ClusterWare automatically increases the nfsd thread count to at least one thread per compute node, with a lower bound
of eight (for <=8 nodes) and an upper bound of 64 (for >=64 nodes). If this increase is insufficient, increase the thread count
(e.g., to 16) by executing:
echo 16 > /proc/fs/nfsd/threads
Ideally, the chosen thread count should be sufficient to eliminate the syslog complaints, but not significantly higher, as that
would unnecessarily consume system resources. One approach is to repeatedly double the thread count until the syslog
error messages stop occurring, then make the satisfactory value N persistent across master node reboots by creating the file
/etc/sysconfig/nfs, if it does not already exist, and adding to it an entry of the form:
RPCNFSDCOUNT=N
A value N of 1.5x to 2x the number of nodes is probably adequate, although perhaps excessive. See the Administrator’s
Guide for a more detailed discussion of NFS configuration.
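As a starting point, the 1.5x-to-2x guideline above can be computed directly. The following shell sketch uses a placeholder node count of 48; substitute your own cluster size and append the resulting line to /etc/sysconfig/nfs:

```shell
NODES=48                 # placeholder: your cluster's compute node count
N=$(( NODES * 2 ))       # 2x the node count, per the guideline above
[ "$N" -lt 8 ] && N=8    # never drop below the default floor of eight
echo "RPCNFSDCOUNT=$N"   # append this line to /etc/sysconfig/nfs
```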