
5.4.1 Scalable File Share Mount Problems With Mixed HCAs
A Scalable File Share (SFS) share might not mount properly if the head node and the compute
nodes have different types of HCA cards, for example, a memfull HCA on the head node and
memfree HCAs (including ConnectX HCAs) on the compute nodes.
HP does not support a mixture of ConnectX and non-ConnectX HCAs, so this situation should
rarely be encountered.
Follow this procedure to work around the problem before you run the cluster_config utility:
1. Use the text editor of your choice to edit the /etc/modprobe.conf.lustre file.
2. Set the value of the mtu attribute to 2048, as follows:
options lnet networks=o2ib0 mtu=2048
3. Invoke the cluster_config utility and complete the imaging process to propagate this
file to all compute nodes.
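One way to verify the edit (on the head node now, and on a compute node after imaging is
complete) is to search the file for the mtu setting; the expected output is the line added in
step 2:
# grep mtu /etc/modprobe.conf.lustre
options lnet networks=o2ib0 mtu=2048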
After imaging is complete, if the head node is installed with memfull cards, follow this
procedure on the head node to avoid performance issues:
1. On the head node, stop all jobs that are using the SFS share.
2. Stop the SFS service:
# service sfs stop
3. Use the text editor of your choice to edit the /etc/modprobe.conf.lustre file on the
head node.
4. Set the value of the mtu attribute to 4096, as follows:
options lnet networks=o2ib0 mtu=4096
5. Restart the SFS service:
# service sfs start
6. Restart all stopped jobs.
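After the service restarts, one way to confirm that LNET came back up and that the SFS share
is mounted again is to list the node's LNET network identifiers and the mounted Lustre file
systems. This is only a sketch; it assumes the Lustre lctl utility is installed on the head
node, which is normally the case on a Lustre client:
# lctl list_nids
# mount -t lustre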
5.4.2 Benign Message From C52xcgraph
You might see the following message when you run the cluster_config utility on a cluster
with an InfiniBand interconnect:
.
.
.
Executing C52xcgraph gconfigure
Found no adapter info on IR0N00
Failed to find any Infiniband ports
Executing C54httpd gconfigure
.
.
.
This message is displayed because the C52xcgraph configuration script probes the InfiniBand
switch to determine how many HCAs have been assigned an IP address. Because no IP addresses
have been assigned to the HCAs at this point, C52xcgraph finds none and prints the message.
This message does not prevent the cluster_config utility from completing.
To work around this issue, after the cluster is installed and configured, run
/opt/hptc/hpcgraph/sbin/hpcgraph-setup with no options.
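For example, on the head node:
# /opt/hptc/hpcgraph/sbin/hpcgraph-setup
Because the HCAs have been assigned IP addresses by this point, the script should find the
InfiniBand ports that C52xcgraph could not.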