
managed systems. By defining multiple servers to act as hubs, the monitoring load is distributed
across multiple servers.
The /hptc_cluster file system stores the monitoring data collected by the management hubs.
During the Insight Control for Linux installation procedure, you can elect to export this file system
to the management hubs. Doing so enables you to access all monitoring data from any
management hub.
Insight Control for Linux uses NFS to export the /hptc_cluster file system on the CMS to the
management hubs. Therefore, you must configure the CMS to allow this NFS traffic, which requires
opening ports in the firewall.
Because NFS uses random ports by default, you must identify and lock specific ports for NFS so
the firewall can be configured. The procedure differs for RHEL and SLES operating systems.
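For example, running the following command on the CMS lists the RPC services and the ports on
which they are currently registered; until they are locked down, services such as mountd,
nlockmgr (lockd), status (rpc.statd), and rquotad typically appear on different ports after each
restart:
# rpcinfo -p localhost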
NOTE: The port numbers used in the procedures are intended as examples; you can use
different values from those shown.
NOTE: See Table 5-1 (page 40) if you want to learn more about exporting the /hptc_cluster
file system and management hubs.
If your company policy prevents you from using NFS, you can manually export the
/hptc_cluster file system using a different mechanism.
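For reference, an NFS export is expressed as an entry in the /etc/exports file on the CMS. The
following line is a sketch for illustration only; the host names and mount options are placeholders,
and if you elect to export the file system during the Insight Control for Linux installation, the
installation handles the export for you:
/hptc_cluster   hub1.example.com(rw,sync,no_root_squash) hub2.example.com(rw,sync,no_root_squash)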
Procedure 3-1 Opening ports on RHEL operating systems
1. Use a text editor to create an /etc/sysconfig/nfs file on a RHEL Version 4 OS, or modify
the existing /etc/sysconfig/nfs file on a RHEL Version 5 OS, with content similar to the following
to lock most of the NFS services to specific port numbers. You can use any available port
number above 1024.
RPCNFSDCOUNT=8
LOCKD_TCPPORT=33776
LOCKD_UDPPORT=33776
MOUNTD_PORT=33777
STATD_PORT=33778
RQUOTAD_PORT=33779
2. Save your changes and exit the text editor.
3. Use a text editor to modify the /etc/modprobe.conf file to include the following line for lockd:
options lockd nlm_tcpport=10000 nlm_udpport=10001
4. Save your changes and exit the text editor.
5. When the ports are locked down, you must open these port numbers on the CMS so that NFS
(and /hptc_cluster) can be shared.
Use a text editor to add the ports to the /etc/sysconfig/iptables file, as follows:
# portmapper
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 111 -j ACCEPT
# nfs
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 2049 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 2049 -j ACCEPT
# nlockmgr
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 33776 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 33776 -j ACCEPT
# mountd
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 33777 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 33777 -j ACCEPT
# rpcstatd
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 33778 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 33778 -j ACCEPT
# rquotad
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 33779 -j ACCEPT
6. Restart the iptables service:
# /etc/init.d/iptables restart
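To make the changes in /etc/sysconfig/nfs and /etc/modprobe.conf take effect, the NFS services
must also be restarted. The following commands are typical for RHEL and are shown only as a
sketch; lockd might require a reboot for the new port options to apply. You can then confirm that
the services are registered on the fixed port numbers:
# /etc/init.d/nfslock restart
# /etc/init.d/nfs restart
# rpcinfo -p localhost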