Serviceguard NFS Toolkit A.11.11.06, A.11.23.05 and A.11.31.05 Administrator's Guide HP-UX 11i v1, v2, and v3
Table of Contents
- Serviceguard NFS Toolkit A.11.11.06, A.11.23.05 and A.11.31.05 Administrator's Guide
- Table of Contents
- 1 Overview of Serviceguard NFS
- Limitations of Serviceguard NFS
- Overview of Serviceguard NFS Toolkit A.11.31.05 with Serviceguard A.11.18 (or later) and Veritas Cluster File System Support
- Overview of the Serviceguard NFS Modular Package
- Overview of the NFS File Lock Migration Feature
- Overview of NFSv4 File Lock Migration Feature
- Overview of Serviceguard NFS with Serviceguard A.11.17 Support
- Integrating Support for Cluster File Systems into Serviceguard NFS Toolkit
- Overview of Cluster File Systems in Serviceguard NFS Toolkit
- Limitations and Issues with the current CFS implementation
- Supported Configurations
- How the Control and Monitor Scripts Work
- 2 Installing and Configuring Serviceguard NFS Legacy Package
- Installing Serviceguard NFS Legacy Package
- Before Creating a Serviceguard NFS Legacy Package
- Configuring a Serviceguard NFS Legacy Package
- Copying the Template Files
- Editing the Control Script (nfs.cntl)
- Editing the NFS Control Script (hanfs.sh)
- Editing the File Lock Migration Script (nfs.flm)
- Editing the NFS Monitor Script (nfs.mon)
- Editing the Package Configuration File (nfs.conf)
- Configuring Server-to-Server Cross-Mounts (Optional)
- Creating the Cluster Configuration File and Bringing Up the Cluster
- Configuring Serviceguard NFS Legacy Package over CFS Packages
- 3 Installing and Configuring Serviceguard NFS Modular Package
- Installing Serviceguard NFS Modular Package
- Before Creating a Serviceguard NFS Modular Package
- Configuring a Serviceguard NFS Modular Package
- Configuring Serviceguard NFS Modular Package over CFS Packages
- 4 Migration of Serviceguard NFS Legacy Package to Serviceguard NFS Modular Package
- 5 Sample Configurations for Legacy Package
- Example One - Three-Server Mutual Takeover
- Example Two - One Adoptive Node for Two Packages with File Lock Migration
- Cluster Configuration File for Adoptive Node for Two Packages with File Lock Migration
- Package Configuration File for pkg01
- NFS Control Scripts for pkg01
- NFS File Lock Migration and Monitor Scripts for pkg01
- Package Configuration File for pkg02
- NFS Control Scripts for pkg02
- NFS File Lock Migration and Monitor Scripts for pkg02
- Example Three - Three-Server Cascading Failover
- Example Four - Two Servers with NFS Cross-Mounts
- 6 Sample Configurations for Modular Package
- Index

rpc.statd, rpc.lockd, nfsd, rpc.mountd, rpc.pcnfsd, and nfs.flm processes. You can
monitor any or all of these processes as follows:
• To monitor the rpc.statd, rpc.lockd, and nfsd processes, you must set the NFS_SERVER
variable to 1 in the /etc/rc.config.d/nfsconf file. If one nfsd process dies or is killed,
the package fails over, even if other nfsd processes are running.
• To monitor the rpc.mountd process, you must set the START_MOUNTD variable to 1 in the
/etc/rc.config.d/nfsconf file so that rpc.mountd is started when the system boots.
The rpc.mountd process cannot be monitored if it is started by inetd.
• To monitor the rpc.pcnfsd process, you must set the PCNFS_SERVER variable to 1 in the
/etc/rc.config.d/nfsconf file.
• To monitor the nfs.flm process, you must enable the File Lock Migration feature. Monitor
this process with the ps command, not with the rpcinfo command. If you enable the File
Lock Migration feature, ensure that the monitor script name is unique for each package (for
example, nfs1.mon).
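Taken together, the monitoring prerequisites above correspond to entries like the following in /etc/rc.config.d/nfsconf. This is a sketch; the variable names come from the bullets above, and you would enable only the daemons you actually run:

```shell
# /etc/rc.config.d/nfsconf (excerpt)
NFS_SERVER=1      # enables monitoring of rpc.statd, rpc.lockd, and nfsd
START_MOUNTD=1    # starts rpc.mountd at boot (not via inetd) so it can be monitored
PCNFS_SERVER=1    # enables rpc.pcnfsd so it can be monitored
```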
NOTE: The file name of the NFS_FLM_SCRIPT script must not exceed 13 characters.
NOTE: The nfs.mon script uses rpcinfo calls to check the status of various processes. If the
rpcbind process is not running, the rpcinfo calls time out after 75 seconds. Because 10 rpcinfo
calls are attempted before failover, it takes approximately 12 minutes to detect the failure. This
problem has been fixed in release versions 11.11.04 and 11.23.03.
The default NFS control script, hanfs.sh, does not invoke the monitor script. You do not have
to run the NFS monitor script to use Serviceguard NFS. If the NFS package configuration file
specifies AUTO_RUN YES and LOCAL_LAN_FAILOVER YES (the defaults), the package switches
to the next adoptive node or to a standby network interface in the event of a node or network
failure. However, if one of the NFS services goes down while the node and network remain up,
you need the NFS monitor script to detect the problem and to switch the package to an adoptive
node.
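In the package configuration file, the defaults mentioned above appear as ordinary parameter lines. A sketch of the relevant excerpt (for a legacy nfs.conf):

```
AUTO_RUN                YES
LOCAL_LAN_FAILOVER      YES
```

With both set to YES, the package moves on node or network failure without the monitor script; the monitor script is only needed to catch an NFS service failing while the node and network stay up.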
Whenever the monitor script detects an event, it logs the event. Each NFS package has its own
log file, named after the package's NFS control script with a .log extension appended. For
example, if your control script is /etc/cmcluster/nfs/nfs1.cntl, the log file is
/etc/cmcluster/nfs/nfs1.cntl.log.
TIP: You can specify the number of retry attempts for all these processes in the nfs.mon file.
On the Client Side
The client should NFS-mount a file system using the package name in the mount command. The
package name is associated with the package's relocatable IP address. On client systems, be sure
to use a hard mount and set the proper retry values for the mount. Alternatively, set the proper
timeout for automounter. The timeout should be greater than the total end-to-end recovery time
for the Serviceguard NFS package—that is, running fsck, mounting file systems, and exporting
file systems on the new node. (With journaled file systems, this time should be between one and
two minutes.) Setting the timeout to a value greater than the recovery time allows clients to
reconnect to the file system after it returns to the cluster on the new node.
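The hard-mount advice above can be sketched as follows. The package name nfs1, the export path, and the mount point are hypothetical; hard and retry are standard NFS mount options:

```shell
# Hard-mount the exported file system using the package's relocatable name.
# "hard" makes the client retry NFS operations indefinitely across a failover;
# "retry=5" limits how many times the mount attempt itself is retried.
mount -F nfs -o hard,retry=5 nfs1:/export/data /mnt/data
```

A hard mount ensures client operations block and resume after the package recovers on the adoptive node, rather than returning errors to applications mid-failover.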
NOTE: AutoFS mounts may fail when mounting file systems exported by an HA-NFS package
soon after that package has been restarted. To avoid these mount failures, AutoFS clients should
wait at least 60 seconds after an HA-NFS package has started before mounting file systems
exported from that package.