Serviceguard NFS Toolkit A.11.11.06, A.11.23.05 and A.11.31.05 Administrator's Guide HP-UX 11i v1, v2, and v3
Table Of Contents
- Serviceguard NFS Toolkit A.11.11.06, A.11.23.05 and A.11.31.05 Administrator's Guide
- Table of Contents
- 1 Overview of Serviceguard NFS
- Limitations of Serviceguard NFS
- Overview of Serviceguard NFS Toolkit A.11.31.05 with Serviceguard A.11.18 (or later) and Veritas Cluster File System Support
- Overview of the Serviceguard NFS Modular Package
- Overview of the NFS File Lock Migration Feature
- Overview of NFSv4 File Lock Migration Feature
- Overview of Serviceguard NFS with Serviceguard A.11.17 Support
- Integrating Support for Cluster File Systems into Serviceguard NFS Toolkit
- Overview of Cluster File Systems in Serviceguard NFS Toolkit
- Limitations and Issues with the current CFS implementation
- Supported Configurations
- How the Control and Monitor Scripts Work
- 2 Installing and Configuring Serviceguard NFS Legacy Package
- Installing Serviceguard NFS Legacy Package
- Before Creating a Serviceguard NFS Legacy Package
- Configuring a Serviceguard NFS Legacy Package
- Copying the Template Files
- Editing the Control Script (nfs.cntl)
- Editing the NFS Control Script (hanfs.sh)
- Editing the File Lock Migration Script (nfs.flm)
- Editing the NFS Monitor Script (nfs.mon)
- Editing the Package Configuration File (nfs.conf)
- Configuring Server-to-Server Cross-Mounts (Optional)
- Creating the Cluster Configuration File and Bringing Up the Cluster
- Configuring Serviceguard NFS Legacy Package over CFS Packages
- 3 Installing and Configuring Serviceguard NFS Modular Package
- Installing Serviceguard NFS Modular Package
- Before Creating a Serviceguard NFS Modular Package
- Configuring a Serviceguard NFS Modular Package
- Configuring Serviceguard NFS Modular Package over CFS Packages
- 4 Migration of Serviceguard NFS Legacy Package to Serviceguard NFS Modular Package
- 5 Sample Configurations for Legacy Package
- Example One - Three-Server Mutual Takeover
- Example Two - One Adoptive Node for Two Packages with File Lock Migration
- Cluster Configuration File for Adoptive Node for Two Packages with File Lock Migration
- Package Configuration File for pkg01
- NFS Control Scripts for pkg01
- NFS File Lock Migration and Monitor Scripts for pkg01
- Package Configuration File for pkg02
- NFS Control Scripts for pkg02
- NFS File Lock Migration and Monitor Scripts for pkg02
- Example Three - Three-Server Cascading Failover
- Example Four - Two Servers with NFS Cross-Mounts
- 6 Sample Configurations for Modular Package
- Index
NODE_FAIL_FAST_ENABLED NO
RUN_SCRIPT /etc/cmcluster/nfs/nfs2.cntl
RUN_SCRIPT_TIMEOUT NO_TIMEOUT
HALT_SCRIPT /etc/cmcluster/nfs/nfs2.cntl
HALT_SCRIPT_TIMEOUT NO_TIMEOUT
SERVICE_NAME nfs2.monitor
SERVICE_FAIL_FAST_ENABLED NO
SERVICE_HALT_TIMEOUT 300
SUBNET 15.13.112.0
NFS Control Scripts for pkg02
The nfs.cntl Control Script
This section shows the NFS control script (nfs2.cntl) for the pkg02 package in this sample
configuration. Only the user-configured part of the script is shown; the executable part of the
script and most of the comments are omitted.
PATH=/sbin:/usr/bin:/usr/sbin:/etc:/bin
VGCHANGE="vgchange -a e" # Default
CVM_ACTIVATION_CMD="vxdg -g \$DiskGroup set activation=exclusivewrite"
VG[0]=nfsu02
LV[0]=/dev/nfsu02/lvol1; FS[0]=/hanfs/nfsu021; FS_MOUNT_OPT[0]="-o rw"
VXVOL="vxvol -g \$DiskGroup startall" #Default
FS_UMOUNT_COUNT=1
FS_MOUNT_RETRY_COUNT=0
IP[0]=15.13.114.244
SUBNET[0]=15.13.112.0
function customer_defined_run_cmds
{
/etc/cmcluster/nfs/nfs2_xmnt start
remsh thyme /etc/cmcluster/nfs/nfs2_xmnt start
}
The function customer_defined_run_cmds calls a script called nfs2_xmnt. This script
NFS-mounts the file system exported by the package pkg02. If you configured the file system
in the /etc/fstab file instead, the package might not be active yet when the servers try to mount
the file system at system boot. By configuring the NFS control script to NFS-mount the file system,
you ensure that the package is active before the mount command is invoked.
The first line in the customer_defined_run_cmds function executes the nfs2_xmnt script
locally on host basil (the primary node for pkg02). The second line, beginning with remsh,
executes the nfs2_xmnt script remotely on host thyme.
If pkg02 fails to come up, or if the remsh to host thyme fails, the file system will not be mounted,
and no error will be returned. The only way to be sure the file system was mounted successfully
is to run the nfs2_xmnt script manually on both host basil and host thyme.
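Because these calls discard their exit status, a defensive variant of the function could at least log failures. The following is a minimal sketch of that pattern, not part of the shipped toolkit; `true` and `false` stand in for the real nfs2_xmnt invocations so the sketch can run anywhere:

```shell
#!/usr/bin/sh
# Hypothetical helper: run a command and log a warning on failure,
# instead of discarding the exit status as the stock function does.
run_logged()
{
    "$@"
    rl_status=$?
    if [ $rl_status -ne 0 ]; then
        echo "WARNING: '$*' exited with status $rl_status" >&2
    fi
    return $rl_status
}

# In the real package these calls would be:
#   run_logged /etc/cmcluster/nfs/nfs2_xmnt start
#   run_logged remsh thyme /etc/cmcluster/nfs/nfs2_xmnt start
# Here, stand-in commands let the sketch run on any system:
run_logged false 2>/dev/null    # simulated failing remote mount
run_logged true                 # simulated successful local mount
```

The warnings go to standard error, so they appear in the package log when the control script runs under Serviceguard.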
The only user-configurable values in the nfs2_xmnt script are the SNFS[n] and CNFS[n]
variables. These specify the server location of the file system and the client mount point for the
file system. The following line is the from the nfs2_xmnt script in this example configuration:
SNFS[0]="nfs2:/hanfs/nfsu021"; CNFS[0]="/nfs/nfsu021"
In the SNFS[0] variable, nfs2 is the name that maps to the relocatable IP address of pkg02. It
must be configured in the name service the host is using (DNS, NIS, or the /etc/hosts file). If
you do not want to configure a name for the package, you can just specify the IP address in the
SNFS[0] variable, as follows:
SNFS[0]="15.13.114.244:/hanfs/nfsu021"; CNFS[0]="/nfs/nfsu021"
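Each SNFS/CNFS pair ultimately becomes one NFS mount command. The sketch below shows how such a pair could be turned into the HP-UX `mount -F nfs` invocation; the command is echoed rather than executed, so the sketch is safe to run anywhere, and the helper function name is an illustration, not part of the shipped script:

```shell
#!/usr/bin/sh
# Hypothetical sketch: turn one SNFS/CNFS pair into the mount command an
# nfs2_xmnt-style script would issue. Echoed only, never executed here.
build_mount_cmd()
{
    # $1 = server:exported-path (SNFS), $2 = local mount point (CNFS)
    echo "mount -F nfs $1 $2"
}

build_mount_cmd "nfs2:/hanfs/nfsu021" "/nfs/nfsu021"
# -> mount -F nfs nfs2:/hanfs/nfsu021 /nfs/nfsu021
```

In the real script the string would be executed instead of echoed, typically followed by a check that the mount point now appears in the mount table.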