Serviceguard NFS Toolkit A.11.11.06, A.11.23.05 and A.11.31.05 Administrator's Guide HP-UX 11i v1, v2, and v3
Table Of Contents
- Serviceguard NFS Toolkit A.11.11.06, A.11.23.05 and A.11.31.05 Administrator's Guide
- Table of Contents
- 1 Overview of Serviceguard NFS
- Limitations of Serviceguard NFS
- Overview of Serviceguard NFS Toolkit A.11.31.05 with Serviceguard A.11.18 (or later) and Veritas Cluster File System Support
- Overview of the Serviceguard NFS Modular Package
- Overview of the NFS File Lock Migration Feature
- Overview of NFSv4 File Lock Migration Feature
- Overview of Serviceguard NFS with Serviceguard A.11.17 Support
- Integrating Support for Cluster File Systems into Serviceguard NFS Toolkit
- Overview of Cluster File Systems in Serviceguard NFS Toolkit
- Limitations and Issues with the current CFS implementation
- Supported Configurations
- How the Control and Monitor Scripts Work
- 2 Installing and Configuring Serviceguard NFS Legacy Package
- Installing Serviceguard NFS Legacy Package
- Before Creating a Serviceguard NFS Legacy Package
- Configuring a Serviceguard NFS Legacy Package
- Copying the Template Files
- Editing the Control Script (nfs.cntl)
- Editing the NFS Control Script (hanfs.sh)
- Editing the File Lock Migration Script (nfs.flm)
- Editing the NFS Monitor Script (nfs.mon)
- Editing the Package Configuration File (nfs.conf)
- Configuring Server-to-Server Cross-Mounts (Optional)
- Creating the Cluster Configuration File and Bringing Up the Cluster
- Configuring Serviceguard NFS Legacy Package over CFS Packages
- 3 Installing and Configuring Serviceguard NFS Modular Package
- Installing Serviceguard NFS Modular Package
- Before Creating a Serviceguard NFS Modular Package
- Configuring a Serviceguard NFS Modular Package
- Configuring Serviceguard NFS Modular Package over CFS Packages
- 4 Migration of Serviceguard NFS Legacy Package to Serviceguard NFS Modular Package
- 5 Sample Configurations for Legacy Package
- Example One - Three-Server Mutual Takeover
- Example Two - One Adoptive Node for Two Packages with File Lock Migration
- Cluster Configuration File for Adoptive Node for Two Packages with File Lock Migration
- Package Configuration File for pkg01
- NFS Control Scripts for pkg01
- NFS File Lock Migration and Monitor Scripts for pkg01
- Package Configuration File for pkg02
- NFS Control Scripts for pkg02
- NFS File Lock Migration and Monitor Scripts for pkg02
- Example Three - Three-Server Cascading Failover
- Example Four - Two Servers with NFS Cross-Mounts
- 6 Sample Configurations for Modular Package
- Index
server location of the file system, and the CNFS[n] variable is the client mount point of the
file system.
SNFS[0]="nfs1:/hanfs/nfsu011";CNFS[0]="/nfs/nfsu011"
In this example, nfs1 is the name that maps to the relocatable IP address of the package. It
must be configured in the name service used by the server (DNS, NIS, or the /etc/hosts file).
If a server for the package will NFS-mount the package file systems, the client mount point
(CNFS) must be different from the server location (SNFS).
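To make this constraint concrete, the sketch below shows two hypothetical SNFS/CNFS pairs (the nfs1 hostname and the file system paths are illustrative placeholders, not values from this guide) together with a simple check that each client mount point differs from the exported server path:

```shell
#!/usr/bin/env bash
# Illustrative SNFS/CNFS pairs; nfs1 is assumed to map to the package's
# relocatable IP address, and the paths are placeholders.
SNFS[0]="nfs1:/hanfs/nfsu011";CNFS[0]="/nfs/nfsu011"
SNFS[1]="nfs1:/hanfs/nfsu012";CNFS[1]="/nfs/nfsu012"

# A server in the cluster may NFS-mount its own package file systems,
# so each client mount point (CNFS) must differ from the server path
# portion of the corresponding SNFS entry.
for i in 0 1
do
    srv_path=${SNFS[$i]#*:}              # strip the "nfs1:" prefix
    if [ "$srv_path" = "${CNFS[$i]}" ]
    then
        echo "ERROR: CNFS[$i] duplicates the server path $srv_path"
    fi
done
```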
4. Copy the script you have just modified to the same location in all the servers that will
NFS-mount the file systems in the package.
5. After the package is active on the primary node, execute the nfs_xmnt script on each server
that will NFS-mount the file systems.
/etc/cmcluster/nfs/pkg1/nfs1_xmnt start
Hewlett-Packard recommends that you execute the nfs_xmnt script from the command
line after the package is active on the primary node. However, you can configure the
nfs_xmnt script to be executed as part of package startup. To do so, configure nfs_xmnt
as an external script in the package configuration file by following these steps:
a. Copy the external script template file to the package directory location to create the
external script specific to the package.
cp /etc/cmcluster/examples/external_script.template /etc/cmcluster/nfs/pkg1/ext1_xmnt
b. Modify the start_command function in the ext1_xmnt script, as shown below:
function start_command
{
    sg_log 5 "start_command"
    # ADD your package start steps here
    /etc/cmcluster/nfs/pkg1/nfs1_xmnt start
    remsh sage /etc/cmcluster/nfs/pkg1/nfs1_xmnt start
    return 0
}
c. Copy the modified external script to the same location on all the servers that will
NFS-mount the file systems in the package.
d. Specify the external script location in the “external_script” section of the package
configuration file, as shown below:
external_script /etc/cmcluster/nfs/pkg1/ext1_xmnt
The second command in the start_command function of ext1_xmnt invokes remsh to run
the nfs_xmnt script on the remote host sage.
Running the nfs_xmnt script as an external script guarantees that the package is active
before the mount command executes. It prevents cross-mounted servers from becoming
deadlocked while each server hangs on the mount command, waiting for the other server's
package to become active. However, if the package fails to activate, or if the remsh command
fails, the file systems are not mounted, and no error is returned. The only way to be sure the
file systems are mounted successfully is to run the nfs_xmnt script manually on each host
where the file systems must be mounted.
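One way to script that manual check is to compare the output of mount(1M) against the expected client mount points. The helper below is a hypothetical sketch, not part of the toolkit; the function name and the example mount point are assumptions:

```shell
#!/usr/bin/env bash
# Hypothetical helper (not part of the NFS toolkit): report whether each
# expected client mount point appears in the output of mount(1M).
check_cross_mounts()
{
    # $1 = captured output of "mount"; remaining args = mount points
    # Flatten the mount output so every mount point is space-delimited.
    mtab=" $(printf '%s' "$1" | tr '\n' ' ') "
    shift
    for mp in "$@"
    do
        case "$mtab" in
            *" $mp "*) echo "$mp mounted" ;;
            *)         echo "$mp NOT mounted" ;;
        esac
    done
}

# Example invocation on a server that should cross-mount /nfs/nfsu011:
#   check_cross_mounts "$(mount)" /nfs/nfsu011
```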
Creating the Cluster Configuration File and Bringing Up the Cluster
To create the cluster configuration file, verify the cluster and package configuration files, and
run the cluster, perform the following steps:
1. Create the cluster configuration file by querying the nodes in the cluster:
cmquerycl -v -C /etc/cmcluster/nfs/cluster.conf -n basil -n sage -n thyme