Serviceguard NFS Toolkit A.11.11.06, A.11.23.05 and A.11.31.
© Copyright 2011 Hewlett-Packard Development Company, L.P.
Legal Notices
The information in this document is subject to change without notice. Hewlett-Packard makes no warranty of any kind with regard to this manual, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose.
1 Overview of Serviceguard NFS
Serviceguard NFS is a toolkit that enables you to use Serviceguard to set up highly available NFS servers. You must set up a Serviceguard cluster before you can set up Highly Available NFS. For instructions on setting up a Serviceguard cluster, see the Managing Serviceguard manual. Serviceguard NFS is a separately purchased set of configuration files and control scripts, which you customize for your specific needs.
rmtab. The man page for rmtab contains a warning that it is not always totally accurate, so it is also unreliable in a standard NFS server / NFS client environment. • AutoFS mounts may fail when mounting file systems exported by an HA-NFS package soon after that package has been restarted. To avoid these mount failures, AutoFS clients should wait at least 60 seconds after an HA-NFS package has started before mounting file systems exported from that package.
on Serviceguard release A.11.18 (or later) for both VxFS (non-CFS) and Veritas Cluster File System (CFS). For information on installing and configuring a modular package, see “Installing and Configuring Serviceguard NFS Modular Package” (page 43).
Overview of the NFS File Lock Migration Feature
Serviceguard NFS introduced the “File Lock Migration” feature beginning with versions A.11.11.03 and A.11.23.02.
directory is a configurable parameter and must be dedicated to hold the v4_state entries only. Both holding directories should be located in the same filesystem. • The nfs.flm script now copies v4_state entries from /var/nfs4/v4_state to the NFSv4 holding directory when copying SM entries from the /var/statmon/sm directory into the NFSv2 and NFSv3 holding directory.
12th Edition. To locate this document, go to the HP-UX Serviceguard docs page at: www.hp.com/go/hpux-serviceguard-docs. On this page, select HP Serviceguard. Serviceguard A.11.17 is not available on HP-UX 11i v1 systems, so Serviceguard CFS support is only applicable to HP-UX 11i v2.
Limitations
The following is a list of limitations when using Serviceguard NFS Toolkit A.11.23.05 with Serviceguard A.11.17:
• Serviceguard A.11.
The Serviceguard NFS package control scripts ensure that upon package failure or shutdown the storage is made inaccessible on the node where the package failed or was halted. Figure 1 CFS versus Non-CFS (VxFS) Implementation In a Serviceguard CFS environment, files and filesystems are concurrently accessible on multiple nodes. When a package fails over, the adoptive systems do not have to mount the disks from the failed system because they are already mounted.
Figure 2 SG NFS Servers over VxFS — High Availability Figure 3 SG NFS Servers over CFS — High Availability, Scalability, Load Balancing Limitations and Issues with the current CFS implementation The main limitation with the current CFS implementation is that during package failover, an NFS client may lose a file lock in the following situation. If Client1 locks a CFS file on Server1, and Client2 attempts to lock the same CFS file on Server2, Client2 will wait for the lock to become available.
Figure 4 SG NFS Servers over CFS — High Availability, File Locking Supported Configurations Serviceguard NFS supports the following configurations: • Simple failover from an active NFS server node to an idle NFS server node. • Failover from one active NFS server node to another active NFS server node, where the adoptive node supports more than one NFS package after the failover. • A host configured as an adoptive node for more than one NFS package.
Figure 5 Simple Failover to an Idle NFS Server Node_A is the primary node for NFS server package Pkg_1. When Node_A fails, Node_B adopts Pkg_1. This means that Node_B locally mounts the file systems associated with Pkg_1 and exports them. Both Node_A and Node_B must have access to the disks that hold the file systems for Pkg_1. Failover from One Active NFS Server to Another Figure 6 shows a failover from one active NFS server node to another active NFS server node.
Figure 6 Failover from One Active NFS Server to Another In Figure 6, Node_A is the primary node for Pkg_1, and Node_B is the primary node for Pkg_2. When Node_A fails, Node_B adopts Pkg_1 and becomes the server for both Pkg_1 and Pkg_2. A Host Configured as Adoptive Node for Multiple Packages Figure 7 shows a three-node configuration where one node is the adoptive node for packages on both of the other nodes. If either Node_A or Node_C fails, Node_B adopts the NFS server package from that node.
Figure 7 A Host Configured as Adoptive Node for Multiple Packages When Node_A fails, Node_B becomes the server for Pkg_1. If Node_C fails, Node_B will become the server for Pkg_2. Alternatively, you can set the package control option in the control script, nfs.cntl, to prevent Node_B from adopting more than one package at a time. With the package control option, Node_B may adopt the package of the first node that fails, but if the second node fails, Node_B will not adopt its package.
Figure 8 Cascading Failover with Three Adoptive Nodes Server-to-Server Cross Mounting Two NFS server nodes may NFS-mount each other's file systems and still act as adoptive nodes for each other's NFS server packages. Figure 9 illustrates this configuration.
Figure 9 Server-to-Server Cross Mounting The advantage of server-to-server cross-mounting is that every server has an identical view of the file systems. The disadvantage is that, on the node where a file system is locally mounted, the file system is accessed through an NFS mount, which has poorer performance than a local mount. Each node NFS-mounts the file systems for both packages. If Node_A fails, Node_B mounts the filesystem for Pkg_1, and the NFS mounts are not interrupted.
• Initiates the NFS monitor script to check periodically on the health of NFS services, if you have configured your NFS package to use the monitor script. • Exports each file system associated with the package so that it can later be NFS-mounted by clients. • Assigns a package IP address to the LAN card on the current node. After this sequence, the NFS server is active, and clients can NFS-mount the exported file systems associated with the package.
rpc.statd, rpc.lockd, nfsd, rpc.mountd, rpc.pcnfsd, and nfs.flm processes. You can monitor any or all of these processes as follows: • To monitor the rpc.statd, rpc.lockd, and nfsd processes, you must set the NFS_SERVER variable to 1 in the /etc/rc.config.d/nfsconf file. If one nfsd process dies or is killed, the package fails over, even if other nfsd processes are running. • To monitor the rpc.mountd process, you must set the START_MOUNTD variable to 1 in the /etc/rc.config.d/nfsconf file.
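For example, the relevant entries in /etc/rc.config.d/nfsconf on each node might look like the following (a minimal sketch limited to the two variables named above):

NFS_SERVER=1
START_MOUNTD=1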
2 Installing and Configuring Serviceguard NFS Legacy Package
This chapter explains how to configure a Serviceguard NFS legacy package. You must set up your Serviceguard cluster before you can configure a legacy package. For instructions on setting up a Serviceguard cluster, see the Managing Serviceguard manual.
cmmakepkg -s /opt/cmcluster/nfs/nfs.cntl
3. Create a directory, /etc/cmcluster/nfs.
4. Run the following command to copy the Serviceguard NFS template files to the newly created /etc/cmcluster/nfs directory:
cp /opt/cmcluster/nfs/* /etc/cmcluster/nfs
Before Creating a Serviceguard NFS Legacy Package
Before creating a Serviceguard NFS legacy package, perform the following tasks:
1. Set up your Serviceguard cluster according to the instructions in the Managing Serviceguard manual.
2.
that the disks must be attached to a shared bus that is connected to all nodes that support the package. For information on configuring disks, see the Managing Serviceguard manual.
8. Use SAM or LVM commands to set up volume groups, logical volumes, and file systems as needed for the data that will be exported to clients. The names of the volume groups must be unique within the cluster, and the major and minor numbers associated with the volume groups must be the same on all nodes.
Configuring a Serviceguard NFS Legacy Package
To configure a Serviceguard NFS legacy package, complete the following tasks, included in this section:
• “Copying the Template Files”
• “Editing the Control Script (nfs.cntl)”
• “Editing the NFS Control Script (hanfs.sh)”
• “Editing the File Lock Migration Script (nfs.flm)”
• “Editing the NFS Monitor Script (nfs.mon)”
• “Editing the Package Configuration File (nfs.conf)”
Editing nfs.cntl for NFS Toolkit A.11.00.05, A.11.11.02 (or above) and A.11.23.01 (or above)
Starting with Serviceguard A.11.13, a package can have LVM volume groups, CVM disk groups, and VxVM disk groups. Example steps:
1. Create a separate VG[n] variable for each LVM volume group that is used by the package:
VG[0]=/dev/vg01
VG[1]=/dev/vg02
...
2. Create a separate VXVM_DG[n] variable for each VxVM disk group that is used by the package:
VXVM_DG[0]=dg01
VXVM_DG[1]=dg02
...
3.
There is a short time, after one package has failed over but before the cmmodpkg command has executed, when the other package can fail over and the host will adopt it. In other words, if two packages fail over at approximately the same time, a host may adopt both packages, even though the package control option is specified. See “Example Two - One Adoptive Node for Two Packages with File Lock Migration” (page 69) for a sample configuration using the package control option.
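For reference, the package control option is typically set in the customer_defined_run_cmds function of the control script, as shown in the sample configurations later in this manual (the package name pkg02 is illustrative here):

function customer_defined_run_cmds
{
    # Prevent the node running this package from also adopting pkg02.
    cmmodpkg -d -n `hostname` pkg02 &
    return 0
}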
of a node or network failure. The NFS monitor script causes the package to fail over if any of the monitored NFS services fails.
6. If you run the NFS monitor script, set the NFS_SERVICE_CMD variable to the full path name of the NFS monitor script.
NFS_SERVICE_CMD[0]=/etc/cmcluster/nfs/nfs.mon
The path name for the executable script does not have to be unique to each package. Every package can use the same script.
“-o fsid=” must be used to force the file system ID portion of the file handle to be the same when clusters are composed of mixed architectures such as HP Integrity servers and HP 9000 Series 800 computers over CFS. A value between 1 and 32767 may be used, but must be unique among the shared file systems. See share_nfs(1m) for detailed information. 2.
Editing the File Lock Migration Script (nfs.flm)
The File Lock Migration script, nfs.flm, handles the majority of the work involved in maintaining file lock integrity following an HA/NFS failover. The nfs.flm script includes the following configurable parameters:
• NFS_FLM_HOLDING_DIR - Name of a unique directory created in one of the shared volumes associated with this package. This directory holds copies of the /var/nfs4/v4_state files for this package.
NOTE: The file name of the NFS_FLM_SCRIPT script must be limited to 13 characters or fewer.
NOTE: The nfs.mon script uses rpcinfo calls to check the status of various processes. If the rpcbind process is not running, the rpcinfo calls time out after 75 seconds. Because 10 rpcinfo calls are attempted before failover, it takes approximately 12 minutes to detect the failure. This problem has been fixed in release versions 11.11.04 and 11.23.03.
2. You can call the nfs.
Each package must have a unique service name. The SERVICE_NAME variable in the package configuration file must match the NFS_SERVICE_NAME variable in the NFS control script. If you do not want to run the NFS monitor script, comment out the SERVICE_NAME variable:
# SERVICE_NAME nfs.monitor
6. Set the SUBNET variable to the subnet that will be monitored for the package.
SUBNET 15.13.112.0
In order to make a Serviceguard file system available to all servers, all servers must NFS-mount the file system. That way, access to the file system is not interrupted when the package fails over to an adoptive node. An adoptive node cannot access the file system through the local mount, because it would have to unmount the NFS-mounted file system before it could mount it locally. And in order to unmount the NFS-mounted file system, it would have to kill all processes using the file system.
Creating the Cluster Configuration File and Bringing Up the Cluster To create the cluster configuration file, verify the cluster and package configuration files, and run the cluster, perform the following steps: 1. Use the cmquerycl command in the following manner to create the cluster configuration file from your package configuration files. You must run this command on all nodes in the cluster: cmquerycl -v -C /etc/cmcluster/nfs/cluster.conf -n basil -n sage -n thyme 2. 3.
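The text of the remaining steps is not shown here; they typically verify, apply, and start the cluster, along the lines of the following sketch (file names match those used elsewhere in this chapter, and the exact options may differ in your environment):

cmcheckconf -k -v -C /etc/cmcluster/nfs/cluster.conf -P /etc/cmcluster/nfs/nfs1.conf -P /etc/cmcluster/nfs/nfs2.conf
cmapplyconf -v -C /etc/cmcluster/nfs/cluster.conf -P /etc/cmcluster/nfs/nfs1.conf -P /etc/cmcluster/nfs/nfs2.conf
cmruncl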
SG-CFS-MP-1    up    running    enabled    no
SG-CFS-MP-2    up    running    enabled    no
Create a directory on each server in the cluster to hold all of the configuration files (if this directory already exists, you should save the contents before continuing):
# mkdir /etc/cmcluster/nfs
The rest of the configuration depends on whether or not the cluster requires file locking (as described in “Limitations and Issues with the current CFS implementation”).
3. Edit the nfs-export.cntl script and set the HA_NFS_SCRIPT_EXTENSION to export.sh. Note that when you edit the configuration scripts referred to throughout this document, you may have to uncomment the lines as you edit them. # HA_NFS_SCRIPT_EXTENSION = "export.sh" This will set the NFS specific control script to be run by the package to hanfs.export.sh as we have named it in the copy command above. No other changes are needed in this script. 4. Edit the hanfs.export.
5. Edit the nfs-export.conf file as follows:
a. Set the PACKAGE_NAME variable to SG-NFS-XP-1 (by default this variable is set to FAILOVER)
PACKAGE_NAME    SG-NFS-XP-1
b. Change the PACKAGE_TYPE from FAILOVER to MULTI_NODE
PACKAGE_TYPE    MULTI_NODE
c. Comment out the FAILOVER_POLICY and FAILBACK_POLICY since this package will run on each server and will not fail over if a server fails
#FAILOVER_POLICY
#FAILBACK_POLICY
d.
5. You can verify the export package is running with the cmviewcl command.
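For example, the following can be run on any node to confirm that the export package is up (a minimal sketch; the output is omitted here):

# cmviewcl -v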
In hanfs.2.sh, set NFS_SERVICE_NAME[0] to nfs2.monitor and set NFS_SERVICE_CMD[0] to /etc/cmcluster/nfs/nfs2.mon. If you do not want to monitor NFS services, leave these variables commented out.
5. Edit the nfs.conf scripts (nfs1.conf and nfs2.conf) as follows:
a. Specify the package name
PACKAGE_NAME    SG-NFS1
b. In nfs2.conf set the PACKAGE_NAME to SG-NFS2
c. The PACKAGE_TYPE should be set to the default value (FAILOVER)
d. Set the NODE_NAME variables for each node that can run the package.
4.
Configuring a Serviceguard NFS failover package
Configuring a Serviceguard NFS failover package for a CFS environment is similar to configuring the package for a non-CFS environment. The main difference is that you must configure one failover package for each server that exports CFS. Use the following procedure to configure a failover package.
1. Copy the following scripts and make a separate copy for each package (one package for each server)
# cd /etc/cmcluster/nfs
2.
c. If you enable file lock migration and monitor NFS services, set the NFS_FILE_LOCK_MIGRATION and NFS_FLM_SCRIPT variables in the nfs.mon script (nfs1.mon or nfs2.mon) as they were set in the previous step
NFS_FILE_LOCK_MIGRATION=1
NFS_FLM_SCRIPT="${0%/*}/nfs1.flm"
d. Edit the corresponding nfs.flm script(s) (nfs1.flm and/or nfs2.flm) to set the holding directory. For example, in nfs1.flm
NFS_FLM_HOLDING_DIR="/cfs1/sm"
and in nfs2.flm
NFS_FLM_HOLDING_DIR="/cfs2/sm"
6. Edit the nfs.conf scripts (nfs1.
2. Verify the cluster and package configuration files on each server
# cmcheckconf -k -v -C /etc/cmcluster/cluster.conf -P /etc/cmcluster/nfs/nfs1.conf -P /etc/cmcluster/nfs/nfs2.conf
3. Verify and apply the cluster and package configuration files on a single server
# cmapplyconf -v -C /etc/cmcluster/cluster.conf -P /etc/cmcluster/nfs/nfs1.conf -P /etc/cmcluster/nfs/nfs2.conf
4.
3 Installing and Configuring Serviceguard NFS Modular Package
This chapter explains how to configure a Serviceguard NFS modular package. You must set up your Serviceguard cluster before you can configure a Serviceguard NFS modular package. For more information on setting up a Serviceguard cluster, see the Managing Serviceguard manual.
Before Creating a Serviceguard NFS Modular Package Before creating a Serviceguard NFS package, complete the following tasks: 1. Set up the Serviceguard cluster. For more information on setting up the Serviceguard cluster, see the Managing Serviceguard manual. 2. On the primary node and all adoptive nodes for the NFS package, set the NFS_SERVER variable to 1 in the /etc/rc.config.d/nfsconf file. NFS_SERVER=1 Do not configure the exported directories in the /etc/exports file.
8. Use SAM or LVM commands to set up volume groups, logical volumes, and file systems as needed for the data that is exported to clients. The names of the volume groups must be unique within the cluster, and the major and minor numbers associated with the volume groups must be the same on all nodes. In addition, the mounting points and exported file system names must be the same on all nodes.
Configuring a Serviceguard NFS Modular Package To configure a Serviceguard NFS modular package, complete the following tasks, included in this section: • “Editing the Package Configuration File (nfs.conf)” • “Configuring Server-to-Server Cross-Mounts (Optional)” • “Creating the Cluster Configuration File and Bringing Up the Cluster” Editing the Package Configuration File (nfs.conf) The nfs.conf file lists all the Serviceguard and HA-NFS related parameters.
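For example, the nfs.conf template can be generated with cmmakepkg, as shown for the CFS packages later in this chapter (the output file name is illustrative):

# cmmakepkg -m sg/all -m nfs/hanfs /etc/cmcluster/nfs/nfs.conf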
vxvm_dg vxvm_dg 8.
NOTE: The NFS parameters in the package configuration file generated on Serviceguard A.11.18 and Serviceguard A.11.19 are listed in the steps below. The parameters for Serviceguard A.11.19 include the module name (nfs/hanfs_export/ or nfs/hanfs_flm/) as a prefix.
9. Create a separate XFS variable for each NFS directory to be exported. Specify the directory name and any export options. The directories must be defined in the above mounted file system list.
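For example (the directory name is illustrative and matches the sample configurations later in this manual):
For nfs.conf generated on Serviceguard A.11.18
XFS "/hanfs/nfsu011"
For nfs.conf generated on Serviceguard A.11.19
nfs/hanfs_export/XFS "/hanfs/nfsu011"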
13. Modify the MONITOR_DAEMONS_RETRY parameter, which specifies the number of attempts to ping the rpc.statd, rpc.mountd, nfsd, rpc.pcnfsd, and file lock migration script processes before exiting. The default is 4 attempts. An example for this parameter is as follows:
For nfs.conf generated on Serviceguard A.11.18
MONITOR_DAEMONS_RETRY 4
For nfs.conf generated on Serviceguard A.11.19
nfs/hanfs_export/MONITOR_DAEMONS_RETRY 4
14.
All files in this directory are deleted after a failover. This directory must be located in the same shared volume as FLM_HOLDING_DIR. An example for this parameter is as follows: For nfs.conf generated on Serviceguard A.11.18 FLM_HOLDING_DIR NFSV4_FLM_HOLDING_DIR "/export/sm" "/export/nfsv4" For nfs.conf generated on Serviceguard A.11.19 nfs/hanfs_flm/FLM_HOLDING_DIR nfs/hanfs_flm/NFSV4_FLM_HOLDING_DIR "/export/sm" "/export/nfsv4" 17.
Figure 13 Server-to-Server Cross-Mounting The advantage of server-to-server cross-mounting is that every server has an identical view of the file systems. The disadvantage is that, on the node where a file system is locally mounted, the file system is accessed through an NFS mount, which has poorer performance than a local mount. In order to make a Serviceguard file system available to all servers, all servers must NFS-mount the file system.
In this example, nfs1 is the name that maps to the relocatable IP address of the package. It must be configured in the name service used by the server (DNS, NIS, or /etc/hosts file). If a server for the package will NFS-mount the package file systems, the client mount point (CNFS) must be different from the server location (SNFS). 4. 5. Copy the script you have just modified to the same location in all the servers that will NFS-mount the file systems in the package.
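A minimal sketch of the SNFS/CNFS entries referred to above, with an illustrative client mount point (SNFS names the exported file system reached through nfs1, and CNFS is the local NFS mount point):

SNFS[0]="nfs1:/hanfs/nfsu011"
CNFS[0]="/nfs/nfsu011"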
3. Verify the cluster and package configuration files on each node: cmcheckconf -k -v -C /etc/cmcluster/nfs/cluster.conf -P /etc/cmcluster/nfs/nfs1.conf -P /etc/cmcluster/nfs/nfs2.conf ... 4. Activate the cluster lock volume group (corresponding to the FIRST_CLUSTER_LOCK_VG value) on one node: vgchange -a y /dev/vg_nfsu01 5. Verify and apply the cluster and package configuration files: cmapplyconf -v -C /etc/cmcluster/nfs/cluster.conf -P /etc/cmcluster/nfs/nfs1.conf -P /etc/cmcluster/nfs/nfs2.conf ...
Serviceguard NFS Modular Package over CFS Packages without File Locking Each active server in the cluster needs to run an export multi-node package and an NFS failover package. An export multi-node package is a package that runs on each server in the cluster and exports all the cluster file systems. Each standby server (i.e. a server that is an adoptive node for NFS failover packages) needs to have an export multi-node package running to be able to become active in the event of a failover.
dependency_name         SG-CFS-MP-2-dep
dependency_condition    SG-CFS-MP-2=up
dependency_location     same_node
f. Comment out the service_name, service_cmd, service_restart, service_fail_fast_enabled and service_halt_timeout variables.
# service_name                  nfs.monitor
# service_cmd                   "$SGCONF/scripts/nfs/nfs_upcc.
# service_restart
# service_fail_fast_enabled
# service_halt_timeout
1. Create the nfs.conf file for each package with the cmmakepkg command. You must create one package for each server.
# cd /etc/cmcluster/nfs_modular
# cmmakepkg -m sg/all -m nfs/hanfs /etc/cmcluster/nfs_modular/nfs1.conf
# cmmakepkg -m sg/all -m nfs/hanfs /etc/cmcluster/nfs_modular/nfs2.conf
2. Edit the nfs.conf files (nfs1.conf, nfs2.conf) as follows:
a. Specify the IP address for the package and the subnet to which the IP address belongs:
ip_subnet     15.13.112.0
ip_address    15.13.114.
3. Run the failover packages.
# cmrunpkg -n thyme -v SG-NFS1
# cmrunpkg -n basil -v SG-NFS2
4.
This IP address is the relocatable IP address for the package. NFS clients that mount the file systems in the package use this IP address to identify the server. You must configure a name for this address in the DNS, NIS, or LDAP database, or in the /etc/hosts file. b. Set the exported directory in nfs1.conf as follows: For nfs1.conf generated on HP Serviceguard A.11.18: XFS “/cfs1” For nfs1.conf generated on HP Serviceguard A.11.19: nfs/hanfs_export/XFS “/cfs1” In nfs2.
dependency_name         SG-CFS-MP-2-dep
dependency_condition    SG-CFS-MP-2=UP
dependency_location     same_node
Starting a Serviceguard NFS failover package
Use the following steps to start a failover package.
1. Verify the cluster and package configuration files on each server.
# cmcheckconf -k -v -C /etc/cmcluster/cluster.conf -P \
/etc/cmcluster/nfs_modular/nfs1.conf -P /etc/cmcluster/nfs_modular/nfs2.conf
2. Verify and apply the cluster and package configuration files on a single server.
4 Migration of Serviceguard NFS Legacy Package to Serviceguard NFS Modular Package This chapter explains how to migrate from the current Serviceguard NFS Legacy package to the new Serviceguard NFS Modular package. The migration of the Legacy packages to the Modular package configuration is done using the cmmigratepkg command. The cmmigratepkg command automates the migration of a legacy package to a modular package for standard Serviceguard module parameters.
6. Run the cmmigratepkg command to create a temporary configuration file with the standard Serviceguard parameters derived using the information from the legacy package configuration.
cmmigratepkg -p pkg01 -o /etc/cmcluster/nfs/modular/param.conf
If there are any changes in the customer-defined functions area of the legacy package control script (for example, in the customer_defined_run_cmds function of the nfs1.cntl file), the cmmigratepkg command copies the changes to an external script by using the -x option.
9. There are two methods to configure Server-to-Server Cross-Mounts. See the “Configuring Server-to-Server Cross-Mounts (Optional)” (page 50) section for more information. The second method utilizes the external script generated by the -x option of the cmmigratepkg command.
10. Check the configuration of the new modular package using the cmcheckconf command.
cmcheckconf -P /etc/cmcluster/nfs/modular/pkg.conf
11. Apply the new modular package configuration using the cmapplyconf command.
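The apply step mirrors the check step; a minimal sketch using the same file name:

cmapplyconf -P /etc/cmcluster/nfs/modular/pkg.conf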
5 Sample Configurations for Legacy Package This chapter explains how to create sample configuration files for Serviceguard NFS legacy package. It gives sample cluster configuration files, package configuration files, and control scripts for the following configurations: • Example One - Three-Server Mutual Takeover: This configuration has three servers and three Serviceguard NFS packages. Each server is the primary node for one package and an adoptive node for the other two packages.
Figure 14 Three-Server Mutual Takeover Figure 15 shows the three-server mutual takeover configuration after host basil has failed and host sage has adopted pkg02. Dotted lines indicate which servers are adoptive nodes for the packages.
Figure 15 Three-Server Mutual Takeover after One Server Fails Cluster Configuration File for Three-Server Mutual Takeover This section shows the cluster configuration file (cluster.conf) for this configuration example. The comments available in the generated configuration file are not displayed in this sample. CLUSTER_NAME MutTakOvr FIRST_CLUSTER_LOCK_VG /dev/nfsu01 NODE_NAME thyme NETWORK_INTERFACE lan0 HEARTBEAT_IP 15.13.119.
NETWORK_POLLING_INTERVAL 2000000 MAX_CONFIGURED_PACKAGES 3 VOLUME_GROUP VOLUME_GROUP VOLUME_GROUP /dev/nfsu01 /dev/nfsu02 /dev/nfsu03 Package Configuration File for pkg01 This section shows the package configuration file (nfs1.conf) for the package pkg01 in this sample configuration. The comments available in the generated configuration file are not displayed in this sample.
The hanfs.sh Control Script This section shows the NFS control script (hanfs1.sh) for the pkg01 package in this sample configuration. This example includes only the user-configured part of the script; the executable part of the script and most of the comments are omitted. This example does not enable the File Lock Migration feature. XFS[0]=/hanfs/nfsu011 NFS_SERVICE_NAME[0]="nfs1.monitor" NFS_SERVICE_CMD[0]="/etc/cmcluster/nfs/nfs.mon" NFS_FILE_LOCK_MIGRATION=0 NFS_FLM_SCRIPT="${0%/*}/nfs.
IP[0]=15.13.112.244 SUBNET[0]=15.13.112.0 The hanfs.sh Control Script This section shows the NFS control script (hanfs2.sh) for the pkg02 package in this sample configuration. This example includes only the user-configured part of the script; the executable part of the script and most of the comments are omitted. This example does not enable the File Lock Migration feature. XFS[0]=/hanfs/nfsu021 NFS_SERVICE_NAME[0]="nfs2.monitor" NFS_SERVICE_CMD[0]="/etc/cmcluster/nfs/nfs.
LV[0]=/dev/nfsu03/lvol1; FS[0]=/hanfs/nfsu031; FS_MOUNT_OPT[0]="-o rw"
VXVOL="vxvol -g \$DiskGroup startall" #Default
FS_UMOUNT_COUNT=1
FS_MOUNT_RETRY_COUNT=0
IP[0]=15.13.114.245
SUBNET[0]=15.13.112.0
The hanfs.sh Control Script
This section shows the NFS control script (hanfs3.sh) for the pkg03 package in this sample configuration. This example includes only the user-configured part of the script; the executable part of the script and most of the comments are omitted.
Figure 17 One Adoptive Node for Two Packages after One Server Fails This sample configuration also enables the File Lock Migration feature. Cluster Configuration File for Adoptive Node for Two Packages with File Lock Migration This section shows the cluster configuration file (cluster.conf) for this configuration example. The comments available in the generated configuration file are not displayed in this sample.
VOLUME_GROUP VOLUME_GROUP /dev/nfsu01 /dev/nfsu02 Package Configuration File for pkg01 This section shows the package configuration file (nfs1.conf) for the package pkg01 in this sample configuration. The comments available in the generated configuration file are not displayed in this sample.
The function customer_defined_run_cmds calls the cmmodpkg command with the package control option (-d). This command prevents the host that is running pkg01 from adopting pkg02. The ampersand (&) causes the cmmodpkg command to run in the background. It must run in the background to allow the control script to complete. There is a short time, after one primary node has failed but before the cmmodpkg command has executed, when the other primary node can fail and the adoptive node will adopt its package.
AUTO_RUN YES
LOCAL_LAN_FAILOVER_ALLOWED YES
NODE_FAIL_FAST_ENABLED NO
RUN_SCRIPT             /etc/cmcluster/nfs/nfs2.cntl
RUN_SCRIPT_TIMEOUT     NO_TIMEOUT
HALT_SCRIPT            /etc/cmcluster/nfs/nfs2.cntl
HALT_SCRIPT_TIMEOUT    NO_TIMEOUT
SERVICE_NAME nfs2.monitor
SERVICE_FAIL_FAST_ENABLED NO
SERVICE_HALT_TIMEOUT 300
SUBNET 15.13.112.0
NFS Control Scripts for pkg02
The nfs.cntl Control Script
This section shows the NFS control script (nfs2.cntl) for the pkg02 package in this sample configuration. Only the user-configured part of the script is shown; the executable part of the script and most of the comments are omitted.
of the script and most of the comments are omitted. This example enables the File Lock Migration feature. XFS[0]=/hanfs/nfsu021 NFS_SERVICE_NAME[0]="nfs2.monitor" NFS_SERVICE_CMD[0]="/etc/cmcluster/nfs/nfs2.mon" NFS_FILE_LOCK_MIGRATION=1 NFS_FLM_SCRIPT="${0%/*}/nfs2.flm" NFS File Lock Migration and Monitor Scripts for pkg02 The nfs.flm Script This section shows the NFS File Lock Migration (nfs2.flm) script for the pkg02 package in this sample configuration.
Figure 18 Cascading Failover with Three Servers Figure 19 shows the cascading failover configuration after host thyme has failed. Host basil is the first adoptive node configured for pkg01, and host sage is the first adoptive node configured for pkg02. Figure 19 Cascading Failover with Three Servers after One Server Fails Cluster Configuration File for Three-Server Cascading Failover This section shows the cluster configuration file (cluster.conf) for this configuration example.
CLUSTER_NAME Cascading FIRST_CLUSTER_LOCK_VG /dev/nfsu01 NODE_NAME thyme NETWORK_INTERFACE lan0 HEARTBEAT_IP 15.13.119.146 NETWORK_INTERFACE lan1 FIRST_CLUSTER_LOCK_PV /dev/dsk/c0t1d0 NODE_NAME basil NETWORK_INTERFACE lan0 HEARTBEAT_IP 15.13.113.168 FIRST_CLUSTER_LOCK_PV /dev/dsk/c1t1d0 NODE_NAME sage NETWORK_INTERFACE lan0 HEARTBEAT_IP 15.13.115.
NFS Control Scripts for pkg01 The nfs.cntl Control Script This section shows the NFS control script (nfs1.cntl) for the pkg01 package in this sample configuration. Only the user-configured part of the script is shown; the executable part of the script and most of the comments are omitted.
HALT_SCRIPT            /etc/cmcluster/nfs/nfs2.cntl
HALT_SCRIPT_TIMEOUT    NO_TIMEOUT
SERVICE_NAME nfs2.monitor
SERVICE_FAIL_FAST_ENABLED NO
SERVICE_HALT_TIMEOUT 300
SUBNET 15.13.112.0
NFS Control Scripts for pkg02
The nfs.cntl Control Script
This section shows the NFS control script (nfs2.cntl) for the pkg02 package in this sample configuration. Only the user-configured part of the script is shown; the executable part of the script and most of the comments are omitted.
Figure 20 Two Servers with NFS Cross-Mounts Figure 21 shows two servers with NFS cross-mounted file systems after server thyme has failed. The NFS mounts on server basil are not interrupted.
Figure 21 Two Servers with NFS Cross-Mounts after One Server Fails Cluster Configuration File for Two-Server NFS Cross-Mount This section shows the cluster configuration file (cluster.conf) for this configuration example. The comments available in the generated configuration file are not displayed in this sample. CLUSTER_NAME XMnt FIRST_CLUSTER_LOCK_VG /dev/nfsu01 NODE_NAME thyme NETWORK_INTERFACE lan0 HEARTBEAT_IP 15.13.119.
Package Configuration File for pkg01 This section shows the package configuration file (nfs1.conf) for the package pkg01 in this sample configuration. The comments available in the generated configuration file are not displayed in this sample.
The function customer_defined_run_cmds calls a script called nfs1_xmnt. This script NFS-mounts the file system exported by the package pkg01. If you configured the file system in the /etc/fstab file, the package might not be active yet when the servers tried to mount the file system at system boot. By configuring the NFS control script to NFS-mount the file system, you ensure that the package is active before the mount command is invoked.
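A minimal sketch of such a script, assuming an illustrative client mount point and that nfs1 is the name that maps to the relocatable IP address of pkg01:

#!/usr/bin/sh
# NFS-mount the file system exported by pkg01 on this server.
/usr/sbin/mount -F nfs nfs1:/hanfs/nfsu011 /nfs/nfsu011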
NODE_FAIL_FAST_ENABLED NO
RUN_SCRIPT             /etc/cmcluster/nfs/nfs2.cntl
RUN_SCRIPT_TIMEOUT     NO_TIMEOUT
HALT_SCRIPT            /etc/cmcluster/nfs/nfs2.cntl
HALT_SCRIPT_TIMEOUT    NO_TIMEOUT
SERVICE_NAME nfs2.monitor
SERVICE_FAIL_FAST_ENABLED NO
SERVICE_HALT_TIMEOUT 300
SUBNET 15.13.112.0
NFS Control Scripts for pkg02
The nfs.cntl Control Script
This section shows the NFS control script (nfs2.cntl) for the pkg02 package in this sample configuration.
The client mount point, specified in the CNFS[0] variable, must be different from the location of the file system on the server (SNFS[0]). The hanfs.sh Control Script This section shows the NFS control script (hanfs2.sh) for the pkg02 package in this sample configuration. This example includes only the user-configured part of the script; the executable part of the script and most of the comments are omitted. This example does not enable the File Lock Migration feature.
6 Sample Configurations for Modular Package This chapter explains how to create sample configuration files for Serviceguard NFS modular package. It gives sample cluster configuration files and package configuration files for the following configurations: • Example One - Three-Server Mutual Takeover: This configuration has three servers and three Serviceguard NFS packages. Each server is the primary node for one package and an adoptive node for the other two packages.
Figure 22 Three-Server Mutual Takeover Figure 23 shows the three-server mutual takeover configuration after host basil has failed and host sage has adopted pkg02. Dotted lines indicate which servers are adoptive nodes for the packages.
Figure 23 Three-Server Mutual Takeover after One Server Fails Cluster Configuration File for Three-Server Mutual Takeover This section shows the cluster configuration file (cluster.conf) for this configuration example. The comments available in the generated configuration file are not displayed in this sample. CLUSTER_NAME MutTakOvr FIRST_CLUSTER_LOCK_VG /dev/nfsu01 NODE_NAME thyme NETWORK_INTERFACE lan0 HEARTBEAT_IP 15.13.119.
NETWORK_POLLING_INTERVAL 2000000 MAX_CONFIGURED_PACKAGES 3 VOLUME_GROUP VOLUME_GROUP VOLUME_GROUP /dev/nfsu01 /dev/nfsu02 /dev/nfsu03 Package Configuration File for pkg01 This section shows the package configuration file (nfs1.conf) for the package pkg01 in this sample configuration. The comments available in the generated configuration file are not displayed in this sample.
failover_policy               configured_node
failback_policy               manual
node_name                     basil
node_name                     sage
node_name                     thyme
auto_run                      yes
local_lan_failover_allowed    yes
node_fail_fast_enabled        no
service_name                  nfs2.
service_fail_fast_enabled
service_halt_timeout
cvm_activation_cmd       "vxdg -g \$DiskGroup set activation=exclusivewrite"
vg                       nfsu03
fs_name                  /dev/nfsu03/lvol1
fs_directory             /hanfs/nfsu031
fs_mount_opt             "-o rw"
vxvol_cmd                "vxvol -g \$DiskGroup startall" #Default
fs_umount_retry_count    1
fs_mount_retry_count     0
ip_address               15.13.114.245
ip_subnet                15.13.112.0
XFS                      "/hanfs/nfsu031"
FILE_LOCK_MIGRATION      0
Example Two - One Adoptive Node for Two Packages with File Lock Migration
This configuration has two packages, each owned by a different server.
Figure 25 One Adoptive Node for Two Packages after One Server Fails This sample configuration also enables the File Lock Migration feature. Cluster Configuration File for Adoptive Node for Two Packages with File Lock Migration This section shows the cluster configuration file (cluster.conf) for this configuration example. The comments available in the generated configuration file are not displayed in this sample.
VOLUME_GROUP VOLUME_GROUP /dev/nfsu01 /dev/nfsu02 Package Configuration File for pkg01 This section shows the package configuration file (nfs1.conf) for the package pkg01 in this sample configuration. The comments available in the generated configuration file are not displayed in this sample.
function start_command { cmmodpkg -d -n 'hostname' pkg02 & return 0 } The function start_command in the external script calls the cmmodpkg command with the package control option (-d). This command prevents the host that is running pkg01 from adopting pkg02. The ampersand (&) causes the cmmodpkg command to run in the background. It must run in the background to allow the control script to complete.
XFS "/hanfs/nfsu021" FILE_LOCK_MIGRATION FLM_HOLDING_DIR 1 "/hanfs/nfsu021/sm" PROPAGATE_INTERVAL 5 The external script file for this package can be created in any package specific location using the external script template file. cp /etc/cmcluster/examples/external_script.template /pkg02_location/pkg02_ext Also, specify the external script file location in the package configuration file (nfs2.
Figure 26 Cascading Failover with Three Servers Figure 27 shows the cascading failover configuration after host thyme has failed. Host basil is the first adoptive node configured for pkg01, and host sage is the first adoptive node configured for pkg02. Figure 27 Cascading Failover with Three Servers after One Server Fails Cluster Configuration File for Three-Server Cascading Failover This section shows the cluster configuration file (cluster.conf) for this configuration example.
CLUSTER_NAME Cascading FIRST_CLUSTER_LOCK_VG /dev/nfsu01 NODE_NAME thyme NETWORK_INTERFACE lan0 HEARTBEAT_IP 15.13.119.146 NETWORK_INTERFACE lan1 FIRST_CLUSTER_LOCK_PV /dev/dsk/c0t1d0 NODE_NAME basil NETWORK_INTERFACE lan0 HEARTBEAT_IP 15.13.113.168 FIRST_CLUSTER_LOCK_PV /dev/dsk/c1t1d0 NODE_NAME sage NETWORK_INTERFACE lan0 HEARTBEAT_IP 15.13.115.
fs_name                  /dev/nfsu01/lvol1
fs_directory             /hanfs/nfsu011
fs_mount_opt             "-o rw"
vxvol_cmd                "vxvol -g \$DiskGroup startall" #Default
fs_umount_retry_count    1
fs_mount_retry_count     0
ip_address               15.13.114.243
ip_subnet                15.13.112.0
XFS                      "/hanfs/nfsu011"
FILE_LOCK_MIGRATION      0
Package Configuration File for pkg02
This section shows the package configuration file (nfs2.conf) for the package pkg02 in this sample configuration.
Example Four - Two Servers with NFS Cross-Mounts This configuration has two servers and two packages. The primary node for each package NFS-mounts the file systems from its own package and the other package. Figure 28 illustrates this configuration. If one server fails, the other server adopts its package. The NFS mounts are not interrupted when a package fails over. Figure 28 Two Servers with NFS Cross-Mounts Figure 29 shows two servers with NFS cross-mounted file systems after server thyme has failed.
Figure 29 Two Servers with NFS Cross-Mounts after One Server Fails Cluster Configuration File for Two-Server NFS Cross-Mount This section shows the cluster configuration file (cluster.conf) for this configuration example. The comments available in the generated configuration file are not displayed in this sample. CLUSTER_NAME XMnt FIRST_CLUSTER_LOCK_VG /dev/nfsu01 NODE_NAME thyme NETWORK_INTERFACE lan0 HEARTBEAT_IP 15.13.119.
Package Configuration File for pkg01 This section shows the package configuration file (nfs1.conf) for the package pkg01 in this sample configuration. The comments available in the generated configuration file are not displayed in this sample. package_name package_type pkg01 failover failover_policy failback_policy configured_node manual node_name node_name thyme basil auto_run yes local_lan_failover_allowed yes node_fail_fast_enabled no service_name nfs1.
The external script invokes another script called nfs1_xmnt. This script NFS-mounts the file system exported by the package pkg01. If you configured the file system in the /etc/fstab file, the package might not be active yet when the servers tried to mount the file system at system boot. By configuring the external script to NFS-mount the file system, you ensure that the package is active before the mount command is invoked.
vxvol_cmd "vxvol -g \$DiskGroup startall" #Default fs_umount_retry_count fs_mount_retry_count ip_address ip_subnet XFS 1 0 15.13.114.244 15.13.112.0 "/hanfs/nfsu021" FILE_LOCK_MIGRATION 0 The external script file for this package can be created in any package specific location using the external script template file. cp /etc/cmcluster/examples/external_script.template /pkg02_location/pkg02_ext Also, specify the external script file location in the package configuration file (nfs2.