Serviceguard NFS Toolkit A.11.31.02, A.11.11.06, and A.11.23.05
© Copyright 2001-2008 Hewlett-Packard Development Company, L.P.

Legal Notices

The information in this document is subject to change without notice. Hewlett-Packard makes no warranty of any kind with regard to this manual, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose.
Table of Contents

1 Overview of Serviceguard NFS
    Limitations of Serviceguard NFS
    Overview of Serviceguard NFS Toolkit A.11.31.02 with Serviceguard A.11.18 and Veritas Cluster File System Support
    Limitations
...
Example One - Three-Server Mutual Takeover
    Cluster Configuration File for Three-Server Mutual Takeover
    Package Configuration File for pkg01
    NFS Control Scripts for pkg01
        The nfs.cntl Control Script
...
Example Four - Two Servers with NFS Cross-Mounts
    NFS Control Scripts for pkg01
        The nfs.cntl Control Script
        The hanfs.sh Control Script
    Package Configuration File for pkg02
    NFS Control Scripts for pkg02
...
List of Figures
1-1 Simple Failover to an Idle NFS Server
1-2 Failover from One Active NFS Server to Another
1-3 A Host Configured as Adoptive Node for Multiple Packages
1-4 Cascading Failover with Three Adoptive Nodes
1-5 Server-to-Server Cross Mounting
2-1 Server-to-Server Cross-Mounting
3-1 Three-Server Mutual Takeover
3-2 Three-Server Mutual Takeover after One Server Fails
3-3 One Adoptive Node for Two Packages
3-4 One Adoptive Node for Two Packages after One Server Fails
3-5 Cascading Failover with Three Servers
3-6 Cascading Failover with Three Servers after One Server Fails
3-7 Two Servers with NFS Cross-Mounts
3-8 Two Servers with NFS Cross-Mounts after One Server Fails
1 Overview of Serviceguard NFS

Serviceguard NFS is a toolkit that enables you to use Serviceguard to set up highly available NFS servers. You must set up a Serviceguard cluster before you can set up highly available NFS. For instructions on setting up a Serviceguard cluster, see the Managing Serviceguard manual. Serviceguard NFS is a separately purchased set of configuration files and control scripts, which you customize for your specific needs.
NOTE: Beginning with Serviceguard NFS A.11.11.03 and A.11.23.02, you can address this limitation by enabling the File Lock Migration feature (see “Overview of the NFS File Lock Migration Feature” (page 11)). For HP-UX 11i v2 and 11i v3, the feature functions properly without a patch.
• If a server is configured to use NFS over TCP and the client is the same machine as the server (which results in a loopback NFS mount), the client may hang for about 5 minutes if the package is moved to another node.
specific control shell script, hanfs.sh, which is associated with a package. For example, if you set the HA_NFS_SCRIPT_EXTENSION variable to hapkg or hapkg.sh, the NFS-specific control script executed by the package corresponding to the nfs.cntl file is hanfs.hapkg.sh. The default name of the shell script for the HA_NFS_SCRIPT_EXTENSION variable is hanfs.sh.

Limitations

HA Serviceguard NFS Toolkit A.11.31.02 does not support the modular package feature introduced with Serviceguard A.11.18.
• Any client that holds NFS file locks against files residing in the HA/NFS package (transitioned between servers) sends reclaim requests to the adoptive node (where the exported file systems currently reside) and reclaims its locks.
• After rpc.statd sends the crash recovery notification messages, the SM entries in the package holding directory are removed, and the nfs.flm script is started on the adoptive node.
Serviceguard A.11.17 is not available on HP-UX 11i v1 systems, so Serviceguard CFS support is only applicable to HP-UX 11i v2.

Limitations

The following is a list of limitations when using Serviceguard NFS Toolkit A.11.23.05 with Serviceguard A.11.17:
• Serviceguard A.11.17 introduces a new MULTI_NODE package type, which is not supported by Serviceguard NFS Toolkit. The only supported package type is FAILOVER.
• Serviceguard A.11.17 provides a new package configuration file template.
Figure 1-1 Simple Failover to an Idle NFS Server Node_A is the primary node for NFS server package Pkg_1. When Node_A fails, Node_B adopts Pkg_1. This means that Node_B locally mounts the file systems associated with Pkg_1 and exports them. Both Node_A and Node_B must have access to the disks that hold the file systems for Pkg_1. Failover from One Active NFS Server to Another Figure 1-2 shows a failover from one active NFS server node to another active NFS server node.
Figure 1-2 Failover from One Active NFS Server to Another In Figure 1-2, Node_A is the primary node for Pkg_1, and Node_B is the primary node for Pkg_2. When Node_A fails, Node_B adopts Pkg_1 and becomes the server for both Pkg_1 and Pkg_2. A Host Configured as Adoptive Node for Multiple Packages Figure 1-3 shows a three-node configuration where one node is the adoptive node for packages on both of the other nodes. If either Node_A or Node_C fails, Node_B adopts the NFS server package from that node.
Figure 1-3 A Host Configured as Adoptive Node for Multiple Packages

When Node_A fails, Node_B becomes the server for Pkg_1. If Node_C fails, Node_B becomes the server for Pkg_2. Alternatively, you can set the package control option in the control script, nfs.cntl, to prevent Node_B from adopting more than one package at a time. With the package control option, Node_B may adopt the package of the first node that fails, but if the second node fails, Node_B will not adopt its package.
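A restriction of this kind is typically built on the cmmodpkg command (the -d option appears in the sample control scripts later in this manual). The following is an illustrative sketch only, assuming the option is implemented from the package's customer-defined run commands; the package name is hypothetical:

# When this node adopts pkg01, keep pkg02 from also failing over here
cmmodpkg -d -n `hostname` pkg02

The corresponding re-enable (cmmodpkg -e) would be issued when the package halts.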
Figure 1-4 Cascading Failover with Three Adoptive Nodes Server-to-Server Cross Mounting Two NFS server nodes may NFS-mount each other’s file systems and still act as adoptive nodes for each other’s NFS server packages. Figure 1-5 illustrates this configuration.
Figure 1-5 Server-to-Server Cross Mounting

The advantage of server-to-server cross-mounting is that every server has an identical view of the file systems. The disadvantage is that, even on the node where a file system is locally mounted, the file system is accessed through an NFS mount, which has poorer performance than a local mount. Each node NFS-mounts the file systems for both packages. If Node_A fails, Node_B mounts the file system for Pkg_1, and the NFS mounts are not interrupted.
How the Control and Monitor Scripts Work

As with all Serviceguard packages, the control script starts and stops the NFS package and determines how the package operates when it is available on a particular node. The 11i v1 and 11i v2 control script (hanfs.sh) contains three sets of code that operate depending on the parameter (start, stop, or file_lock_migration) with which you call the script. On 11.0, there are two sets of code, which you can call with the start or stop parameter.
Halting the NFS Services

When called with the stop parameter, the control script does the following:
• Removes the package IP address from the LAN card on the current node.
• Un-exports all file systems associated with the package so that they can no longer be NFS-mounted by clients.
• Halts the monitor process.
• Halts the File Lock Migration synchronization script if you enable the File Lock Migration feature (available on 11i v1 and 11i v2).
• Halts the rpc.statd and rpc.lockd daemons and restarts them.
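The halt sequence is normally driven by Serviceguard rather than run by hand. For example (a sketch; the package name is illustrative), halting a package causes Serviceguard to run its control script with the stop parameter on the node where the package is running:

cmhaltpkg pkg01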
NOTE: The file name of the NFS_FLM_SCRIPT script must be limited to 13 characters or fewer.

NOTE: The nfs.mon script uses rpcinfo calls to check the status of various processes. If the rpcbind process is not running, the rpcinfo calls time out after 75 seconds. Because 10 rpcinfo calls are attempted before failover, it takes approximately 12 minutes to detect the failure. This problem has been fixed in release versions 11.11.04 and 11.23.03.

The default NFS control script is hanfs.sh.
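You can run the same kind of check by hand. The following is a sketch only; it uses the standard RPC program names from /etc/rpc and is not necessarily the exact set of checks nfs.mon performs:

rpcinfo -u localhost nfs       # nfsd daemons
rpcinfo -u localhost mountd    # rpc.mountd
rpcinfo -u localhost status    # rpc.statd
rpcinfo -u localhost nlockmgr  # rpc.lockd

If rpcbind is down, each of these calls hangs until the 75-second timeout described in the note above.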
2 Installing and Configuring Serviceguard NFS

This chapter explains how to configure Serviceguard NFS. You must set up your Serviceguard cluster before you can configure Serviceguard NFS. For instructions on setting up a Serviceguard cluster, see the Managing Serviceguard manual.
You can create these two files by running the cmmakepkg command. Perform the following steps to set up the directory for configuring Serviceguard NFS:

NOTE: You may want to save any existing Serviceguard NFS configuration files before executing these steps.

1. Run the following command to create the package configuration template file:
   cmmakepkg -p /opt/cmcluster/nfs/nfs.conf
2. Run the following command to create the package control template file:
   cmmakepkg -s /opt/cmcluster/nfs/nfs.cntl
1. Enter /usr/sbin/setoncenv NFS_TCP 0 at the command line to set the NFS_TCP variable in /etc/rc.config.d/nfsconf to 0.
2. Stop the NFS client with /sbin/init.d/nfs.client stop.
3. Stop the NFS server with /sbin/init.d/nfs.server stop.
4. Start the NFS server with /sbin/init.d/nfs.server start.
5. Start the NFS client with /sbin/init.d/nfs.client start.

After completing the preceding procedure, NFS will establish only UDP connections on HP-UX 11.0.
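Taken together, the sequence looks like this when run as root (a direct transcription of the steps above):

/usr/sbin/setoncenv NFS_TCP 0
/sbin/init.d/nfs.client stop
/sbin/init.d/nfs.server stop
/sbin/init.d/nfs.server start
/sbin/init.d/nfs.client start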
variable is set to 1, and you run the NFS monitor script, your NFS package will fail over to an adoptive node if rpc.mountd fails. 5. On the primary node and all adoptive nodes for the NFS package, set the NUM_NFSD variable in the /etc/rc.config.d/nfsconf file to the number of nfsd daemons required to support all the NFS packages that could run on that node at once. It is better to run too many nfsd processes than too few.
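For example, a node that is primary for one package and adoptive for two others might be sized like this in /etc/rc.config.d/nfsconf (the value 16 is illustrative only; choose a number large enough for every package the node could host at once):

NUM_NFSD=16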
adoptive nodes, or use NIS to manage the passwd and group databases. For information on configuring NIS, see the NFS Services Administrator’s Guide.
10. Create an entry for the name of the package in the DNS or NIS name resolution files, or in /etc/hosts, so that users will mount the exported file systems from the correct node. This entry maps the package name to the package’s relocatable IP address (see the example following this list).
11. Decide whether to place executables locally on each client or on the NFS server.
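For example, an /etc/hosts entry for a package might look like the following. The values are illustrative, borrowed from the samples in Chapter 3; the name pkg01 stands for whatever name you chose for the package:

# /etc/hosts: relocatable package IP mapped to the package name
15.13.114.243   pkg01

A client then mounts from the package name rather than from a physical host, so a failover is transparent to the mount:

mount -F nfs pkg01:/hanfs/nfsu011 /nfs/nfsu011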
• “Editing the NFS Monitor Script (nfs.mon)”
• “Editing the Package Configuration File (nfs.conf)”
• “Configuring Server-to-Server Cross-Mounts (Optional)”
• “Creating the Cluster Configuration File and Bringing Up the Cluster”

Copying the Template Files

If you will run only one Serviceguard NFS package in your Serviceguard cluster, you do not have to copy the template files.
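If you will run more than one package, copy each template under a package-specific name. A sketch, assuming the templates are in /opt/cmcluster/nfs, your package directory is /etc/cmcluster/nfs, and the first package's files are named nfs1.* and hanfs1.sh as in the Chapter 3 samples (the exact set of files varies by toolkit version):

cd /opt/cmcluster/nfs
cp nfs.conf  /etc/cmcluster/nfs/nfs1.conf
cp nfs.cntl  /etc/cmcluster/nfs/nfs1.cntl
cp hanfs.sh  /etc/cmcluster/nfs/hanfs1.sh
cp nfs.mon   /etc/cmcluster/nfs/nfs1.mon
cp nfs.flm   /etc/cmcluster/nfs/nfs1.flm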
Editing nfs.cntl for NFS Toolkit A.11.00.05, A.11.11.02 (or above), and A.11.23.01 (or above)

Starting with Serviceguard A.11.13, a package can have LVM volume groups, CVM disk groups, and VxVM disk groups. Example steps:
1. Create a separate VG[n] variable for each LVM volume group that is used by the package:
   VG[0]=/dev/vg01
   VG[1]=/dev/vg02
   ...
2. Create a separate VXVM_DG[n] variable for each VxVM disk group that is used by the package:
   VXVM_DG[0]=dg01
   VXVM_DG[1]=dg02
   ...
3. Create a separate CVM_DG[n] variable for each CVM disk group that is used by the package.
For example, if you set the HA_NFS_SCRIPT_EXTENSION variable to hapkg or hapkg.sh, the NFS control script corresponding to this nfs.cntl file will be hanfs.hapkg.sh. The default shell script name for this variable is hanfs.sh.
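In nfs.cntl this is a single assignment; a sketch, with an illustrative extension value:

# Run this package's NFS logic from hanfs.hapkg.sh instead of hanfs.sh
HA_NFS_SCRIPT_EXTENSION="hapkg"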
LV[0]=/dev/vg01/lvol1; FS[0]=/ha_root
LV[1]=/dev/vg01/lvol2; FS[1]=/users/scaf
LV[2]=/dev/vg02/lvol1; FS[2]=/ha_data
3. Create a separate XFS[n] variable for each NFS directory to be exported. Specify the directory name and any export options.
   XFS[0]="/ha_root"
   XFS[1]="/users/scaf"
   XFS[2]="-o ro /ha_data"
Do not configure these exported directories in the /etc/exports file. When an NFS server boots up, it attempts to export all file systems in its /etc/exports file. If those file systems are not currently present on the NFS server node, the node cannot boot properly. This happens if the server is an adoptive node for a file system, and the file system is available on the server only after failover of the primary node.
can run on the same node without any problems, and if a package fails over, only the instance associated with that package is killed. If you do not want to run the NFS monitor script, comment out the NFS_SERVICE_NAME and NFS_SERVICE_CMD variables:

# NFS_SERVICE_NAME[0]=nfs.monitor
# NFS_SERVICE_CMD[0]=/etc/cmcluster/nfs/nfs.mon

By default, the NFS_SERVICE_NAME and NFS_SERVICE_CMD variables are commented out, and the NFS monitor script is not run.
Do not configure these exported directories in the /etc/exports file. When an NFS server boots up, it attempts to export all file systems in its /etc/exports file. If those file systems are not currently present on the NFS server node, the node cannot boot properly. This happens if the server is an adoptive node for a file system, and the file system is available on the server only after failover of the primary node.
default, this is set to nfs.flm. You must assign a unique name to this script in every HA/NFS package in the cluster (for example, nfs1.flm, nfs2.flm, and so on):

NFS_FLM_SCRIPT="${0%/*}/nfs1.flm"

If you wish to monitor the File Lock Migration script, then you must also set the NFS_FILE_LOCK_MIGRATION and NFS_FLM_SCRIPT variables in the NFS monitor script. If you enable File Lock Migration, then you can configure the File Lock Migration script (see “Editing the File Lock Migration Script (nfs.flm)”).
NOTE: If you enable the File Lock Migration feature, an NFS client (or group of clients) may hit a corner case of requesting a file lock on the HA/NFS server and not receiving a crash recovery notification message when the HA/NFS package migrates to an adoptive node.
NOTE: The file name of the NFS_FLM_SCRIPT script must be limited to 13 characters or fewer.

NOTE: The nfs.mon script uses rpcinfo calls to check the status of various processes. If the rpcbind process is not running, the rpcinfo calls time out after 75 seconds. Because 10 rpcinfo calls are attempted before failover, it takes approximately 12 minutes to detect the failure. This problem has been fixed in release versions 11.11.04 and 11.23.03.
2. You can call the nfs.mon script
NODE_NAME thyme
NODE_NAME basil
NODE_NAME sage

4. Set the RUN_SCRIPT and HALT_SCRIPT variables to the full path name of the control script (/etc/cmcluster/nfs/nfs.cntl or whatever you have renamed it). You do not have to specify a timeout for either script.

RUN_SCRIPT /etc/cmcluster/nfs/nfs1.cntl
RUN_SCRIPT_TIMEOUT NO_TIMEOUT
HALT_SCRIPT /etc/cmcluster/nfs/nfs1.cntl
HALT_SCRIPT_TIMEOUT NO_TIMEOUT
Figure 2-1 Server-to-Server Cross-Mounting The advantage of server-to-server cross-mounting is that every server has an identical view of the file systems. The disadvantage is that, on the node where a file system is locally mounted, the file system is accessed through an NFS mount, which has poorer performance than a local mount. In order to make a Serviceguard file system available to all servers, all servers must NFS-mount the file system.
file system before it could mount it locally. And in order to unmount the NFS-mounted file system, it would have to kill all processes using the file system.
Follow these steps to set up an NFS package with file systems that are NFS-mounted by Serviceguard NFS servers:
1. Make a copy of the /etc/cmcluster/nfs/nfs_xmnt script:
   cd /etc/cmcluster/nfs
   cp nfs_xmnt nfs1_xmnt
2. In the copy, set the SNFS[n] and CNFS[n] variables for each cross-mounted file system, as shown in the sketch below.
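A sketch of the step 2 edit; the values are illustrative, modeled on Example Four in Chapter 3 (the server location goes in SNFS[n], the client mount point in CNFS[n], and the two must differ):

SNFS[0]="15.13.114.243:/hanfs/nfsu011"; CNFS[0]="/nfs/nfsu011"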
For an example of a configuration with cross-mounted servers, see “Example Four - Two Servers with NFS Cross-Mounts” (page 61).

Creating the Cluster Configuration File and Bringing Up the Cluster

To create the cluster configuration file, verify the cluster and package configuration files, and run the cluster, perform the following steps:
1. Use the cmquerycl command in the following manner to create the cluster configuration file from your package configuration files.
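The exact command lines depend on your node and file names. A sketch using the names from the Chapter 3 samples; cmcheckconf, cmapplyconf, and cmruncl cover the verify and run steps of this procedure:

cmquerycl -v -C /etc/cmcluster/cluster.conf -n thyme -n basil -n sage
cmcheckconf -C /etc/cmcluster/cluster.conf -P /etc/cmcluster/nfs/nfs1.conf
cmapplyconf -C /etc/cmcluster/cluster.conf -P /etc/cmcluster/nfs/nfs1.conf
cmruncl -v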
3 Sample Configurations

This chapter gives sample cluster configuration files, package configuration files, and control scripts for the following configurations:
• Example One - Three-Server Mutual Takeover: This configuration has three servers and three Serviceguard NFS packages. Each server is the primary node for one package and an adoptive node for the other two packages.
• Example Two - One Adoptive Node for Two Packages with File Lock Migration.
• Example Three - Cascading Failover with Three Servers.
• Example Four - Two Servers with NFS Cross-Mounts.
Figure 3-1 Three-Server Mutual Takeover Figure 3-2 shows the three-server mutual takeover configuration after host basil has failed and host sage has adopted pkg02. Dotted lines indicate which servers are adoptive nodes for the packages.
Figure 3-2 Three-Server Mutual Takeover after One Server Fails

Cluster Configuration File for Three-Server Mutual Takeover

This section shows the cluster configuration file (cluster.conf) for this configuration example. The comments are not shown.

CLUSTER_NAME MutTakOvr
FIRST_CLUSTER_LOCK_VG /dev/nfsu01

NODE_NAME thyme
NETWORK_INTERFACE lan0
HEARTBEAT_IP 15.13.119.146

NODE_NAME basil
NETWORK_INTERFACE lan0
HEARTBEAT_IP 15.13.113.168
FIRST_CLUSTER_LOCK_PV /dev/dsk/c1t1d0

NODE_NAME sage
NETWORK_INTERFACE lan0
HEARTBEAT_IP 15.13.115.
SUBNET 15.13.112.0 NFS Control Scripts for pkg01 The nfs.cntl Control Script This section shows the NFS control script (nfs1.cntl) for the pkg01 package in this sample configuration. Only the user-configured part of the script is shown; the executable part of the script and most of the comments are omitted.
NODE_NAME basil
NODE_NAME sage
NODE_NAME thyme

AUTO_RUN YES
LOCAL_LAN_FAILOVER_ALLOWED YES
NODE_FAIL_FAST_ENABLED NO

RUN_SCRIPT /etc/cmcluster/nfs/nfs2.cntl
RUN_SCRIPT_TIMEOUT NO_TIMEOUT
HALT_SCRIPT /etc/cmcluster/nfs/nfs2.cntl
HALT_SCRIPT_TIMEOUT NO_TIMEOUT

SERVICE_NAME nfs2.monitor
SERVICE_FAIL_FAST_ENABLED NO
SERVICE_HALT_TIMEOUT 300

SUBNET 15.13.112.0

NFS Control Scripts for pkg02

The nfs.cntl Control Script

This section shows the NFS control script (nfs2.cntl) for the pkg02 package in this sample configuration.
NFS_FILE_LOCK_MIGRATION=0
NFS_FLM_SCRIPT="${0%/*}/nfs.flm"

Package Configuration File for pkg03

This section shows the package configuration file (nfs3.conf) for the package pkg03 in this sample configuration. The comments are not shown.
FS_UMOUNT_COUNT=1
FS_MOUNT_RETRY_COUNT=0
IP[0]=15.13.114.245
SUBNET[0]=15.13.112.0

The hanfs.sh Control Script

This section shows the NFS control script (hanfs3.sh) for the pkg03 package in this sample configuration. This example includes only the user-configured part of the script; the executable part of the script and most of the comments are omitted. This example does not enable the File Lock Migration feature.

XFS[0]=/hanfs/nfsu031
NFS_SERVICE_NAME[0]="nfs3.monitor"
Figure 3-3 One Adoptive Node for Two Packages Figure 3-4 shows this sample configuration after host basil has failed. Host sage has adopted pkg02. The package control option prevents host sage from adopting another package, so host sage is no longer an adoptive node for pkg01.
Figure 3-4 One Adoptive Node for Two Packages after One Server Fails

This sample configuration also enables the File Lock Migration feature.

Cluster Configuration File for Adoptive Node for Two Packages with File Lock Migration

This section shows the cluster configuration file (cluster.conf) for this configuration example. The comments are not shown.

CLUSTER_NAME PkgCtrl
FIRST_CLUSTER_LOCK_VG /dev/nfsu01

NODE_NAME thyme
NETWORK_INTERFACE lan0
HEARTBEAT_IP 15.13.119.146
NETWORK_INTERFACE lan2
NETWORK_INTERFACE lan3
FIRST_CLUSTER_LOCK_PV /dev/dsk/c0t1d0

HEARTBEAT_INTERVAL 1000000
NODE_TIMEOUT 2000000
AUTO_START_TIMEOUT 600000000
NETWORK_POLLING_INTERVAL 2000000
MAX_CONFIGURED_PACKAGES 2

VOLUME_GROUP /dev/nfsu01
VOLUME_GROUP /dev/nfsu02

Package Configuration File for pkg01

This section shows the package configuration file (nfs1.conf) for the package pkg01 in this sample configuration. The comments are not shown.
NFS Control Scripts for pkg01 The nfs.cntl Control Script This section shows the NFS control script (nfs1.cntl) for the pkg01 package in this sample configuration. Only the user-configured part of the script is shown; the executable part of the script and most of the comments are omitted.
NFS_SERVICE_NAME[0]="nfs1.monitor" NFS_SERVICE_CMD[0]="/etc/cmcluster/nfs/nfs1.mon" NFS_FILE_LOCK_MIGRATION=1 NFS_FLM_SCRIPT="${0%/*}/nfs1.flm" NFS File Lock Migration and Monitor Scripts for pkg01 The nfs.flm Script This section shows the NFS File Lock Migration (nfs1.flm) script for the pkg01 package in this sample configuration. This example includes only the user-configured part of the script; the executable part of the script and comments are omitted.
SERVICE_NAME nfs2.monitor
SERVICE_FAIL_FAST_ENABLED NO
SERVICE_HALT_TIMEOUT 300

SUBNET 15.13.112.0

NFS Control Scripts for pkg02

The nfs.cntl Control Script

This section shows the NFS control script (nfs2.cntl) for the pkg02 package in this sample configuration. Only the user-configured part of the script is shown; the executable part of the script and most of the comments are omitted.
the executable part of the script and most of the comments are omitted. This example enables the File Lock Migration feature.

XFS[0]=/hanfs/nfsu021
NFS_SERVICE_NAME[0]="nfs2.monitor"
NFS_SERVICE_CMD[0]="/etc/cmcluster/nfs/nfs2.mon"
NFS_FILE_LOCK_MIGRATION=1
NFS_FLM_SCRIPT="${0%/*}/nfs2.flm"

NFS File Lock Migration and Monitor Scripts for pkg02

The nfs.flm Script

This section shows the NFS File Lock Migration (nfs2.flm) script for the pkg02 package in this sample configuration.
Figure 3-5 Cascading Failover with Three Servers Figure 3-6 shows the cascading failover configuration after host thyme has failed. Host basil is the first adoptive node configured for pkg01, and host sage is the first adoptive node configured for pkg02.
Figure 3-6 Cascading Failover with Three Servers after One Server Fails

Cluster Configuration File for Three-Server Cascading Failover

This section shows the cluster configuration file (cluster.conf) for this configuration example. The comments are not shown.

CLUSTER_NAME Cascading
FIRST_CLUSTER_LOCK_VG /dev/nfsu01

NODE_NAME thyme
NETWORK_INTERFACE lan0
HEARTBEAT_IP 15.13.119.146
NETWORK_INTERFACE lan1
FIRST_CLUSTER_LOCK_PV /dev/dsk/c0t1d0

NODE_NAME basil
NETWORK_INTERFACE lan0
HEARTBEAT_IP 15.13.113.168
NETWORK_INTERFACE lan2
NETWORK_INTERFACE lan3
FIRST_CLUSTER_LOCK_PV /dev/dsk/c0t1d0

HEARTBEAT_INTERVAL 1000000
NODE_TIMEOUT 2000000
AUTO_START_TIMEOUT 600000000
NETWORK_POLLING_INTERVAL 2000000
MAX_CONFIGURED_PACKAGES 2

VOLUME_GROUP /dev/nfsu01
VOLUME_GROUP /dev/nfsu02

Package Configuration File for pkg01

This section shows the package configuration file (nfs1.conf) for the package pkg01 in this sample configuration. The comments are not shown.
NFS Control Scripts for pkg01 The nfs.cntl Control Script This section shows the NFS control script (nfs1.cntl) for the pkg01 package in this sample configuration. Only the user-configured part of the script is shown; the executable part of the script and most of the comments are omitted.
NODE_NAME basil

AUTO_RUN YES
LOCAL_LAN_FAILOVER_ALLOWED YES
NODE_FAIL_FAST_ENABLED NO

RUN_SCRIPT /etc/cmcluster/nfs/nfs2.cntl
RUN_SCRIPT_TIMEOUT NO_TIMEOUT
HALT_SCRIPT /etc/cmcluster/nfs/nfs2.cntl
HALT_SCRIPT_TIMEOUT NO_TIMEOUT

SERVICE_NAME nfs2.monitor
SERVICE_FAIL_FAST_ENABLED NO
SERVICE_HALT_TIMEOUT 300

SUBNET 15.13.112.0

NFS Control Scripts for pkg02

The nfs.cntl Control Script

This section shows the NFS control script (nfs2.cntl) for the pkg02 package in this sample configuration.
NFS_FILE_LOCK_MIGRATION=0
NFS_FLM_SCRIPT="${0%/*}/nfs.flm"

Example Four - Two Servers with NFS Cross-Mounts

This configuration has two servers and two packages. The primary node for each package NFS-mounts the file systems from its own package and the other package. Figure 3-7 illustrates this configuration. If one server fails, the other server adopts its package. The NFS mounts are not interrupted when a package fails over.
Figure 3-8 Two Servers with NFS Cross-Mounts after One Server Fails

Cluster Configuration File for Two-Server NFS Cross-Mount

This section shows the cluster configuration file (cluster.conf) for this configuration example. The comments are not shown.

CLUSTER_NAME XMnt
FIRST_CLUSTER_LOCK_VG /dev/nfsu01

NODE_NAME thyme
NETWORK_INTERFACE lan0
HEARTBEAT_IP 15.13.119.146
FIRST_CLUSTER_LOCK_PV /dev/dsk/c1t1d0

HEARTBEAT_INTERVAL 1000000
NODE_TIMEOUT 2000000
AUTO_START_TIMEOUT 600000000
NETWORK_POLLING_INTERVAL 2000000
MAX_CONFIGURED_PACKAGES 2

VOLUME_GROUP /dev/nfsu01
VOLUME_GROUP /dev/nfsu02

Package Configuration File for pkg01

This section shows the package configuration file (nfs1.conf) for the package pkg01 in this sample configuration. The comments are not shown.
PATH=/sbin:/usr/bin:/usr/sbin:/etc:/bin
VGCHANGE="vgchange -a e" # Default
CVM_ACTIVATION_CMD="vxdg -g \$DiskGroup set activation=exclusivewrite"
VG[0]=nfsu01
LV[0]=/dev/nfsu01/lvol1; FS[0]=/hanfs/nfsu011; FS_MOUNT_OPT[0]="-o rw"
VXVOL="vxvol -g \$DiskGroup startall" # Default
FS_UMOUNT_COUNT=1
FS_MOUNT_RETRY_COUNT=0
IP[0]=15.13.114.243
SUBNET[0]=15.13.112.0
The client mount point, specified in the CNFS[0] variable, must be different from the location of the file system on the server (SNFS[0]).

The hanfs.sh Control Script

This section shows the NFS control script (hanfs1.sh) for the pkg01 package in this sample configuration. This example includes only the user-configured part of the script; the executable part of the script and most of the comments are omitted. This example does not enable the File Lock Migration feature.
NFS Control Scripts for pkg02 The nfs.cntl Control Script This section shows the NFS control script (nfs2.cntl) for the pkg02 package in this sample configuration. Only the user-configured part of the script is shown; the executable part of the script and most of the comments are omitted.
the /etc/hosts file). If you do not want to configure a name for the package, you can just specify the IP address in the SNFS[0] variable, as follows:

SNFS[0]="15.13.114.244:/hanfs/nfsu021"; CNFS[0]="/nfs/nfsu021"

The client mount point, specified in the CNFS[0] variable, must be different from the location of the file system on the server (SNFS[0]).

The hanfs.sh Control Script

This section shows the NFS control script (hanfs2.sh) for the pkg02 package in this sample configuration.