NFS Services Administrator's Guide HP-UX 11i version 3 HP Part Number: B1031-90070 Published: March 2010
© Copyright 2010 Hewlett-Packard Development Company, L.P.

Legal Notices
© Copyright 2009 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license required from HP for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
Table of Contents (excerpt)
Preface: About This Document ... 9
    Intended Audience ... 9
    Publishing History ... 9
    What's in This Document
Configuring and Administering NFS Clients ... 34
    NFS Client Configuration Files and Daemons ... 34
    Configuring the NFSv4 Client Protocol Version ... 35
    Deciding Between Standard-Mounted Directories and Automounted Directories ... 36
    Enabling an NFS Client
Deciding Between Direct and Indirect Automounts ... 61
Configuring AutoFS Direct and Indirect Mounts ... 61
    Automounting a Remote Directory Using a Direct Map ... 62
    Notes on Direct Maps
Disabling Logging in CacheFS ... 87
    Caching a Complete Binary ... 87
    Packing a Cached Filesystem ... 87
    Forcing a Cache Consistency Check

List of Figures (excerpt)
1-1 Server View of the Shared Directories ... 14
2-1 Symbolic Links in NFS Mounts ... 24
2-2 WebNFS Session ... 32
2-3 NFS Mount of manpages

List of Tables (excerpt)
1 Publishing History Details ... 9
2-1 NFS Server Configuration Files ... 21
2-2 NFS Server Daemons
Preface: About This Document The latest version of this document can be found online at: http://www.docs.hp.com This document describes how to configure and troubleshoot NFS Services on HP-UX 11i v3. The document printing date and part number indicate the document's current edition. The printing date will change when a new edition is printed. Minor changes may be made at reprint without changing the printing date. The document part number will change when extensive changes are made.
Chapter 4 Configuring and Administering AutoFS Describes how to configure and administer AutoFS. Chapter 5 Troubleshooting NFS Services Describes detailed procedures and tools for troubleshooting the NFS Services. Typographical Conventions This document uses the following conventions. Italics Identifies titles of documentation, filenames and paths. Bold Text that is strongly emphasized. monotype Identifies program/script, command names, parameters or display.
1 Introduction

This chapter introduces the Open Network Computing (ONC) services, such as NFS, AutoFS, and CacheFS. This chapter addresses the following topics:

• “ONC Services Overview” (page 11)
• “Network File System (NFS)” (page 11)
• “New Features in NFS” (page 12)

ONC Services Overview Open Network Computing (ONC) services is a technology that consists of core services which enable you to implement distributed applications in a heterogeneous, distributed computing environment.
be shared by all the systems on the network, instead of duplicating common directories, such as /usr/local, on each system. How NFS Works The NFS environment consists of the following components:

• NFS Services
• NFS Shared Filesystems
• NFS Servers and Clients

NFS Services The NFS services are a collection of daemons, kernel components, and commands that enable systems with different architectures, running different operating systems, to share filesystems across a network.
NFSv4 uses the COMPOUND RPC procedure to build a sequence of requests into a single RPC. All RPC requests are classified as either NULL or COMPOUND. All requests that are part of the COMPOUND procedure are known as operations. An operation is a filesystem action that forms part of a COMPOUND procedure. NFSv4 currently defines 38 operations. The server evaluates and processes operations sequentially.
to the root of the server file tree. If you use the PUTROOTFH operation, the client can traverse the entire file tree using the LOOKUP operation. — Persistent The persistent file handle is assigned a fixed value for the lifetime of the filesystem object that it refers to. When the server creates the file handle for a filesystem object, the server must accept the same file handle for the lifetime of the object. The persistent file handle persists across server reboots and filesystem migrations.
user@domain or group@domain

Where:
user specifies the string representation of the user
group specifies the string representation of the group
domain specifies a registered DNS domain or a sub-domain of a registered domain

However, UNIX systems use integers to represent users and groups in the underlying filesystems stored on the disk. As a result, using string identifiers requires mapping of string names to integers and back.
For information on how to share directories with the NFS clients, see “Sharing Directories with NFS Clients” (page 23). For information on how to unshare directories, see “Unsharing (Removing) a Shared Directory” (page 33). Following are the new share features that NFS supports: • Secure sharing of directories Starting with HP-UX 11i v3, NFS enables you to share directories in a secure manner.
For information on how to disable mount access for a single client, see “Unmounting (Removing) a Mounted Directory” (page 40). Starting with HP-UX 11i v3, the mount command is enhanced to provide benefits such as performance improvement of large sequential data transfers and local locking for faster access. The umount command allows forcible unmounting of filesystems. These features can be accessed using specific options of the mount command.
• Filesystems that are mounted using CacheFS are not supported for use with failover.
• If client-side failover is enabled using the command-line option, the listed servers must support the same version of the NFS protocol. For example, onc21 and onc23 must support the same version of the NFS protocol: NFSv2, NFSv3, or NFSv4.

For information on how to enable client-side failover, see “Enabling Client-Side Failover” (page 38).
2 Configuring and Administering NFS Services This chapter describes how to configure and administer an HP-UX system as an NFS server or an NFS client, using the command-line interface. An NFS server exports or shares its local filesystems and directories with NFS clients. An NFS client mounts the files and directories exported or shared by the NFS servers. NFS-mounted directories and filesystems appear as a part of the NFS client’s local filesystem.
Using the HP-UX System Files If you are using the HP-UX system files, add the users and groups to the /etc/passwd and /etc/group files, respectively. Copy these files to all the systems on the network. For more information on the /etc/passwd and /etc/group files, see passwd(4) and group(4). Using NIS If you are using NIS, all systems on the network request user and group information from the NIS maps on the NIS server. For more information on configuring NIS, see NIS Administrator's Guide (5991-2187).
Configuring and Administering an NFS Server Configuring an NFS server involves completing the following tasks:
1. Identify the set of directories that you want the NFS server to share. For example, consider an application App1 running on an NFS client. If App1 requires access to the abc filesystem on an NFS server, the NFS server must share the abc filesystem.
Table 2-2 NFS Server Daemons (continued) Daemon Name Function rpc.lockd Supports record lock and share lock operations on the NFS files. rpc.statd Maintains a list of clients that have performed the file locking operation over NFS against the server. These clients are monitored and notified in the event of a system crash.
1069640883 ? 0:00 rpc.statd

No message is displayed if the daemons are not running. To start the lockd and statd daemons, enter the following command:
/sbin/init.d/lockmgr start
4. Enter the following command to run the NFS startup script:
/sbin/init.d/nfs.server start
The NFS startup script enables the NFS server and uses the variables in the /etc/rc.config.d/nfsconf file to determine which processes to start.
NOTE: Use the bdf command to determine whether your filesystems are on different disks or logical volumes. Each entry in the bdf output represents a separate disk or volume that requires its own entry in the /etc/dfs/dfstab file, if shared. For more information on the bdf command, see bdf(1M). • When you share a directory, the share options that restrict access to a shared directory are applied, in addition to the regular HP-UX permissions on that directory.
-o Enables you to use some of the specific options of the share command, such as sec, async, public, and others.
-d Enables you to describe the filesystem being shared.

When NFS is restarted or the system is rebooted, the /etc/dfs/dfstab file is read and all directories are shared automatically.
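For reference, a dfstab fragment might look like the following sketch. The pathnames, client names, and description text are hypothetical; the full set of options is documented in share_nfs(1M).

```
# /etc/dfs/dfstab (hypothetical entries)
# Share /export/docs read-only, with a description:
share -F nfs -o ro -d "project documentation" /export/docs
# Share /export/tools read-write to two named clients only:
share -F nfs -o rw=clientA:clientB /export/tools
```

Each line is an ordinary share command, so any option accepted on the command line can appear here.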
In this example, the /var/mail directory is shared. Root access is allowed for clients Red, Blue, and Green. Superusers on all other clients are considered as unknown by the NFS server, and are given the access privileges of an anonymous user. Non-superusers on all clients are allowed read-write access to the /var/mail directory if the HP-UX permissions on the /var/mail directory allow them read-write access.
You can combine different security modes. However, the security mode specified on the server must be supported by the client. If the modes on the client and server do not match, the directory cannot be accessed. For example, an NFS server can combine the dh (Diffie-Hellman) and krb5 (Kerberos) security modes if it supports both. However, if the NFS client does not support krb5, the shared directory cannot be accessed using the krb5 security mode.
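As a sketch of combining modes (the directory name is hypothetical), multiple security flavors can be listed in a single sec= option, separated by colons; a client must then mount with one of the listed flavors:

```
# Offer both AUTH_DH and Kerberos V5 on one shared directory:
share -F nfs -o sec=dh:krb5,rw /export/secure
```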
Enter password: Re-enter password for verification: Enter policy name (Press enter key to apply default policy) : Principal added. 4. Copy the /etc/krb5.conf file from the Kerberos server to the NFS server node. onc52# rcp /etc/krb5.conf onc20:/etc/ 5. Extract the key for the NFS service principal on the Kerberos server and store it in the /etc/krb5.keytab file on the NFS server. To extract the key, use the Kerberos administration tool kadminl.
date 06101130 to set the date to June 10th and the time to 11:30 AM. The time difference between the systems must not be more than 5 minutes.
2. Add a principal for each NFS client to the Kerberos database. For example, if the NFS client is onc36.ind.hp.com, its root principal must be added to the Kerberos database before running the NFS applications. To add principals, use the Kerberos administration tool, kadminl:
onc52# /opt/krb5/admin/kadminl Connecting as: K/M Connected to krb5v01 in realm ONC52.
sec Enables you to specify the security mode to be used. Specify krb5, krb5p, or krb5i as the security flavor.
server:directory Specifies the location of the shared directory.
mount_point Specifies the mount-point location where the filesystem is mounted.

An initial ticket grant is carried out when the user accesses the mounted filesystem.

Example:
onc36# mount -F nfs -o sec=krb5 onc36:/export_krb5 /aaa
The rpc.lockd daemon runs on port 4045 and is not configurable. To determine the port numbers currently used by the rpc.statd and rpc.mountd daemons, run the rpcinfo -p command, and configure the firewall accordingly.
Sharing directories across a firewall using the NFSv4 protocol NFSv4 is a single protocol that handles mounting and locking operations for NFS clients and servers. The NFSv4 protocol runs on port 2049 by default. To override the default port number (2049) for the NFSv4 protocol, modify the port number for the nfsd entry in the /etc/services file. Configure the firewall based on the port number set.
To access the shared directory across a firewall using the WebNFS feature, configure the firewall to allow connections to the port number used by the nfsd daemon. By default the nfsd daemon uses port 2049. Configure the firewall based on the port number configured.
NOTE: To unshare all the directories without restarting the server, use the unshareall command. This command reads the entries in the /etc/dfs/dfstab file and unshares all the shared directories. Use the share command to verify whether all the directories are unshared.

To unshare a shared directory and to prevent it from being automatically shared, follow these steps:

Automatic Unshare
Configuration Files Table 2-5 describes the NFS configuration files and their functions. Table 2-5 NFS client configuration files File Name Function /etc/mnttab Contains the list of filesystems that are currently mounted. /etc/dfs/fstypes Contains the default distributed filesystem type. /etc/fstab Contains the list of filesystems that are automatically mounted at system boot time. Daemons Table 2-6 describes the NFS client daemons and their functions.
For more information on NFSv4, see nfsd(1m), mount_nfs(1m), nfsmapid(1m), and nfs4cbd(1m). Deciding Between Standard-Mounted Directories and Automounted Directories Before you mount any remote directories on a local system, decide whether you want each directory to be standard-mounted or automounted. You can automount directories using AutoFS. For more information on AutoFS, see Chapter 3: “Configuring and Administering AutoFS” (page 49).
You can mount a filesystem using the following methods:
• Automatic Mounting at System Boot Time To set up a filesystem for automatic mounting at system boot time, you must configure it in the /etc/fstab file. All filesystems specified in the /etc/fstab file are mounted during system reboot.
• Manual Mounting When you manually mount a filesystem, the mount is not persistent across reboots or NFS client restarts.
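For the automatic method, an /etc/fstab entry for an NFS filesystem might look like the following sketch. The server and mount-point reuse the names from this chapter's examples; the option list is illustrative, see fstab(4) and mount_nfs(1M).

```
# device                mount point    type  options        backup  pass
hpnfsweb:/home/tester   /opt/nfstest   nfs   rw,hard,intr   0       0
```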
An output similar to the following output is displayed: /opt/nfstest from hpnfsweb:/home/tester Flags: vers=3,proto=udp,sec=sys,hard,intr,link,symlink,acl,devs, rsize=32768,wsize=32768,retrans=5,timeo=11 Attr cache: acregmin=3,acregmax=60,acdirmin=30,acdirmax=60 Lookups: srtt=33 (82ms), dev=33 (165ms), cur=20 (400ms) The directory that you have mounted must be present in this list. For a list of mount options, see mount_nfs(1M).
Figure 2-3 NFS Mount of manpages • Mounting a Home directory mount -r -o nosuid broccoli:/home/broccoli /home/broccoli mount -r -o nosuid cauliflower:/home/cauliflower /home/cauliflower In this example, the NFS client mounts the home directories from NFS servers broccoli and cauliflower . The nosuid option prevents programs with setuid permission from executing on the local client. Figure 2-4 illustrates this example.
mount -r onc21,onc23,onc25:/Casey/Clay /Casey/Clay In this example, the NFS client mounts a single filesystem, /Casey/Clay that has been replicated to a number of servers with the same pathnames. This enables the NFS client to failover to either server onc21, onc23, or onc25 if the current server has become unavailable.
NOTE: Before you unmount a directory, run the fuser -cu command to determine whether the directory is currently in use. The fuser command lists the process IDs and user names of all the processes that are using the mounted directory. If users are accessing the mounted directories, they must exit the directories before you unmount the directory.

To unmount a mounted directory and prevent it from being automatically mounted, follow these steps:

Automatic Unmount
NFS Client and Server Transport Connections NFS runs over both UDP and TCP transport protocols. The default transport protocol is TCP. Using the TCP protocol increases the reliability of NFS filesystems working across WANs and ensures that the packets are successfully delivered. TCP provides congestion control and error recovery. NFS over TCP and UDP works with NFS Version 2 and Version 3. NOTE: TCP is the only transport protocol supported by NFS Version 4.
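If you need to override the TCP default, for example for an older server, the transport can be selected at mount time. This is a sketch; the server and mount-point are hypothetical, and the option names are described in mount_nfs(1M).

```
# Request an NFSv3 mount over UDP instead of the TCP default:
mount -F nfs -o vers=3,proto=udp hpnfsweb:/home/tester /opt/nfstest
```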
If you want to change startup parameters for nfsd, you must log in as superuser (root) and make changes to the /etc/default/nfs file or use the setoncenv command. The /etc/default/nfs file provides startup parameters for the nfsd daemon and the rpc.lockd daemon. For more information on the /etc/default/nfs file, see nfs(1M). The setoncenv command initializes, displays, and removes the value of NFS configuration variables found in the following files, including:
• /etc/rc.config.d/nfsconf
• /etc/rc.
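As an illustration of the file-based method, a fragment of /etc/default/nfs might look like the following. The variable name and value shown are assumptions for this sketch; the variables actually supported are listed in nfs(1M).

```
# /etc/default/nfs (hypothetical fragment)
# Maximum number of concurrent nfsd threads (value is illustrative):
NFSD_SERVERS=32
```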
If an HP-UX host not running NIS exports or shares a directory to the netgroup myfriends, the NIS_domain field is ignored, and all three hosts (sage, cauliflower, and pear) can mount the directory. If the netgroup database contains the following netgroup, mydomain (,,bldg1) and a host in the NIS domain bldg1 shares a directory to the netgroup mydomain, any host in the bldg1 domain may mount the directory, because the host field is blank.
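A netgroup database built from these examples might contain entries such as the following (host names and netgroup names are the ones used above; each triple is host,user,NIS_domain):

```
# /etc/netgroup (hypothetical entries)
myfriends (sage,,) (cauliflower,,) (pear,,)
mydomain  (,,bldg1)
```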
The root-users netgroup is a group of four systems. The mail-users netgroup uses the root-users netgroup as part of a larger group of systems. The blank space in the third field of each triple indicates that the netgroup is valid in any NIS domain.

Using Netgroups in Configuration Files Netgroups may be used in the following files:
• /etc/dfs/dfstab, in the [access_list], -rw, -ro, and root list
• /etc/hosts.equiv or $HOME/.
All users except those listed in the vandals netgroup can log in to the local system without supplying a password from any system in the network. CAUTION: Users who are denied privileged access in the /etc/hosts.equiv file can be granted privileged access in a user’s $HOME/.rhosts file. The $HOME/.rhosts file is read after the /etc/hosts.equiv file and overrides it. For more information, see hosts.equiv(4).
For more information on NIS, see NIS Administrator’s Guide (5991-2187). For information on the /etc/group file, see group(4).

Configuring RPC-based Services This section describes the following tasks:
• “Enabling Other RPC Services”
• “Restricting Access to RPC-based Services”

Enabling Other RPC Services
1. In the /etc/inetd.conf file, use a text editor to uncomment the entries that begin with “rpc”. Following is the list of entries in an /etc/inetd.conf file:
#rpc xit tcp nowait root /usr/sbin/rpc.
Table 2-8 RPC Services managed by inetd (continued) RPC Service Description rquotad The rpc.rquotad program responds to requests from the quota command, which displays information about a user’s disk usage and limits. For more information, see rquotad (1M) and quota (1). gssd The gssd program operates between the Kernel RPC and the Generic Security Services Application Program Interface (GSS-API) to generate and validate the GSS-API tokens. For more information, see gssd(1M).
3 Configuring and Administering AutoFS This chapter provides an overview of AutoFS and the AutoFS environment. It also describes how to configure and administer AutoFS on a system running HP-UX 11i v3.
The automount Command This command installs the AutoFS mount-points, and associates an automount map with each mount-point. The AutoFS filesystem monitors attempts to access directories within it and notifies the automountd daemon. The daemon locates a filesystem using the map, and then mounts this filesystem at the point of reference within the AutoFS filesystem. The automount map specifies the location of all the AutoFS mount-points.
mounted when not in use, use the -t option of the automount command. For more information on the different options supported by automount, see automount(1M) and automountd(1M). CAUTION: You must maintain filesystems managed by AutoFS, by using the automountd and automount utilities. Manually mounting and unmounting file systems managed by AutoFS can cause disruptive or unpredictable results, including but not limited to commands hanging or not returning expected results.
Figure 3-2 AutoFS In this figure, AFS1, AFS2, and Sage are AutoFS clients. Thyme and Basil are the NFS servers. The NFS servers export directories. The AutoFS clients use maps to access the exported directories. For instance, the NFS server, Basil, exports the /export directory. If you are a user on any of the AutoFS clients, you can use maps to access the /export directory from the NFS server, Basil.
Features This section discusses the features that AutoFS supports on systems running HP-UX 11i v3. On-Demand Mounting In HP-UX 11i v3, the filesystems being accessed are mounted automatically. Filesystems that are hierarchically related to the automounted filesystems are mounted only when necessary. Consider the following scenario where the AutoFS master map, /etc/auto_master, and the direct map, /etc/auto_direct, are on the NFS client, sage.
# /etc/auto_master file # local mount-point map name /nfs/desktop mount options /etc/auto_indirect Following are the contents of the indirect map, /etc/auto_indirect, which contains the local mount-points on the client and the references to the directories on the server: # /etc/auto_indirect file # local mount-point mount options /test /apps -nosuid remote server:directory -nosuid thyme:/export/project/test basil:/export/apps Enter the following commands to view the contents of the /nfs/desktop dir
Supported Filesystems AutoFS enables you to mount different types of filesystems. To mount the filesystems, use the fstype mount option, and specify the location field of the map entry. Following is a list of supported filesystems and the appropriate map entry: AutoFS mount-point -fstype=autofs autofs_map_name NOTE: You can specify another AutoFS map name in the location field of the map-entry. This would enable AutoFS to trigger other AutoFS mounts.
• more information on NIS, see NIS Administrator's Guide (5991–2187) available at: http:// docs.hp.com LDAP: A directory service that stores information, which is retrieved by clients throughout the network. To simplify HP-UX system administration, the LDAP-UX Integration product centralizes user, group, and network information management in an LDAP directory. For more information on the LDAP-UX Integration product, see the LDAP-UX Client Services B.04.
1. If the AutoFS maps are not already migrated, migrate your AutoFS maps to LDAP Directory Interchange Format (LDIF) files using the migration scripts. The LDAP-UX client only supports the new AutoFS schema, and the migration scripts will migrate the maps according to the new schema. For information on the specific migration scripts, see LDAP-UX Client Services B.04.10 Administrator’s Guide (J4269-90067). 2. Import the LDIF files into the LDAP directory server using the ldapmodify tool.
1. In the /etc/rc.config.d/nfsconf file, the AUTOFS variable is set to 1.
2. Any options you had specified in the AUTO_OPTIONS variable are copied to either the AUTOMOUNT_OPTIONS or the AUTOMOUNTD_OPTIONS variable. Obsolete options are removed.

Table 3-2 lists the options of the old automount command and the equivalent AutoFS command options. It also indicates which automount options are not supported in AutoFS.

Table 3-2 Old Automount Command-Line Options Used By AutoFS
Table 3-3 AutoFS Configuration Variables Variable Name Description AUTOFS Specifies whether the system uses AutoFS. Set the value to 1 to specify that this system uses AutoFS. Set the value to 0 to specify that this system does not use AutoFS. The default value of AUTOFS is 1. NOTE: If you set the value of AUTOFS to 1, the NFS_CORE core configuration variable must also be set to 1. AUTOMOUNT_OPTIONS Specifies a set of options to be passed to the automount command when it is run. The default value is “ ” .
If the AUTOMOUNT_OPTIONS variable does not specify the -f filename option, AutoFS consults the NSS configuration to determine where to search for the AutoFS master map. For more information on configuring the NSS, see nsswitch.conf(4) and automount(1M).

To configure AutoFS using the /etc/default/autofs file, follow these steps:
1. Log in as superuser.
2. Edit the /etc/default/autofs file.
If the first field specifies the directory as /-, then the second field is the name of the direct map. The master map file, like any other map file, may be distributed by NIS or LDAP by modifying the appropriate configuration files and removing any existing /etc/auto_master master map file. NOTE: If the same mount-point is used in two entries, the first entry is used by the automount command. The second entry is ignored. You must run the automount command after you modify the master map or a direct map.
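Putting these rules together, a master map that configures one direct map and one indirect map might look like the following sketch (map names follow the /etc/auto_* convention used in this chapter):

```
# /etc/auto_master (sketch)
# local mount-point   map name            mount options
/-                    /etc/auto_direct    -nosuid
/nfs/desktop          /etc/auto_indirect
```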
home directories. The automounts configured in an indirect map are mounted under the same local parent directory. Figure 3-4 shows the difference between direct mounts and indirect mounts on an NFS client. Figure 3-4 Difference Between Direct Mounts and Indirect Mounts In the Mounts in a Direct Map figure, mounts are configured in various places in the local filesystem and not located under the same parent directory.
IMPORTANT: Do not automount a remote directory on a local directory that is a symbolic link.

Ensure that the local mount-point specified in the AutoFS map entry is different from the exported directory on the NFS server. If it is the same, and the NFS server also acts as an NFS client and uses AutoFS with these map entries, the exported directory can attempt to mount over itself. This can result in unexpected behavior.
Following are sample entries from an AutoFS direct map on the NFS client, sage. The hash (#) symbol indicates a commented line. # /etc/auto_direct file # local mount-point mount options /auto/project/specs -nosuid /auto/project/budget -nosuid remote server:directory thyme:/export/project/specs basil:/export/FY94/proj1 Figure 3-5 illustrates how AutoFS sets up direct mounts.
IMPORTANT: Ensure that local_parent_directory and local_subdirectory do not already exist. AutoFS creates them when it mounts the remote directory. If these directories already exist, the files and directories in them are hidden when the remote directory is mounted. Ensure that the local mount-point specified in the AutoFS map entry is different from the exported directory on the NFS server.
Following are sample lines from an AutoFS indirect map on the NFS client, sage. The hash (#) symbol indicates a commented line. # /etc/auto_desktop file # local mount-point mount options draw write remote server:directory -nosuid -nosuid thyme:/export/apps/draw basil:/export/write Figure 3-6 illustrates how AutoFS sets up indirect mounts.
NOTE: You can also use the /etc/default/autofs file to modify the value assigned to the variable. You can use any environment variable that is set to a value in an AutoFS map. If you do not set the variable either with the -D option in /etc/rc.config.d/nfsconf file or by using the /etc/default/autofs file, AutoFS uses the current value of the environment variable on the local host. For information on some of the pre-defined environment variables, see automount(1M).
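For example, an indirect map entry can reference an environment variable such as ARCH (a hypothetical variable name for this sketch). If automountd is started with -D ARCH=hppa, or ARCH is set on the local host, the reference expands when the entry is evaluated:

```
# Entry in a hypothetical indirect map:
bin  -ro  thyme:/export/$ARCH/bin
```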
If the home directory of the user terry is configured in the /etc/passwd file as /home/basil/terry, AutoFS mounts the remote directory /export/home/basil from the server, basil, on the local directory /home/basil when Terry logs in. The line with the asterisk must be the last line in an indirect map. AutoFS reads the lines in the indirect map sequentially until it finds a match for the requested local subdirectory. Because the asterisk (*) matches any subdirectory, AutoFS stops reading at the line with the asterisk.
AutoFS substitutes howard for the ampersand (&) character in that line: howard basil:/export/home/howard -nosuid AutoFS mounts /export/home/howard from server basil to the local mount-point /home/howard on the NFS client. Figure 3-7 illustrates this configuration. Figure 3-7 Home Directories Automounted with Wildcards Special Maps There are two types of special maps: -hosts and -null.
Notes on the -hosts Map The -hosts map is a built-in AutoFS map. It enables AutoFS to mount exported directories from any NFS server found in the hosts database, whenever a user or a process requests access to one of the exported directories from that server. CAUTION: You may inadvertently cause an NFS mount over X.25 or SLIP, which is unsupported, or through a slow router or gateway, because the -hosts map allows NFS access to any reachable remote system.
Turning Off an AutoFS Map Using the -null Map To turn off a map using the -null map, follow these steps: 1. Add a line with the following syntax in the AutoFS master map: local_directory -null 2. If AutoFS is running, enter the following command on each client that uses the map, to force AutoFS to reread its maps: /usr/sbin/automount This enables AutoFS to ignore the map entry that does not apply to your host.
• “Including a Map in Another Map” (page 74)
• “Creating a Hierarchy of AutoFS Maps” (page 74)

Automounting Multiple Directories (Hierarchical Mounts) AutoFS enables you to automount multiple directories simultaneously. Use an editor to create an entry with the following format in a direct or indirect map, and if needed, create the auto_master entry:
local_dir /local_subdirectory [-options] \ server:remote_directory \ /local_subdirectory [-options] server:remote_directory \ ...
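A concrete entry in that format might look like the following sketch, which mounts a directory and one of its subdirectories from two servers (the key, offsets, and paths are hypothetical; server names reuse this chapter's examples):

```
# Hierarchical (multi-mount) entry in an indirect map:
src \
    /        -ro  thyme:/export/src \
    /tools   -ro  basil:/export/src/tools
```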
To configure multiple replicated servers for a directory, follow these steps: 1. Create and configure the /etc/netmasks file. AutoFS requires the /etc/netmasks file to determine the subnets of local clients in a replicated multiple server environment. The /etc/netmasks file contains IP address masks with IP network numbers. It supports both standard subnetting as specified in RFC-950, and variable-length subnetting as specified in RFC-1519.
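An /etc/netmasks file might contain entries such as the following sketch: one standard class C subnet and one variable-length subnet (the network numbers are hypothetical):

```
# /etc/netmasks (hypothetical entries)
# network number    netmask
192.168.10.0        255.255.255.0
10.0.0.0            255.255.254.0
```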
Including a Map in Another Map If you want your map to refer to an external map, you can do so by including the external map in your map. The entries in the external map are read as if they are part of your map.
Starting with the AutoFS mount at /org, the evaluation of this path dynamically creates additional AutoFS mounts at /org/eng and /org/eng/projects. No action is required for the changes to take effect on the user's system because the AutoFS mounts are created only when required. You need to run the automount command only when you make changes to the master map or to a direct map.
# /etc/auto_desktop file
# local mount-point   mount options   remote server:directory
draw     -nosuid      thyme:/export/apps/draw
write    -nosuid      basil:/export/write

Enter the following commands:
cd /nfs/desktop
ls

The ls command displays the following output:
draw write

The draw and write subdirectories are the potential mount-points (browsability), but are not currently mounted.
IMPORTANT: Do not stop the automountd daemon with the kill command. It does not unmount AutoFS mount-points before it terminates. Use the autofs stop command instead. 5. To ensure that AutoFS is no longer active, enter the ps command: /usr/bin/ps -ef | grep automount If the ps command indicates that AutoFS is still active, ensure that all users have exited the automounted directories and then try again. Do not restart AutoFS until all the automount processes are terminated. 6.
To Stop AutoFS Logging
To stop AutoFS logging, stop AutoFS and restart it after removing the “-v” option from AUTOMOUNTD_OPTIONS.

AutoFS Tracing
AutoFS supports the following trace levels:

Detailed (level 3)
Includes traces of all the AutoFS requests and replies, mount attempts, timeouts, and unmount attempts. You can start level 3 tracing while AutoFS is running.

Basic (level 1)
Includes traces of all the AutoFS requests and replies. You must restart AutoFS to start level 1 tracing.
This command lists the process IDs and user names of all users who are using the mounted directory. 5. Warn users to exit the directory, and kill processes that are using the directory, or wait until all the processes terminate. Enter the following command to kill all the processes using the mounted directory: /usr/sbin/fuser -ck local_mount_point 6. Enter the following command to stop AutoFS: /sbin/init.d/autofs stop CAUTION: Do not kill the automountd daemon with the kill command.
May 13 18:45:09 t5   nfs_args: hpnfs127, , 0x2004060, 0, 0, 0, 0,0, 0, 0, 0,
May 13 18:45:09 t5   args_temp: hpnfs127, , 0x3004060, 0, 0, 0, 0,0, 0, 0, 0,
May 13 18:45:09 t5   mount hpnfs127:/tmp dev=44000004 rdev=0 OK
May 13 18:45:09 t5   MOUNT REPLY: status=0, AUTOFS_DONE

Unmount Event Tracing Output
The general format of an unmount event trace is:
UNMOUNT REQUEST:
4 Configuring and Administering a Cache Filesystem This chapter introduces the Cache Filesystem (CacheFS) and the CacheFS environment. It also describes how to configure and administer CacheFS on a system running HP-UX 11i v3.
warm cache A cache that contains data in its front filesystem is called a warm cache. In this case, the cached data can be returned to the user without requiring an action from the back filesystem. cache hit A successful attempt to reference data that is cached is called a cache hit. How CacheFS Works Figure 4-1 displays a sample CacheFS environment. Figure 4-1 Sample CacheFS Network In the figure, cachefs1, cachefs2, and cachefs3 are CacheFS clients.
The functionality provided by this command is an alternative to the rpages mount option. For information on how to pre-load or pack files, see “Packing a Cached Filesystem” (page 87). • Complete Binary Caching via the “rpages” mount option CacheFS is commonly used to manage application binaries. The rpages mount option forces the client to cache a complete copy of the accessed application binary file.
• Support for ACLs An Access Control List (ACL) offers stronger file security by enabling the owner of the file to define file permissions for specific users and groups. This version of CacheFS on HP-UX supports ACLs with VxFS and NFS and not with HFS. • Support for Logging A new command, cachefslog, is used to enable or disable logging for a CacheFS mount-point. If logging functionality is enabled, details about the operations performed on the CacheFS mount-point are stored in a logfile.
Mounting an NFS Filesystem Using CacheFS This section describes how to mount an NFS filesystem using CacheFS. The syntax for mounting an NFS filesystem using CacheFS is as follows: mount [-F cachefs] [-rqOV] -o backfstype=file_system_type [specific_options] resource mount_point Consider the following example where the /opt/frame directory is going to be NFS-mounted from the NFS server nfsserver to the local /opt/cframe directory.
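Following that syntax, and assuming a cache directory has already been created at /cache (the cache path is an assumption; the server and directories are from the example above), the mount command might look like this:

```
# /cache is an assumed, previously created cache directory
mount -F cachefs -o backfstype=nfs,cachedir=/cache \
    nfsserver:/opt/frame /opt/cframe
```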
mount -F cachefs -o backfstype=nfs,cachedir=/cache,weakconst CFS2:/tmp /mnt1 For more information on the various mount options of the CacheFS filesystem, see mount_cachefs(1M). Automounting a Filesystem Using CacheFS This section describes how to automount a filesystem using CacheFS. Before you automount an NFS filesystem using CacheFS, you must configure a directory in a local filesystem as cache. For more information on how to configure a directory as cache, see “Creating a CacheFS Cache” (page 84).
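As a sketch, an AutoFS map entry can select CacheFS through the fstype mount option. The map key, server, paths, and cache directory below are hypothetical, and the entry assumes the /cache directory has already been created:

```
# indirect map entry (hypothetical names); assumes /cache already exists
frame   -fstype=cachefs,backfstype=nfs,cachedir=/cache   nfsserver:/opt/frame
```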
cachefslog /cfs_mnt1
If logging has been enabled, the logfile is displayed.

Disabling Logging in CacheFS
You can use the cachefslog command to disable logging for a CacheFS mount-point. To disable CacheFS logging, follow these steps:
1. Log in as superuser.
You can pack files using one of the following methods: • Specifying the files or directories to be packed Enter the following command to pack a file in the cache: cachefspack -p filename where: -p Packs the file or files in the cache. filename Name of the file or directory that is to be packed in the cache. NOTE: When you pack a directory, all files in that directory, subdirectories, and files in the subdirectories are packed.
• -u Specifies that certain files are to be unpacked. filename Specifies the file to unpack. Using the -U option To unpack all the packed files in the cache, enter the following command: cachefspack -U cache-dir where: -U Specifies that you want to unpack all the packed files in the cache. cache-dir Specifies the cache directory that is to be unpacked. For more information about the cachefspack command, see cachefspack(1M).
either during system bootup if there is an entry in /etc/fstab, or the first time the cache directory is referenced as part of the mount operation.
To manually check the integrity of a cache, enter the following command:
fsck_cachefs -F cachefs [-m | -o noclean] cache-directory
where:
-m               Specifies that the cache must be checked without making any repairs.
-o noclean       Forces a check on the CacheFS filesystems.
cache-directory  Specifies the name of the directory where the cache resides.
Deleting a Cache Directory To delete a cache directory that is no longer required you must use the cfsadmin command. The syntax to delete the cache directory is as follows: cfsadmin -d {cacheID | all} cache-directory where: cacheID Specifies the name of the cache filesystem. all Specifies that all cached filesystems in the cache-directory are to be deleted. cache-directory Specifies the name of the cache directory where the cache resides.
maxfiles 91%
minfiles 0%
threshfiles 85%
maxfilesize 3MB
srv01:_tmp:_mnt1

In this example, the filesystem with CacheID srv01:_tmp:_mnt has been deleted. To delete a cache directory and all the CacheFS filesystems in that directory, enter the following command:
cfsadmin -d all cache-directory

Using CacheFS
After the administrator has configured a cache directory and mounted or automounted the CacheFS filesystem, you can use CacheFS as you would any other filesystem.
total for cache
      initial size:    640k
      end size:         40k

To view the operations performed in ASCII format, enter the following command:
cachefswssize -a /tmp/logfile
An output similar to the following is displayed:
1/0 0:00 0 Mount e000000245e31080 411 65536 256 /cfs_mnt1 (cachefs1:_cache_exp:_cfs_mnt1)
1/0 0:00 0 Mdcreate e000000245e31080 517500609495040 1
1/0 0:00 0 Filldir e000000245e31080 517500609495040 8192
1/0 0:00 0 Nocache e000000245e31080 517500609495040
1/0 0:00 0 Mkdir e000000245e31
Table 4-3 Common Error Messages encountered while using the fsck command

Error message:
  “fsck -F cachefs Cannot open lock file /test/c/.cfs_lock”
Possible cause:
  This error message indicates that /c is not a cache directory.
Resolution:
  1. Delete the cache.
  2. Recreate the cache directory using the cfsadmin command.

Error message:
  “Cache /c is in use and cannot be modified”
Possible cause:
  This error message indicates that the cache directory is in use. However, the
Resolution:
  1. Check if a directory named /c exists
5 Troubleshooting NFS Services This chapter describes tools and procedures for troubleshooting the NFS Services. This chapter addresses the following topics: • “Common Problems with NFS” (page 95) • “Performance Tuning” (page 101) • “Logging and Tracing of NFS Services” (page 103) Common Problems with NFS This section lists the following common problems encountered with NFS and suggests ways to correct them.
1. Make sure the /etc/rc.config.d/nfsconf file on the NFS server contains the following lines: NFS_SERVER=1 START_MOUNTD=1 2. Enter the following command on the NFS server to start all the necessary NFS processes: /sbin/init.d/nfs.server start □ Enter the following command on the NFS client to make sure the rpc.mountd process on the NFS server is available and responding to RPC requests: /usr/bin/rpcinfo -u servername mountd If the rpcinfo command returns RPC_TIMED_OUT, the rpc.
If the directory is shared with the [access_list] option, make sure your NFS client is included in the [access_list], either individually or as a member of a netgroup.
□ Enter the following commands on the NFS server to make sure your NFS client is listed in its hosts database:
nslookup client_name
nslookup client_IP_address

“Permission Denied” Message
□ Check the mount options in the /etc/fstab file on the NFS client. A directory you are attempting to write to may have been mounted as read-only.
/usr/sbin/fuser -ck local_mount_point
3. Try again to unmount the directory.
□ Verify that the filesystem you are trying to unmount is not a mount-point of another filesystem.
□ Verify that the filesystem is not exported. In HP-UX 11i v3, exporting a filesystem keeps it busy.

“Stale File Handle” Message
A “stale file handle” occurs when one client removes an NFS-mounted file or directory that another client is accessing.
3. Warn any users to cd out of the directory, and kill any processes that are using the directory, or wait until the processes terminate. Enter the following command to kill all processes using the directory: /usr/sbin/fuser -ck local_mount_point 4. Enter the following command on the client to unmount all NFS-mounted directories: /usr/sbin/umount -aF nfs 5. Enter the following commands to restart the NFS client: /sbin/init.d/nfs.client stop /sbin/init.d/nfs.
Data is Lost Between the Client and the Server
□ Make sure that the directory is not exported from the server with the async option. If the directory is exported with the async option, the NFS server will acknowledge NFS writes before actually writing data to disk.
□ If users or applications are writing to the NFS-mounted directory, make sure it is mounted with the hard option (the default), rather than the soft option.
# see krb5.conf(4) for more details
# hostname is the fully qualified hostname(FQDN) of host on which kdc is running
# domain_name is the fully qualified name of your domain
[libdefaults]
    default_realm = krbhost.anyrealm.com
    default_tkt_enctypes = DES-CBC-CRC
    default_tgs_enctypes = DES-CBC-CRC
    ccache_type = 2
[realms]
    krbhost.anyrealm.com = {
        kdc = krbhost.anyrealm.com:88
        admin_server = krbhost.anyrealm.com
    }
[domain_realm]
    .anyrealm.com = krbhost.anyrealm.com
[logging]
    kdc = FILE:/var/log/krb5kdc.
See Installing and Administering LAN/9000 Software for information on troubleshooting LAN problems. 3. If the timeout and badxid values displayed by nfsstat -rc are of the same magnitude, your server is probably slow. Client RPC requests are timing out and being retransmitted before the NFS server has a chance to respond to them. Try doubling the value of the timeo mount option on the NFS clients. See “Changing the Default Mount Options” (page 40).
Improving NFS Client Performance
□ For files and directories that are mounted read-only and never change, set the actimeo mount option to 120 or greater in the /etc/fstab file on your NFS clients.
□ If you see several “server not responding” messages within a few minutes, try doubling the value of the timeo mount option in the /etc/fstab file on your NFS clients.
Each message logged by these daemons can be identified by the date, time, host name, process ID, and name of the function that generated the message. You can direct logging messages from all these NFS services to the same file. To Control the Size of LogFiles Logfiles grow without bound, using up disk space. You might want to create a cron job to truncate your logfiles regularly.
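One way to do this is a weekly root crontab entry that empties the logfile in place. The logfile path /var/adm/rpc.log below is hypothetical; substitute the path of the logfile you configured:

```
# crontab entry (edit with crontab -e as root); /var/adm/rpc.log is hypothetical
# truncate the logfile to zero length every Sunday at 02:00
0 2 * * 0 /usr/bin/cp /dev/null /var/adm/rpc.log
```

Copying /dev/null truncates the file without replacing its inode, so daemons that hold the logfile open continue writing to the same file.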
Tracing With nettl and netfmt
1. Enter the following command to make sure nettl is running:
/usr/bin/ps -ef | grep nettl
If nettl is not running, enter the following command to start it:
/usr/sbin/nettl -start
2. Enter the following command to start tracing:
/usr/sbin/nettl -tn pduin pduout loopback -e all -s 1024 \
-f tracefile
3. Recreate the event you want to trace.
4. Enter the following command to turn tracing off:
/usr/sbin/nettl -tf -e all
5.
Index Symbols D + (plus sign) in AutoFS maps, 74 in group file, 46 in passwd file, 46 -hosts map, 36, 69 examples, 70 -null map, 71 32k transfer size, 42 device busy, 97, 98 direct map, 62 advantages, 61 environment variables in, 66 examples, 64 modifying, 57 DNS, 96 A environment variables in AutoFS maps, 66 /etc/auto_master file see auto_master map, 69 /etc/group file see group database, 20 /etc/netgroup file see netgroup file, 43 /etc/rc.config.d/namesvrs file see namesvrs file, 22 /etc/rc.config.
wildcards in, 67, 68 inetd.conf file, 47, 96, 104 inetd.sec file examples, 48 interruptible mounts, 99 intr mount option, 99, 103 L lockd, 95 checking for hung process, 99 restarting, 99 lockf(), 100 logging, 103 AutoFS, 77, 78 nettl and netfmt, 104 rexd, 104 rstatd, 104 rusersd, 104 rwalld, 104 sprayd, 104 lookup, displayed by nfsstat, 102 ls, with AutoFS, 75 NFS, 11 see also client, NFS, 19 see also server, NFS, 19 client, 19, 34 server, 19 starting, 36 stopping, 98 troubleshooting, 95 nfs.
rstatd logging, 104 rusersd logging, 104 rwalld logging, 104 S security in mounted directories, 39 server not responding, NFS, 95, 103 server, NFS, 19 CPU load, 102 memory requirements, 102 too slow, 102 showmount, 96, 97 SIGUSR2 signal to automount, 78 simultaneous mounts, AutoFS, 72 slow server, NFS, 102 soft mount timed out, 103 soft mount option, 100, 103 sprayd logging, 104 stale file handle, 98 avoiding, 98 standard mount, 37 START_MOUNTD variable, 96 statd, 95 checking for hung process, 99 restartin