NFS Services Administrator's Guide HP-UX 11i version 3 HP Part Number: B1031-90072 Published: March 2011
© Copyright 2011 Hewlett-Packard Development Company, L.P.
Legal Notices
© Copyright 2011 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license required from HP for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
Preface: About This Document
To locate the latest version of this document, go to the HP-UX Networking docs page at: http://www.hp.com/go/hpux-networking-docs. On this page, select HP-UX 11i v3 Networking Software. This document describes how to configure, administer, and troubleshoot the NFS Services on HP-UX 11i v3. The document printing date and part number indicate the document's current edition. The printing date will change when a new edition is printed.
Chapter 3 Configuring the Cache File System (CacheFS) Describes the benefits of using the Cache File System and how to configure it on the HP-UX system. Chapter 4 Configuring and Administering AutoFS Describes how to configure and administer AutoFS. Chapter 5 Troubleshooting NFS Services Describes detailed procedures and tools for troubleshooting the NFS Services. Typographical Conventions This document uses the following conventions. Italics Identifies titles of documentation, filenames and paths.
1 Introduction This chapter introduces the Open Network Computing (ONC) services, such as NFS, AutoFS, and CacheFS. This chapter addresses the following topics: • “ONC Services Overview” (page 9) • “Network File System (NFS)” (page 9) • “New Features in NFS” (page 10) ONC Services Overview Open Network Computing (ONC) services is a technology that consists of core services which enable you to implement distributed applications in a heterogeneous, distributed computing environment.
all the systems on the network, instead of duplicating common directories, such as /usr/local, on each system.
How NFS works
The NFS environment consists of the following components:
• NFS Services
• NFS Shared Filesystems
• NFS Servers and Clients
NFS Services
NFS services are a collection of daemons, kernel components, and commands that enable systems with different architectures, running different operating systems, to share filesystems across a network.
NFSv4 uses the COMPOUND RPC procedure to build a sequence of requests into a single RPC. All RPC requests are classified as either NULL or COMPOUND. All requests that are part of the COMPOUND procedure are known as operations. An operation is a filesystem action that forms part of a COMPOUND procedure. NFSv4 currently defines 38 operations. The server evaluates and processes operations sequentially.
the server file tree. If you use the PUTROOTFH operation, the client can traverse the entire file tree using the LOOKUP operation.
◦ Persistent
A persistent file handle is assigned a fixed value for the lifetime of the filesystem object it refers to. When the server creates the file handle for a filesystem object, the server must accept that same file handle for the lifetime of the object. The persistent file handle persists across server reboots and filesystem migrations.
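As an illustration (not an exact wire trace), a single COMPOUND request that walks from the root of the server's file tree to an object and retrieves its attributes might carry a sequence of operations such as the following; the directory names are hypothetical:
PUTROOTFH            # set the current file handle to the root of the server's file tree
LOOKUP "export"      # descend into the export directory
LOOKUP "project"     # descend one level further
GETFH                # return the file handle of the object reached
GETATTR              # return the attributes of that object
Because the server processes the operations sequentially, a failure in any operation stops evaluation of the remaining operations in the COMPOUND.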
user@domain or group@domain
where:
user     specifies the string representation of the user
group    specifies the string representation of the group
domain   specifies a registered DNS domain or a sub-domain of a registered domain
However, UNIX systems use integers to represent users and groups in the underlying filesystems stored on disk. As a result, using string identifiers requires mapping string names to integers and back.
as if they were part of the local filesystem. Exporting of directories can be disabled by using the -u option of the exportfs command. For information on how to share directories with NFS clients, see “Sharing Directories with NFS Clients” (page 21). For information on how to unshare directories, see “Unsharing (Removing) a Shared Directory” (page 32).
• Mounting an NFS filesystem through a firewall For information on how to mount an NFS filesystem through a firewall, see “Accessing Shared NFS Directories across a Firewall” (page 28). • Mounting a filesystem securely For information on how to mount a filesystem in a secure manner, see “An Example for Securely Mounting a directory” (page 38). For information on how to disable mount access for a single client, see “Unmounting (Removing) a Mounted Directory” (page 38).
Consider the following points before enabling client-side failover:
• The filesystem must be mounted with read-only permissions.
• The filesystems must be identical on all the redundant servers for the failover to occur successfully. For information on identical filesystems, see “Replicated Filesystems” (page 16).
• The filesystem should be static or modified only rarely.
• Filesystems that are mounted using CacheFS are not supported for use with failover.
2 Configuring and Administering NFS Services This chapter describes how to configure and administer an HP-UX system as an NFS server or an NFS client, using the command-line interface. An NFS server exports or shares its local filesystems and directories with NFS clients. An NFS client mounts the files and directories exported or shared by the NFS servers. NFS-mounted directories and filesystems appear as a part of the NFS client’s local filesystem.
You can set user and group IDs using the following methods:
• Using the HP-UX System Files (/etc/passwd and /etc/group)
• Using NIS
• Using LDAP
Using the HP-UX System Files
If you are using the HP-UX system files, add the users and groups to the /etc/passwd and /etc/group files, respectively. Copy these files to all the systems on the network. For more information on the /etc/passwd and /etc/group files, see passwd(4) and group(4).
Using LDAP
For more information on managing user profiles using LDAP, see the LDAP-UX Client Services B.04.00 Administrator’s Guide (J4269-90064).
Configuring and Administering an NFS Server
Configuring an NFS server involves completing the following tasks:
1. Identify the set of directories that you want the NFS server to share. For example, consider an application App1 running on an NFS client. Application App1 requires access to the abc filesystem on an NFS server.
Table 3 NFS Server Daemons (continued)
Daemon Name   Function
nfsmapid      Maps NFSv4 owner and owner_group identification attributes to and from the local UID and GID numbers used by both the NFSv4 client and server.
nfs4srvkd     Supports server-side delegation.
rpc.lockd     Supports record lock and share lock operations on NFS files.
rpc.statd     Maintains a list of clients that have performed file locking operations over NFS against the server.
ps -ae | grep rpc.lockd
ps -ae | grep rpc.statd
If the daemons are running, an output similar to the following is displayed:
1069548396 ?  0:00 rpc.lockd
1069640883 ?  0:00 rpc.statd
No message is displayed if the daemons are not running. To start the lockd and statd daemons, enter the following command:
/sbin/init.d/lockmgr start
4. Enter the following command to run the NFS startup script:
/sbin/init.d/nfs.
NOTE: Use the bdf command to determine whether your filesystems are on different disks or logical volumes. Each entry in the bdf output represents a separate disk or volume that requires its own entry in the /etc/dfs/dfstab file, if shared. For more information on the bdf command, see bdf(1M). • When you share a directory, the share options that restrict access to a shared directory are applied, in addition to the regular HP-UX permissions on that directory.
nfs   Specifies that the filesystem type is NFS.
-o    Enables you to use some of the specific options of the share command, such as sec, async, public, and others.
-d    Enables you to describe the filesystem being shared.
When NFS is restarted or the system is rebooted, the /etc/dfs/dfstab file is read and all directories are shared automatically.
2.
share -F nfs -o root=Red:Blue:Green /var/mail In this example, the /var/mail directory is shared. Root access is allowed for clients Red, Blue, and Green. Superusers on all other clients are considered as unknown by the NFS server, and are given the access privileges of an anonymous user. Non-superusers on all clients are allowed read-write access to the /var/mail directory if the HP-UX permissions on the /var/mail directory allow them read-write access.
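To share a directory persistently across reboots, you can place the same share command line in the /etc/dfs/dfstab file. The following is a minimal sketch using the /var/mail example above with hypothetical client names; the colon-separated access-list syntax is assumed (see share_nfs(1M)):
# /etc/dfs/dfstab entry (illustrative)
share -F nfs -o ro=Red:Blue,root=Red -d "mail spool" /var/mail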
Table 4 Security Modes of the share command (continued)
Security Mode   Description
krb5p           Uses Kerberos V5 authentication, integrity checking, and privacy protection (encryption) on the shared filesystems.
none            Uses NULL authentication (AUTH_NONE). NFS clients using AUTH_NONE are mapped to the anonymous user nobody by NFS.
You can combine the different security modes. However, each security mode specified must be supported by the client.
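For example, a shared directory might accept both Kerberos V5 and AUTH_SYS clients. The following is a sketch only; the path is hypothetical, and the colon-separated list syntax for sec= is assumed (see share_nfs(1M)):
share -F nfs -o sec=krb5:sys /export/project
Clients that mount with sec=krb5 are authenticated through Kerberos, while clients using the default AUTH_SYS credentials are still allowed access.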
3. Add a principal for each NFS server to the Kerberos database. For example, if the NFS server is onc20.ind.hp.com, the nfs/onc20.ind.hp.com principal must be added to the Kerberos database before running the NFS applications. To add principals, use the Kerberos administration tool, kadminl:
onc52# /opt/krb5/admin/kadminl
Connecting as: K/M
Connected to krb5v01 in realm ONC52.IND.HP.COM.
Command: add nfs/onc20.ind.hp.
# share -o sec=krb5i /aaa
share_nfs: Invalid security mode "krb5i"
Secure NFS Client Configuration with Kerberos
To set up a secure NFS client using Kerberos, follow these steps:
1. Synchronize the date and time of the server nodes with the Kerberos server. To change the current date and time, use the date command followed by the new date and time. For example, enter date 06101130 to set the date to June 10th and the time to 11:30 AM. The time difference between the systems must not be more than 5 minutes.
onc36# gsscred -m krb5_mech -a
7. To mount a secure NFS filesystem, enter the mount command with the -o sec= option, followed by the location of the directory to mount:
mount -o sec=
where:
-o    Enables you to use some of the specific options of the share command, such as sec, async, public, and others.
sec   Enables you to specify the security mode to be used. Specify krb5, krb5p, or krb5i as the security flavor.
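For example, assuming the shared directory /export/secure and the local mount-point /secure_mnt (both hypothetical) on the server onc20.ind.hp.com used earlier, a Kerberos-protected mount might look like this:
mount -o sec=krb5 onc20.ind.hp.com:/export/secure /secure_mnt
The user accessing the mount must hold a valid Kerberos ticket (for example, obtained with kinit) for the NFS requests to be authenticated.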
Sharing directories across a firewall without fixed port numbers (NFSv2 and NFSv3) This is the default method of sharing directories across a firewall. In this method, the rpc.statd and rpc.mountd daemons do not run on fixed ports. The ports used by these daemons are assigned from the anonymous port range. By default, the anonymous port range is configured between 49152 and 65535. The rpc.lockd daemon runs at port 4045 and is not configurable. To determine the port numbers currently used by rpc.
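One way to inspect the ports that the RPC daemons are currently registered on is the rpcinfo command; the host name below is a placeholder:
rpcinfo -p servername
The output lists the program number, version, protocol, port, and service name for every RPC service registered with rpcbind on that host.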
/sbin/init.d/nfs.server stop
/sbin/init.d/lockmgr stop
/sbin/init.d/lockmgr start
/sbin/init.d/nfs.server start
3. Configure the firewall based on the port numbers configured.
Sharing directories across a firewall using the NFSv4 protocol
NFSv4 is a single protocol that handles mounting and locking operations for NFS clients and servers. The NFSv4 protocol runs on port 2049, by default.
Removing the additional overhead of the PORTMAP and MOUNT protocols reduces the binding time between the client and the server. The WebNFS protocol reduces the number of over-the-wire requests and makes traversing firewalls easier. WebNFS offers no support for locking files across mounted filesystems. Hence, multiple clients cannot synchronize their locking calls across WebNFS mounted filesystems.
Unsharing (Removing) a Shared Directory NOTE: Before you unshare a directory, run the showmount -a command to verify whether any clients are accessing the shared directory. If users are accessing the shared directories, they must exit the directories before you unshare the directory. A directory that is shared can be unshared. You can temporarily unshare a directory using the unshare command.
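For example, to check for active clients and then temporarily unshare a directory (the path is hypothetical), you might enter:
showmount -a
unshare -F nfs /export/project
Because the /etc/dfs/dfstab file is reread when NFS is restarted, also remove or comment out the corresponding share entry in that file if you want the directory to remain unshared after a reboot.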
procedure calls that enable the client to transparently access the filesystem on the server’s disk. To users, these mounted remote directories appear as if they are a part of the local filesystem. An NFS client can also be an NFS server. NFS filesystems can also be automounted using AutoFS. For information on how to automount a filesystem, see Chapter 3: “Configuring and Administering AutoFS” (page 47).
To configure the NFS client to enable it to mount filesystems using protocol version 4 (NFSv4), follow this step: Set the value of the NFS_CLIENT_VERSMAX variable to 4 in the /etc/default/nfs file, as follows: NFS_CLIENT_VERSMAX = 4 The NFS_CLIENT_VERSMAX variable specifies the maximum version of the NFS protocol for communication. You can also configure the client protocol version to NFSv4 by specifying vers=4 while mounting the directory.
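For example, a one-time NFSv4 mount of a shared directory (the server name and paths are illustrative) might look like this:
mount -o vers=4 thyme:/export/project /project
Without the vers option, the client negotiates the highest protocol version allowed by NFS_CLIENT_VERSMAX that the server also supports.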
The NFS client startup script starts the necessary NFS client daemons, and mounts the remote directories configured in the /etc/fstab file. Mounting Remote Directories The mount command mounts a shared NFS directory from a remote system (NFS server). You can mount a filesystem using the following methods: • Automatic Mounting at System Boot time To set up a filesystem for automatic mounting at system boot time, you must configure it in the /etc/fstab file.
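A minimal sketch of such an /etc/fstab entry follows; the server name, paths, and options are illustrative (see fstab(4) for the field layout):
# device                      directory   type   options      backup   pass
thyme:/export/project/test    /test       nfs    rw,nosuid    0        0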
1. To mount a remote directory manually, enter the following command:
mount server:remote_directory local_directory
2.
Figure 4 NFS Mount of manpages
• Mounting a Home directory
mount -r -o nosuid broccoli:/home/broccoli /home/broccoli
mount -r -o nosuid cauliflower:/home/cauliflower /home/cauliflower
In this example, the NFS client mounts the home directories from NFS servers broccoli and cauliflower. The nosuid option prevents programs with setuid permission from executing on the local client. Figure 5 illustrates this example.
• Mounting a replicated set of NFS filesystems with same pathnames mount -r onc21,onc23,onc25:/Casey/Clay /Casey/Clay In this example, the NFS client mounts a single filesystem, /Casey/Clay that has been replicated to a number of servers with the same pathnames. This enables the NFS client to failover to either server onc21, onc23, or onc25 if the current server has become unavailable.
NOTE: Before you unmount a directory, run the fuser -cu command to determine whether the directory is currently in use. The fuser command lists the process IDs and user names of all the processes that are using the mounted directory. If users are accessing the mounted directories, they must exit the directories before you unmount the directory. To unmount a mounted directory and prevent it from being automatically mounted, follow these steps: Automatic Unmount 1.
NFS Client and Server Transport Connections NFS runs over both UDP and TCP transport protocols. The default transport protocol is TCP. Using the TCP protocol increases the reliability of NFS filesystems working across WANs and ensures that the packets are successfully delivered. TCP provides congestion control and error recovery. NFS over TCP and UDP works with NFS Version 2 and Version 3. NOTE: TCP is the only transport protocol supported by NFS Version 4.
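For example, to force a Version 3 mount over UDP on a low-loss local network, you might specify the transport explicitly. This is a sketch only: the proto option name is an assumption, so verify it against mount_nfs(1M) on your release, and the server and paths are illustrative:
mount -o vers=3,proto=udp thyme:/export/project /project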
Changes to the NFS Server Daemon The NFS server daemon (nfsd) handles client filesystem requests. By default, nfsd starts over TCP and UDP for NFSv2 and NFSv3. If NFSv4 is enabled, the nfsd daemon is started to service all TCP and UDP requests. If you want to change startup parameters for nfsd, you must log in as superuser (root) and make changes to the /etc/default/nfs file or use the setoncenv command. The /etc/default/nfs file provides startup parameters for the nfsd and rpc.lockd daemons.
The NIS_domain field specifies the NIS domain in which the triple (host, user, NIS_domain) is valid. For example, if the netgroup database contains the following netgroup:
myfriends (sage,-,bldg1) (cauliflower,-,bldg2) (pear,-,bldg3)
and an NFS server running NIS in the domain bldg1 shares a directory only with the netgroup myfriends, only the host sage can mount that directory. The other two triples are ignored, because they are not valid in the bldg1 domain.
The first occurrence is read for the host name, and the second occurrence is read for the user name. No relationship exists between the host and the user in any of the triples. For example, user jane may not even have an account on host sage. A netgroup can contain other netgroups, as in the following example:
root-users (dill,-, ) (sage,-, ) (thyme,-, ) (basil,-, )
mail-users (rosemary, , ) (oregano, , ) root-users
The root-users netgroup is a group of four systems.
vandals ( ,pat, ) ( ,harriet, ) ( ,reed, )
All users except those listed in the vandals netgroup can log in to the local system without supplying a password from any system in the network.
CAUTION: Users who are denied privileged access in the /etc/hosts.equiv file can be granted privileged access in a user’s $HOME/.rhosts file. The $HOME/.rhosts file is read after the /etc/hosts.equiv file and overrides it. For more information, see hosts.equiv(4).
teddybears::23:pooh,paddington -@bears For more information on NIS, see NIS Administrator’s Guide (5991-2187). For information on the /etc/group file, see group(4). Configuring RPC-based Services This section describes the following tasks: • “Enabling Other RPC Services” (page 45) • “Restricting Access to RPC-based Services” (page 46) Enabling Other RPC Services 1. In the /etc/inetd.conf file, use a text editor to uncomment the entries that begin with “rpc” .
Table 9 RPC Services managed by inetd (continued) RPC Service Description sprayd The rpc.sprayd program is the server for the spray command, which sends a stream of packets to a specified host and then reports how many were received and how fast. For more information, see sprayd (1M) and spray (1M). rquotad The rpc.rquotad program responds to requests from the quota command, which displays information about a user’s disk usage and limits. For more information, see rquotad (1M) and quota (1).
3 Configuring and Administering AutoFS This chapter provides an overview of AutoFS and the AutoFS environment. It also describes how to configure and administer AutoFS on a system running HP-UX 11i v3.
mounts. The filesystem interacts with the automount command and the automountd daemon to mount filesystems automatically. The automount Command This command installs the AutoFS mount-points, and associates an automount map with each mount-point. The AutoFS filesystem monitors attempts to access directories within it and notifies the automountd daemon. The daemon locates a filesystem using the map, and then mounts this filesystem at the point of reference within the AutoFS filesystem.
when not in use, use the -t option of the automount command. For more information on the different options supported by automount, see automount(1M) and automountd(1M). CAUTION: You must maintain filesystems managed by AutoFS, by using the automountd and automount utilities. Manually mounting and unmounting file systems managed by AutoFS can cause disruptive or unpredictable results, including but not limited to commands hanging or not returning expected results.
Figure 7 AutoFS In this figure, AFS1, AFS2, and Sage are AutoFS clients. Thyme and Basil are the NFS servers. The NFS servers export directories. The AutoFS clients use maps to access the exported directories. For instance, the NFS server, Basil, exports the /export directory. If you are a user on any of the AutoFS clients, you can use maps to access the /export directory from the NFS server, Basil.
Features This section discusses the features that AutoFS supports on systems running HP-UX 11i v3. On-Demand Mounting In HP-UX 11i v3, the filesystems being accessed are mounted automatically. Filesystems that are hierarchically related to the automounted filesystems are mounted only when necessary. Consider the following scenario where the AutoFS master map, /etc/auto_master, and the direct map, /etc/auto_direct, are on the NFS client, sage.
# /etc/auto_master file
# local mount-point    map name              mount options
/nfs/desktop           /etc/auto_indirect

Following are the contents of the indirect map, /etc/auto_indirect, which contains the local mount-points on the client and the references to the directories on the server:

# /etc/auto_indirect file
# local mount-point    mount options    remote server:directory
/test                  -nosuid          thyme:/export/project/test
/apps                  -nosuid          basil:/export/apps

Enter the following commands to view the contents of the /nfs/desktop dir
Supported Filesystems
AutoFS enables you to mount different types of filesystems. To mount the filesystems, use the fstype mount option, and specify the location field of the map entry. Following is a list of supported filesystems and the appropriate map entry:
AutoFS    mount-point  -fstype=autofs  autofs_map_name
NOTE: You can specify another AutoFS map name in the location field of the map-entry. This would enable AutoFS to trigger other AutoFS mounts.
document, go to the HP-UX Networking docs page at: http://www.hp.com/go/ hpux-networking-docs. On this page, select HP-UX 11i v3 Networking Software. • LDAP: A directory service that stores information, which is retrieved by clients throughout the network. To simplify HP-UX system administration, the LDAP-UX Integration product centralizes user, group, and network information management in an LDAP directory. For more information on the LDAP-UX Integration product, see the LDAP-UX Client Services B.04.
1. If the AutoFS maps are not already migrated, migrate your AutoFS maps to LDAP Directory Interchange Format (LDIF) files using the migration scripts. The LDAP-UX client only supports the new AutoFS schema, and the migration scripts will migrate the maps according to the new schema. For information on the specific migration scripts, see LDAP-UX Client Services B.04.10 Administrator’s Guide (J4269-90067). 2. Import the LDIF files into the LDAP directory server using the ldapmodify tool.
1. In the /etc/rc.config.d/nfsconf file, the AUTOFS variable is set to 1.
2. Any options you had specified in the AUTO_OPTIONS variable are copied to either the AUTOMOUNT_OPTIONS or the AUTOMOUNTD_OPTIONS variable. Obsolete options are removed. Table 11 lists the options of the old automount command and the equivalent AutoFS command options. It also indicates which automount options are not supported in AutoFS.
Table 11 Old Automount Command-Line Options Used By AutoFS
3.
Table 12 AutoFS Configuration Variables
Variable Name   Description
AUTOFS          Specifies whether the system uses AutoFS. Set the value to 1 to specify that this system uses AutoFS. Set the value to 0 to specify that this system does not use AutoFS. The default value of AUTOFS is 1.
                NOTE: If you set the value of AUTOFS to 1, the NFS_CORE core configuration variable must also be set to 1.
AUTOMOUNT_OPTIONS   Specifies a set of options to be passed to the automount command when it is run. The default value is "" (empty).
If the AUTOMOUNT_OPTIONS variable does not specify the -f filename option, AutoFS consults the NSS configuration to determine where to search for the AutoFS master map. For more information on configuring the NSS, see nsswitch.conf(4) and automount(1M).
To configure AutoFS using the /etc/default/autofs file, follow these steps:
1. Log in as superuser.
2. Edit the /etc/default/autofs file.
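A minimal sketch of the /etc/default/autofs file follows; the values shown are illustrative only (-t sets the idle unmount timer described earlier, and -v enables verbose logging):
# /etc/default/autofs (illustrative values)
AUTOMOUNT_OPTIONS="-t 600"
AUTOMOUNTD_OPTIONS="-v"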
If the first field specifies the directory as /-, then the second field is the name of the direct map. The master map file, like any other map file, may be distributed by NIS or LDAP by modifying the appropriate configuration files and removing any existing /etc/auto_master master map file. NOTE: If the same mount-point is used in two entries, the first entry is used by the automount command. The second entry is ignored. You must run the automount command after you modify the master map or a direct map.
Figure 9 shows the difference between direct mounts and indirect mounts on an NFS client. Figure 9 Difference Between Direct Mounts and Indirect Mounts In the Mounts in a Direct Map figure, mounts are configured in various places in the local filesystem and not located under the same parent directory. In the Mounts in an Indirect Map figure, all the mounts are configured under the same parent directory. CAUTION: Any filesystems that are being managed by AutoFS should never be manually mounted or unmounted.
IMPORTANT: Do not automount a remote directory on a local directory, which is a symbolic link. Ensure that the local mount-point specified in the AutoFS map entry is different from the exported directory on the NFS server. If it is the same, and the NFS server also acts as an NFS client and uses AutoFS with these map entries, the exported directory can attempt to mount over itself. This can result in unexpected behavior.
# /etc/auto_direct file
# local mount-point      mount options    remote server:directory
/auto/project/specs      -nosuid          thyme:/export/project/specs
/auto/project/budget     -nosuid          basil:/export/FY94/proj1

Figure 10 illustrates how AutoFS sets up direct mounts.
Figure 10 How AutoFS Sets Up Direct Mounts
Automounting a Remote Directory Using an Indirect Map
This section describes how to automount a remote directory using an indirect map. To automount a remote directory using an indirect map, follow these steps:
1.
IMPORTANT: Ensure that local_parent_directory and local_subdirectory are not already created. AutoFS creates them when it mounts the remote directory. If these directories already exist, the files and directories in them are hidden when the remote directory is mounted. Ensure that the local mount-point specified in the AutoFS map entry is different from the exported directory on the NFS server.
# /etc/auto_desktop file
# local mount-point    mount options    remote server:directory
draw                   -nosuid          thyme:/export/apps/draw
write                  -nosuid          basil:/export/write

Figure 11 illustrates how AutoFS sets up indirect mounts.
Figure 11 How AutoFS Sets Up NFS Indirect Mounts
Using Environment Variables as Shortcuts in AutoFS Maps
This section describes how to use environment variables as shortcuts in AutoFS maps using an example.
/etc/default/autofs file, AutoFS uses the current value of the environment variable on the local host. For information on some of the pre-defined environment variables, see automount(1M). Using Wildcard Characters as Shortcuts in AutoFS Maps Using wildcard characters makes it very easy to mount all the directories from a remote server to an identically named directory on the local host.
The line with the asterisk must be the last line in an indirect map. AutoFS reads the lines in the indirect map sequentially until it finds a match for the requested local subdirectory. Because the asterisk (*) matches any subdirectory, AutoFS stops reading at the line with the asterisk. For example, if the /etc/auto_home map contains the following lines,
*        basil:/export/home/&
charlie  thyme:/export/home/charlie
AutoFS attempts to mount /export/home/charlie from the host, basil.
AutoFS mounts /export/home/howard from server basil to the local mount-point /home/howard on the NFS client. Figure 12 illustrates this configuration. Figure 12 Home Directories Automounted with Wildcards Special Maps There are two types of special maps: -hosts and -null. By default, the -hosts map is used with the /net directory and assumes that the map entry is the hostname of the NFS server. The automountd daemon dynamically creates a map entry from the server's list of exported filesystems.
Notes on the -hosts Map The -hosts map is a built-in AutoFS map. It enables AutoFS to mount exported directories from any NFS server found in the hosts database, whenever a user or a process requests access to one of the exported directories from that server. CAUTION: You may inadvertently cause an NFS mount over X.25 or SLIP, which is unsupported, or through a slow router or gateway, because the -hosts map allows NFS access to any reachable remote system.
Turning Off an AutoFS Map Using the -null Map To turn off a map using the -null map, follow these steps: 1. Add a line with the following syntax in the AutoFS master map: local_directory -null 2. If AutoFS is running, enter the following command on each client that uses the map, to force AutoFS to reread its maps: /usr/sbin/automount This enables AutoFS to ignore the map entry that does not apply to your host.
• “Including a Map in Another Map” (page 72)
• “Creating a Hierarchy of AutoFS Maps” (page 72)
Automounting Multiple Directories (Hierarchical Mounts)
AutoFS enables you to automount multiple directories simultaneously. Use an editor to create an entry with the following format in a direct or indirect map, and if needed, create the auto_master entry:
local_dir    /local_subdirectory [-options] server:remote_directory \
             /local_subdirectory [-options] server:remote_directory \
             ...
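For example, an indirect-map entry that mounts two subdirectories under a single key might look like the following sketch, reusing the example servers from this chapter; the key name and options are illustrative:
apps    /draw    -nosuid   thyme:/export/apps/draw \
        /write   -nosuid   basil:/export/write
When a user first references the apps directory, AutoFS mounts both draw and write beneath it as a single hierarchy.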
To configure multiple replicated servers for a directory, follow these steps: 1. Create and configure the /etc/netmasks file. AutoFS requires the /etc/netmasks file to determine the subnets of local clients in a replicated multiple server environment. The /etc/netmasks file contains IP address masks with IP network numbers. It supports both standard subnetting as specified in RFC-950, and variable-length subnetting as specified in RFC-1519.
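A sketch of /etc/netmasks entries follows; the network numbers and masks are hypothetical and must match your own subnet layout:
# network number    netmask
192.6.1.0           255.255.255.0
15.0.0.0            255.255.248.0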
Including a Map in Another Map If you want your map to refer to an external map, you can do so by including the external map in your map. The entries in the external map are read as if they are part of your map.
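For example, an indirect map can pull in the entries of another map by naming it with a leading plus sign (+); the included map name and servers below are hypothetical:
# /etc/auto_home (sketch)
charlie          thyme:/export/home/charlie
+auto_home_corp
*                basil:/export/home/&
AutoFS reads the entries of auto_home_corp at that point, as if they were written directly in /etc/auto_home.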
Starting with the AutoFS mount at /org, the evaluation of this path dynamically creates additional AutoFS mounts at /org/eng and /org/eng/projects. No action is required for the changes to take effect on the user's system because the AutoFS mounts are created only when required. You need to run the automount command only when you make changes to the master map or to a direct map.
# /etc/auto_desktop file
# local mount-point    mount options    remote server:directory
draw                   -nosuid          thyme:/export/apps/draw
write                  -nosuid          basil:/export/write

Enter the following commands:
cd /nfs/desktop
ls
The ls command displays the following output:
draw write
The draw and write subdirectories are the potential mount-points (browsability), but are not currently mounted.
IMPORTANT: Do not stop the automountd daemon with the kill command. It does not unmount AutoFS mount-points before it terminates. Use the autofs stop command instead. 5. To ensure that AutoFS is no longer active, enter the ps command: /usr/bin/ps -ef | grep automount If the ps command indicates that AutoFS is still active, ensure that all users have exited the automounted directories and then try again. Do not restart AutoFS until all the automount processes are terminated. 6.
To Stop AutoFS Logging To stop AutoFS logging, stop AutoFS and restart it after removing the “-v” option from AUTOMOUNTD_OPTIONS. AutoFS Tracing AutoFS supports the following Trace levels: Detailed (level 3) Includes traces of all the AutoFS requests and replies, mount attempts, timeouts, and unmount attempts. You can start level 3 tracing while AutoFS is running. Basic (level 1) Includes traces of all the AutoFS requests and replies. You must restart AutoFS to start level 1 tracing.
5. Warn users to exit the directory, and kill processes that are using the directory, or wait until all the processes terminate. Enter the following command to kill all the processes using the mounted directory:
/usr/sbin/fuser -ck local_mount_point
6. Enter the following command to stop AutoFS:
/sbin/init.d/autofs stop
CAUTION: Do not kill the automountd daemon with the kill command. It does not unmount AutoFS mount-points before it terminates.
7.
Unmount Event Tracing Output The general format of an unmount event trace is: UNMOUNT REQUEST:
4 Configuring and Administering a Cache Filesystem This chapter introduces the Cache Filesystem (CacheFS) and the CacheFS environment. It also describes how to configure and administer CacheFS on a system running HP-UX 11i v3.
cache miss An attempt to reference data that is not yet cached is called a cache miss. warm cache A cache that contains data in its front filesystem is called a warm cache. In this case, the cached data can be returned to the user without requiring an action from the back filesystem. cache hit A successful attempt to reference data that is cached is called a cache hit. How CacheFS Works Figure 15 displays a sample CacheFS environment.
of these files are always available in the cache. Packing files and directories in the cache enables you to have greater control over the cache contents. The functionality provided by this command is an alternative to the rpages mount option. For information on how to pre-load or pack files, see “Packing a Cached Filesystem” (page 86). • Complete Binary Caching via the “rpages” mount option CacheFS is commonly used to manage application binaries.
• Support for ACLs An Access Control List (ACL) offers stronger file security by enabling the owner of the file to define file permissions for specific users and groups. This version of CacheFS on HP-UX supports ACLs with VxFS and NFS and not with HFS. • Support for Logging A new command, cachefslog, is used to enable or disable logging for a CacheFS mount-point. If logging functionality is enabled, details about the operations performed on the CacheFS mount-point are stored in a logfile.
CacheFS allows more than one filesystem to be cached in the same cache. You need not create a separate cache directory for each CacheFS mount. Mounting an NFS Filesystem Using CacheFS This section describes how to mount an NFS filesystem using CacheFS.
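A minimal sketch of creating a cache and mounting an NFS filesystem through it follows; the server name and shared path are hypothetical, while the cache directory and mount-point match the examples used later in this chapter:
cfsadmin -c /cache
mount -F cachefs -o backfstype=nfs,cachedir=/cache server01:/export/data /cfs_mnt1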
To change the mount option from default to weakconst after unmounting, enter the following command: mount -F cachefs -o backfstype=nfs,cachedir=/cache,weakconst CFS2:/tmp /mnt1 For more information on the various mount options of the CacheFS filesystem, see mount_cachefs(1M). Automounting a Filesystem Using CacheFS This section describes how to automount a filesystem using CacheFS. Before you automount an NFS filesystem using CacheFS, you must configure a directory in a local filesystem as cache.
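When AutoFS performs the mount, the CacheFS options are passed through the map entry. The following indirect-map entry is a sketch only; the key, server, and paths are hypothetical:
data    -fstype=cachefs,backfstype=nfs,cachedir=/cache    server01:/export/data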
NOTE: When multiple mount-points use the same cache directory, enabling logging for one CacheFS mount-point automatically enables logging for all the other mount-points. 3. To verify if logging is enabled for /cfs_mnt1, enter the following command: cachefslog /cfs_mnt1 If logging has been enabled, the logfile is displayed. Disabling Logging in CacheFS You can use the cachefslog command to halt or disable logging for a CacheFS mount-point. To disable CacheFS logging, follow these steps: 1. 2.
Packing a Cached Filesystem Starting with HP-UX 11i v3, the cachefspack command is introduced to provide greater control over the cache. This command enables you to specify files and directories to be loaded, or packed, in the cache. It also ensures that the current copies are always available in the cache.
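For example, to pack a directory of application binaries that are already available through a CacheFS mount-point (the path is hypothetical), you might enter:
cachefspack -p /cfs_mnt1/apps/bin
The -p option (the default behavior) packs the named files or directories, so CacheFS keeps current copies of them in the cache.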
You can unpack files that you no longer require, using one of the following methods:
• Using the -u option
To unpack a specific packed file or files from the cache directory, enter the following command:
cachefspack -u filename
where:
-u         Specifies that certain files are to be unpacked.
filename   Specifies the file to unpack.
Checking the Integrity of a Cache You can use the fsck command to check the integrity of a cache. The CacheFS version of the fsck command checks the integrity of the cache and automatically corrects any CacheFS problems that it encounters. The CacheFS fsck command is run automatically by the mount command when the cache directory is accessed for the first time after a system reboot.
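To check a cache manually, run the CacheFS version of fsck against the cache directory; using the cache directory from the earlier examples:
fsck -F cachefs /cache
Any inconsistencies found in the cache are corrected automatically.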
cfsadmin -c /cache
cfsadmin -u -o maxblocks=95 /cache
This updates the value of maxblocks from the default value of 90 to 95.
Deleting a Cache Directory
To delete a cache directory that is no longer required, you must use the cfsadmin command. The syntax to delete the cache directory is as follows:
cfsadmin -d {cacheID | all} cache-directory
where:
cacheID   Specifies the name of the cache filesystem.
all       Specifies that all cached filesystems in the cache-directory are to be deleted.
cfsadmin: list cache FS information
  maxblocks     90%
  minblocks      0%
  threshblocks  85%
  maxfiles      91%
  minfiles       0%
  threshfiles   85%
  maxfilesize    3MB
  srv01:_tmp:_mnt1
In this example, the filesystem with cache ID srv01:_tmp:_mnt has been deleted.
An output similar to the following is displayed:
/cfs_mnt1
          end size:          40k
          high water size:   40k

total for cache
          initial size:     640k
          end size:          40k

To view the operations performed in ASCII format, enter the following command:
cachefswssize -a /tmp/logfile
An output similar to the following is displayed:
1/0 0:00 0 Mount    e000000245e31080 411 65536 256 /cfs_mnt1 (cachefs1:_cache_exp:_cfs_mnt1)
1/0 0:00 0 Mdcreate e000000245e31080 517500609495040 1
1/0 0:00 0 Filldir  e000000245e31080 517500609495040
Table 16 Common Error Messages encountered while using the fsck command
Error Message: "fsck -F cachefs Cannot open lock file /test/c/.cfs_lock"
Possible Cause: This error message indicates that /c is not a cache directory.
Resolution: 1. Delete the cache. 2. Recreate the cache directory using the cfsadmin command.

Error Message: "Cache /c is in use and cannot be modified"
Possible Cause: This error message indicates that the cache directory is in use. However, the
Resolution: 1. Check if a directory named /c exists
5 Troubleshooting NFS Services This chapter describes tools and procedures for troubleshooting the NFS Services. This chapter addresses the following topics: • “Common Problems with NFS” (page 93) • “Performance Tuning” (page 99) • “Logging and Tracing of NFS Services” (page 101) Common Problems with NFS This section lists the following common problems encountered with NFS and suggests ways to correct them.
◦ rpc.statd
◦ rpc.lockd
If any of these processes is not running, follow these steps:
1. Make sure the /etc/rc.config.d/nfsconf file on the NFS server contains the following lines:
NFS_SERVER=1
START_MOUNTD=1
2. Enter the following command on the NFS server to start all the necessary NFS processes:
/sbin/init.d/nfs.server start
□ Enter the following command on the NFS client to make sure the rpc.
If the server is not exporting the directory, edit the /etc/dfs/dfstab file on the server so that it allows your NFS client access to the directory. Then, enter the following command to force the server to read its /etc/dfs/dfstab file. shareall -F nfs If the directory is shared with the [access_list] option, make sure your NFS client is included in the [access_list], either individually or as a member of a netgroup.
The fuser(1M) command will return a list of process IDs and user names that are currently using the directory mounted under local_mount_point. This will help you decide whether to kill the processes or wait for them to complete. 2. To kill all processes using the mounted directory, enter the following command: /usr/sbin/fuser -ck local_mount_point 3. Try again to unmount the directory. □ Verify that the filesystem you are trying to unmount is not a mount-point of another filesystem.
/usr/sbin/mount 2. For every NFS-mounted directory listed by the mount command, enter the following command to determine whether the directory is currently in use: /usr/sbin/fuser -cu local_mount_point This command lists the process IDs and user names of everyone using the mounted directory. 3. Warn any users to cd out of the directory, and kill any processes that are using the directory, or wait until the processes terminate.
4. 5. Before retrying the mount that caused the program to hang, wait for a short while, say two minutes. If the problem persists, restart rpc.statd and rpc.lockd daemons and enable tracing. Data is Lost Between the Client and the Server □ Make sure that the directory is not exported from the server with the async option. If the directory is exported with the async option, the NFS server will acknowledge NFS writes before actually writing data to disk.
• Improper krb5.conf
This could be because the realm-to-domain mapping is not set in either the server’s or the client’s configuration file (krb5.conf). To fix the krb5.conf file for proper domain name to realm mapping, modify the file based on the following sample:
#
# Kerberos configuration
# This krb5.conf file is intended as an example only.
# see krb5.
nfsstat -rc
2. If the timeout and retrans values displayed by nfsstat -rc are high, but the badxid value is close to zero, packets are being dropped before they get to the NFS server. Try decreasing the values of the wsize and rsize mount options to 4096 or 2048 on the NFS clients. See “Changing the Default Mount Options” (page 38). See Installing and Administering LAN/9000 Software for information on troubleshooting LAN problems.
3.
request to complete before issuing another request. This can be performed only for NFSv2. The default option for NFSv3 is async. Improving NFS Client Performance □ For files and directories that are mounted read-only and never change, set the actimeo mount option to 120 or greater in the /etc/fstab file on your NFS clients. □ If you see several “server not responding” messages within a few minutes, try doubling the value of the timeo mount option in the /etc/fstab file on your NFS clients.
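As an illustration, an /etc/fstab entry that combines the tuning options discussed above might look like the following; the server, paths, and values are hypothetical and should be adjusted to your own measurements:
thyme:/export/tools   /opt/tools   nfs   ro,actimeo=120,timeo=600,rsize=4096,wsize=4096   0   0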
• rpc.rwalld • rpc.sprayd Logging is not available for the rpc.quotad daemon. Each message logged by these daemons can be identified by the date, time, host name, process ID, and name of the function that generated the message. You can direct logging messages from all these NFS services to the same file. To Control the Size of LogFiles Logfiles grow without bound, using up disk space. You might want to create a cron job to truncate your logfiles regularly.
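As a sketch, a crontab entry that empties a logfile every Sunday at 2:00 a.m. might look like the following; the logfile path is hypothetical and should match the path you configured for your NFS daemons:
0 2 * * 0 /usr/bin/cp /dev/null /var/adm/rpc_logfile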
configuration file, /etc/nettlgen.conf. If the file /var/adm/nettl.LOG00 does not exist on your system, the default logfile may have been changed in /etc/nettlgen.conf. For more information, see nettl(1M) and netfmt(1M). Tracing With nettl and netfmt 1. Enter the following command to make sure nettl is running: /usr/bin/ps -ef | grep nettl If nettl is not running, enter the following command to start it: /usr/sbin/nettl -start 2.