NFS Services Administrator’s Guide HP-UX 11i version 3 Manufacturing Part Number : B1031-90061 March 2007 © Copyright 2007 Hewlett-Packard Development Company, L.P.
Legal Notices Copyright 2007 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license required from HP for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s standard commercial license. The information contained herein is subject to change without notice.
Preface: About This Document The latest version of this document can be found online at: http://www.docs.hp.com This document describes how to configure and troubleshoot NFS Services on HP-UX 11i v3. The document printing date and part number indicate the document’s current edition. The printing date will change when a new edition is printed. Minor changes may be made at reprint without changing the printing date. The document part number will change when extensive changes are made.
Table 1 Publishing History Details (Continued)

Document Manufacturing    Operating Systems           Publication
Part Number               Supported                   Date
B1031-90051               11.0, 11i v1, 1.5, 1.6      August, 2003
B1031-90052               11.0, 11i v1, 1.5, 1.6      January, 2004
B1031-90053               11i v2                      March, 2004
B1031-90054               11.0, 11i v1, 1.5, 1.6      March, 2004
B1031-90061               11i v3                      February, 2007

What’s in This document
This manual describes how to install, configure and troubleshoot the NFS Services product.
Typographical Conventions
This document uses the following conventions:

Italics      Identifies titles of documentation, filenames and paths.
Bold         Text that is strongly emphasized.
monotype     Identifies program/script, command names, parameters or display.

HP Encourages Your Comments
HP encourages your comments concerning this document. We are truly committed to providing documentation that meets your needs. Please send comments to: netinfo_feedback@cup.hp.
1 Introduction This chapter introduces the Open Network Computing (ONC) services, such as NFS, AutoFS, and CacheFS.
This chapter addresses the following topics:

• “ONC Services Overview” on page 17
• “Network File System (NFS)” on page 19
• “New Features in NFS” on page 21
• “AutoFS” on page 32
• “New Features in AutoFS” on page 35
• “CacheFS” on page 37
• “New Features in CacheFS” on page 39
Introduction ONC Services Overview ONC Services Overview Open Network Computing (ONC) services is a technology that consists of core services which enable you to implement distributed applications in a heterogeneous, distributed computing environment. ONC also includes tools to administer clients and servers. ONC services consists of the following components: • Network File System (NFS) enables you to access files from any location on the network, transparently.
• Remote Procedure Call (RPC) is a mechanism that enables a client application to communicate with a server application. The NFS protocol uses RPC to communicate between NFS clients and NFS servers. You can write your own RPC applications using rpcgen, an RPC compiler that simplifies RPC programming. Transport-Independent RPC (TI-RPC) is supported on HP-UX 11i v3. For information on RPC, see rpc (3N) and rpcgen (1).
Introduction Network File System (NFS) Network File System (NFS) The Network File System (NFS) is a distributed filesystem that provides transparent access to files and directories that are shared by remote systems. It enables you to centralize the administration of these files and directories. NFS provides a single copy of the directory that can be shared by all the systems on the network, instead of duplicating common directories, such as /usr/local on each system.
Introduction Network File System (NFS) Once the filesystem is shared by a server, it can be accessed by a client. Clients access files on the server by mounting the shared filesystem. For users, these mounted filesystems appear as a part of the local filesystem.
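As an illustration of this model (the server name and paths below are only examples, not values taken from this guide), the administrator shares a directory on the server and a client then mounts it:

   # On the NFS server
   share -F nfs -o ro /usr/local

   # On the NFS client
   mount -F nfs server1:/usr/local /usr/local

After the mount, users on the client see /usr/local as if it were a local directory.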
Introduction New Features in NFS New Features in NFS This section discusses the new features that NFS supports on systems running HP-UX 11i v3. NOTE All versions of the NFS protocols are supported on HP-UX 11i v3. NFSv4 is not the default protocol. However, it can be enabled. For information on how to enable NFSv4, see “Configuring the NFSv4 Server Protocol Version” on page 48.
The server evaluates and processes the operations in a compound procedure sequentially. If an error is encountered, the server stops processing and returns the results of all operations up to and including the first operation that caused the error. The NFSv4 protocol design enables NFS developers to add new operations that are based on IETF specifications. • Delegation In NFSv4, the server can delegate certain responsibilities to the client.
Introduction New Features in NFS For information on how to secure your systems, see “Secure Sharing of Directories” on page 56. • ACLs An Access Control List (ACL) provides stronger file security, by enabling the owner of a file to define file permissions for the file owner, the group, and other specific users and groups. ACL support is built into the protocol. ACLs can be managed from an NFS client using either the setacl or the getacl command.
Introduction New Features in NFS — Volatile Volatile file handles can be set to expire at a certain time. For example, they can be set to expire during the filesystem migration. This file handle type is useful for servers that cannot implement persistent file handles. However, the volatile file handles do not share the same longevity characteristics of a persistent file handle, because these file handles can become invalid or expire. HP-UX supports only persistent file handles.
Introduction New Features in NFS Figure 1-1 shows the server view of the shared directories. Figure 1-1 Server View of the Shared Directories If the administrator shares /, and /opt/dce, then the NFS client using NFSv2 and NFSv3 can mount / and /opt/dce. Attempts to mount /opt will fail. In NFSv4, the client can mount /, /opt and /opt/dce. If the client mounts /opt and lists the contents of the directory, only the directory dce is seen.
Introduction New Features in NFS However, UNIX systems use integers to represent users and groups in the underlying filesystems stored on the disk. As a result, using string identifiers requires mapping of string names to integers and back. The nfsmapid daemon is used to map the owner and owner_group identification attributes with the local user identification (UID) and group identification (GID) numbers, which are used by both the NFSv4 server and the NFSv4 client.
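For this mapping to work, the NFSv4 client and server must agree on an NFSv4 domain. As a sketch only — the NFSMAPID_DOMAIN variable name is an assumption based on the Solaris-derived /etc/default/nfs file, so verify it against nfsmapid (1M) on your system — the domain can be set as follows:

   # /etc/default/nfs (assumed variable name)
   NFSMAPID_DOMAIN=example.com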
Introduction New Features in NFS For information on configuring NFSv4 for the server, see “Configuring the NFSv4 Server Protocol Version” on page 48. For information on configuring NFSv4 for the client, see “Configuring the NFSv4 Client Protocol Version” on page 74. Sharing and Unsharing Directories In HP-UX 11i v3, NFS replaces the exportfs command with the share command. The share command is used on the server to share directories and files with clients.
Introduction New Features in NFS — Using random ports (NFSv2 and NFSv3) For information on how to share directories across a firewall using random ports, see “Sharing directories across a firewall without fixed port numbers (NFSv2 and NFSv3)” on page 65. — Using the /etc/default/nfs file For information on how to share directories across a firewall using the /etc/default/nfs file, see “Sharing directories across a firewall using fixed port numbers in the nfs file” on page 67.
Introduction New Features in NFS • Mounting a filesystem securely For information on how to mount a filesystem in a secure manner, see “An Example for Securely Mounting a directory” on page 83. For information on how to disable mount access for a single client, see “Unmounting (Removing) a Mounted Directory” on page 84. Starting with HP-UX 11i v3, the mount command is enhanced to provide benefits such as performance improvement of large sequential data transfers and local locking for faster access.
Introduction New Features in NFS The AUTH_DH authenticating method was introduced to address the vulnerabilities of the AUTH_SYS authentication method. The AUTH_DH security model is stronger, because it authenticates the user by using the user’s private key. Kerberos is an authentication system that provides secure transactions over networks. It offers strong user authentication, integrity and privacy. Kerberos support has been added to provide authentication and encryption capabilities.
For information on how to enable client-side failover, see “Enabling Client-Side Failover” on page 79. Replicated Filesystems A replicated filesystem is an identical copy (replica) of the original filesystem: it has the same directory structure, and its files are of the same size and file type as those in the original filesystem. HP recommends that you create these replicated filesystems using the rdist utility.
Introduction AutoFS AutoFS AutoFS is a client-side service that enables automatic mounting of filesystems. It is initialized by the automount command, which runs automatically when a system boots. The automount daemon, automountd, runs continuously. It mounts and unmounts remote directories as required. When a client running automountd attempts to access a remote file or a remote directory, the daemon mounts the filesystem.
Introduction AutoFS AutoFS Filesystem The AutoFS filesystem is a virtual filesystem that provides a directory structure for automatic mounting. It includes autofskd, a kernel-based process that periodically cleans up mounts. The automountd Daemon The automountd daemon is a stateless daemon that accepts RPC requests from the AutoFS filesystem, to mount or unmount directories and filesystems. Figure 1-2 shows the interaction among the AutoFS components.
Introduction AutoFS are not mounted automatically at startup. The automounted filesystems are points under which filesystems are mounted when users request access to them. When AutoFS receives a request to mount a filesystem, it calls the automountd daemon, which mounts the requested filesystem. AutoFS mounts the filesystems at the configured mount points. The automountd daemon is independent of the automount command.
Introduction New Features in AutoFS New Features in AutoFS This section discusses the new features that AutoFS supports on systems running HP-UX 11i v3. • LDAP Support AutoFS supports LDAP directories for AutoFS map storage and distribution. • NFSv4 Support AutoFS supports NFSv4 filesystems. • Secure NFS Support AutoFS supports secure NFS filesystems. • IPv6 Support AutoFS supports filesystem mounting over IPv6 transports.
• NFS loopback mount – By default, AutoFS uses the Local FileSystems (LOFS) mounts for locally mounted filesystems. AutoFS provides an option to allow loopback NFS mounts for the local mount. This is a useful option in High Availability NFS environments.
• Client-side Failover support – AutoFS enables a mounted NFS read-only filesystem to transparently switch over to an alternate server if the current server goes down.
Introduction CacheFS CacheFS The cache filesystem (CacheFS) is a general purpose filesystem caching mechanism that improves the performance of client-side applications when dealing with slow NFS servers or a slow network. By caching the data to a fast local filesystem instead of going over the wire, the client sees better performance. This results in reduced server and network load, and improves NFS response time and scalability.
Introduction CacheFS Figure 1-3 shows the CacheFS workflow process. Figure 1-3 CacheFS Workflow Process The back filesystem (the filesystem that is cached) is mounted on the cache. When the user accesses files that are part of the back filesystem, these files are placed in the client’s local cache. The front filesystem (the local filesystem that stores the cached data) is mounted in the cache and is accessed from the local mount point.
Introduction New Features in CacheFS New Features in CacheFS This section discusses the new features that CacheFS supports on systems running HP-UX 11i v3. • Cachefspack The cachefspack command improves CacheFS performance and allows greater control over the cache contents, because it ensures that the specified files are always present in the cache. This command enables you to pre-load specific files and directories in the cache.
2 Configuring and Administering NFS Services This chapter describes how to configure and administer an HP-UX system as an NFS server or an NFS client, using the command-line interface. An NFS server exports or shares its local filesystems and directories with NFS clients. An NFS client mounts the files and directories exported or shared by the NFS servers. NFS-mounted directories and filesystems appear as a part of the NFS client’s local filesystem.
Configuring and Administering NFS Services Prerequisites Prerequisites Before you configure your system as an NFS server or as an NFS client, perform the following prerequisite checks: • Verify network connectivity • Verify user IDs and group IDs setup • Verify group restrictions Verifying Network Connectivity Before you configure NFS, you must have already installed and configured the network hardware and software on all the systems that use NFS.
Configuring and Administering NFS Services Prerequisites • Each group has the same group ID on all systems where that group exists. • No two users on the network have the same user ID. • No two groups on the network have the same group ID.
Using the HP-UX System Files If you are using HP-UX system files to manage your group database, follow these steps: 1. To identify the number of groups that the user belongs to, enter the following command for each user on the system: /usr/bin/grep -c username /etc/group This command returns the number of occurrences of username in the /etc/group file. 2.
Configuring and Administering NFS Services Configuring and Administering an NFS Server Configuring and Administering an NFS Server Configuring an NFS server involves completing the following tasks: 1. Identify the set of directories that you want the NFS server to share. For example, consider an application App1 running on an NFS client. Application App1 requires access to the abc filesystem in an NFS server. NFS server should share the abc filesystem.
Configuring and Administering NFS Services Configuring and Administering an NFS Server Table 2-1 NFS Server Configuration Files (Continued) File Name Function /etc/nfssec.conf Contains the valid and supported NFS security modes. /etc/dfs/sharetab Contains the system record of shared filesystems. /etc/rmtab Contains all the entries for hosts that have mounted filesystems from any system. Daemons Table 2-2 describes the NFS server daemons. Table 2-2 NFS Server Daemons Daemon Name Function rpc.
Following are the tasks involved in configuring and administering an NFS server:

• Configuring the NFSv4 Server Protocol Version (Optional)
• Enabling an NFS Server (Required)
• Sharing Directories with NFS Clients (Required)
• Configuring an NFS Server for use by a PC NFS client (Optional)
• Unsharing (Removing) a Shared Directory (Optional)
• Disabling the NFS Server (Optional)

Configuring the NFSv4 Server Protocol Version
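A minimal sketch of enabling NFSv4 on the server, assuming the Solaris-style NFS_SERVER_VERSMAX variable in the /etc/default/nfs file (verify the variable name in nfs (4) or nfsd (1M) before relying on it):

   # /etc/default/nfs (assumed variable name)
   NFS_SERVER_VERSMAX=4

   # Restart the NFS server daemons so the change takes effect
   /sbin/init.d/nfs.server stop
   /sbin/init.d/nfs.server start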
ps -ae | grep rpcbind If the daemon is running, an output similar to the following is displayed: 778 ? 0:04 rpcbind No message is displayed if the daemon is not running. To start the rpcbind daemon, enter the following command: /sbin/init.d/nfs.core start 3. Enter the following commands to verify whether the lockd and statd daemons are running: ps -ae | grep rpc.lockd ps -ae | grep rpc.statd
Configuring and Administering NFS Services Configuring and Administering an NFS Server NOTE The exportfs command, used to export directories in versions prior to HP-UX 11i v3, is now a script that calls the share command. HP provides a new exportfs script for backward compatibility to enable you to continue using exportfs with the functionality supported in earlier versions of HP-UX. To use any of the new features provided in HP-UX 11i v3 you must use the share command.
Configuring and Administering NFS Services Configuring and Administering an NFS Server Use the bdf command to determine whether your filesystems are on different disks or logical volumes. Each entry in the bdf output represents a separate disk or volume that requires its own entry in the /etc/dfs/dfstab file, if shared. For more information on the bdf command, see bdf (1M).
Figure 2-1 Symbolic Links in NFS Mounts

Sharing a directory with NFS Clients
Before you share your filesystem or directory, determine whether you want the sharing to be automatic or manual.
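Directories that should be shared automatically each time NFS starts are listed in the /etc/dfs/dfstab file, one share command per line. A minimal example entry (the path and options here are illustrative):

   # /etc/dfs/dfstab
   share -F nfs -o ro /export/docs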
When NFS is restarted or the system is rebooted, the /etc/dfs/dfstab file is read and all directories are shared automatically. 2. Share all the directories configured in the /etc/dfs/dfstab file without restarting the server by using the following command: shareall This command reads the entries in the /etc/dfs/dfstab file and shares all the directories. 3. To verify that the directories are shared, enter the following command: share
Examples for Sharing directories This section discusses different examples for sharing directories. • Sharing a directory with read-only access share -F nfs -o ro /tmp In this example, all clients are allowed read-only access to the /tmp directory. The /tmp directory needs to be configured to allow read access to users on the clients. For example, specify r--r--r-- permissions for the /tmp directory.
Configuring and Administering NFS Services Configuring and Administering an NFS Server • Sharing directories with anonymous users based on access rights given to the superuser share -F nfs -o rw=Green,root=Green,anon=65535 /vol1/grp1/Green In this example, superusers on host Green use uid 0 and are treated as root. The root users on other hosts (Red and Blue) are considered anonymous and their uids and gids are re-mapped to 65535. The superusers on host Green are allowed read-write access.
Configuring and Administering NFS Services Configuring and Administering an NFS Server Secure Sharing of Directories The share command enables you to specify a security mode for NFS. Use the sec option to specify the different security modes. Table 2-3 describes the security modes of the share command. Table 2-3 Security Modes of the share command Security Mode Description sys Uses the default authentication method, AUTH_SYS.
Configuring and Administering NFS Services Configuring and Administering an NFS Server • The share command uses the AUTH_SYS mode by default, if the sec=mode option is not specified. • If your network consists of clients with differing security requirements, some using highly restrictive security modes and some using less secure modes, use multiple security modes with a single share command. For example, consider an environment where all clients do not require same level of security.
Configuring and Administering NFS Services Configuring and Administering an NFS Server NOTE Add a principal for all machines that are going to use the NFS Service. Also, add a principal for all users who will access the data on the NFS server. For example, the sample/krbsrv39.anyrealm.com principal should be added to the Kerberos database before running the sample applications. 2.
Configuring and Administering NFS Services Configuring and Administering an NFS Server In this example, the following setup was used to run the program: GSS-API Server Host: krbsrv39 GSS-API Client Host: krbcl145 An output similar to the following output is displayed: krbcl145: #/hpsample/gss-client krbcl145 sample@krbsrv39 "hi" Sending init_sec_context token (size=541)...continue needed...
Configuring and Administering NFS Services Configuring and Administering an NFS Server 6. To add the NFS service principal to the NFS server, such as nfs/krbsrv39.anyrealm.com, in the Kerberos database of the Kerberos server, first run the kadmin command-line administrator command and then add a new principal using the add command. Command: add Name of Principal to Add: nfs/krbsrv39.anyrealm.com Enter password: Re-enter password for verification: Principal added.
Configuring and Administering NFS Services Configuring and Administering an NFS Server # The NFS Security Service Configuration File. # Each entry is of the form: # \ # # The "-" in signifies that this is not a GSS mechanism. # A string entry in is required for using RPCSEC_GSS # services. and are optional.
Configuring and Administering NFS Services Configuring and Administering an NFS Server Examples for Securely Sharing Directories This section discusses different examples for sharing directories in a secure manner. • Granting access to shared directories only for AUTH_DES mode users share -F nfs -o sec=dh /var/casey In this example, only clients that use AUTH_DES security mode are granted access.
Configuring and Administering NFS Services Configuring and Administering an NFS Server Secure NFS Client Configuration with Kerberos To secure your NFS client setup using Kerberos, follow these steps: 1. Set up Kerberos client for the same realm as the NFS server. You can copy the krb5.conf file from the NFS server. NOTE Add a principal for all machines that are going to use the NFS Service. Also, add a principal for all users who will access the data on the NFS server. For example, the sample/krbsrv39.
Configuring and Administering NFS Services Configuring and Administering an NFS Server root 1139 1 /opt/krb5/sbin/kdcd 0 Feb 9 ? 0:00 root 1154 1 0 /opt/krb5/sbin/kadmind Feb 9 ? 15:33 This indicates that the Kerberos daemons are running. 5. To verify that the underlying GSS-API framework is working properly, run the sample program /usr/contrib/gssapi/sample.
Configuring and Administering NFS Services Configuring and Administering an NFS Server Accessing Shared NFS Directories across a Firewall To access shared NFS directories across a firewall, you must configure the firewall based on the ports that the NFS service daemons listen on. To access NFS directories, the following daemons are required: rpcbind, nfsd, rpc.lockd, rpc.statd, and rpc.mountd. The rpcbind daemon uses a fixed port, 111, and the nfsd daemon uses 2049 as its default port.
Configuring and Administering NFS Services Configuring and Administering an NFS Server rpcinfo -p An output similar to the following output is displayed: program vers proto port service 100024 1 udp 49157 status 100024 1 tcp 49152 status 100021 2 tcp 4045 nlockmgr 100021 3 udp 4045 nlockmgr 100005 3 udp 49417 mountd 100005 3 tcp 49259 mountd 100003 2 udp 2049 nfs 100003 3 tcp 2049 nfs Each time the rpc.statd and rpc.
Configuring and Administering NFS Services Configuring and Administering an NFS Server Sharing directories across a firewall using fixed port numbers in the nfs file Using the /etc/default/nfs file enables you to specify fixed port numbers for the rpc.statd and rpc.mountd daemons. The rpc.lockd daemon runs at port 4045 and is not configurable. To set the port numbers using the /etc/default/nfs file, follow these steps: 1.
Configuring and Administering NFS Services Configuring and Administering an NFS Server Sharing directories across a firewall using the WebNFS Feature The WebNFS service makes files in a directory available to clients using a public file handle. The ability to use this predefined file handle reduces network traffic, by avoiding the MOUNT protocol. How WebNFS works This section compares the process of communication between an NFS client and an NFS server across LANs and WANs.
Configuring and Administering NFS Services Configuring and Administering an NFS Server Figure 2-2 WebNFS Session Figure 2-2 depicts the following steps: 1. An NFS client uses a LOOKUP request with a PUBLIC file handle to access the foo/index.html file. The NFS client bypasses the portmapper service and contacts the server on port 2049 (the default port). 2. The NFS server responds with the file handle for the foo/index.html file. 3. The NFS client sends a READ request to the server. 4.
Configuring and Administering NFS Services Configuring and Administering an NFS Server To access the shared directory across a firewall using the WebNFS feature, configure the firewall to allow connections to the port number used by the nfsd daemon. By default the nfsd daemon uses port 2049. Configure the firewall based on the port number configured.
Configuring and Administering NFS Services Configuring and Administering an NFS Server 1. In the /etc/rc.config.d/nfsconf file, use a text editor to set the PCNFS_SERVER variable to 1, as follows: PCNFS_SERVER=1 2. To forcibly start the pcnfsd daemon while the server is running, run the following command: /sbin/init.d/nfs.server start For more information on pcnfsd, see pcnfsd (1M).
Configuring and Administering NFS Services Configuring and Administering an NFS Server The directory that you have unshared should not be present in the list displayed. Manual Unshare 1. To remove a directory from the server’s internal list of shared directories, enter the following command: unshare directoryname 2. To verify whether all the directories are unshared, enter the following command: share The directory that you have unshared should not be present in the list displayed.
Configuring and Administering NFS Services Configuring and Administering NFS Clients Configuring and Administering NFS Clients An NFS client is a system that mounts remote directories using NFS. When a client mounts a remote filesystem, it does not make a copy of the filesystem. The mounting process uses a series of remote procedure calls that enable the client to transparently access the filesystem on the server’s disk.
Configuring and Administering NFS Services Configuring and Administering NFS Clients Following are the tasks involved in configuring and administering an NFS client.
Configuring and Administering NFS Services Configuring and Administering NFS Clients You can also configure the client protocol version to NFSv4 by specifying vers=4 while mounting the directory. For example, to set the client protocol version to NFSv4 while mounting the /usr/kc directory, enter the following command: mount -o vers=4 serv:/usr/kc /usr/kc For more information on NFSv4, see nfsd (1m), mount_nfs (1m), nfsmapid (1m), and nfs4cbd (1m).
Configuring and Administering NFS Services Configuring and Administering NFS Clients Table 2-7 76 Standard-Mounted Versus Automounted Directories (Continued) Standard-Mounted Directory Automounted Directory (Using AutoFS) If a directory is configured to be standard-mounted when the client system boots, and the NFS server for the directory is not booted yet, system startup is delayed until the NFS server becomes available.
Configuring and Administering NFS Services Configuring and Administering NFS Clients Enabling an NFS Client To enable an NFS client, follow these steps: 1. In the /etc/rc.config.d/nfsconf file, set the value of NFS_CLIENT variable to 1, as follows: NFS_CLIENT=1 2. Enter the following command to run the NFS client startup script: /sbin/init.d/nfs.client start The NFS client startup script starts the necessary NFS client daemons, and mounts the remote directories configured in the /etc/fstab file.
Configuring and Administering NFS Services Configuring and Administering NFS Clients Mounting a Remote Directory on an NFS client To mount a directory on an NFS client, select one of the following methods: Automatic Mount To configure a remote directory to be automatically mounted at system boot, follow these steps: 1. Add an entry to the /etc/fstab file, for each remote directory you want to mount on the client.
Configuring and Administering NFS Services Configuring and Administering NFS Clients nfsstat -m An output similar to the following output is displayed: /opt/nfstest from hpnfsweb:/home/tester Flags: vers=3,proto=udp,sec=sys,hard,intr,link,symlink,acl,devs,r size=32768,wsize=32768,retrans=5,timeo=11 Attr cache: acregmin=3,acregmax=60,acdirmin=30,acdirmax=60 Lookups: srtt=33 (82ms), dev=33 (165ms), cur=20 (400ms) The directory that you have mounted must be present in this list.
Configuring and Administering NFS Services Configuring and Administering NFS Clients Flags: vers=3,proto=tcp,sec=sys,hard,intr,llock,link,symlink,acl, devs,rsize=32768,wsize=32768,retrans=5,timeo=600 Attr cache: acregmin=3,acregmax=60,acdirmin=30,acdirmax=60 Failover: noresponse=0,failover=0,remap=0,currserver=onc21 The Failover line in the above output indicates that the failover is working.
Configuring and Administering NFS Services Configuring and Administering NFS Clients Figure 2-4 illustrates this example.
Configuring and Administering NFS Services Configuring and Administering NFS Clients • Mounting an NFS Version 2 filesystem using the UDP Transport mount -o vers=2,proto=udp onc21:/var/mail /var/mail In this example, the NFS client mounts the /var/mail directory from the NFS server, onc21, using NFSv2 and the UDP protocol.
Configuring and Administering NFS Services Configuring and Administering NFS Clients In this example, the NFS client mounts a replicated set of NFS file systems with different pathnames. Secure Mounting of Directories The mount command enables you to specify the security mode for each NFS mount point. This allows the NFS client to request a specific security mode. However, if the specific mode does not exist on the server, then the mount fails. Use the sec option to specify the security mode.
Configuring and Administering NFS Services Configuring and Administering NFS Clients /usr/sbin/umount local_directory /usr/sbin/mount local_directory 2. If you change the mount options in the AutoFS master map, you must restart AutoFS for the changes to take effect. For information on restarting AutoFS, see “Restarting AutoFS” on page 111. For more information on the different caching mount options, see mount_nfs (1M).
Configuring and Administering NFS Services Configuring and Administering NFS Clients mount The directories that you have unmounted must not be present in the list displayed. 6. If you do not want the directory to be automatically mounted when the system is rebooted, remove the directory entry from the /etc/fstab file. Disabling NFS Client Capability To disable the NFS client, follow these steps: 1.
Configuring and Administering NFS Services NFS Client and Server Transport Connections NFS Client and Server Transport Connections NFS runs over both UDP and TCP transport protocols. The default transport protocol is TCP. Using the TCP protocol increases the reliability of NFS filesystems working across WANs and ensures that the packets are successfully delivered. TCP provides congestion control and error recovery. NFS over TCP and UDP works with NFS Version 2, and Version 3.
Configuring and Administering NFS Services NFS Client and Server Transport Connections Changes to the NFS Server Daemon The NFS server daemon (nfsd) handles client filesystem requests. By default, nfsd starts over TCP and UDP for NFSv2 and NFSv3. If NFSv4 is enabled then the nfsd daemon only supports TCP for NFSv4. For NFSv2 and NFSv3 the nfsd daemon services both TCP and UDP requests.
Configuring and Administering NFS Services Configuring and Using NFS Netgroups Configuring and Using NFS Netgroups This section describes how to create and use NFS netgroups to restrict NFS access to the client system. It describes the following tasks: • Creating Netgroups in the /etc/netgroup File • Using Netgroups in Configuration Files Creating Netgroups in the /etc/netgroup File To create netgroups in the /etc/netgroup file, follow these steps: 1.
Configuring and Administering NFS Services Configuring and Using NFS Netgroups The NIS_domain field specifies the NIS domain in which the triple (host, user, NIS_domain) is valid. For example, if the netgroup database contains the following netgroup: myfriends (sage,-,bldg1) (cauliflower,-,bldg2) (pear,-,bldg3) and an NFS server running NIS in the domain bldg1 shares a directory only to the netgroup myfriends, only the host sage can mount that directory.
Configuring and Administering NFS Services Configuring and Using NFS Netgroups argument in the /etc/dfs/dfstab file, any host can access the shared directory. If a netgroup is used strictly as a list of users, it is better to put a dash in the host field, as follows: administrators (-,jane, ) (-,art, ) (-,mel, ) The dash indicates that no hosts are included in the netgroup. The trusted_hosts and administrators netgroups can be used together in the /etc/hosts.
Configuring and Administering NFS Services Configuring and Using NFS Netgroups • /etc/hosts.equiv or $HOME/.rhosts, in place of a host name or user name • /etc/passwd, to instruct processes whether to look in the NIS password database, for information about the users in the netgroup • /etc/group, to instruct processes whether to look in the NIS group database, for information about the users in the netgroup The following sections explain how to use netgroups in configuration files.
Configuring and Administering NFS Services Configuring and Using NFS Netgroups Netgroups can also be used to deny privileged access to certain hosts or users in the /etc/hosts.equiv or $HOME/.rhosts file, as in the following example: + -@vandals The plus (+) sign is a wildcard in the /etc/hosts.equiv or $HOME/.rhosts file syntax, allowing privileged access from any host in the network.
Configuring and Administering NFS Services Configuring and Using NFS Netgroups Netgroups can also be used to prevent lookups of certain users in the NIS passwd database. The following sample entries from the /etc/passwd file indicate that if the NIS passwd database contains entries for users in the bears netgroup, these entries cannot be used on the local system. Any other user can be looked up in the NIS database. -@bears For more information on NIS, see NIS Administrator’s Guide (5991-7656).
Configuring and Administering NFS Services Configuring and Using NFS Netgroups For more information on NIS, see NIS Administrator’s Guide(5991-7656). For information on the /etc/group file, see group (4).
Configuring and Administering NFS Services Configuring RPC-based Services Configuring RPC-based Services This section describes the following tasks: • Enabling Other RPC Services • Restricting Access to RPC-based Services Enabling Other RPC Services 1. In the /etc/inetd.conf file, use a text editor to uncomment the entries that begin with “rpc”. Following is the list of entries in an /etc/inetd.conf file: #rpc stream tcp nowait root /usr/sbin/rpc.rexd 100017 1 rpc.
Configuring and Administering NFS Services Configuring RPC-based Services Table 2-8 lists the RPC daemons and services that can be started by the inetd daemon. It briefly describes each one and specifies the manpage you can refer to for more information. Table 2-8 RPC Services managed by inetd RPC Service 96 Description rexd The rpc.rexd program is the server for the on command, which starts the Remote Execution Facility (REX). The on command sends a command to be executed on a remote system. The rpc.
Configuring and Administering NFS Services Configuring RPC-based Services Table 2-8 RPC Services managed by inetd (Continued) RPC Service Description rquotad The rpc.rquotad program responds to requests from the quota command, which displays information about a user’s disk usage and limits. For more information, see rquotad (1M) and quota (1).
Configuring and Administering NFS Services Configuring RPC-based Services sprayd allow 15.13.2-12.
3 Configuring and Administering AutoFS This chapter describes how to configure and administer AutoFS on a system running HP-UX 11i v3.
Configuring and Administering AutoFS This chapter addresses the following topics: • “How AutoFS Works” on page 102 — “Supported Filesystems” on page 103 — “On-Demand Mounting” on page 103 — “Browsability for Indirect Maps” on page 104 — “NFS Loopback Mount” on page 105 — “Map Locations (Backends)” on page 106 — “Enabling AutoFS for LDAP Support” on page 107 100 • “Enabling AutoFS” on page 109 • “Disabling AutoFS” on page 110 • “Restarting AutoFS” on page 111 • “AutoFS Configuration Prerequisites”
Configuring and Administering AutoFS Chapter 3 • “Creating a Hierarchy of AutoFS Maps” on page 139 • “Turning Off an AutoFS Map with the -null Map” on page 141 • “Modifying or Removing an Automounted Directory from a Map” on page 142 • “Verifying the AutoFS Configuration” on page 143 101
Configuring and Administering AutoFS How AutoFS Works How AutoFS Works This section describes how AutoFS works. AutoFS mounts directories automatically when users or processes request access to them, and it unmounts directories automatically if they remain idle for a period of time (10 minutes, by default). When deciding if AutoFS is right for your environment, see “Deciding Between Standard-Mounted Directories and Automounted Directories” on page 75.
Configuring and Administering AutoFS How AutoFS Works CAUTION Filesystems under the management of AutoFS must always be maintained using the AutoFS utilities automountd and automount. Manually mounting and unmounting AutoFS-managed filesystems can lead to disruptive or unpredictable results, including but not limited to: commands hanging or not returning expected results, and applications failing because of their dependencies on these mounted filesystems.
Configuring and Administering AutoFS How AutoFS Works A user on the NFS client, sage, enters the following command: cd /auto/project/specs Only the /auto/project/specs subdirectory is mounted. The /auto/project/specs/designs subdirectory is mounted only when accessed using the following command: cd /auto/project/specs/designs Figure 3-1 shows the automounted file structure after the user runs the command.
Configuring and Administering AutoFS How AutoFS Works # /etc/auto_indirect file # local mount point mount options /test /apps -nosuid -nosuid remote server:directory thyme:/export/project/test basil:/export/apps Enter the following commands: cd /nfs/desktop ls The ls command displays the following: test apps The test and apps subdirectories are the potential mount points.
Configuring and Administering AutoFS How AutoFS Works Map Locations (Backends) AutoFS maps can be located in the following: • Files: A local file that stores the AutoFS map information. An example of a map that can be kept on the local system is the master map. The AUTO_MASTER variable in /etc/rc.config.d/nfsconf is set to the name of the master map. The default master map name is /etc/auto_master.
Configuring and Administering AutoFS How AutoFS Works For information on setting up AutoFS maps on NIS, see NIS Administrator’s Guide (5991-7656). For information on setting up AutoFS maps for LDAP, see “Enabling AutoFS for LDAP Support” on page 107. Enabling AutoFS for LDAP Support To enable AutoFS for LDAP support, follow these steps: 1. Migrate your AutoFS maps to LDIF (LDAP Directory Interchange Format) files, using the migration scripts, if the AutoFS maps are not already migrated.
Configuring and Administering AutoFS How AutoFS Works 5. Enter the following command to run the AutoFS shutdown script: /sbin/init.d/autofs stop 6. Enter the following command to run the AutoFS startup script: /sbin/init.
Configuring and Administering AutoFS Enabling AutoFS Enabling AutoFS To enable AutoFS, follow these steps: 1. In the /etc/rc.config.d/nfsconf file, set the value of AUTOFS variables to 1, as follows: AUTOFS=1 2. If you use a local file as the master map, ensure that the AUTO_MASTER variable in /etc/rc.config.d/nfsconf is set to the name of the master map. (The default master map name is /etc/auto_master).
Configuring and Administering AutoFS Disabling AutoFS Disabling AutoFS This section describes how to disable AutoFS. To disable AutoFS, follow these steps: 1. To run the AutoFS shutdown script, enter the following command at the HP-UX prompt: /sbin/init.d/autofs stop 2. In the /etc/rc.config.d/nfsconf file, set the value of AUTOFS variable to 0, as follows: AUTOFS=0 IMPORTANT 110 Do not disable AutoFS by terminating the automountd daemon with the kill command.
Configuring and Administering AutoFS Restarting AutoFS Restarting AutoFS AutoFS rarely needs to be restarted. In case you do need to restart it, follow these steps: 1. To find a list of all the automounted directories on the client, run the following scripts: for FS in $(grep autofs /etc/mnttab | awk ‘{print $2}’) do grep ‘nfs’ /etc/mnttab | awk ‘{print $2}’ | grep ^${FS} done 2.
Restarting AutoFS
AutoFS rarely needs to be restarted. In case you do need to restart it, follow these steps:

1. To find a list of all the automounted directories on the client, run the following script:

   for FS in $(grep autofs /etc/mnttab | awk '{print $2}')
   do
       grep 'nfs' /etc/mnttab | awk '{print $2}' | grep ^${FS}
   done

2.
Configuring and Administering AutoFS AutoFS Configuration Prerequisites AutoFS Configuration Prerequisites Consider the following before configuring AutoFS: Chapter 3 • Ensure that the local directory you configure as the mount point is either empty or nonexistent. If the local directory you configured as the mount point is not empty, any local files or directories in it are hidden and become inaccessible if the automounted filesystem is mounted over it.
Configuring and Administering AutoFS Deciding Between Direct and Indirect NFS Automounts Deciding Between Direct and Indirect NFS Automounts Before you automount a remote directory, decide whether you want to use a direct or indirect AutoFS map. If your automounted directory must share the same parent directory with local or standard-mounted directories, you can choose a direct map. Table 3-1 lists the advantages and disadvantages of direct and indirect AutoFS maps.
Configuring and Administering AutoFS Deciding Between Direct and Indirect NFS Automounts Table 3-1 Direct Versus Indirect AutoFS Map Types (Continued) Direct Map Indirect Map Disadvantage: When automount reads a direct map, it creates an entry for each automounted directory in the internal mount table, /etc/mnttab. This can cause the mount table to become very large. Advantage: When automount reads an indirect map, it creates only one entry for the entire map in the internal mount table, /etc/mnttab.
Configuring and Administering AutoFS Automounting a Remote Directory Using a Direct Map Automounting a Remote Directory Using a Direct Map This section describes how to automount a remote directory using a direct map. To mount a remote directory using a direct map, follow these steps: 1. If you are using local files for maps, use an editor to open or create the master map in the /etc directory. Name the master map as /etc/auto_master. If you are using NIS, open the master map on the NIS master server.
Configuring and Administering AutoFS Automounting a Remote Directory Using a Direct Map Ensure that the local mount point specified in the AutoFS map entry is different from the exported directory on the NFS server. If it is the same, and the NFS server also acts as an NFS client and uses AutoFS with these map entries, the exported directory might attempt to mount over itself. As a result, unexpected behavior can occur.
Configuring and Administering AutoFS Automounting a Remote Directory Using a Direct Map If the direct map name in the master map begins with a slash (/), AutoFS assumes that it is a local file. If it does not contain a slash, AutoFS uses the Name Service Switch to determine whether it is a file, LDAP, or an NIS map. For more information on using the name service switch, see nsswitch.conf (4).
Configuring and Administering AutoFS Automounting a Remote Directory Using a Direct Map Figure 3-3 How AutoFS Sets Up Direct Mounts NFS Server basil NFS Server thyme / / /export /export /FY94 /project /proj1 /specs NFS Client sage / /auto /project /specs /targets /ytd /budget /reqmnts /designs /reqmnts /designs /targets /ytd NFS Mounts Chapter 3 119
Configuring and Administering AutoFS Automounting a Remote Directory Using an Indirect Map Automounting a Remote Directory Using an Indirect Map This section describes how to automount a remote directory using an indirect map. To automount a remote directory using an indirect map, follow these steps: 1. If you are using local files for maps, use an editor to open or create the master map in the /etc directory. Name the master map /etc/auto_master.
Configuring and Administering AutoFS Automounting a Remote Directory Using an Indirect Map Notes on Indirect Maps The local_subdirectory specified in the indirect map is the lowest-level subdirectory in the local directory pathname. For example, if you are mounting a remote directory on /nfs/apps/draw, the local_subdirectory specified in the indirect map will be draw.
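For reference, the indirect map is tied to its parent directory through a master map entry like the following sketch (the paths follow the /etc/auto_indirect example shown earlier in this chapter; the options are illustrative):

   # /etc/auto_master
   # local parent directory   map name             mount options
   /nfs/desktop               /etc/auto_indirect   -nosuid

The subdirectory entries themselves (such as test and apps in the earlier example) live in the indirect map, /etc/auto_indirect.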
Configuring and Administering AutoFS Automounting a Remote Directory Using an Indirect Map Indirect maps are usually called /etc/auto_name, where name helps you remember what is configured in the map. If the indirect map name in the AutoFS master map begins with a slash (/), AutoFS assumes that it is a local file. If it does not contain a slash, AutoFS uses the Name Service Switch to determine whether it is a file, LDAP, or an NIS map.
Configuring and Administering AutoFS Automounting a Remote Directory Using an Indirect Map Figure 3-4 How AutoFS Sets Up NFS Indirect Mounts NFS Server basil / NFS Server thyme / /export /export /write /apps NFS Client sage / /nfs /desktop /draw /readme/wordtool /draw /write /pics /bin /pics /bin readme /wordtool NFS Mounts Chapter 3 123
Configuring and Administering AutoFS Automounting Home Directories Automounting Home Directories To automount users’ home directories, follow these steps: 1. Ensure that the systems where users’ home directories are located are set up as NFS servers and are exporting the home directories. For more information, see “Configuring and Administering an NFS Server” on page 46. 2.
Configuring and Administering AutoFS Automounting Home Directories AutoFS reads the auto_home map to find out how to mount Howard’s home directory. It finds the following line: howard basil:/export/home/& -nosuid AutoFS substitutes howard for the ampersand (&) character in that line: howard basil:/export/home/howard -nosuid AutoFS mounts /export/home/howard from server basil to the local mount point /home/howard on the NFS client. Figure 3-5 illustrates this configuration.
Configuring and Administering AutoFS Automounting All Exported Directories from Any Host Using the -hosts Map Automounting All Exported Directories from Any Host Using the -hosts Map To automount all exported directories using the -hosts map, follow these steps: 1.
Configuring and Administering AutoFS Automounting All Exported Directories from Any Host Using the -hosts Map When a user or a process requests a directory from an NFS server, AutoFS creates a subdirectory named after the NFS server, under the local mount point you configured in the AutoFS master map. (The conventional mount point for the -hosts map is /net). AutoFS then mounts the exported or shared directories from that server. These exported or shared directories can now be accessed.
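The usual configuration is a single master map entry at the conventional /net mount point; the options shown here are illustrative:

   # /etc/auto_master
   # local mount point   map name   mount options
   /net                  -hosts     -nosuid,soft

With this entry in place, changing to /net/hostname gives access to the directories that hostname shares.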
Configuring and Administering AutoFS Automounting All Exported Directories from Any Host Using the -hosts Map For example, the following entry from a map mounts the source code and the data files for a project whenever anyone requests access to both of them. They are mounted for on-demand mounting. The subdirectory /thyme is created under /net, and /exports/proj1 is mounted under /thyme. Figure 3-7 shows the automounted directory structure after the user enters the second command.
Configuring and Administering AutoFS Automounting Multiple Directories (Hierarchical Mounts) Automounting Multiple Directories (Hierarchical Mounts) If the map does not exist, create it, and add it to the AutoFS master map. Use an editor to create an entry with the following format in a direct or indirect map: local_dir /local_subdirectory [-options] \ server:remote_directory \ /local_subdirectory [-options] server:remote_directory The backslash (\) characters instruct AutoFS to ignore the line breaks.
Configuring and Administering AutoFS Configuring Replicated Servers for an AutoFS Directory Configuring Replicated Servers for an AutoFS Directory This section describes how to configure multiple replicated servers for an AutoFS directory. To configure multiple replicated servers for a directory, follow these steps: 1.
Configuring and Administering AutoFS Configuring Replicated Servers for an AutoFS Directory AutoFS reads this entry as one line. The line is broken for readability, and the backslash (\) instructs AutoFS that the line continues after the line break. 3. Create and configure the /etc/netmasks file. AutoFS requires the /etc/netmasks file to determine the subnets of local clients in a replicated multiple server environment. The /etc/netmasks file contains IP address masks with IP network numbers.
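The essential piece is a map entry that lists more than one server for the same directory. The following is a sketch; the host names and path are illustrative:

   # entry in a direct or indirect map
   /usr/share/man    -ro    thyme:/usr/share/man \
                            basil:/usr/share/man \
                            sage:/usr/share/man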
Configuring and Administering AutoFS Configuring Replicated Servers for an AutoFS Directory Notes on Configuring Replicated Servers Directories with multiple servers must be mounted read-only, to ensure that the versions remain the same on all the servers. The server chosen for the mount is the one with the strongest preference, based on a sorting order. The sorting order used gives strongest preference to servers on the same local subnet.
Configuring and Administering AutoFS Using Executable Maps Using Executable Maps An executable map is a map whose entries are generated dynamically by a program or a script. AutoFS determines whether a map is executable by checking whether the execute bit is set in its permissions string. If a map is not executable, make sure its execute bit is not set.
Configuring and Administering AutoFS Using Environment Variables as Shortcuts in AutoFS Maps Using Environment Variables as Shortcuts in AutoFS Maps This section describes how to use environment variables as shortcuts in AutoFS maps. To use environment variables as shortcuts in direct and indirect maps, follow these steps: 1. Use an environment variable anywhere in a direct or an indirect AutoFS map except the first field, which specifies the local mount point.
Configuring and Administering AutoFS Using Environment Variables as Shortcuts in AutoFS Maps You can use any environment variable that is set to a value in an AutoFS map. If you do not set the variable with the -D option in /etc/rc.config.d/nfsconf, AutoFS uses the current value of the environment variable on the local host.
Configuring and Administering AutoFS Using Wildcard Characters as Shortcuts in AutoFS Maps Using Wildcard Characters as Shortcuts in AutoFS Maps Consider the following guidelines while using wildcard characters as shortcuts: • Use an asterisk (*) as a wildcard character in an indirect map, to represent the local subdirectory if you want the local subdirectory to be the same as the remote system name, or the remote subdirectory. You cannot use the asterisk (*) wildcard in a direct map.
Configuring and Administering AutoFS Using Wildcard Characters as Shortcuts in AutoFS Maps The users home directory is configured in the /etc/passwd file as /home/username. For example, the home directory of the user terry is /home/terry. When Terry logs in, AutoFS looks up the /etc/auto_home map and substitutes terry for both the asterisk and the ampersand. AutoFS then mounts Terry’s home directory from /export/home/terry on the server, basil, to /home/terry on the local NFS client.
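A common pattern, sketched here with an illustrative server name, is a one-line home-directory map in which the asterisk matches the requested subdirectory and the ampersand repeats that name on the server side:

   # /etc/auto_home
   *    -nosuid    basil:/export/home/&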
Configuring and Administering AutoFS Including a Map in Another Map Including a Map in Another Map To include the contents of an AutoFS map in another AutoFS map, add a plus (+) sign before the map name, as in the following example: # /etc/auto_home file # local mount point mount options remote server:directory basil -nosuid +auto_home basil:/export/home/basil Suppose /etc/auto_home map is listed in the master map with the following line: /home /etc/auto_home If a user logs in whose home directory is i
Configuring and Administering AutoFS Creating a Hierarchy of AutoFS Maps Creating a Hierarchy of AutoFS Maps Hierarchical AutoFS maps provide a framework within which you can organize large shared filesystems. Together with NIS, which allows you to share information across administrative domains, hierarchical maps enable you to decentralize the maintenance of the shared namespace.
Configuring and Administering AutoFS Creating a Hierarchy of AutoFS Maps Beginning with the AutoFS mount at /org, the evaluation of this path dynamically creates additional AutoFS mounts at /org/eng and /org/eng/projects. On the user’s system, no action is required for the changes to take effect, because the AutoFS mounts are created only when required. You need to run the automount command only when you make changes to the master map or to a direct map.
Configuring and Administering AutoFS The -null Map The -null Map The -null map is used to ignore mapping entries that do not apply to your host, but which would otherwise be inherited from NIS or LDAP maps. The -null option causes AutoFS to ignore AutoFS map entries that affect the specified directory.
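For example, assuming an entry for /home is inherited from an NIS master map but should not apply to this host, a line like the following sketch in the local AutoFS master map causes AutoFS to ignore it:

# /etc/auto_master
/home    -null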
Modifying or Removing an Automounted Directory from a Map
To modify or remove an automounted directory from a map, follow these steps:
1. Before you remove an automounted directory, enter the following command to determine whether the directory is currently in use:
/usr/sbin/fuser -cu local_mount_point
This command lists the process IDs and user names of everyone using the mounted directory.
2. Modify or remove the entry for the directory in the AutoFS map. If you changed the master map or a direct map, run the /usr/sbin/automount command for the change to take effect.
Verifying the AutoFS Configuration
This section describes how to verify the AutoFS configuration. To verify the configuration, follow these steps:
1. Enter the following command to change the current working directory to an automounted directory:
/usr/bin/cd local_directory
Where: local_directory is the configured mount point in the AutoFS map.
2. Enter the ls command to list the contents of the directory.
Configuring and Administering AutoFS Verifying the AutoFS Configuration The draw and write subdirectories are the potential mount points (browsability), but are not currently mounted. However, if you enter the following commands, both draw and write subdirectories are mounted. cd /nfs/desktop/write cd /nfs/desktop/draw If AutoFS does not mount the configured directories, see “Troubleshooting NFS Services” on page 165.
Automount Command Line Options
Table 3-2 lists the automount command-line options for AutoFS.

Table 3-2 Automount Command-Line Options for AutoFS

Option          Purpose
automount -f    Specifies a local master file for initialization. If the -f option is used and the specified master file is not found, automount tries to use the switch policy for automount in /etc/nsswitch.conf.
Automountd Command Line Options
Table 3-3 lists the automountd command-line options for AutoFS.

Table 3-3 Automountd Command-Line Options for AutoFS

Option                          Purpose
automountd -D variable=value    Assigns a value to the indicated AutoFS map substitution variable. These assignments cannot be used to substitute variables in the master map.
automountd -n                   Turns off browsing for all AutoFS mount points.
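Because automountd is normally started by the AutoFS startup script, these options are usually supplied through the AUTOMOUNTD_OPTIONS variable in /etc/rc.config.d/nfsconf rather than on a command line; a sketch that turns off browsing for all AutoFS mount points:

# /etc/rc.config.d/nfsconf
AUTOMOUNTD_OPTIONS="-n"

The setting takes effect the next time AutoFS is started.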
4 Configuring and Administering a Cache Filesystem This chapter describes how to configure and administer the Cache filesystem on a system running HP-UX 11i v3.
This chapter addresses the following topics:
• "CacheFS Overview" on page 149
• "CacheFS Terms" on page 150
• "Configuring CacheFS" on page 151
• "Administering CacheFS" on page 155
Configuring and Administering a Cache Filesystem CacheFS Overview CacheFS Overview The Cache FileSystem (CacheFS) is a general purpose filesystem caching mechanism that improves client performance when dealing with slow NFS servers. By caching data to a faster local filesystem, instead of going over the wire to a slow server or a slow network, the CacheFS client sees much better performance.
CacheFS Terms
The following CacheFS terms are used in this chapter:

back filesystem     The filesystem that is cached. HP-UX currently supports only NFS as the back filesystem type.
front filesystem    The local filesystem that is used to store the cached data. HFS and VxFS are the only supported front filesystem types.
Configuring CacheFS
You can use CacheFS to cache NFS-mounted or automounted NFS filesystems. To mount a filesystem using CacheFS, you must first create a cache directory in a local filesystem.
Configuring and Administering a Cache Filesystem Configuring CacheFS Configuring Cache in a Local Filesystem This section describes how to configure a local filesystem as cache. To configure a local filesystem as a cache, follow these steps: 1. Log in as superuser. 2. If necessary, configure and mount a new HFS or VxFS filesystem to be used as the front filesystem where data will be cached. For more information about filesystems, see HP-UX System Administrator’s Guide (5991-7436).
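The cache itself is typically created next with the cfsadmin command; a minimal sketch, assuming the front filesystem from step 2 is mounted at /disk2 (path illustrative):

cfsadmin -c /disk2/cache     # creates the cache directory /disk2/cache with default cache parameters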
Configuring and Administering a Cache Filesystem Configuring CacheFS Mounting an NFS Filesystem using CacheFS This section describes how to mount an NFS filesystem using CacheFS. Before you mount an NFS filesystem with CacheFS, you must configure a directory in a local filesystem as cache. For information on how to configure a directory as cache, see “Configuring Cache in a Local Filesystem” on page 152.
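A sketch of such a mount, assuming the cache created above at /disk2/cache; the server name and paths are illustrative:

mount -F cachefs -o backfstype=nfs,cachedir=/disk2/cache basil:/export/apps /apps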
Automounting a Filesystem Using CacheFS
This section describes how to automount a filesystem using CacheFS. Before you automount an NFS filesystem with CacheFS, you must configure a directory in a local filesystem as cache. For more information on how to configure a directory as cache, see "Configuring Cache in a Local Filesystem" on page 152. To automount a filesystem using CacheFS, follow these steps:
1. Edit the AutoFS map entry for the filesystem to add the CacheFS mount options.
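A sketch of such an indirect-map entry; the key, server, exported path, and cache directory are illustrative:

# Entry in an indirect AutoFS map
apps    -fstype=cachefs,backfstype=nfs,cachedir=/disk2/cache    basil:/export/apps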
Administering CacheFS
This section describes how to administer CacheFS.
Configuring and Administering a Cache Filesystem Administering CacheFS Checking the Integrity of a Cache The integrity of a cache can be checked by using the fsck_cachefs command. The CacheFS version of the fsck command checks the integrity of the cache and automatically corrects any CacheFS problems that it encounters. The CacheFS fsck command is run automatically by the mount command the first time the cache directory is used after system reboot.
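You can also run the check by hand; a sketch, assuming the cache directory is /disk2/cache:

fsck -F cachefs /disk2/cache       # check the cache and correct any problems found
fsck -F cachefs -m /disk2/cache    # check only, without making corrections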
• noconst
This option disables cache consistency checking. By default, the cache is checked periodically for consistency. Use this option only if you do not intend to modify the back filesystem.
• weakconst
This option verifies cache consistency with the NFS client's copy of the file attributes. Consistency checks are performed file by file, as files are accessed.
The reported statistics include the cache hit rate: the number of times requested data is found in the cache (a cache hit) versus the number of times a request is made for data that is not cached (a cache miss). They also include the number of consistency checks and the number of modification operations, such as writes and creates, that have been performed.

Viewing the CacheFS Statistics
In the example below, /home/smh is the CacheFS filesystem mount point directory.
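A sketch of such a session with cachefsstat; the figures shown are illustrative only:

cachefsstat /home/smh

    /home/smh
          cache hit rate:    73% (1300 hits, 481 misses)
      consistency checks:   700 (700 pass, 0 fail)
                modifies:    10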
Configuring and Administering a Cache Filesystem Administering CacheFS Packing a Cached Filesystem CacheFS is designed to work best with NFS filesystems that contain read-only data that does not change frequently. CacheFS is most commonly used to manage application binaries. In earlier versions of HP-UX, the rpages option was introduced to enable the client to cache complete copies of the application binary.
NOTE For information on the format and rules governing the creation of a packing-list file, see packingrules (4).

2. Add an entry to the file. The following is an example of an entry in the packing-list file:

BASE /net/server/share/home/casey
LIST work
LIST m/sentitems
IGNORE core *.o *.bak

Where:
BASE    Specifies the path to the directory that contains the files that are to be packed.
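Once the packing-list file is in place, its contents are typically packed into the cache by passing the list to cachefspack with the -f option; a sketch (the file name is illustrative):

cachefspack -f /var/tmp/packinglist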
filename    Specifies the file that is to be unpacked.

• Using the -U option
When you use the -U option with the cachefspack command, you can unpack all the files in the specified cache directory where the cache is stored. To unpack all the files in the cache, enter the following command:

cachefspack -U cache-dir

Where:
-U    Use this option to specify that you want to unpack all the files in the cache.
all                Specifies that all cached filesystems from the cache-directory are to be deleted.
cache-directory    Specifies the name of the cache directory where the cache resides.

NOTE The cache directory must not be in use when attempting to delete a cached filesystem or the cache directory.

To delete a CacheFS filesystem, do the following:
1. Enter the following command to identify the cache ID of the filesystem you want to delete:
cfsadmin -l cache-directory

An output similar to the following is displayed:

cfsadmin: list cache FS information
    maxblocks     90%
    minblocks      0%
    threshblocks  85%
    maxfiles      91%
    minfiles       0%
    threshfiles   85%
    maxfilesize    3MB
  srv01:_tmp:_mnt1

At the end of the output, the cache IDs of all the CacheFS filesystems using this cache directory are displayed.
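Assuming the cached filesystem has already been unmounted, a sketch of the delete commands follows; the cache directory path is illustrative and the cache ID is the one shown in the sample output above:

cfsadmin -d srv01:_tmp:_mnt1 /disk2/cache    # delete one cached filesystem by its cache ID
cfsadmin -d all /disk2/cache                 # delete all cached filesystems in this cache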
5 Troubleshooting NFS Services This chapter describes tools and procedures for troubleshooting the NFS Services.
• "Performance Tuning" on page 184
• "Logging and Tracing of NFS Services" on page 189
Common Problems with NFS
This section lists common problems encountered with NFS and suggests ways to correct them.
Troubleshooting NFS Services Common Problems with NFS NFS “Server Not Responding” Message ❏ Enter the /usr/sbin/ping command on the NFS client to make sure the NFS server is up and is reachable on the network. If the ping command fails, either the server is down, or the network has a problem. If the server is down, reboot it, or wait for it to come back up. For more information on troubleshooting network problems, see HP-UX LAN Administrator's Guide.
2. Enter the following command on the NFS server to start all the necessary NFS processes:
/sbin/init.d/nfs.server start
❏ Enter the following command on the NFS client to make sure the rpc.mountd process on the NFS server is available and responding to RPC requests:
/usr/bin/rpcinfo -u servername mountd
If the rpcinfo command returns RPC_TIMED_OUT, the rpc.mountd process may be hung. Enter the following commands on the NFS server to restart rpc.mountd.
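A minimal restart sketch, assuming rpc.mountd is installed as /usr/sbin/rpc.mountd; <pid> is a placeholder for the process ID reported by ps:

ps -ef | grep rpc.mountd    # note the PID of the hung daemon
kill <pid>                  # terminate it
/usr/sbin/rpc.mountd        # start a new rpc.mountd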
❏ If you are using AutoFS, enter the ps -ef command to make sure the automountd process is running on your NFS client. If it is not, follow these steps:
1. Make sure the AUTOFS variable is set to 1 in the /etc/rc.config.d/nfsconf file on the NFS client:
AUTOFS=1
2. Enter the following command on the NFS client to start AutoFS:
/sbin/init.d/autofs start
Troubleshooting NFS Services Common Problems with NFS “Access Denied” Message ❏ Enter the following command on the NFS client to check that the NFS server is exporting the directory you want to mount: /usr/sbin/showmount -e server_name If the server is not exporting the directory, edit the /etc/dfs/dfstab file on the server so that it allows your NFS client access to the directory. Then, enter the following command to force the server to read its /etc/dfs/dfstab file.
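Under the ONC+ share model used in this guide, the command that rereads /etc/dfs/dfstab and shares everything listed there is typically shareall; a sketch:

/usr/sbin/shareall -F nfs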
Troubleshooting NFS Services Common Problems with NFS “Permission Denied” Message ❏ Check the mount options in the /etc/fstab file on the NFS client. A directory you are attempting to write to may have been mounted as read-only. ❏ Enter the ls -l command to check the HP-UX permissions on the server directory and on the client directory that is the mount point. You may not be allowed access to the directory.
"Device Busy" Message
❏ If you received the "device busy" message while attempting to mount a directory, check whether it is already mounted.
❏ If you received the "device busy" message while attempting to unmount a directory, a user or process is currently using the directory. Wait until the process completes, or follow these steps:
1. Enter the following command to determine who is using the mounted directory:
/usr/sbin/fuser -cu local_mount_point
This command lists the process IDs and user names of everyone using the mounted directory.
"Stale File Handle" Message
A "stale file handle" occurs when one client removes an NFS-mounted file or directory that another client is accessing. The following sequence of events explains how it occurs:

Table 5-1 Stale File Handle Sequence of Events

     NFS client 1           NFS client 2
1    % cd /proj1/source
2                           % cd /proj1
3                           % rm -Rf source
4    % ls .
❏ Use a source code control system, which allows only one user at a time to modify a file or directory, so one user cannot remove files another user is accessing. For more information on the source code control system, see rcsintro (5).
❏ If someone has restored the server's file systems from backup or entered the fsirand command on the server, follow these steps on each of the NFS clients to prevent stale file handles by restarting NFS:
1. /sbin/init.d/nfs.client stop
2. /sbin/init.d/nfs.client start
A Program Hangs
❏ Check whether the NFS server is up and operating correctly. If you are not sure, see "NFS "Server Not Responding" Message" on page 168. If the server is down, wait until it comes back up, or, if the directory was mounted with the intr mount option (the default), you can interrupt the NFS mount, usually with CTRL-C.
❏ If the program uses file locking, enter the following commands (on either the client or the server) to make sure rpc.statd and rpc.lockd are running and responding to RPC requests:
/usr/bin/rpcinfo -u servername status
/usr/bin/rpcinfo -u servername nlockmgr
/usr/bin/rpcinfo -u servername nfs
/usr/bin/rpcinfo -u clientname status
/usr/bin/rpcinfo -u clientname nlockmgr
/usr/bin/rpcinfo -u clientname nfs
4. Before retrying the mount that caused the program to hang, wait for a short while, say two minutes.
5. If the problem persists, restart the rpc.statd and rpc.lockd daemons and enable tracing.
Troubleshooting NFS Services Common Problems with NFS Data is Lost Between the Client and the Server ❏ Make sure that the directory is not exported from the server with the async option. If the directory is exported with the async option, the NFS server will acknowledge NFS writes before actually writing data to disk. ❏ If users or applications are writing to the NFS-mounted directory, make sure it is mounted with the hard option (the default), rather than the soft option.
Troubleshooting NFS Services Common Problems with NFS “Too Many Levels of Remote in Path” Message This message indicates that you are attempting to mount a directory from a server that has NFS-mounted the directory from another server. You cannot “chain” your NFS mounts this way. You must mount the directory from the server that has mounted its directory on a local disk.
Common Problems while using Secure NFS with Kerberos

"Permission Denied" Message
This message could be displayed because of one of the following reasons:
• The Ticket Granting Ticket (TGT) has expired. To renew the ticket, enter the following command:
kinit username
• Fully qualified hostname resolution problem. To verify the hostname resolution, check the following files:
— /etc/nsswitch.conf
# hostname is the fully qualified hostname (FQDN) of the host on which the KDC is running
# domain_name is the fully qualified name of your domain
[libdefaults]
    default_realm = krbhost.anyrealm.com
    default_tkt_enctypes = DES-CBC-CRC
    default_tgs_enctypes = DES-CBC-CRC
    ccache_type = 2
[realms]
    krbhost.anyrealm.com = {
        kdc = krbhost.anyrealm.com:88
        admin_server = krbhost.anyrealm.com
    }
[domain_realm]
    .anyrealm.com = krbhost.anyrealm.com
Common Problems while using CacheFS
This section lists the following common problems encountered with CacheFS and suggests ways to correct them:
• "Cannot open lock file /c/.cfs_lock" "Cache /c is in use and cannot be modified" Message
• "cachefsstat: Cannot zero statistics" Message
• "cfsadmin: Cannot create lock file /test/mnt/c/.cfs_lock" Message
• "Cannot open lock file /test/c/.cfs_lock" Message
• "/test/c/.cfs_mnt_points is not a valid cache" Message
• "mount failed No space left on device" Message
"/test/c/.cfs_mnt_points is not a valid cache" Message
• "c" may not be a valid cache directory. Because fsck has also failed, the only way to recover from this problem is to delete the cache and recreate the cache directory using the cfsadmin command.

"mount failed No space left on device" Message
• The cache may not be clean. Run fsck and retry the mount operation. (There could be a message in syslog that states "WARNING: cachefs: cache not clean.")
Performance Tuning
This section gives suggestions for identifying performance problems in your network and improving NFS performance on your servers and clients.
Troubleshooting NFS Services Performance Tuning Diagnose NFS Performance Problems 1. Enter the following command on several of your NFS clients: nfsstat -rc 2. If the timeout and retrans values displayed by nfsstat -rc are high, but the badxid value is close to zero, packets are being dropped before they get to the NFS server. Try decreasing the values of the wsize and rsize mount options to 4096 or 2048 on the NFS clients. See “Changing the Default Mount Options” on page 51.
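As a sketch, an /etc/fstab entry with reduced read and write sizes might look like this; the server name and paths are illustrative:

basil:/export/apps  /apps  nfs  rsize=4096,wsize=4096,hard,intr  0  0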
Troubleshooting NFS Services Performance Tuning Improve NFS Server Performance ❏ Enter the following command to check your server’s memory utilization: netstat -m If the number of requests for memory denied is high, your server does not have enough memory, and NFS clients will experience poor performance. Consider adding more memory or using a different host as the NFS server. ❏ Put heavily used directories on different disks on your NFS servers so they can be accessed in parallel.
Troubleshooting NFS Services Performance Tuning When a client requests access to a linked file or directory, two requests are sent to the server: one to look up the path to the link, and another to look up the target of the link. You can improve NFS performance by removing symbolic links from exported directories.
Troubleshooting NFS Services Performance Tuning Improving NFS Client Performance ❏ For files and directories that are mounted read-only and never change, set the actimeo mount option to 120 or greater in the /etc/fstab file on your NFS clients. ❏ If you see several “server not responding” messages within a few minutes, try doubling the value of the timeo mount option in the /etc/fstab file on your NFS clients.
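For example, a sketch of an /etc/fstab entry for a read-only directory that rarely changes; the server name and paths are illustrative:

basil:/export/doc  /doc  nfs  ro,actimeo=120  0  0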
Logging and Tracing of NFS Services
This section tells you how to start the following tools:
• AutoFS Logging
• AutoFS Tracing
• Logging for the Other NFS Services
• Logging With nettl and netfmt
• Tracing With nettl and netfmt

AutoFS Logging
AutoFS logs messages through /usr/sbin/syslogd. By default, syslogd writes messages to the file /var/adm/syslog/syslog.log. See the syslogd (1M) manpage for more information.
5. Enter the following command to stop AutoFS:
/sbin/init.d/autofs stop
CAUTION Do not kill the automountd daemon with the kill command. It does not unmount AutoFS mount points before it dies. Use the nfs.client stop script to ensure that automountd dies.
6. Add -v to the AUTOMOUNTD_OPTIONS variable in the /etc/rc.config.d/nfsconf file, as in the following example:
AUTOMOUNTD_OPTIONS="-v"
This change enables AutoFS logging.
7. Enter the following command to restart AutoFS:
/sbin/init.d/autofs start
AutoFS Tracing
Two levels of AutoFS tracing are available:

Detailed (level 3)    Includes traces of all AutoFS requests and replies, mount attempts, timeouts, and unmount attempts. You can start level 3 tracing while AutoFS is running.
Basic (level 1)       Includes traces of all AutoFS requests and replies. You must restart AutoFS to start level 1 tracing.

To Start and Stop AutoFS Detailed Tracing
1. Log in as root to the NFS client.
2.
for FS in $(grep autofs /etc/mnttab | awk '{print $2}')
do
    grep 'nfs' /etc/mnttab | awk '{print $2}' | grep ^${FS}
done
4. For every automounted directory listed by the grep command, enter the following command to determine whether the directory is currently in use:
/usr/sbin/fuser -cu local_mount_point
This command lists the process IDs and user names of everyone using the mounted directory.
5.
Troubleshooting NFS Services Logging and Tracing of NFS Services Mount Event Tracing Output The general format of a mount event trace is: MOUNT REQUEST:
May 13 18:45:09 t5 (nfs,nfs) /n2ktmp_8264/nfs127/tmp hpnfs127:/tmp penalty=0
May 13 18:45:09 t5 nfsmount: input: hpnfs127[other]
May 13 18:45:09 t5 nfsmount: standard mount on /n2ktmp_8264/nfs127/tmp :
May 13 18:45:09 t5     hpnfs127:/tmp
May 13 18:45:09 t5 nfsmount: v3=1[0],v2=0[0] => v3.
The following is an example of a typical unmount trace event:

May 13 18:46:27 t1 UNMOUNT REQUEST: Tue May 13 18:46:27 2003
May 13 18:46:27 t1 dev=44000004 rdev=0 direct
May 13 18:46:27 t1 ping: hpnfs127 request vers=3 min=2
May 13 18:46:27 t1 pingnfs OK: nfs version=3
May 13 18:46:27 t1 nfsunmount: umount /n2ktmp_8264/nfs127/tmp
May 13 18:46:27 t1 Port n
Logging for the Other NFS Services
You can configure logging for the following NFS services:
• rpc.rexd
• rpc.rstatd
• rpc.rusersd
• rpc.rwalld
• rpc.sprayd
Logging is not available for the rpc.quotad daemon. Each message logged by these daemons can be identified by the date, time, host name, process ID, and name of the function that generated the message. You can direct logging messages from all these NFS services to the same file.
If you do not specify a log file for the other NFS services (with the -l option), they do not log any messages. The NFS services can all share the same log file. For more information, see rexd (1M), rstatd (1M), rusersd (1M), rwalld (1M), and sprayd (1M).
Troubleshooting NFS Services Logging and Tracing of NFS Services Logging With nettl and netfmt 1. Enter the following command to make sure nettl is running: /usr/bin/ps -ef | grep nettl If nettl is not running, enter the following command to start it: /usr/sbin/nettl -start 2. Enter the following command to start logging: /usr/sbin/nettl -l i w e d -e all The logging classes are specified following the -l option. They are i (informational), w (warning), e (error), and d (disaster).
Troubleshooting NFS Services Logging and Tracing of NFS Services Tracing With nettl and netfmt 1. Enter the following command to make sure nettl is running: /usr/bin/ps -ef | grep nettl If nettl is not running, enter the following command to start it: /usr/sbin/nettl -start 2. Enter the following command to start tracing: /usr/sbin/nettl -tn pduin pduout loopback -e all -s 1024 \ -f tracefile 3. Recreate the event you want to trace. 4.