NFS Services Administrator Guide HP-UX 11i v3 HP Part Number: 5900-3045 Published: March 2013 Edition: 1
© Copyright 2013 Hewlett-Packard Development Company, L.P. Legal Notices Confidential computer software. Valid license required from HP for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Preface: About This Document To locate the latest version of this document, go to the HP-UX Networking docs page at: http://www.hp.com/go/hpux-networking-docs. On this page, select HP-UX 11i v3 Networking Software. This document describes how to configure and troubleshoot NFS Services on HP-UX 11i v3. The document printing date and part number indicate the document's current edition. The printing date will change when a new edition is printed.
The manual is organized as follows: Chapter 1 Introduction Describes the Network File System (NFS) services, such as NFS, AutoFS, and CacheFS. It also describes new features available with HP-UX 11i v3. Chapter 2 Configuring and Administering NFS Describes how to configure and administer NFS services. Chapter 3 Configuring the Cache File System (CacheFS) Describes the benefits of using the Cache File System and how to configure it on the HP-UX system.
1 Introduction This chapter introduces the Open Network Computing (ONC) services, such as NFS, AutoFS, and CacheFS. This chapter addresses the following topics: • “ONC Services Overview” (page 9) • “Network File System (NFS)” (page 9) • “New Features in NFS” (page 10) ONC Services Overview Open Network Computing (ONC) services are a set of core services that enable you to implement distributed applications in a heterogeneous, distributed computing environment.
all the systems on the network, instead of duplicating common directories, such as /usr/local, on each system. How NFS works The NFS environment consists of the following components: • NFS Services • NFS Shared Filesystems • NFS Servers and Clients NFS Services The NFS services are a collection of daemons, kernel components, and commands that enable systems with different architectures, running different operating systems, to share filesystems across a network.
NFSv4 uses the COMPOUND RPC procedure to build a sequence of requests into a single RPC. All RPC requests are classified as either NULL or COMPOUND. All requests that are part of the COMPOUND procedure are known as operations. An operation is a filesystem action that forms part of a COMPOUND procedure. NFSv4 currently defines 38 operations. The server evaluates and processes operations sequentially.
the server file tree. If you use the PUTROOTFH operation, the client can traverse the entire file tree using the LOOKUP operation. ◦ Persistent A persistent file handle is assigned a fixed value for the lifetime of the filesystem object it refers to. When the server creates the file handle for a filesystem object, the server must accept the same file handle for the lifetime of the object. The persistent file handle persists across server reboots and filesystem migrations.
user@domain or group@domain
Where:
user specifies the string representation of the user
group specifies the string representation of the group
domain specifies a registered DNS domain or a sub-domain of a registered domain
However, UNIX systems use integers to represent users and groups in the underlying filesystems stored on the disk. As a result, using string identifiers requires mapping of string names to integers and back.
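The identifier mapping domain is typically set to the same value on the NFS server and on all of its clients. Assuming the NFSMAPID_DOMAIN variable in the /etc/default/nfs file is used for this purpose on your system (an assumption to verify on your installation; the domain name is illustrative), the entry looks like:
NFSMAPID_DOMAIN=ind.hp.com
If the server and client use different mapping domains, string identifiers cannot be translated and file ownership may be reported as an anonymous user on the client.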
In earlier versions of HP-UX, the exportfs command was used to export directories and files to other systems over a network. Users and programs accessed the exported files on remote systems as if they were part of the local filesystem. Exporting of a directory was disabled by using the -u option of the exportfs command. For information on how to share directories with the NFS clients, see “Sharing Directories with NFS Clients” (page 21).
• Mounting an NFS filesystem through a firewall For information on how to mount an NFS filesystem through a firewall, see “Accessing Shared NFS Directories across a Firewall” (page 28). • Mounting a filesystem securely For information on how to mount a filesystem in a secure manner, see “An Example for Securely Mounting a directory” (page 38). For information on how to disable mount access for a single client, see “Unmounting (Removing) a Mounted Directory” (page 39).
Consider the following points before enabling client-side failover: • The filesystem must be mounted with read-only permissions. • The filesystems must be identical on all the redundant servers for the failover to occur successfully. For information on identical filesystems, see “Replicated Filesystems” (page 16). • A static filesystem or one that is not modified often is used for failover. • File systems that are mounted using CacheFS are not supported for use with failover.
2 Configuring and Administering NFS Services This chapter describes how to configure and administer an HP-UX system as an NFS server or an NFS client, using the command-line interface. An NFS server exports or shares its local filesystems and directories with NFS clients. An NFS client mounts the files and directories exported or shared by the NFS servers. NFS-mounted directories and filesystems appear as a part of the NFS client’s local filesystem.
You can set user and group IDs using any of the following methods: • Using the HP-UX System Files (/etc/passwd and /etc/group) • Using NIS • Using LDAP Using the HP-UX System Files If you are using the HP-UX system files, add the users and groups to the /etc/passwd and /etc/group files, respectively. Copy these files to all the systems on the network. For more information on the /etc/passwd and /etc/group files, see passwd(4) and group(4).
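For example, a user must have the same numeric IDs on every system that shares the files. The user name, UID, and GID below are illustrative only:
/etc/passwd entry (identical on the server and all clients):
jan:*:503:20:Jan Doe:/home/jan:/usr/bin/sh
/etc/group entry (identical on the server and all clients):
users::20:jan
If the numeric IDs differ between systems, files owned by jan on the server appear to be owned by a different user on the clients.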
Using LDAP For more information on managing user profiles using LDAP, see the LDAP-UX Client Services B.04.00 Administrator’s Guide (J4269-90064). Configuring and Administering an NFS Server Configuring an NFS server involves completing the following tasks: 1. Identify the set of directories that you want the NFS server to share. For example, consider an application App1 running on an NFS client. Application App1 requires access to the abc filesystem in an NFS server.
Table 3 NFS Server Daemons (continued) Daemon Name Function nfs4srvkd Supports server side delegation. rpc.lockd Supports record lock and share lock operations on the NFS files. rpc.statd Maintains a list of clients that have performed the file locking operation over NFS against the server. These clients are monitored and notified in the event of a system crash.
1069548396 ?  0:00 rpc.lockd
1069640883 ?  0:00 rpc.statd
No message is displayed if the daemons are not running. To start the lockd and statd daemons, enter the following command:
/sbin/init.d/lockmgr start
4. Enter the following command to run the NFS startup script:
/sbin/init.d/nfs.server start
The NFS startup script enables the NFS server and uses the variables in the /etc/rc.config.d/nfsconf file to determine which processes to start.
NOTE: Use the bdf command to determine whether your filesystems are on different disks or logical volumes. Each entry in the bdf output represents a separate disk or volume that requires its own entry in the /etc/dfs/dfstab file, if shared. For more information on the bdf command, see bdf(1M). • When you share a directory, the share options that restrict access to a shared directory are applied, in addition to the regular HP-UX permissions on that directory.
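For illustration, each entry in the /etc/dfs/dfstab file is simply a share command line, one per shared directory. The client names, paths, and description below are examples only:
share -F nfs -o ro=Jan:Feb,rw=Mar -d "project files" /usr/kc
share -F nfs -o ro /tmp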
nfs Specifies that the filesystem type is NFS -o Enables you to use some of the specific options of the share command, such as sec, async, public, and others. -d Enables you to describe the filesystem being shared When NFS is restarted or the system is rebooted, the /etc/dfs/dfstab file is read and all directories are shared automatically. 2.
In this example, all clients are allowed read-only access to the /tmp directory. The /tmp directory needs to be configured to allow read access to users on the clients. For example, specify -r--r--r-- permissions for the /tmp directory. • Sharing a directory with varying access permissions share -F nfs -o ro=Jan:Feb,rw=Mar /usr/kc In this example, the /usr/kc directory is shared with clients Jan, Feb, and Mar.
Table 4 Security Modes of the share command Security Mode Description sys Uses the default authentication method, AUTH_SYS. The sys mode is a simple authentication method that uses UID/GID UNIX permissions, and is used by NFS servers and NFS clients using the version 2, 3, and 4 protocol. dh Uses the Diffie-Hellman public-key system and uses the AUTH_DES authentication. krb5 Uses Kerberos V5 protocol to authenticate users before granting access to the shared filesystems.
NOTE: Throughout this section, the following systems are used as examples:
Kerberos Server: onc52.ind.hp.com
NFS Server: onc20.ind.hp.com
NFS Client: onc36.ind.hp.com
2.
3. Synchronize the date and time of the server nodes with the Kerberos server. To change the current date and time, use the date command followed by the new date and time. For example, enter date 06101130 to set the date to June 10 and the time to 11:30 AM. The time difference between the systems must not be more than 5 minutes.
9. Re-initialize inetd on the NFS servers.
inetd -c
10. To create a credential table, enter the following command:
onc20# gsscred -m krb5_mech -a
11. Share a directory with the Kerberos security option.
4. Edit the /etc/inetd.conf file and uncomment the gssd entry.
onc36# cat /etc/inetd.conf | grep gssd
rpc xti ticotsord swait root /usr/lib/netsvc/gss/gssd 100234 1 gssd
5. Re-initialize inetd on the NFS servers.
onc36# inetd -c
6. To create a credential table, enter the following command:
onc36# gsscred -m krb5_mech -a
7.
Shared NFS directories can be accessed across a firewall in the following ways: • Sharing directories across a firewall without fixed port numbers • Sharing directories across a firewall using fixed port numbers in the /etc/default/nfs file • Sharing directories across a firewall using the NFSv4 protocol • Sharing directories across a firewall using the WebNFS feature Sharing directories across a firewall without fixed port numbers (NFSv2 and NFSv3) This is the default method of sharing directories
Where:
port_number The port number on which the daemon runs. It can be set to any unique value between 1024 and 65536 for the rpc.statd, rpc.mountd, and nfs4cbd daemons.
STATD_PORT The port on which the rpc.statd daemon runs.
MOUNTD_PORT The port on which the rpc.mountd daemon runs.
NFS4CBD_PORT The port on which the nfs4cbd callback daemon runs.
2. Activate the changes made to the /etc/default/nfs file by restarting the lock manager and NFS server daemons as follows:
/sbin/init.d/nfs.server stop
/sbin/init.
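As a sketch, the fixed-port variables described in step 1 might be set in /etc/default/nfs as follows (the port values are arbitrary examples within the allowed range, not recommendations):
STATD_PORT=4010
MOUNTD_PORT=4011
NFS4CBD_PORT=4012
The same ports must then be opened on the firewall, together with port 2049 used by the nfsd daemon.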
Table 5 NFS Session Versus WebNFS Session (continued) How NFS works across LANs How WebNFS works across WANs client must issue a request for a file handle corresponding to the requested path. Figure 3 shows a sample WebNFS session. Figure 3 WebNFS Session Figure 3 depicts the following steps: 1. An NFS client uses a LOOKUP request with a PUBLIC file handle to access the foo/index.html file. The NFS client bypasses the portmapper service and contacts the server on port 2049 (the default port). 2.
The /etc/pcnfsd.conf file is read when the pcnfsd daemon starts. If you make any changes to the /etc/pcnfsd.conf file while pcnfsd is running, you must restart pcnfsd before the changes take effect. A PC must have NFS client software installed in order to use the system as a PC NFS server. The PCNFS_SERVER variable, configured using the /etc/rc.config.d/nfsconf file, controls whether the pcnfsd daemon is started at system boot time.
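For example, to have pcnfsd started automatically at boot, set the variable as follows in /etc/rc.config.d/nfsconf (a minimal sketch; 1 enables and 0 disables the daemon, following the convention used by the other variables in this file):
PCNFS_SERVER=1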
unshareall -F nfs 2. Enter the following command to disable NFS server capability: /sbin/init.d/nfs.server stop 3. On the NFS server, edit the /etc/rc.config.d/nfsconf file to set the NFS_SERVER variable to 0, as follows: NFS_SERVER=0 This prevents the NFS server daemons from starting when the system reboots. For more information about forced unmount, unmounting and unsharing, see mount_nfs (1M), unshare(1M), and umount(1M).
• “Unmounting (Removing) a Mounted Directory” (page 39) (Optional) • “Disabling NFS Client Capability” (page 39) (Optional) Configuring the NFSv4 Client Protocol Version IMPORTANT: The nfsmapid daemon must be running on both the NFS server and the client to use NFSv4. For more information on how to configure the NFSv4 server protocol version, see “Configuring the NFSv4 Server Protocol Version ” (page 20).
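The variable names below are an assumption for illustration only (verify them in the /etc/default/nfs file on your system); limiting the client to a maximum protocol version of NFSv4 might look like:
NFS_CLIENT_VERSMAX=4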
Table 8 Standard-Mounted Versus Automounted Directories (continued)
Standard-Mounted Directory: ...configure each directory separately. If the NFS servers change the directories they are exporting, you must change your local NFS client configuration.
Automounted Directory (Using AutoFS): AutoFS mounts exported directories from any NFS server on the network on your system whenever anyone requests access to a directory on that server. The servers can change which directories they export, but your configuration remains valid.
server:remote_directory local_directory nfs option[,option...] 0 0
2. Mount all the NFS filesystems specified in the /etc/fstab file by entering the following command:
/usr/sbin/mount -a -F nfs
3.
devs,rsize=32768,wsize=32768,retrans=5,timeo=600
Attr cache: acregmin=3,acregmax=60,acdirmin=30,acdirmax=60
Failover: noresponse=0,failover=0,remap=0,currserver=onc23
The Failover line in the above output indicates that failover is working.
Examples of NFS Mounts
• Mounting a directory as read-only with no set-user-ID privileges
mount -r -o nosuid broccoli:/usr/share/man /usr/share/man
In this example, the NFS clients mount the /usr/share/man directory from the NFS server broccoli.
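To make this mount persistent across reboots, a corresponding /etc/fstab entry, following the syntax shown earlier in this section, might look like:
broccoli:/usr/share/man /usr/share/man nfs ro,nosuid 0 0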
mount -o public nfs://onc31/usr/%A0abc /Casey/Clay If the public option or a URL is specified, the mount command attempts to connect to the server using the public file handle. The daemons rpcbind and mountd are not contacted. In this example, the NFS client mounts the /Casey/Clay directory by using a public file handle and an NFS URL that contains a non-7-bit ASCII escape sequence, from the NFS server onc31.
/usr/sbin/umount local_directory
/usr/sbin/mount local_directory
2. If you change the mount options in the AutoFS master map, you must restart AutoFS for the changes to take effect. For information on restarting AutoFS, see “Restarting AutoFS” (page 73). For more information on the different caching mount options, see mount_nfs(1M). Unmounting (Removing) a Mounted Directory You can temporarily unmount a directory using the umount command.
5. To disable the NFS client and AutoFS, edit the /etc/rc.config.d/nfsconf file on the client to set the NFS_CLIENT and AUTOFS variables to 0, as follows: NFS_CLIENT=0 AUTOFS=0 This prevents the client processes from starting up again when you reboot the client. 6. Enter the following command to disable NFS client capability: /sbin/init.d/nfs.client stop For more information, see umount (1M), mount(1M), and fuser(1M).
For information on the nfs3_bsize and nfs4_bsize tunables, see nfs3_bsize(1M) and nfs4_bsize(1M). After the tunables have been modified, set the mount option for read and write size to 1 MB, as follows: mount -F nfs -o rsize=1048576,wsize=1048576 Changes to the NFS Server Daemon The NFS server daemon (nfsd) handles client filesystem requests. By default, nfsd starts over TCP and UDP for NFSv2 and NFSv3. If NFSv4 is enabled, the nfsd daemon is started to service all TCP and UDP requests.
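The nfs3_bsize and nfs4_bsize values mentioned above are kernel tunables; assuming they are adjusted with the kctune command on HP-UX 11i v3 (a sketch, verify against the manpages cited above), raising them to 1 MB might look like:
kctune nfs3_bsize=1048576
kctune nfs4_bsize=1048576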
used in a configuration file, it represents either a group of hosts or a group of users, but never both. If you are using BIND (DNS) for hostname resolution, hosts must be specified as fully qualified domain names, for example: turtle.bio.nmt.edu. If host, user, or NIS_domain is left blank in a netgroup, that field can take any value. If a dash (-) is specified in any field of a netgroup, that field can take no value.
goodguys (sage,jane, ) (basil,art, ) (thyme,mel, ) If the two netgroups are combined this way, the same netgroup can be used as both the host name and the user name in the /etc/hosts.equiv file, as follows: +@goodguys +@goodguys The first occurrence of it is read for the host name, and the second occurrence is read for the user name. No relationship exists between the host and user in any of the triples. For example, user jane may not even have an account on host sage.
+ -@vandals The plus (+) sign is a wildcard in the /etc/hosts.equiv or $HOME/.rhosts file syntax, allowing privileged access from any host in the network. The netgroup vandals is defined as follows: vandals ( ,pat, ) ( ,harriet, ) ( ,reed, ) All users except those listed in the vandals netgroup can log in to the local system without supplying a password from any system in the network. CAUTION: Users who are denied privileged access in the /etc/hosts.
bears (-,yogi, ) (-,smokey, ) (-,pooh, ) The following entries in the /etc/group file allow user pooh membership in the group teddybears, but not in any other group listed in the NIS database or after the -@bears entry in the /etc/group file: teddybears::23:pooh,paddington -@bears For more information on NIS, see NIS Administrator’s Guide (5991-2187). For information on the /etc/group file, see group(4).
Table 9 RPC Services managed by inetd (continued) RPC Service Description rusersd The rpc.rusersd program responds to requests from the rusers command, which collects and displays information about all users who are logged in to the systems on the local network. For more information, see rusersd (1M) and rusers (1). rwalld The rpc.rwalld program handles requests from the rwall program. The rwall program sends a message to a specified system where the rpc.
3 Configuring and Administering AutoFS This chapter provides an overview of AutoFS and the AutoFS environment. It also describes how to configure and administer AutoFS on a system running HP-UX 11i v3.
mounts. The filesystem interacts with the automount command and the automountd daemon to mount filesystems automatically. The automount Command This command installs the AutoFS mount-points, and associates an automount map with each mount-point. The AutoFS filesystem monitors attempts to access directories within it and notifies the automountd daemon. The daemon locates a filesystem using the map, and then mounts this filesystem at the point of reference within the AutoFS filesystem.
Maps Overview Maps define the mount-points that AutoFS will mount. Maps are available either locally or through a distributed network Name Service, such as NIS or LDAP. AutoFS supports different types of maps. Table 10 lists the different types of maps supported by AutoFS. Table 10 Types of AutoFS Maps Type of Map Description Master Map A master list of maps, which associates a directory with a map. The automount command reads the master map at system startup to create the initial set of AutoFS mounts.
For instance, the NFS server, Basil, exports the /export directory. If you are a user on any of the AutoFS clients, you can use maps to access the /export directory from the NFS server, Basil. AutoFS clients can access the exported filesystem using any one of the following map types: • Direct Map • Indirect Map • Special Map The AutoFS client, Sage, uses a direct map to access the /export directory.
Figure 8 (page 51) illustrates the automounted file structure after the user enters the command. Figure 8 Automounted Directories for On-Demand Mounting Browsability for Indirect Maps AutoFS now enables you to view the potential mount-points for indirect maps without mounting each filesystem. Consider the following scenario where the AutoFS master map, /etc/auto_master, and the indirect map, /etc/auto_indirect, are on the NFS client, sage.
NFS Loopback Mount By default, AutoFS uses the Loopback Filesystem (LOFS) mount for locally mounted filesystems. AutoFS provides an option to enable loopback NFS mounts for the local mount. Use the automountd command with the -L option to enable the loopback NFS mounts for locally mounted filesystems. This option is useful when AutoFS is running on a node that is part of a High Availability NFS environment.
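One way to make this setting persistent is through the automountd options variable in the /etc/rc.config.d/nfsconf file described later in this chapter. A minimal sketch, assuming AUTOMOUNTD_OPTIONS is passed to the daemon at startup:
AUTOMOUNTD_OPTIONS="-L"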
To mount a CD-ROM device, follow these steps: 1. Log in as superuser. 2. Update the appropriate AutoFS map, as follows:
cdrom -fstype=hfs,ro :/dev/sr0
For mount devices, if the mount resource begins with a “/”, it must be preceded by a colon. For instance, in the entry above, the CD-ROM device, /dev/sr0, is preceded by a colon. Supported Backends (Map Locations) AutoFS maps can be located in the following: • Files: Local files that store the AutoFS map information for that individual system.
1. If the AutoFS maps are not already migrated, migrate your AutoFS maps to LDAP Directory Interchange Format (LDIF) files using the migration scripts. The migrated maps can also be used if you have chosen the older schema. For information on the specific migration scripts, see LDAP-UX Client Services B.04.10 Administrator’s Guide (J4269-90067). 2. Import the LDIF files into the LDAP directory server using the ldapmodify tool.
the directory is mounted the next time. However, if you change the local directory name in the direct or indirect map, or if you change the master map, these changes do not take effect until you run the automount command to force AutoFS to reread its maps. Updating from Automounter to AutoFS Automounter is a service that automatically mounts filesystems on reference. The service enables you to access the filesystems without taking on the superuser role and use the mount command.
AutoFS Configuration Changes This section describes the various methods to configure your AutoFS environment. Configuring AutoFS Using the nfsconf File You can use the /etc/rc.config.d/nfsconf file to configure your AutoFS environment. The /etc/rc.config.d/nfsconf file is the NFS configuration file. This file consists of the following sets of variables or parameters:
1. Core Configuration Variables
2. Remote Lock Manager Configuration Variables
3. NFS Client and Server Configuration Variables
4.
Configuring AutoFS Using the /etc/default/autofs File You can also use the /etc/default/autofs file to configure your AutoFS environment. The /etc/default/autofs file contains parameters for the automountd daemon and the automount command. Initially, the parameters in the /etc/default/autofs file are commented. You must uncomment the parameter to make the value for that parameter take effect.
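For example, uncommenting a parameter might look like the following. The parameter name and value are an assumption for illustration only; use the parameters actually listed in your /etc/default/autofs file:
AUTOMOUNT_TIMEOUT=600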
the initial set of AutoFS mounts. Subsequent to system startup, the automount command may be run to install AutoFS mounts for new entries in the master map or a direct map, or to perform unmounts for entries that have been removed from these maps. The following is the syntax of an auto_master file: mount-point map-name [mount-options] where: mount-point Directory on which an AutoFS mount is made. map-name Map associated with mount-point specifying locations of filesystems to mount.
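For example, a master map entry that associates the /nfs/desktop mount-point with the indirect map /etc/auto_desktop (the same pairing used in the examples later in this chapter) looks like:
/nfs/desktop /etc/auto_desktop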
Table 13 Direct Versus Indirect AutoFS Map Types (continued) Direct Map Indirect Map internal mount table, /etc/mnttab. This can cause the mount table to become very large. mount table, /etc/mnttab. Additional entries are created when the directories are actually mounted. The mount table takes up no more space than necessary, because only mounted directories appear in it. Configuring AutoFS Direct and Indirect Mounts A direct map is an automount mount-point.
/- direct_map_name [mount_options]
2. If you are using local files for maps, use an editor to open or create a direct map in the /etc directory. The direct map is commonly called /etc/auto_direct. Add an entry to the direct map with the following syntax:
local_directory [mount_options] server:remote_directory
If you are using NIS or LDAP to manage maps, add an entry to the direct map on the NIS master server or the LDAP directory.
3.
4.
remote server:directory Location of the directory, on the server, that is to be mounted. If you plan to use NIS or LDAP to manage maps, there can be only one direct map in your configuration. If the direct map name in the master map begins with a slash (/), AutoFS considers it to be a local file. If it does not contain a slash, AutoFS uses the NSS to determine whether it is a file, LDAP, or an NIS map. For more information on using NSS, see nsswitch.conf(4).
local_parent_directory indirect_map_name [mount_options]
2. If you are using local files for your AutoFS maps, use an editor to open or create an indirect map in the /etc directory. Add a line with the following syntax to the indirect map:
local_subdirectory [mount_options] server:remote_directory
If you are using NIS or LDAP to manage maps, add an entry to an indirect map on the corresponding NIS master server or the LDAP directory.
3.
4.
Following are sample lines from the AutoFS master map on the NFS client, sage. The master map also includes an entry for the /etc/auto_direct direct map.
# /etc/auto_master file
# local mount-point    map name             mount options
/-                     /etc/auto_direct
/nfs/desktop           /etc/auto_desktop
The local_parent_directory specified in the master map consists of directories, except the lowest-level subdirectory in the local directory pathname.
When AutoFS starts up on the host sage, it assigns the value sage to the HOST variable. When you request access to the local /private_files directory on sage, AutoFS mounts /export/private_files/sage from basil. To use environment variables as shortcuts in direct and indirect maps, follow these steps: 1.
* basil:/export/home/& The user's home directory is configured in the /etc/passwd file as /home/username. For example, the home directory of the user terry is /home/terry. When Terry logs in, AutoFS looks up the /etc/auto_home map and substitutes terry for both the asterisk and the ampersand. AutoFS then mounts Terry’s home directory from /export/home/terry on the server, basil, to /home/terry on the local NFS client.
Example of Automounting a User’s Home Directory User Howard’s home directory is located on the NFS server, basil, where it is called /export/home/howard.
If you are using NIS to manage AutoFS maps, add the previous entry to the master map file on the NIS master server. Rebuild the map and push it out to the slave servers. For more information on NIS and NIS maps, see the NIS Administrator’s Guide (5991-2187). If you are using LDAP, add the entry to the LDAP directory. 2.
Figure 14 Automounted Directories from the -hosts Map—Two Servers Turning Off an AutoFS Map Using the -null Map To turn off a map using the -null map, follow these steps: 1. Add a line with the following syntax in the AutoFS master map: local_directory -null 2. If AutoFS is running, enter the following command on each client that uses the map, to force AutoFS to reread its maps: /usr/sbin/automount This enables AutoFS to ignore the map entry that does not apply to your host.
#!/bin/sh
Server=$1
showmount -e $1 | awk 'NR > 1 {print $1 "\t'$Server':" $1 " \\ "}' | sort
Advanced AutoFS Administration This section presents advanced AutoFS concepts that enable you to improve mounting efficiency and also help make map building easier.
another network segment, regardless of the weight you assign. The weighting factor is taken into account only when deciding between servers with the same network proximity. • If the remote directory has a different name on different servers, use a syntax such as the following from a direct map: /nfs/proj2/schedule -ro \ broccoli:/export/proj2/schedule, \ cauliflower:/proj2/FY07/schedule To configure multiple replicated servers for a directory, follow these steps: 1.
If the list of multiple servers contains a combination of servers that includes all versions of the NFS protocol, then AutoFS selects a subset of servers with the highest NFS protocol version configured. For example, a list contains a number of servers configured with the NFSv4 protocol, and a few servers configured with the NFSv2 protocol. AutoFS will use the subset of servers configured with the NFSv4 protocol, unless a server configured with the NFSv2 protocol is closer.
releases                        bigiron:/export/releases
tools                           mickey,minnie:/export/tools
source      -fstype=autofs      auto_eng_source
projects    -fstype=autofs      auto_eng_projects
A user in the blackhole project within engineering can use the following path: /org/eng/projects/blackhole Starting with the AutoFS mount at /org, the evaluation of this path dynamically creates additional AutoFS mounts at /org/eng and /org/eng/projects.
The following example shows an indirect map configuration:
# /etc/auto_master file
# local mount-point    map name             mount options
/nfs/desktop           /etc/auto_desktop

# /etc/auto_desktop file
# local mount-point    mount options    remote server:directory
draw                   -nosuid          thyme:/export/apps/draw
write                  -nosuid          basil:/export/write

Enter the following commands:
cd /nfs/desktop
ls
The ls command displays the following output:
draw write
The draw and write subdirectories are the potential mount-points (browsability), but a
/usr/sbin/fuser -ck local_mount_point 4. To stop AutoFS, enter the following command: /sbin/init.d/autofs stop IMPORTANT: Do not stop the automountd daemon with the kill command. It does not unmount AutoFS mount-points before it terminates. Use the autofs stop command instead. 5.
To Stop AutoFS Logging To stop AutoFS logging, stop AutoFS and restart it after removing the “-v” option from AUTOMOUNTD_OPTIONS. AutoFS Tracing AutoFS supports the following Trace levels: Detailed (level 3) Includes traces of all the AutoFS requests and replies, mount attempts, timeouts, and unmount attempts. You can start level 3 tracing while AutoFS is running. Basic (level 1) Includes traces of all the AutoFS requests and replies. You must restart AutoFS to start level 1 tracing.
This command lists the process IDs and user names of all users who are using the mounted directory. 5. Warn users to exit the directory, and kill processes that are using the directory, or wait until all the processes terminate. Enter the following command to kill all the processes using the mounted directory: /usr/sbin/fuser -ck local_mount_point 6. Enter the following command to stop AutoFS: /sbin/init.d/autofs stop CAUTION: Do not kill the automountd daemon with the kill command.
May 13 18:45:09 t5 args_temp: hpnfs127, , 0x3004060, 0, 0, 0, 0, 0, 0, 0, 0, hpnfs127:/tmp
May 13 18:45:09 t5 mount hpnfs127:/tmp dev=44000004 rdev=0 OK
May 13 18:45:09 t5 MOUNT REPLY: status=0, AUTOFS_DONE
Unmount Event Tracing Output The general format of an unmount event trace is: UNMOUNT REQUEST:
Another workaround is to force the filesystem to remain busy by opening a temporary file, so that AutoFS cannot unmount it.
4 Configuring and Administering a Cache Filesystem This chapter introduces the Cache Filesystem (CacheFS) and the CacheFS environment. It also describes how to configure and administer CacheFS on a system running HP-UX 11i v3.
cold cache A cache that has not yet been populated with data from the back filesystem is called cold cache. cache miss An attempt to reference data that is not yet cached is called a cache miss. warm cache A cache that contains data in its front filesystem is called a warm cache. In this case, the cached data can be returned to the user without requiring an action from the back filesystem. cache hit A successful attempt to reference data that is cached is called a cache hit.
Features of CacheFS This section discusses the features that CacheFS supports on systems running HP-UX 11i v3. • Cache Pre-Loading via the “cachefspack” Command The cachefspack command enables you to pre-load or pack specific files and directories in the cache, thereby improving the effectiveness of CacheFS. It also ensures that current copies of these files are always available in the cache. Packing files and directories in the cache enables you to have greater control over the cache contents.
• Support for Large Files and Large Filesystems CacheFS supports the maximum file and filesystem sizes supported by the underlying front filesystem and the back filesystem. CacheFS data structures are 64-bit compliant. • Support for ACLs An Access Control List (ACL) offers stronger file security by enabling the owner of the file to define file permissions for specific users and groups. This version of CacheFS on HP-UX supports ACLs with VxFS and NFS and not with HFS.
For example, to create a cache directory called /disk2/cache, enter the following command: cfsadmin -c /disk2/cache This creates a new directory called cache under the /disk2 directory. CacheFS allows more than one filesystem to be cached in the same cache. You need not create a separate cache directory for each CacheFS mount. Mounting an NFS Filesystem Using CacheFS This section describes how to mount an NFS filesystem using CacheFS.
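As a minimal sketch (using the cache directory created above; the NFS server CFS1, its /tmp filesystem, and the /mnt1 mount-point are illustrative), an NFS filesystem is mounted through CacheFS with a command of the following form:
mount -F cachefs -o backfstype=nfs,cachedir=/disk2/cache CFS1:/tmp /mnt1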
To change the mount option from noconst to the default option, without deleting or rebuilding the cache, enter the following commands: umount /mnt1 mount -F cachefs -o backfstype=nfs,cachedir=/cache CFS1:/tmp /mnt1 To change the mount option from default to weakconst after unmounting, enter the following command: mount -F cachefs -o backfstype=nfs,cachedir=/cache,weakconst CFS2:/tmp /mnt1 For more information on the various mount options of the CacheFS filesystem, see mount_cachefs(1M).
/tmp/logfile Specifies the logfile to be used. NOTE: When multiple mount-points use the same cache directory, enabling logging for one CacheFS mount-point automatically enables logging for all the other mount-points. 3. To verify if logging is enabled for /cfs_mnt1, enter the following command: cachefslog /cfs_mnt1 If logging has been enabled, the logfile is displayed. Disabling Logging in CacheFS You can use the cachefslog command to halt or disable logging for a CacheFS mount-point.
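For example, assuming the -h option of cachefslog halts logging (verify against cachefslog(1M)), logging for the mount-point used above can be stopped with:
cachefslog -h /cfs_mnt1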
Packing a Cached Filesystem Starting with HP-UX 11i v3, the cachefspack command is introduced to provide greater control over the cache. This command enables you to specify files and directories to be loaded, or packed, in the cache. It also ensures that the current copies are always available in the cache.
packing-list Specifies the name of the packing-list file. The files specified in the packing list are now packed in the cache. You can unpack files that you no longer require, using one of the following methods: • Using the -u option To unpack a specific packed file or files from the cache directory, enter the following command: cachefspack -u filename where: -u Specifies that certain files are to be unpacked. filename Specifies the file to unpack.
Unmounting a Cache Filesystem To unmount a Cache filesystem, enter the following command: umount mount-point where: mount-point Specifies the CacheFS mount-point that you want to unmount. Checking the Integrity of a Cache You can use the fsck command to check the integrity of a cache. The CacheFS version of the fsck command checks the integrity of the cache and automatically corrects any CacheFS problems that it encounters.
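For example, to check the cache directory created earlier in this chapter (a minimal sketch):
fsck -F cachefs /disk2/cache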
Table 14 CacheFS Resource Parameters (continued) CacheFS Resource Parameter Default Value minfiles 0 threshfiles 85 For more information on the resource parameters, see cfsadmin(1M). You can update the resource parameters using the -u option of the cfsadmin command. NOTE: All filesystems in the cache directory must be unmounted when you use the -u option. Changes will be effective the next time you mount any filesystem in the specified cache directory.
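For example, after unmounting all filesystems in the cache, the threshfiles parameter listed in Table 14 might be raised as follows (the value 90 and the cache directory are illustrative):
cfsadmin -u -o threshfiles=90 /disk2/cache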
mount | grep -w "cachedir=/disk2/cache" | awk '{print $1}'
An output similar to the following is displayed if CacheFS mount-points are using the cache directory:
/mnt
/mnt1
3. To unmount the CacheFS mount-points, /mnt and /mnt1, enter the following commands:
umount /mnt
umount /mnt1
4. To delete the CacheFS filesystem corresponding to the Cache ID from the specified cache directory, enter the following command:
cfsadmin -d CacheID cache-directory
5.
modifies: 85727
garbage collection: 0
You can run the cachefsstat command with the -z option to reinitialize CacheFS statistics. You must be a superuser to use the -z option. NOTE: If you do not specify the CacheFS mount-point, statistics for all CacheFS mount-points are displayed. For more information about the cachefsstat command, see cachefsstat(1M).
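For example, to reinitialize the statistics for a single mount-point (using the mount-point from the logging example earlier in this chapter):
cachefsstat -z /cfs_mnt1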
Table 15 Common Error Messages encountered while using the cfsadmin command
Error Message: “cfsadmin: Cannot create lock file /test/mnt/c/.cfs_lock”
Possible Cause: Indicates that you do not have write permissions to the filesystem where the cache directory is being created. To correct this: 1. Delete the cache. 2. Recreate the cache directory using the cfsadmin command.
Error Message: “mount failed No space left on device”
Possible Cause: The cache directory may not be clean.
5 Troubleshooting NFS Services This chapter describes tools and procedures for troubleshooting the NFS Services. This chapter addresses the following topics: • “Common Problems with NFS” (page 93) • “Performance Tuning” (page 100) • “Logging and Tracing of NFS Services” (page 102) Common Problems with NFS This section lists the following common problems encountered with NFS and suggests ways to correct them.
Here, the share command assigns unique device-ids 222 and 333 (in decimal) to the /lo/mnt1 and /lo/mnt2 filesystems, respectively, within a LOFS filesystem (/lo). This prevents the problem. NOTE: Ensure the device-id value is unique for each filesystem that is shared using the share command. If the same device-id is assigned to more than one filesystem, the problems described earlier might occur.
If the rpcinfo command returns RPC_TIMED_OUT, the rpc.mountd process may be hung. Enter the following commands on the NFS server to restart rpc.mountd (PID is the process ID returned by the ps command):
/usr/bin/ps -ef | /usr/bin/grep mountd
/usr/bin/kill PID
/usr/sbin/rpc.mountd
□ You can receive “server not responding” messages when the server or network is heavily loaded and the RPC requests are timing out. NOTE: For TCP, the default timeout (the timeo mount option) is 600, while for UDP it is 11; these values are in tenths of a second.
“Permission Denied” Message □ Check the mount options in the /etc/fstab file on the NFS client. A directory you are attempting to write to may have been mounted as read-only. □ Enter the ls -l command to check the HP-UX permissions on the server directory and on the client directory that is the mount-point. You may not be allowed access to the directory.
“Stale File Handle” Message A “stale file handle” occurs when one client removes an NFS-mounted file or directory that another client is accessing. The following sequence of events explains how it occurs:
Table 19 Stale File Handle Sequence of Events
Step   NFS client 1           NFS client 2
1      % cd /proj1/source
2                             % cd /proj1
3                             % rm -Rf source
4      % ls
       .: Stale File Handle
If a server stops exporting a directory that a client has mounted, the client will receive a stale file handle error.
/usr/sbin/umount -aF nfs 5. Enter the following commands to restart the NFS client: /sbin/init.d/nfs.client stop /sbin/init.d/nfs.client start A Program Hangs □ Check whether the NFS server is up and operating correctly. If you are not sure, see “NFS “Server Not Responding” Message” (page 94). If the server is down, wait until it comes back up, or, if the directory was mounted with the intr mount option (the default), you can interrupt the NFS mount, usually with CTRL-C.
□ If you have a small number of NFS applications that require absolute data integrity, add the O_SYNC flag to the open() calls in your applications. When you open a file with the O_SYNC flag, a write() call will not return until the write request has been sent to the NFS server and acknowledged. The O_SYNC flag degrades write performance for applications that use it.
To fix the krb5.conf file for proper domain name to realm matching, modify the file based on the following sample:
#
# Kerberos configuration
# This krb5.conf file is intended as an example only.
# see krb5.conf(4) for more details
# hostname is the fully qualified hostname (FQDN) of the host on which the kdc is running
# domain_name is the fully qualified name of your domain
#
[libdefaults]
default_realm = krbhost.anyrealm.
See Installing and Administering LAN/9000 Software for information on troubleshooting LAN problems. 3. If the timeout and badxid values displayed by nfsstat -rc are of the same magnitude, your server is probably slow. Client RPC requests are timing out and being retransmitted before the NFS server has a chance to respond to them. Try doubling the value of the timeo mount option on the NFS clients. See “Changing the Default Mount Options” (page 38).
□ If you frequently see the following message when attempting access to a soft-mounted directory, NFS operation failed for server servername: Timed out try increasing the value of the retrans mount option in the /etc/fstab file on the NFS clients. Or, change the soft mount to an interruptible hard mount, by specifying the hard and intr options (the defaults).
To Control the Size of LogFiles Logfiles grow without bound, using up disk space. You might want to create a cron job to truncate your logfiles regularly. Following is an example crontab entry that empties the logfile at 1:00 AM every Monday, Wednesday, and Friday: 0 1 * * 1,3,5 cat /dev/null > log_file For more information, type man 1M cron or man 1 crontab at the HP-UX prompt. To Configure Logging for the Other NFS Services 1. Add the -l logfile option to the lines in /etc/inetd.
If nettl is not running, enter the following command to start it: /usr/sbin/nettl -start 2. Enter the following command to start tracing: /usr/sbin/nettl -tn pduin pduout loopback -e all -s 1024 \ -f tracefile 3. 4. Recreate the event you want to trace. Enter the following command to turn tracing off: /usr/sbin/nettl -tf -e all 5.