NFS Services Administrator’s Guide HP-UX 11i version 3 HP Part Number: B1031-90064 Published: January 2008
© Copyright 2008 Hewlett-Packard Development Company, L.P.
Legal Notices
© Copyright 2008 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license required from HP for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s standard commercial license.
Preface: About This Document
The latest version of this document can be found online at: http://www.docs.hp.com
This document describes how to configure and troubleshoot NFS Services on HP-UX 11i v3. The document printing date and part number indicate the document’s current edition. The printing date will change when a new edition is printed. Minor changes may be made at reprint without changing the printing date. The document part number will change when extensive changes are made.
Typographical Conventions
This document uses the following conventions.
Italics      Identifies titles of documentation, filenames, and paths.
Bold         Text that is strongly emphasized.
Monotype     Identifies program/script names, command names, parameters, or displays.
HP Encourages Your Comments
HP encourages your comments concerning this document. We are truly committed to providing documentation that meets your needs. Please send comments to: netinfo_feedback@cup.hp.
1 Introduction This chapter introduces the Open Network Computing (ONC) services, such as NFS, AutoFS, and CacheFS.
be shared by all the systems on the network, instead of duplicating common directories, such as /usr/local, on each system.
How NFS works
The NFS environment consists of the following components:
• NFS Services
• NFS Shared Filesystems
• NFS Servers and Clients
NFS Services
NFS Services is a collection of daemons, kernel components, and commands that enables systems with different architectures, running different operating systems, to share filesystems across a network.
NFSv4 uses the COMPOUND RPC procedure to build a sequence of requests into a single RPC request. All RPC requests are classified as either NULL or COMPOUND. All requests that are part of the COMPOUND procedure are known as operations. An operation is a filesystem action that forms part of a COMPOUND procedure. NFSv4 currently defines 38 operations. The server evaluates and processes operations sequentially.
PUTROOTFH operation. This operation instructs the server to set the current file handle to the root of the server file tree. If you use the PUTROOTFH operation, the client can traverse the entire file tree using the LOOKUP operation. — Persistent A persistent file handle is assigned a fixed value for the lifetime of the filesystem object that it refers to. When the server creates the file handle for a filesystem object, it must return the same file handle for the lifetime of the object.
In NFSv4, the client can mount /, /opt, and /opt/dce. If the client mounts /opt and lists the contents of the directory, only the directory dce is seen. If you change directory (cd) to dce, the contents of dce are not visible. The client must specifically mount /opt/dce to see the contents of dce. Future versions of HP-UX will allow the client to cross this filesystem boundary on the server.
Sharing and Unsharing Directories In HP-UX 11i v3, NFS replaces the exportfs command with the share command. The share command is used on the server to share directories and files with clients. You can use the unshare command to disable the sharing of directories with other systems. In earlier versions of HP-UX, the exportfs command was used to export directories and files to other systems over a network.
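For example, a minimal sequence that shares a directory read-only, verifies the result, and then unshares it might look like the following (the /export/docs directory name is illustrative):
share -F nfs -o ro /export/docs
share
unshare /export/docs
Running the share command with no arguments lists the directories that are currently shared, so you can confirm the effect of each step.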
Mounting and Unmounting Directories NFS clients can mount any filesystem or a part of a filesystem that is shared by the NFS server. Filesystems can be mounted automatically when the system boots, from the command line, or through the automounter. The different ways to mount a filesystem are as follows: • Mounting a filesystem at boot time and using the mount command For information on how to mount a filesystem at boot time, see “Mounting a Remote Directory on an NFS client” (page 40).
authentication and encryption capabilities. For information on how to share directories in a secure manner, see “Secure Sharing of Directories ” (page 28). Client Failover By using client-side failover, an NFS client can specify redundant servers that are making the same data available and switch to an alternate server when the current server becomes unavailable.
2 Configuring and Administering NFS Services This chapter describes how to configure and administer an HP-UX system as an NFS server or an NFS client, using the command-line interface. An NFS server exports or shares its local filesystems and directories with NFS clients. An NFS client mounts the files and directories exported or shared by the NFS servers. NFS-mounted directories and filesystems appear as a part of the NFS client’s local filesystem.
Using the HP-UX System Files If you are using the HP-UX system files, add the users and groups to the /etc/passwd and /etc/group files, respectively. Copy these files to all the systems on the network. For more information on the /etc/passwd and /etc/group files, see passwd(4) and group(4). Using NIS If you are using NIS, all systems on the network request user and group information from the NIS maps on the NIS server. For more information on configuring NIS, see NIS Administrator's Guide (5991-7656).
Configuring and Administering an NFS Server
Configuring an NFS server involves completing the following tasks:
1. Identify the set of directories that you want the NFS server to share. For example, consider an application App1 running on an NFS client. Application App1 requires access to the abc filesystem on an NFS server. The NFS server must share the abc filesystem.
2.
Table 2-2 NFS Server Daemons (continued) Daemon Name Function rpc.lockd Supports record lock and share lock operations on the NFS files. rpc.statd Maintains a list of clients that have performed the file locking operation over NFS against the server. These clients are monitored and notified in the event of a system crash.
1069640883     ?     0:00  rpc.statd
No message is displayed if the daemons are not running. To start the lockd and statd daemons, enter the following command:
/sbin/init.d/lockmgr start
4. Enter the following command to run the NFS startup script:
/sbin/init.d/nfs.server start
The NFS startup script enables the NFS server and uses the variables in the /etc/rc.config.d/nfsconf file to determine which processes to start.
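For example, for the startup script to start the server processes at boot, the /etc/rc.config.d/nfsconf file contains variable settings such as the following (the same variables are shown in the troubleshooting chapter):
NFS_SERVER=1
START_MOUNTD=1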
NOTE: Use the bdf command to determine whether your filesystems are on different disks or logical volumes. Each entry in the bdf output represents a separate disk or volume that requires its own entry in the /etc/dfs/dfstab file, if shared. For more information on the bdf command, see bdf(1M). • When you share a directory, the share options that restrict access to a shared directory are applied, in addition to the regular HP-UX permissions on that directory.
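Each entry in the /etc/dfs/dfstab file is a share command line. For example (the directories, description strings, and client names are illustrative):
share -F nfs -o ro -d "manual pages" /usr/share/man
share -F nfs -o rw=Red:Blue -d "project data" /export/project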
-F     Specifies the filesystem type.
nfs    Specifies that the filesystem type is NFS.
-o     Enables you to use some of the specific options of the share command, such as sec, async, public, and others.
-d     Enables you to describe the filesystem being shared.
When NFS is restarted or the system is rebooted, the /etc/dfs/dfstab file is read and all directories are shared automatically.
2.
• Sharing a directory with root access for clients share -F nfs -o root=Red:Blue:Green /var/mail In this example, the /var/mail directory is shared. Root access is allowed for clients Red, Blue, and Green. Superusers on all other clients are considered as unknown by the NFS server, and are given the access privileges of an anonymous user. Non-superusers on all clients are allowed read-write access to the /var/mail directory if the HP-UX permissions on the /var/mail directory allow them read-write access.
Table 2-3 Security Modes of the share command (continued)
Security Mode    Description
krb5p            Uses Kerberos V5 authentication, integrity checking, and privacy protection (encryption) on the shared filesystems.
none             Uses NULL authentication (AUTH_NONE). NFS clients using AUTH_NONE are mapped to the anonymous user nobody by NFS.
You can combine the different security modes. However, the security mode specified on the server must be supported by the client.
The password prompt is displayed. Enter the password for the root principal that is added to the Kerberos database. 3. To verify the TGT, enter the following command: klist An output similar to the following output is displayed: Ticket cache: /tmp/krb5cc_0 Default principal: root@krbhost.anyrealm.com Valid starting Expires Service principal Fri 16 Jan 2007 01:44:08 PM PDT Sat 17 Jan 2007 01:44:08 PM PDT krbtgt/krbhost.anyrealm.com@krbhost.anyrealm.com 4.
NOTE: Step 6 and Step 7 are to be performed on the Kerberos Server.
6. To add the NFS service principal for the NFS server, such as nfs/krbsrv39.anyrealm.com, to the Kerberos database on the Kerberos server, first run the kadmin command-line administrator command, and then add a new principal using the add command.
Command: add
Name of Principal to Add: nfs/krbsrv39.anyrealm.com
Enter password:
Re-enter password for verification:
Principal added.
NOTE: 7. 8.
default 1 default is AUTH_SYS - - # 10. To create a credential table, enter the following command: gsscred -m krb5_mech -a 11. Share a directory with the Kerberos security option as described in the following section. Examples for Securely Sharing Directories This section discusses different examples for sharing directories in a secure manner.
Ticket cache: /tmp/krb5cc_0 Default principal: root@krbhost.anyrealm.com Valid starting Expires Service principal Fri 16 Jan 2007 01:44:08 PM PDT Sat 17 Jan 2007 01:44:08 PM PDT krbtgt/krbhost.anyrealm.COM@krbhost.anyrealm.com 4.
NOTE: This section does not document how to configure a firewall. This section documents the considerations to keep in mind while sharing a directory across a firewall.
STATD_PORT = port_number
MOUNTD_PORT = port_number
Where:
STATD_PORT     The port on which the rpc.statd daemon runs.
MOUNTD_PORT    The port on which the rpc.mountd daemon runs.
port_number    The port number on which the daemon runs. It can be set to any unique value between 1024 and 65536.
2. Activate the changes made to the /etc/default/nfs file by restarting the lock manager and NFS server daemons as follows:
/sbin/init.d/nfs.server stop
/sbin/init.d/lockmgr stop
/sbin/init.d/lockmgr start
/sbin/init.d/nfs.server start
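To confirm that the daemons have registered on the fixed ports, you can query the RPC portmapper on the server; for example (servername is a placeholder for your NFS server):
rpcinfo -p servername | grep mountd
rpcinfo -p servername | grep status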
Figure 2-2 WebNFS Session Figure 2-2 depicts the following steps: 1. 2. 3. 4. An NFS client uses a LOOKUP request with a PUBLIC file handle to access the foo/index.html file. The NFS client bypasses the portmapper service and contacts the server on port 2049 (the default port). The NFS server responds with the file handle for the foo/index.html file. The NFS client sends a READ request to the server. The NFS server responds with the data.
uidrange 80-60005 • You want to provide PC users a different set of default print options. For example, add an entry to the /etc/pcnfsd.conf file which defines raw as a default print option for PC users submitting jobs to the printer lj3_2 as follows: printer lj3_2 lj3_2 lp -dlj3_2 -oraw The /etc/pcnfsd.conf file is read when the pcnfsd daemon starts. If you make any changes to /etc/pcnfsd.conf file while pcnfsd is running, you must restart pcnfsd before the changes take effect.
share The directory that you have unshared should not be present in the list displayed. Disabling the NFS Server To disable the NFS server, follow these steps: 1. On the NFS server, enter the following command to unshare all the shared directories: unshareall -F nfs 2. Enter the following command to disable NFS server capability: /sbin/init.d/nfs.server stop 3. On the NFS server, edit the /etc/rc.config.
Following are the tasks involved in configuring and administering an NFS client.
Table 2-7 Standard-Mounted Versus Automounted Directories (continued) Standard-Mounted Directory Automounted Directory (Using AutoFS) You must maintain the configuration file for standard You can manage AutoFS configuration files (maps) centrally mounts (/etc/fstab) separately on each NFS client. through NIS and LDAP.
Automatic Mount To configure a remote directory to be automatically mounted at system boot, follow these steps: 1. Add an entry to the /etc/fstab file, for each remote directory you want to mount on your system. Following is the syntax for the entry in the /etc/fstab file: server:remote_directory local_directory nfs defaults 0 0 or server:remote_directory local_directory nfs option[,option...] 0 0 2.
If the NFS server onc21 is down, the client accesses NFS server onc23.
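As a sketch (the directory name is illustrative, and the comma-separated replica list is described in mount_nfs(1M)), a read-only failover mount that lists both servers can be requested as follows:
mount -o ro onc21:/usr/share/man,onc23:/usr/share/man /usr/share/man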
• Mounting an NFS Version 2 filesystem using the UDP Transport mount -o vers=2,proto=udp onc21:/var/mail /var/mail In this example, the NFS client mounts the /var/mail directory from the NFS server, onc21, using NFSv2 and the UDP protocol. • Mounting an NFS filesystem using an NFS URL mount nfs://onc31/Casey/mail /Casey/mail In this example, the NFS client mounts the /Casey/mail directory from NFS server, onc31, using the WebNFS protocol.
NOTE: For specific configuration information, see “Secure NFS Setup with Kerberos” (page 29). Changing the Default Mount Options To change the default mount options, follow these steps: 1. Modify the NFS mount options in the /etc/fstab file, or the AutoFS map, as needed. For more information on the different mount options that NFS supports, see mount_nfs(1M). If you changed the mount options for a directory that is currently mounted, you must unmount and remount it before the changes take effect.
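For example, assuming an illustrative /etc/fstab entry is changed to mount /usr/share/man as a read-only, soft mount, the directory can then be unmounted and remounted to pick up the new options:
onc21:/usr/share/man /usr/share/man nfs ro,soft 0 0
/usr/sbin/umount /usr/share/man
/usr/sbin/mount /usr/share/man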
1. On the NFS client, enter the following command to get a list of all the mounted NFS filesystems on the client /usr/sbin/nfsstat -m 2. For every NFS mounted directory listed by the nfsstat command, enter the following command to determine whether the directory is currently in use: /usr/sbin/fuser -cu local_mount_point This command lists the process IDs and user names of all processes currently using the mounted directory. 3.
Changes to the NFS Server Daemon
The NFS server daemon (nfsd) handles client filesystem requests. By default, nfsd starts over TCP and UDP for NFSv2 and NFSv3. If NFSv4 is enabled, the nfsd daemon is started to service all TCP and UDP requests. If you want to change startup parameters for nfsd, you must log in as superuser (root) and make changes to the /etc/default/nfs file or use the setoncenv command. The /etc/default/nfs file provides startup parameters for the nfsd daemon and rpc.lockd daemon.
and an NFS server running NIS in the domain bldg1 shares a directory only to the netgroup myfriends, only the host sage can mount that directory. The other two triples are ignored, because they are not valid in the bldg1 domain. If an HP-UX host not running NIS exports or shares a directory to the netgroup myfriends, the NIS_domain field is ignored, and all three hosts (sage, cauliflower, and pear) can mount the directory.
root-users (dill,-, ) (sage,-, ) (thyme,- , ) (basil,-, ) mail-users (rosemary, , ) (oregano, , ) root-users The root-users netgroup is a group of four systems. The mail-users netgroup uses the root-users netgroup as part of a larger group of systems. The blank space in the third field of each triple indicates that the netgroup is valid in any NIS domain.
All users except those listed in the vandals netgroup can log in to the local system without supplying a password from any system in the network. CAUTION: Users who are denied privileged access in the /etc/hosts.equiv file can be granted privileged access in a user’s $HOME/.rhosts file. The $HOME/.rhosts file is read after the /etc/hosts.equiv file and overrides it. For more information, see hosts.equiv(4).
For more information on NIS, see NIS Administrator’s Guide (5991-7656). For information on the /etc/group file, see group(4). Configuring RPC-based Services This section describes the following tasks: • • “Enabling Other RPC Services” “Restricting Access to RPC-based Services” Enabling Other RPC Services 1. In the /etc/inetd.conf file, use a text editor to uncomment the entries that begin with “rpc” . Following is the list of entries in an /etc/inetd.conf file: #rpc stream tcp nowait root /usr/sbin/rpc.
Restricting Access to RPC-based Services To restrict access to RPC-based services, create an entry with the following syntax in the /var/adm/inetd.sec file for each service to which you want to restrict access: service {allow} host_or_network [host_or_network...] {deny} If the /var/adm/inetd.sec file does not exist, you may have to create it. The service must match one of the service names in the /etc/rpc file. Specify either allow or deny, but not both. Enter only one entry per service.
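For example, entries similar to the following allow spray requests only from one network and deny rstat requests from a single host (the addresses and host name are illustrative; see inetd.sec(4) for the exact address syntax):
sprayd  allow  15.13.2-5
rstatd  deny   backup.cup.hp.com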
3 Configuring and Administering AutoFS This chapter provides an overview of AutoFS and the AutoFS environment. It also describes how to configure and administer AutoFS on a system running HP-UX 11i v3.
The automount Command This command installs the AutoFS mount-points, and associates an automount map with each mount-point. The AutoFS filesystem monitors attempts to access directories within it and notifies the automountd daemon. The daemon locates a filesystem using the map, and then mounts this filesystem at the point of reference within the AutoFS filesystem. The automount map specifies the location of all the AutoFS mount-points.
mounted when not in use, use the -t option of the automount command. For more information on the different options supported by automount, see automount(1M) and automountd(1M). CAUTION: You must maintain filesystems managed by AutoFS, by using the automountd and automount utilities. Manually mounting and unmounting file systems managed by AutoFS can cause disruptive or unpredictable results, including but not limited to commands hanging or not returning expected results.
Figure 3-2 AutoFS In this figure, AFS1, AFS2, and Sage are AutoFS clients. Thyme and Basil are the NFS servers. The NFS servers export directories. The AutoFS clients use maps to access the exported directories. For instance, the NFS server, Basil, exports the /export directory. If you are a user on any of the AutoFS clients, you can use maps to access the /export directory from the NFS server, Basil.
The AFS2 client uses a special map to automount the /export directory. The AFS2 client includes an entry similar to the following in its map:
/net -hosts -nosuid,soft
The AFS2 client mounts the /export directory at the /net/Basil/export location.
Features
This section discusses the features that AutoFS supports on systems running HP-UX 11i v3.
On-Demand Mounting
In HP-UX 11i v3, the filesystems being accessed are mounted automatically.
Figure 3-3 Automounted Directories for On-Demand Mounting (the figure shows the /export/project directory tree, with the subdirectories specs, reqmnts, and designs, on the NFS server thyme, and the corresponding automounted tree under /auto/project on the NFS client sage)
Browsability for Indirect Maps
AutoFS now enables you to view the potential mount-points for indirect maps without mounting each filesystem. Consider the following scenario where the AutoFS master map, /etc/auto_master, and the indirect map, /etc/auto_indirect, are on the NFS client, sage.
NFS Loopback Mount By default, AutoFS uses the Loopback Filesystem (LOFS) mount for locally mounted filesystems. AutoFS provides an option to enable loopback NFS mounts for the local mount. Use the automountd command with the -L option to enable the loopback NFS mounts for locally mounted filesystems. This option is useful when AutoFS is running on a node that is part of a High Availability NFS environment.
1. Log in as superuser.
2. Update the appropriate AutoFS map, as follows:
cdrom -fstype=hsfs,ro :/dev/sr0
For mount devices, if the mount resource begins with a "/", it must be preceded by a colon. For instance, in the entry above, the CD-ROM device, /dev/sr0, is preceded by the colon.
Supported Backends (Map Locations)
AutoFS maps can be located in the following:
• Files: Local files that store the AutoFS map information for that individual system.
1. If the AutoFS maps are not already migrated, migrate your AutoFS maps to LDAP Directory Interchange Format (LDIF) files using the migration scripts. The migrated maps can also be used if you have chosen the older schema. For information on the specific migration scripts, see LDAP-UX Client Services B.04.10 Administrator’s Guide (J4269-90067). 2. Import the LDIF files into the LDAP directory server using the ldapmodify tool.
when the directory is mounted the next time. However, if you change the local directory name in the direct or indirect map, or if you change the master map, these changes do not take effect until you run the automount command to force AutoFS to reread its maps.
Updating from Automounter to AutoFS
Automounter is a service that automatically mounts filesystems on reference. The service enables you to access filesystems without having to take on the superuser role and run the mount command manually.
3. Scripts that kill and restart automount are modified. AutoFS Configuration Changes This section describes the various methods to configure your AutoFS environment. Configuring AutoFS Using the nfsconf File You can use the /etc/rc.config.d/nfsconf file to configure your AutoFS environment. The /etc/rc.config.d/nfsconf file is the NFS configuration file. This file consists of the following sets of variables or parameters: 1. 2. 3. 4.
AUTOMOUNT_OPTIONS="-t 720"
3. Enter the following command to start AutoFS:
/sbin/init.d/autofs start
Configuring AutoFS Using the /etc/default/autofs File
You can also use the /etc/default/autofs file to configure your AutoFS environment. The /etc/default/autofs file contains parameters for the automountd daemon and the automount command. Initially, the parameters in the /etc/default/autofs file are commented out. You must uncomment a parameter to make its value take effect.
/sbin/init.d/autofs start
Notes on AutoFS Master Map
The AutoFS master map file, /etc/auto_master by default, determines the locations of all AutoFS mount-points. At system startup, the automount command reads the master map to create the initial set of AutoFS mounts. Subsequent to system startup, the automount command may be run to install AutoFS mounts for new entries in the master map or a direct map, or to perform unmounts for entries that have been removed from these maps.
Table 3-4 Direct Versus Indirect AutoFS Map Types (continued) Direct Map Indirect Map Disadvantage: If you add or remove mounts in a direct map, or if you change the local mount-point for an existing mount in a direct map, you must force AutoFS to reread its maps. Advantage: If you modify an indirect map, AutoFS will view the changes the next time it mounts the directory. You need not force AutoFS to reread its maps.
1. If you are using local files for maps, use an editor to edit the master map in the /etc directory. The master map is commonly called /etc/auto_master. If you are using NIS, open the master map on the NIS master server. If you are using LDAP, the map must be modified on the LDAP server. For information on modifying maps, see the LDAP-UX Client Services B.04.00 Administrator’s Guide (J4269-90064).
You can configure all the direct automounts in the same map. Most users use the file name, /etc/auto_direct, for their direct map. Following is the syntax for a direct map:
local mount-point    mount options    remote server:directory
where:
local mount-point          The path name of the mount-point.
mount options              Options you want to apply to this mount.
remote server:directory    Location of the directory, on the server, that is to be mounted.
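For example, a direct map entry that mounts a read-only manual-page directory from an illustrative server named thyme might look like this:
# /etc/auto_direct file
# local mount-point    mount options    remote server:directory
/usr/share/man         -ro              thyme:/usr/share/man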
1. If you are using local files for maps, use an editor to edit the master map in the /etc directory. The master map is commonly called /etc/auto_master. If you are using NIS, open the master map on the corresponding master server. If you are using LDAP, the map must be modified on the LDAP server. For information on modifying the map, see the LDAP-UX Client Services B.04.00 Administrator’s Guide (J4269-90064).
Sample File Entries for NFS Indirect Automounts The local_subdirectory specified in an indirect map is the lowest-level subdirectory in the local directory path name. For example, if you are mounting a remote directory on /nfs/apps/draw, draw is the local_subdirectory specified in the indirect map. Following are sample lines from the AutoFS master map on the NFS client, sage. The master map also includes an entry for the /etc/auto_direct direct map.
IMPORTANT: You cannot use environment variables in the AutoFS master map.
In this example, the NFS server basil contains subdirectories in its /export/private_files directory, which are named after the hosts in its network. Every host in the network can use the same AutoFS map and the same AUTOMOUNTD_OPTIONS definition to mount its private files from basil. When AutoFS starts up on the host sage, it assigns the value sage to the HOST variable.
/home /etc/auto_home -nosuid The following line from the /etc/auto_home indirect map mounts the user's home directories on demand: # /etc/auto_home file # local mount-point mount options remote server:directory * basil:/export/home/& The user's home directory is configured in the /etc/passwd file as /home/username. For example, the home directory of the user terry is /home/terry. When Terry logs in, AutoFS looks up the /etc/auto_home map and substitutes terry for both the asterisk and the ampersand.
4. If you are using local files for maps, add the following entry to the AutoFS master map, /etc/auto_master, on the NFS clients: /home 5. /etc/auto_home Enter the following command on each NFS client that uses these maps to force AutoFS to reread the maps: /usr/sbin/automount Example of Automounting a User’s Home Directory User Howard’s home directory is located on the NFS server, basil, where it is called /export/home/howard.
Automounting All Exported Directories from Any Host Using the -hosts Map To automount all exported directories using the -hosts map, follow these steps: 1. If you are using local files for AutoFS maps, use an editor to add the following entry to the /etc/auto_master master map file: /net -hosts -nosuid,soft,nobrowse NOTE: The nobrowse option in the local default master map file for a newly installed system is specified for the /net map.
Figure 3-8 Automounted Directories from the -hosts Map—One Server (the figure shows /opt mounted under /net/sage)
In the following example, the server thyme exports the directory /exports/proj1, and a user enters the following command:
more /net/thyme/exports/proj1/readme
The subdirectory /thyme is created under /net, and /exports/proj1 is mounted under /thyme. Figure 3-9 shows the automounted directory structure after the user enters the second command.
NOTE: The -null entry must precede the included map entry to be effective. Using Executable Maps An executable map is a map whose entries are generated dynamically by a program or a script. AutoFS determines whether a map is executable, by checking whether the execute bit is set in its permissions string. If a map is not executable, ensure that its execute bit is not set. An executable map is an indirect map.
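As a minimal sketch (the map name, server, and path are illustrative), an executable indirect map is a script that receives the lookup key as its first argument and writes a map entry to standard output:
#!/usr/bin/sh
# /etc/auto_exec - executable indirect map; the execute bit must be set
# AutoFS passes the key being looked up as $1
echo "-nosuid,soft  basil:/export/projects/$1"
A master map entry such as /projects /etc/auto_exec would then cause a reference to /projects/docs to mount basil:/export/projects/docs.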
Configuring Replicated Servers for an AutoFS Directory This section describes how to configure multiple replicated servers for an AutoFS directory.
Notes on Configuring Replicated Servers Directories with multiple servers must be mounted as read-only to ensure that the versions remain the same on all servers. The server selected for the mount is the one with the highest preference, based on a sorting order. The sorting order used gives highest preference to servers on the same local subnet. Servers on the local network are given the second strongest preference.
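For example, a direct map entry that lists two replica servers might look like the following (the server names are illustrative; the optional weighting factor in parentheses is described in automount(1M)):
# local mount-point    mount options    replicated servers
/usr/share/man         -ro              thyme:/usr/share/man  basil(2):/usr/share/man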
Sample Map Hierarchy In the following example, an organization consisting of many departments, wants to organize a shared automounted directory structure. The shared top-level directory is called /org. The /org directory contains several subdirectories, which are listed in the auto_org map. Each department administers its own map for its subdirectory.
/usr/sbin/automount CAUTION: You must maintain filesystems managed by AutoFS, by using the automountd and automount utilities. Manually mounting and unmounting file systems managed by AutoFS can cause disruptive or unpredictable results, including but not limited to commands hanging or not returning expected results. Applications can also fail because of their dependencies on these mounted filesystems. Verifying the AutoFS Configuration This section describes how to verify the AutoFS configuration.
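As a quick sketch, the following commands confirm that the automountd daemon is running and show which AutoFS mount-points are currently installed:
ps -ef | grep automountd
grep autofs /etc/mnttab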
/sbin/init.d/autofs stop 2. In the /etc/rc.config.d/nfsconf file, set the value of AUTOFS variable to 0, as follows: AUTOFS=0 IMPORTANT: Do not disable AutoFS by terminating the automountd daemon with the kill command. It does not unmount AutoFS mount-points before it terminates. Use the autofs stop command instead. Restarting AutoFS AutoFS rarely needs to be restarted. In case you do need to restart it, follow these steps: 1.
To Start AutoFS Logging
To start AutoFS logging, follow these steps:
1. Log in as superuser to the NFS client.
2. Enter the following command to obtain a list of all the automounted directories on the client:
for FS in $(grep autofs /etc/mnttab | awk '{print $2}')
do
    cat /etc/mnttab | awk '{print $2}' | grep "^${FS}"
done
3.
NOTE: The command, kill -SIGUSR2 PID, works only if tracing is not already on. To stop level 3 tracing, enter the same commands listed above to send the SIGUSR2 signal to automountd. The SIGUSR2 signal is a toggle that turns tracing on or off depending on its current state. If basic (level 1) tracing is turned on when you send the SIGUSR2 signal to automountd, the SIGUSR2 signal turns tracing off. To Start AutoFS Basic Tracing To start AutoFS tracing Level 1, follow these steps: 1. 2.
Mount Event Tracing Output The general format of a mount event trace is: MOUNT REQUEST:
4 Configuring and Administering a Cache Filesystem This chapter introduces the Cache Filesystem (CacheFS) and the CacheFS environment. It also describes how to configure and administer CacheFS on a system running HP-UX 11i v3.
warm cache    A cache that contains data in its front filesystem is called a warm cache. In this case, the cached data can be returned to the user without requiring an action from the back filesystem.
cache hit     A successful attempt to reference data that is cached is called a cache hit.
How CacheFS Works
Figure 4-1 displays a sample CacheFS environment.
Figure 4-1 Sample CacheFS Network
In the figure, cachefs1, cachefs2, and cachefs3 are CacheFS clients.
The functionality provided by this command is an alternative to the rpages mount option. For information on how to pre-load or pack files, see “Packing a Cached Filesystem” (page 93). • Complete Binary Caching via the “rpages” mount option CacheFS is commonly used to manage application binaries. The rpages mount option forces the client to cache a complete copy of the accessed application binary file.
• Support for ACLs An Access Control List (ACL) offers stronger file security by enabling the owner of the file to define file permissions for specific users and groups. This version of CacheFS on HP-UX supports ACLs with VxFS and NFS and not with HFS. • Support for Logging A new command, cachefslog, is used to enable or disable logging for a CacheFS mount-point. If logging functionality is enabled, details about the operations performed on the CacheFS mount-point are stored in a logfile.
Mounting an NFS Filesystem Using CacheFS This section describes how to mount an NFS filesystem using CacheFS. The syntax for mounting an NFS filesystem using CacheFS is as follows: mount [-F cachefs] [-rqOV] -o backfstype=file_system_type [specific_options] resource mount_point Consider the following example where the /opt/frame directory is going to be NFS-mounted from the NFS server nfsserver to the local /opt/cframe directory.
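A command along the following lines completes that example; the cache directory /disk2/cache is illustrative and must already have been created as a cache:
mount -F cachefs -o backfstype=nfs,cachedir=/disk2/cache nfsserver:/opt/frame /opt/cframe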
The following example mounts the /tmp directory from the server CFS2 through CacheFS, using the weakconst consistency option:
mount -F cachefs -o backfstype=nfs,cachedir=/cache,weakconst CFS2:/tmp /mnt1
For more information on the various mount options of the CacheFS filesystem, see mount_cachefs(1M).
Automounting a Filesystem Using CacheFS
This section describes how to automount a filesystem using CacheFS. Before you automount an NFS filesystem using CacheFS, you must configure a directory in a local filesystem as cache. For more information on how to configure a directory as cache, see “Creating a CacheFS Cache” (page 90).
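As a sketch, an AutoFS map entry that performs an equivalent cached mount on demand (the names are illustrative) combines the fstype and backfstype options described in automount(1M) and mount_cachefs(1M):
# local subdirectory    mount options                                           remote server:directory
cframe                  -fstype=cachefs,backfstype=nfs,cachedir=/disk2/cache    nfsserver:/opt/frame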
cachefslog /cfs_mnt1
If logging has been enabled, the logfile is displayed.
Disabling Logging in CacheFS
You can use the cachefslog command to halt or disable logging for a CacheFS mount-point. To disable CacheFS logging, follow these steps:
1. Log in as superuser.
2.
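As a sketch for the /cfs_mnt1 mount-point, you can display the current logging state and then halt logging with the -h option described in cachefslog(1M):
cachefslog /cfs_mnt1
cachefslog -h /cfs_mnt1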
You can pack files using one of the following methods:
• Specifying the files or directories to be packed
Enter the following command to pack a file in the cache:
cachefspack -p filename
where:
-p          Packs the file or files in the cache.
filename    Name of the file or directory that is to be packed in the cache.
NOTE: When you pack a directory, all files in that directory, subdirectories, and files in the subdirectories are packed.
-u          Specifies that certain files are to be unpacked.
filename    Specifies the file to unpack.
• Using the -U option
To unpack all the packed files in the cache, enter the following command:
cachefspack -U cache-dir
where:
-U           Specifies that you want to unpack all the packed files in the cache.
cache-dir    Specifies the cache directory that is to be unpacked.
For more information about the cachefspack command, see cachefspack(1M).
either during system bootup if there is an entry in /etc/fstab, or the first time the cache directory is referenced as part of the mount operation. To manually check the integrity of a cache, enter the following command:
fsck_cachefs -F cachefs [-m | -o noclean] cache-directory
where:
-m                 Specifies that the cache must be checked without making any repairs.
noclean            Forces a check on the CacheFS filesystems.
cache-directory    Specifies the name of the directory where the cache resides.
Deleting a Cache Directory
To delete a cache directory that is no longer required, you must use the cfsadmin command. The syntax to delete the cache directory is as follows:
cfsadmin -d {cacheID | all} cache-directory
where:
cacheID            Specifies the name of the cache filesystem.
all                Specifies that all cached filesystems in the cache-directory are to be deleted.
cache-directory    Specifies the name of the cache directory where the cache resides.
maxfiles 91%
minfiles 0%
threshfiles 85%
maxfilesize 3MB
srv01:_tmp:_mnt1
In this example, the filesystem with CacheID srv01:_tmp:_mnt has been deleted. To delete a cache directory and all the CacheFS filesystems in that directory, enter the following command:
cfsadmin -d all cache-directory
Using CacheFS
After the administrator has configured a cache directory and mounted or automounted the CacheFS filesystem, you can use CacheFS as you would any other filesystem.
total for cache Initial size: end size: 640k 40k To view the operations performed in ASCII format, enter the following command: cachefswssize -a /tmp/logfile An output similar to the following is displayed: 1/0 0:00 0 Mount e000000245e31080 411 65536 256 /cfs_mnt1 (cachefs1:_cache_exp:_cfs_mnt1) 1/0 0:00 0 Mdcreate e000000245e31080 517500609495040 1 1/0 0:00 0 Filldir e000000245e31080 517500609495040 8192 1/0 0:00 0 Nocache e000000245e31080 517500609495040 1/0 0:00 0 Mkdir e000000245e31
Table 4-3 Common Error Messages encountered while using the fsck command
Error Message: “fsck -F cachefs Cannot open lock file /test/c/.cfs_lock”
Possible Causes: This error message indicates that /c is not a cache directory.
Resolution: 1. Delete the cache. 2. Recreate the cache directory using the cfsadmin command.
Error Message: “Cache /c is in use and cannot be modified”
Possible Causes: This error message indicates that the cache directory is in use. However, the cache directory is in use by some other
Resolution: 1. Check if a directory named /c exists
5 Troubleshooting NFS Services This chapter describes tools and procedures for troubleshooting the NFS Services. This chapter addresses the following topics: • “Common Problems with NFS” (page 101) • “Performance Tuning” (page 107) • “Logging and Tracing of NFS Services” (page 109) Common Problems with NFS This section lists the following common problems encountered with NFS and suggests ways to correct them.
1. Make sure the /etc/rc.config.d/nfsconf file on the NFS server contains the following lines: NFS_SERVER=1 START_MOUNTD=1 2. Enter the following command on the NFS server to start all the necessary NFS processes: /sbin/init.d/nfs.server start □ Enter the following command on the NFS client to make sure the rpc.mountd process on the NFS server is available and responding to RPC requests: /usr/bin/rpcinfo -u servername mountd If the rpcinfo command returns RPC_TIMED_OUT, the rpc.
If the directory is shared with the [access_list] option, make sure your NFS client is included in the [access_list], either individually or as a member of a netgroup. □ Enter the following commands on the NFS server to make sure your NFS client is listed in its hosts database: nslookup client_name nslookup client_IP_address “Permission Denied” Message □ □ □ Check the mount options in the /etc/fstab file on the NFS client. A directory you are attempting to write to may have been mounted as read-only.
/usr/sbin/fuser -ck local_mount_point 3. □ □ Try again to unmount the directory. Verify that the filesystem you are trying to unmount is not a mount-point of another filesystem. Verify that the filesystem is not exported. In HP-UX 11i v3, an exported filesystem keeps the filesystem busy. “Stale File Handle” Message A “stale file handle” occurs when one client removes an NFS-mounted file or directory that another client is accessing.
3. Warn any users to cd out of the directory, and kill any processes that are using the directory, or wait until the processes terminate. Enter the following command to kill all processes using the directory: /usr/sbin/fuser -ck local_mount_point 4. Enter the following command on the client to unmount all NFS-mounted directories: /usr/sbin/umount -aF nfs 5. Enter the following commands to restart the NFS client: /sbin/init.d/nfs.client stop /sbin/init.d/nfs.
Data is Lost Between the Client and the Server □ □ □ □ Make sure that the directory is not exported from the server with the async option. If the directory is exported with the async option, the NFS server will acknowledge NFS writes before actually writing data to disk. If users or applications are writing to the NFS-mounted directory, make sure it is mounted with the hard option (the default), rather than the soft option.
# see krb5.conf(4) for more details # hostname is the fully qualified hostname(FQDN) of host on which kdc is running # domain_name is the fully qualified name of your domain [libdefaults] default_realm = krbhost.anyrealm.com default_tkt_enctypes = DES-CBC-CRC default_tgs_enctypes = DES-CBC-CRC ccache_type = 2 [realms] krbhost.anyrealm.com = { kdc = krbhost.anyrealm.com:88 admin_server = krbhost.anyrealm.com } [domain_realm] .anyrealm.com = krbhost.anyrealm.com [logging] kdc = FILE:/var/log/krb5kdc.
See Installing and Administering LAN/9000 Software for information on troubleshooting LAN problems.
3. If the timeout and badxid values displayed by nfsstat -rc are of the same magnitude, your server is probably slow. Client RPC requests are timing out and being retransmitted before the NFS server has a chance to respond to them. Try doubling the value of the timeo mount option on the NFS clients. See “Changing the Default Mount Options” (page 44).
Improving NFS Client Performance □ □ □ For files and directories that are mounted read-only and never change, set the actimeo mount option to 120 or greater in the /etc/fstab file on your NFS clients. If you see several “server not responding” messages within a few minutes, try doubling the value of the timeo mount option in the /etc/fstab file on your NFS clients.
Each message logged by these daemons can be identified by the date, time, host name, process ID, and name of the function that generated the message. You can direct logging messages from all these NFS services to the same file. To Control the Size of LogFiles Logfiles grow without bound, using up disk space. You might want to create a cron job to truncate your logfiles regularly.
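For example, a root crontab entry along the following lines (the logfile path is illustrative) empties the logfile every Sunday at 1:00 a.m.:
# minute hour monthday month weekday command
0 1 * * 0 cp /dev/null /var/adm/nfs_services.log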
Tracing With nettl and netfmt 1. Enter the following command to make sure nettl is running: /usr/bin/ps -ef | grep nettl If nettl is not running, enter the following command to start it: /usr/sbin/nettl -start 2. Enter the following command to start tracing: /usr/sbin/nettl -tn pduin pduout loopback -e all -s 1024 \ -f tracefile 3. 4. Recreate the event you want to trace. Enter the following command to turn tracing off: /usr/sbin/nettl -tf -e all 5.