NFS Services Administrator's Guide HP-UX 11i version 3 HP Part Number: 5900-1632 Published: September 2011 Edition: 1
© Copyright 2011 Hewlett-Packard Development Company, L.P.
Legal Notices
Confidential computer software. Valid license required from HP for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Preface: About This Document
To locate the latest version of this document, go to the HP-UX Networking docs page at: http://www.hp.com/go/hpux-networking-docs. On this page, select HP-UX 11i v3 Networking Software. This document describes how to configure and troubleshoot NFS Services on HP-UX 11i v3. The document printing date and part number indicate the document's current edition. The printing date will change when a new edition is printed.
Chapter 2  Configuring and Administering NFS  Describes how to configure and administer NFS services.
Chapter 3  Configuring the Cache File System (CacheFS)  Describes the benefits of using the Cache File System and how to configure it on the HP-UX system.
Chapter 4  Configuring and Administering AutoFS  Describes how to configure and administer AutoFS.
Chapter 5  Troubleshooting NFS Services  Describes detailed procedures and tools for troubleshooting the NFS Services.
1 Introduction
This chapter introduces the Open Network Computing (ONC) services, such as NFS, AutoFS, and CacheFS. This chapter addresses the following topics:
• “ONC Services Overview” (page 9)
• “Network File System (NFS)” (page 9)
• “New Features in NFS” (page 10)
ONC Services Overview
Open Network Computing (ONC) services is a technology consisting of core services that enable you to implement distributed applications in a heterogeneous, distributed computing environment.
all the systems on the network, instead of duplicating common directories, such as /usr/local, on each system.
How NFS works
The NFS environment consists of the following components:
• NFS Services
• NFS Shared Filesystems
• NFS Servers and Clients
NFS Services
NFS services are a collection of daemons, kernel components, and commands that enable systems with different architectures, running different operating systems, to share filesystems across a network.
NFSv4 uses the COMPOUND RPC procedure to build a sequence of requests into a single RPC. All RPC requests are classified as either NULL or COMPOUND. All requests that are part of the COMPOUND procedure are known as operations. An operation is a filesystem action that forms part of a COMPOUND procedure. NFSv4 currently defines 38 operations. The server evaluates and processes operations sequentially.
the server file tree. If you use the PUTROOTFH operation, the client can traverse the entire file tree using the LOOKUP operation.
◦ Persistent
A persistent file handle is assigned a fixed value for the lifetime of the filesystem object that it refers to. When the server creates the file handle for a filesystem object, the server must accept the same file handle for the lifetime of the object. The persistent file handle persists across server reboots and filesystem migrations.
user@domain or group@domain
where:
user specifies the string representation of the user
group specifies the string representation of the group
domain specifies a registered DNS domain or a sub-domain of a registered domain
However, UNIX systems use integers to represent users and groups in the underlying filesystems stored on the disk. As a result, using string identifiers requires mapping of string names to integers and back.
For information on how to share directories with the NFS clients, see “Sharing Directories with NFS Clients” (page 21). For information on how to unshare directories, see “Unsharing (Removing) a Shared Directory” (page 31). Following are the new share features that NFS supports: • Secure sharing of directories Starting with HP-UX 11i v3, NFS enables you to share directories in a secure manner.
• Mounting an NFS filesystem through a firewall For information on how to mount an NFS filesystem through a firewall, see “Accessing Shared NFS Directories across a Firewall” (page 28). • Mounting a filesystem securely For information on how to mount a filesystem in a secure manner, see “An Example for Securely Mounting a directory” (page 38). For information on how to disable mount access for a single client, see “Unmounting (Removing) a Mounted Directory” (page 38).
Consider the following points before enabling client-side failover:
• The filesystem must be mounted with read-only permissions.
• The filesystems must be identical on all the redundant servers for the failover to occur successfully. For information on identical filesystems, see “Replicated Filesystems” (page 16).
• Use a static filesystem, or one that is not modified often, for failover.
• File systems that are mounted using CacheFS are not supported for use with failover.
2 Configuring and Administering NFS Services
This chapter describes how to configure and administer an HP-UX system as an NFS server or an NFS client, using the command-line interface. An NFS server exports or shares its local filesystems and directories with NFS clients. An NFS client mounts the files and directories exported or shared by the NFS servers. NFS-mounted directories and filesystems appear as a part of the NFS client’s local filesystem.
You can set user and group IDs using any of the following methods:
• Using the HP-UX System Files (/etc/passwd and /etc/group)
• Using NIS
• Using LDAP
Using the HP-UX System Files
If you are using the HP-UX system files, add the users and groups to the /etc/passwd and /etc/group files, respectively. Copy these files to all the systems on the network. For more information on the /etc/passwd and /etc/group files, see passwd(4) and group(4).
Using LDAP For more information on managing user profiles using LDAP, see the LDAP-UX Client Services B.04.00 Administrator’s Guide (J4269-90064). Configuring and Administering an NFS Server Configuring an NFS server involves completing the following tasks: 1. Identify the set of directories that you want the NFS server to share. For example, consider an application App1 running on an NFS client. Application App1 requires access to the abc filesystem in an NFS server.
Table 3 NFS Server Daemons (continued)
Daemon Name    Function
nfs4srvkd      Supports server side delegation.
rpc.lockd      Supports record lock and share lock operations on the NFS files.
rpc.statd      Maintains a list of clients that have performed the file locking operation over NFS against the server. These clients are monitored and notified in the event of a system crash.
1069548396   ?   0:00 rpc.lockd
1069640883   ?   0:00 rpc.statd
No message is displayed if the daemons are not running. To start the lockd and statd daemons, enter the following command:
/sbin/init.d/lockmgr start
4. Enter the following command to run the NFS startup script:
/sbin/init.d/nfs.server start
The NFS startup script enables the NFS server and uses the variables in the /etc/rc.config.d/nfsconf file to determine which processes to start.
NOTE: Use the bdf command to determine whether your filesystems are on different disks or logical volumes. Each entry in the bdf output represents a separate disk or volume that requires its own entry in the /etc/dfs/dfstab file, if shared. For more information on the bdf command, see bdf(1M). • When you share a directory, the share options that restrict access to a shared directory are applied, in addition to the regular HP-UX permissions on that directory.
-o Enables you to use some of the specific options of the share command, such as sec, async, public, and others.
-d Enables you to describe the filesystem being shared.
When NFS is restarted or the system is rebooted, the /etc/dfs/dfstab file is read and all directories are shared automatically.
2.
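To illustrate the entries discussed above, a share line in the /etc/dfs/dfstab file might look like the following (the pathname and description are illustrative):
share -F nfs -o ro -d "project documentation" /export/docs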
In this example, the /var/mail directory is shared. Root access is allowed for clients Red, Blue, and Green. Superusers on all other clients are considered unknown by the NFS server, and are given the access privileges of an anonymous user. Non-superusers on all clients are allowed read-write access to the /var/mail directory if the HP-UX permissions on the /var/mail directory allow them read-write access.
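A share command that produces the behavior described in this example might look like the following (a sketch reconstructed from the description, using the hostnames given above):
share -F nfs -o root=Red:Blue:Green /var/mail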
You can combine different security modes. However, the security mode specified on the server must be supported by the client. If the modes on the client and server are different, the directory cannot be accessed. For example, an NFS server can combine the dh (Diffie-Hellman) and krb5 (Kerberos) security modes because it supports both modes. However, if the NFS client does not support krb5, the shared directory cannot be accessed using the krb5 security mode.
Re-enter password for verification: Enter policy name (Press enter key to apply default policy) : Principal added. 4. Copy the /etc/krb5.conf file from the Kerberos server to the NFS server node. onc52# rcp /etc/krb5.conf onc20:/etc/ 5. Extract the key for the NFS service principal on the Kerberos server and store it in the /etc/krb5.keytab file on the NFS server. To extract the key, use the Kerberos administration tool kadminl.
For example, specify 06101130 to set the date to June 10th and the time to 11:30 AM. The time difference between the systems should not be more than 5 minutes.
2. Add a principal for each NFS client to the Kerberos database. For example, if the NFS client is onc36.ind.hp.com, the root principal must be added to the Kerberos database before running the NFS applications. To add principals, use the Kerberos administration tool, kadminl:
onc52# /opt/krb5/admin/kadminl
Connecting as: K/M
Connected to krb5v01 in realm ONC52.IND.HP.
sec               Enables you to specify the security mode to be used. Specify krb5, krb5p, or krb5i as the security flavor.
server:/directory Enables you to specify the location of the directory.
mount_point       Enables you to specify the mount-point location where the filesystem is mounted.
An initial ticket grant is carried out when the user accesses the mounted filesystem.
Example
onc36# mount -F nfs -o sec=krb5 onc36:/export_krb5 /aaa
1.
For example, to determine the port numbers, enter the following command:
rpcinfo -p
An output similar to the following is displayed:
program  vers  proto  port   service
100024   1     udp    49157  status
100024   1     tcp    49152  status
100021   2     tcp    4045   nlockmgr
100021   3     udp    4045   nlockmgr
100005   3     udp    49417  mountd
100005   3     tcp    49259  mountd
100003   2     udp    2049   nfs
100003   3     tcp    2049   nfs
Each time the rpc.statd and rpc.
Sharing directories across a firewall using the NFSv4 protocol
NFSv4 is a single protocol that handles mounting and locking operations for NFS clients and servers. The NFSv4 protocol runs on port 2049 by default. To override the default port number (2049) for the NFSv4 protocol, modify the port number for the nfsd entry in the /etc/services file. Configure the firewall based on the port number set.
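For example, the nfsd entries in the /etc/services file might be changed as follows (the replacement port number 4049 is purely illustrative):
nfsd    4049/tcp    # NFS server (changed from the default 2049)
nfsd    4049/udp    # NFS server (changed from the default 2049)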
To access the shared directory across a firewall using the WebNFS feature, configure the firewall to allow connections to the port number used by the nfsd daemon. By default the nfsd daemon uses port 2049. Configure the firewall based on the port number configured.
NOTE: To unshare all the directories without restarting the server, use the unshareall command. This command reads the entries in the /etc/dfs/dfstab file and unshares all the shared directories. Use the share command to verify whether all the directories are unshared. To unshare a shared directory and to prevent it from being automatically shared, follow these steps: Automatic Unshare 1. 2.
Configuration Files
Table 6 describes the NFS configuration files and their functions.
Table 6 NFS client configuration files
File Name          Function
/etc/mnttab        Contains the list of filesystems that are currently mounted.
/etc/dfs/fstypes   Contains the default distributed filesystem type.
/etc/fstab         Contains the list of filesystems that are automatically mounted at system boot time.
Daemons
Table 7 describes the NFS client daemons and their functions.
You can also configure the client protocol version to NFSv4 by specifying vers=4 while mounting the directory. For example, to set the client protocol version to NFSv4 while mounting the /usr/kc directory, enter the following command:
mount -o vers=4 serv:/usr/kc /usr/kc
For more information on NFSv4, see nfsd(1m), mount_nfs(1m), nfsmapid(1m), and nfs4cbd(1m).
You can mount a filesystem using the following methods:
• Automatic Mounting at System Boot Time: To set up a filesystem for automatic mounting at system boot time, you must configure it in the /etc/fstab file (see the example entry after this list). All filesystems specified in the /etc/fstab file are mounted during system reboot.
• Manual Mounting: When you manually mount a filesystem, it is not persistent across reboots or when the NFS client restarts.
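For example, an /etc/fstab entry for an NFS mount might look like the following (the server name, paths, and options are illustrative; see mount_nfs(1M) for the options that apply to your configuration):
basil:/export/project  /project  nfs  rw,hard,nosuid  0  0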
/opt/nfstest from hpnfsweb:/home/tester Flags: vers=3,proto=udp,sec=sys,hard,intr,link,symlink,acl,devs, rsize=32768,wsize=32768,retrans=5,timeo=11 Attr cache: acregmin=3,acregmax=60,acdirmin=30,acdirmax=60 Lookups: srtt=33 (82ms), dev=33 (165ms), cur=20 (400ms) The directory that you have mounted must be present in this list. For a list of mount options, see mount_nfs(1M).
Figure 4 NFS Mount of manpages
• Mounting a Home directory
mount -r -o nosuid broccoli:/home/broccoli /home/broccoli
mount -r -o nosuid cauliflower:/home/cauliflower /home/cauliflower
In this example, the NFS client mounts the home directories from NFS servers broccoli and cauliflower. The nosuid option prevents programs with setuid permission from executing on the local client. Figure 5 illustrates this example.
• Mounting a replicated set of NFS filesystems with the same pathnames
mount -r onc21,onc23,onc25:/Casey/Clay /Casey/Clay
In this example, the NFS client mounts a single filesystem, /Casey/Clay, that has been replicated to a number of servers with the same pathnames. This enables the NFS client to fail over to server onc21, onc23, or onc25 if the current server becomes unavailable.
NOTE: Before you unmount a directory, run the fuser -cu command to determine whether the directory is currently in use. The fuser command lists the process IDs and user names of all the processes that are using the mounted directory. If users are accessing the mounted directories, they must exit the directories before you unmount the directory. To unmount a mounted directory and prevent it from being automatically mounted, follow these steps: Automatic Unmount 1.
NFS Client and Server Transport Connections NFS runs over both UDP and TCP transport protocols. The default transport protocol is TCP. Using the TCP protocol increases the reliability of NFS filesystems working across WANs and ensures that the packets are successfully delivered. TCP provides congestion control and error recovery. NFS over TCP and UDP works with NFS Version 2 and Version 3. NOTE: TCP is the only transport protocol supported by NFS Version 4.
Changes to the NFS Server Daemon
The NFS server daemon (nfsd) handles client filesystem requests. By default, nfsd starts over TCP and UDP for NFSv2 and NFSv3. If NFSv4 is enabled, the nfsd daemon is started to service all TCP and UDP requests. If you want to change startup parameters for nfsd, you must log in as superuser (root) and make changes to the /etc/default/nfs file or use the setoncenv command. The /etc/default/nfs file provides startup parameters for the nfsd daemon and rpc.lockd daemon.
The NIS_domain field specifies the NIS domain in which the triple (host, user, NIS_domain) is valid. For example, if the netgroup database contains the following netgroup:
myfriends (sage,-,bldg1) (cauliflower,-,bldg2) (pear,-,bldg3)
and an NFS server running NIS in the domain bldg1 shares a directory only with the netgroup myfriends, only the host sage can mount that directory. The other two triples are ignored, because they are not valid in the bldg1 domain.
The first occurrence of it is read for the host name, and the second occurrence is read for the user name. No relationship exists between the host and user in any of the triples. For example, user jane may not even have an account on host sage. A netgroup can contain other netgroups, as in the following example:
root-users (dill,-, ) (sage,-, ) (thyme,-, ) (basil,-, )
mail-users (rosemary, , ) (oregano, , ) root-users
The root-users netgroup is a group of four systems.
vandals ( ,pat, ) ( ,harriet, ) ( ,reed, ) All users except those listed in the vandals netgroup can log in to the local system without supplying a password from any system in the network. CAUTION: Users who are denied privileged access in the /etc/hosts.equiv file can be granted privileged access in a user’s $HOME/.rhosts file. The $HOME/.rhosts file is read after the /etc/hosts.equiv file and overrides it. For more information, see hosts.equiv(4).
teddybears::23:pooh,paddington -@bears For more information on NIS, see NIS Administrator’s Guide (5991-2187). For information on the /etc/group file, see group(4). Configuring RPC-based Services This section describes the following tasks: • “Enabling Other RPC Services” (page 45) • “Restricting Access to RPC-based Services” (page 46) Enabling Other RPC Services 1. In the /etc/inetd.conf file, use a text editor to uncomment the entries that begin with “rpc” .
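For example, an rpc entry in the /etc/inetd.conf file looks similar to the following commented-out line for the rexd service; removing the leading # enables it (the path and fields shown are illustrative and vary by system):
#rpc stream tcp nowait root /usr/sbin/rpc.rexd 100017 1 rpc.rexd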
Table 9 RPC Services managed by inetd (continued) RPC Service Description sprayd The rpc.sprayd program is the server for the spray command, which sends a stream of packets to a specified host and then reports how many were received and how fast. For more information, see sprayd (1M) and spray (1M). rquotad The rpc.rquotad program responds to requests from the quota command, which displays information about a user’s disk usage and limits. For more information, see rquotad (1M) and quota (1).
3 Configuring and Administering AutoFS This chapter provides an overview of AutoFS and the AutoFS environment. It also describes how to configure and administer AutoFS on a system running HP-UX 11i v3.
mounts. The filesystem interacts with the automount command and the automountd daemon to mount filesystems automatically. The automount Command This command installs the AutoFS mount-points, and associates an automount map with each mount-point. The AutoFS filesystem monitors attempts to access directories within it and notifies the automountd daemon. The daemon locates a filesystem using the map, and then mounts this filesystem at the point of reference within the AutoFS filesystem.
when not in use, use the -t option of the automount command. For more information on the different options supported by automount, see automount(1M) and automountd(1M). CAUTION: You must maintain filesystems managed by AutoFS by using the automountd and automount utilities. Manually mounting and unmounting file systems managed by AutoFS can cause disruptive or unpredictable results, including but not limited to commands hanging or not returning expected results.
Figure 7 AutoFS In this figure, AFS1, AFS2, and Sage are AutoFS clients. Thyme and Basil are the NFS servers. The NFS servers export directories. The AutoFS clients use maps to access the exported directories. For instance, the NFS server, Basil, exports the /export directory. If you are a user on any of the AutoFS clients, you can use maps to access the /export directory from the NFS server, Basil.
Features
This section discusses the features that AutoFS supports on systems running HP-UX 11i v3.
On-Demand Mounting
In HP-UX 11i v3, filesystems are mounted automatically when they are accessed. The AutoFS file system provides a mechanism for automatically mounting NFS file systems on demand and for automatically unmounting these file systems after a predetermined period of inactivity. NOTE: The AutoFS mount points must not be hierarchically related.
Consider the following scenario where the AutoFS master map, /etc/auto_master, and the indirect map, /etc/auto_indirect, are on the NFS client, sage.
Supported Filesystems AutoFS enables you to mount different types of filesystems. To mount the filesystems, use the fstype mount option, and specify the location field of the map entry. Following is a list of supported filesystems and the appropriate map entry: AutoFS mount-point -fstype=autofs autofs_map_name NOTE: You can specify another AutoFS map name in the location field of the map-entry. This would enable AutoFS to trigger other AutoFS mounts.
document, go to the HP-UX Networking docs page at: http://www.hp.com/go/ hpux-networking-docs. On this page, select HP-UX 11i v3 Networking Software. • LDAP: A directory service that stores information, which is retrieved by clients throughout the network. To simplify HP-UX system administration, the LDAP-UX Integration product centralizes user, group, and network information management in an LDAP directory. For more information on the LDAP-UX Integration product, see the LDAP-UX Client Services B.04.
1. If the AutoFS maps are not already migrated, migrate your AutoFS maps to LDAP Directory Interchange Format (LDIF) files using the migration scripts. The LDAP-UX client only supports the new AutoFS schema, and the migration scripts will migrate the maps according to the new schema. For information on the specific migration scripts, see LDAP-UX Client Services B.04.10 Administrator’s Guide (J4269-90067). 2. Import the LDIF files into the LDAP directory server using the ldapmodify tool.
1. In the /etc/rc.config.d/nfsconf file, the AUTOFS variable is set to 1.
2. Any options you had specified in the AUTO_OPTIONS variable are copied to either the AUTOMOUNT_OPTIONS or the AUTOMOUNTD_OPTIONS variable. Obsolete options are removed. Table 11 lists the options of the old automount command and the equivalent AutoFS command options. It also indicates which automount options are not supported in AutoFS.
Table 11 Old Automount Command-Line Options Used By AutoFS
3.
Table 12 AutoFS Configuration Variables
Variable Name        Description
AUTOFS               Specifies whether the system uses AutoFS. Set the value to 1 to specify that this system uses AutoFS. Set the value to 0 to specify that this system does not use AutoFS. The default value of AUTOFS is 1. NOTE: If you set the value of AUTOFS to 1, the NFS_CORE core configuration variable must also be set to 1.
AUTOMOUNT_OPTIONS    Specifies a set of options to be passed to the automount command when it is run. The default value is “ ”.
For more information on configuring the NSS, see nsswitch.conf (4) and automount(1M) . To configure AutoFS using the /etc/default/autofs file, follow these steps: 1. Log in as superuser. 2. Edit the /etc/default/autofs file. For instance, to change the default time for which a filesystem remains mounted when not in use, modify the AUTOMOUNT_TIMEOUT variable, as follows: AUTOMOUNT_TIMEOUT=720 For the list of parameters supported in the /etc/default/autofs file, see autofs(1M).
NOTE: If the same mount-point is used in two entries, the first entry is used by the automount command. The second entry is ignored. You must run the automount command after you modify the master map or a direct map.
Figure 9 Difference Between Direct Mounts and Indirect Mounts In the Mounts in a Direct Map figure, mounts are configured in various places in the local filesystem and not located under the same parent directory. In the Mounts in an Indirect Map figure, all the mounts are configured under the same parent directory. CAUTION: Any filesystems that are being managed by AutoFS should never be manually mounted or unmounted.
IMPORTANT: Do not automount a remote directory on a local directory, which is a symbolic link. Ensure that the local mount-point specified in the AutoFS map entry is different from the exported directory on the NFS server. If it is the same, and the NFS server also acts as an NFS client and uses AutoFS with these map entries, the exported directory can attempt to mount over itself. This can result in unexpected behavior.
/auto/project/specs   -nosuid   thyme:/export/project/specs
/auto/project/budget  -nosuid   basil:/export/FY94/proj1
Figure 10 illustrates how AutoFS sets up direct mounts.
Figure 10 How AutoFS Sets Up Direct Mounts
Automounting a Remote Directory Using an Indirect Map
This section describes how to automount a remote directory using an indirect map. To automount a remote directory using an indirect map, follow these steps:
1.
IMPORTANT: Ensure that local_parent_directory and local_subdirectory are not already created. AutoFS creates them when it mounts the remote directory. If these directories exist, the files and directories in them are hidden when the remote directory is mounted. Ensure that the local mount-point specified in the AutoFS map entry is different from the exported directory on the NFS server.
# /etc/auto_desktop file
# local mount-point   mount options   remote server:directory
draw                  -nosuid         thyme:/export/apps/draw
write                 -nosuid         basil:/export/write
Figure 11 illustrates how AutoFS sets up indirect mounts.
Figure 11 How AutoFS Sets Up NFS Indirect Mounts
Using Environment Variables as Shortcuts in AutoFS Maps
This section describes how to use environment variables as shortcuts in AutoFS maps using an example.
For information on some of the pre-defined environment variables, see automount(1M). Using Wildcard Characters as Shortcuts in AutoFS Maps Using wildcard characters makes it very easy to mount all the directories from a remote server to an identically named directory on the local host.
*        basil:/export/home/&
charlie  thyme:/export/home/charlie
AutoFS attempts to mount /export/home/charlie from the host, basil. If the asterisk is a match for charlie, AutoFS looks no further and never reads the second line. However, if the /etc/auto_home map contains the following lines,
charlie  thyme:/export/home/charlie
*        basil:/export/home/&
AutoFS mounts Charlie’s home directory from host thyme and other home directories from host basil.
Figure 12 Home Directories Automounted with Wildcards Special Maps There are two types of special maps: -hosts and -null. By default, the -hosts map is used with the /net directory and assumes that the map entry is the hostname of the NFS server. The automountd daemon dynamically creates a map entry from the server's list of exported filesystems. For example, a reference to /net/Casey/usr initiates an automatic mount of all the exported filesystems from Casey which can be mounted by the client.
Notes on the -hosts Map The -hosts map is a built-in AutoFS map. It enables AutoFS to mount exported directories from any NFS server found in the hosts database, whenever a user or a process requests access to one of the exported directories from that server. CAUTION: You may inadvertently cause an NFS mount over X.25 or SLIP, which is unsupported, or through a slow router or gateway, because the -hosts map allows NFS access to any reachable remote system.
Turning Off an AutoFS Map Using the -null Map To turn off a map using the -null map, follow these steps: 1. Add a line with the following syntax in the AutoFS master map: local_directory -null 2. If AutoFS is running, enter the following command on each client that uses the map, to force AutoFS to reread its maps: /usr/sbin/automount This enables AutoFS to ignore the map entry that does not apply to your host.
• “Including a Map in Another Map” (page 72) • “Creating a Hierarchy of AutoFS Maps” (page 72) Automounting Multiple Directories (Multiple Mounts) AutoFS enables you to automount multiple directories simultaneously. Use an editor to create an entry with the following format in a direct or indirect map, and if needed, create the auto_master entry: local_dir /local_subdirectory [-options] \ server:remote_directory \ /local_subdirectory [-options] server:remote_directory \ ...
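For example, a map entry that mounts two subdirectories from two servers under a single local directory might look like the following (the map key and remote paths are illustrative; the server names reuse thyme and basil from earlier examples):
project  /specs   -nosuid  thyme:/export/project/specs \
         /budget  -nosuid  basil:/export/FY94/proj1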
To configure multiple replicated servers for a directory, follow these steps: 1. Create and configure the /etc/netmasks file. AutoFS requires the /etc/netmasks file to determine the subnets of local clients in a replicated multiple server environment. The /etc/netmasks file contains IP address masks with IP network numbers. It supports both standard subnetting as specified in RFC-950, and variable-length subnetting as specified in RFC-1519.
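For example, each /etc/netmasks entry pairs a network number with its netmask, one pair per line (the addresses shown are illustrative):
192.6.10.0    255.255.255.0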
Including a Map in Another Map If you want your map to refer to an external map, you can do so by including the external map in your map. The entries in the external map are read as if they are part of your map.
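For example, a line consisting of a plus sign followed by a map name includes that map's entries at that point (the auto_home name is used here for illustration):
+auto_home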
Starting with the AutoFS mount at /org, the evaluation of this path dynamically creates additional AutoFS mounts at /org/eng and /org/eng/projects. No action is required for the changes to take effect on the user's system because the AutoFS mounts are created only when required. You need to run the automount command only when you make changes to the master map or to a direct map.
draw   -nosuid   thyme:/export/apps/draw
write  -nosuid   basil:/export/write
Enter the following commands:
cd /nfs/desktop
ls
The ls command displays the following output:
draw write
The draw and write subdirectories are the potential mount-points (browsability), but are not currently mounted.
IMPORTANT: Do not stop the automountd daemon with the kill command. It does not unmount AutoFS mount-points before it terminates. Use the autofs stop command instead. 5. To ensure that AutoFS is no longer active, enter the ps command: /usr/bin/ps -ef | grep automount If the ps command indicates that AutoFS is still active, ensure that all users have exited the automounted directories and then try again. Do not restart AutoFS until all the automount processes are terminated. 6.
To Stop AutoFS Logging
To stop AutoFS logging, stop AutoFS and restart it after removing the “-v” option from AUTOMOUNTD_OPTIONS.
AutoFS Tracing
AutoFS supports the following trace levels:
Detailed (level 3)  Includes traces of all the AutoFS requests and replies, mount attempts, timeouts, and unmount attempts. You can start level 3 tracing while AutoFS is running.
Basic (level 1)  Includes traces of all the AutoFS requests and replies. You must restart AutoFS to start level 1 tracing.
5. Warn users to exit the directory, and kill processes that are using the directory, or wait until all the processes terminate. Enter the following command to kill all the processes using the mounted directory: /usr/sbin/fuser -ck local_mount_point 6. Enter the following command to stop AutoFS: /sbin/init.d/autofs stop CAUTION: Do not kill the automountd daemon with the kill command. It does not unmount AutoFS mount-points before it dies. 7.
Unmount Event Tracing Output The general format of an unmount event trace is: UNMOUNT REQUEST:
4 Configuring and Administering a Cache Filesystem This chapter introduces the Cache Filesystem (CacheFS) and the CacheFS environment. It also describes how to configure and administer CacheFS on a system running HP-UX 11i v3.
warm cache A cache that contains data in its front filesystem is called a warm cache. In this case, the cached data can be returned to the user without requiring an action from the back filesystem. cache hit A successful attempt to reference data that is cached is called a cache hit. How CacheFS Works Figure 15 displays a sample CacheFS environment. Figure 15 Sample CacheFS Network In the figure, cachefs1, cachefs2, and cachefs3 are CacheFS clients.
The functionality provided by this command is an alternative to the rpages mount option. For information on how to pre-load or pack files, see “Packing a Cached Filesystem” (page 85). • Complete Binary Caching via the “rpages” mount option CacheFS is commonly used to manage application binaries. The rpages mount option forces the client to cache a complete copy of the accessed application binary file.
• Support for ACLs An Access Control List (ACL) offers stronger file security by enabling the owner of the file to define file permissions for specific users and groups. This version of CacheFS on HP-UX supports ACLs with VxFS and NFS and not with HFS. • Support for Logging A new command, cachefslog, is used to enable or disable logging for a CacheFS mount-point. If logging functionality is enabled, details about the operations performed on the CacheFS mount-point are stored in a logfile.
CacheFS allows more than one filesystem to be cached in the same cache. You need not create a separate cache directory for each CacheFS mount. Mounting an NFS Filesystem Using CacheFS This section describes how to mount an NFS filesystem using CacheFS.
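Before a filesystem can be mounted with CacheFS, the cache directory must exist. A minimal sketch of creating one with the cfsadmin command (the directory name /cache is illustrative and matches the mount examples that follow):
cfsadmin -c /cache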
To change the mount option from default to weakconst after unmounting, enter the following command: mount -F cachefs -o backfstype=nfs,cachedir=/cache,weakconst CFS2:/tmp /mnt1 For more information on the various mount options of the CacheFS filesystem, see mount_cachefs(1M). Automounting a Filesystem Using CacheFS This section describes how to automount a filesystem using CacheFS. Before you automount an NFS filesystem using CacheFS, you must configure a directory in a local filesystem as cache.
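For example, an AutoFS map entry that mounts an NFS directory through CacheFS might look like the following (a sketch assuming the cache directory /cache and the server CFS2 from the earlier example; the fstype option is described in “Supported Filesystems”):
cfsdir  -fstype=cachefs,backfstype=nfs,cachedir=/cache  CFS2:/tmp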
cachefslog /cfs_mnt1 If logging has been enabled, the logfile is displayed. Disabling Logging in CacheFS You can use the cachefslog command to halt or disable logging for a CacheFS mount-point. To disable CacheFS logging, follow these steps: 1. Log in as superuser. 2.
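A sketch of the halt step itself, assuming the -h option of the cachefslog command stops logging for a mount-point:
cachefslog -h /cfs_mnt1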
You can pack files using one of the following methods: • Specifying the files or directories to be packed Enter the following command to pack a file in the cache: cachefspack -p filename where: -p Packs the file or files in the cache. filename Name of the file or directory that is to be packed in the cache. NOTE: When you pack a directory, all files in that directory, subdirectories, and files in the subdirectories are packed.
filename Specifies the file to unpack.
• Using the -U option
To unpack all the packed files in the cache, enter the following command:
cachefspack -U cache-dir
where:
-U Specifies that you want to unpack all the packed files in the cache.
cache-dir Specifies the cache directory that is to be unpacked.
For more information about the cachefspack command, see cachefspack(1M).
fsck_cachefs -F cachefs [-m | -o noclean] cache-directory where: -m Specifies that the cache must be checked without making any repairs. noclean Forces a check on the CacheFS filesystems. cache-directory Specifies the name of the directory where the cache resides. The cache directory must not be in use while performing a cache integrity check using the fsck command.
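For example, to check the cache under /cache without making repairs (the directory name is illustrative):
fsck_cachefs -F cachefs -m /cache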
cacheID Specifies the name of the cache filesystem. all Specifies that all cached filesystems in the cache-directory are to be deleted. cache-directory Specifies the name of the cache directory where the cache resides. NOTE: The cache directory must not be in use when attempting to delete a cached filesystem or the cache directory. To delete the cache directory, follow these steps: 1.
To delete a cache directory and all the CacheFS filesystems in that directory, enter the following command: cfsadmin -d all cache-directory Using CacheFS After the administrator has configured a cache directory and mounted or automounted the CacheFS filesystem, you can use CacheFS as you would any other filesystem. The first time you access data through CacheFS, an over-the-wire call is made to the NFS server and the data is copied to your local cache. The first request is slow.
An output similar to the following is displayed:
1/0 0:00 0 Mount e000000245e31080 411 65536 256 /cfs_mnt1 (cachefs1:_cache_exp:_cfs_mnt1)
1/0 0:00 0 Mdcreate e000000245e31080 517500609495040 1
1/0 0:00 0 Filldir e000000245e31080 517500609495040 8192
1/0 0:00 0 Nocache e000000245e31080 517500609495040
1/0 0:00 0 Mkdir e000000245e31080 517264386293760 0
Where:
_cfs_mnt1 Specifies the CacheFS mount-point
cachefs1 Specifies the server name
_cache_exp Specifies the mounted directory
Table 17 Common Error Messages encountered while using the mount command
Error Message: “mount -F cachefs: /test/c/.cfs_mnt_points is not a valid cache”
Possible Cause: The /c directory may not be a valid cache directory.
Resolution:
1. Delete the cache.
2. Recreate the cache directory using the cfsadmin command.
5 Troubleshooting NFS Services This chapter describes tools and procedures for troubleshooting the NFS Services. This chapter addresses the following topics: • “Common Problems with NFS” (page 93) • “Performance Tuning” (page 99) • “Logging and Tracing of NFS Services” (page 101) Common Problems with NFS This section lists the following common problems encountered with NFS and suggests ways to correct them.
◦ rpc.statd
◦ rpc.lockd
If any of these processes is not running, follow these steps:
1. Make sure the /etc/rc.config.d/nfsconf file on the NFS server contains the following lines:
NFS_SERVER=1
START_MOUNTD=1
2. Enter the following command on the NFS server to start all the necessary NFS processes:
/sbin/init.d/nfs.server start
□ Enter the following command on the NFS client to make sure the rpc.
shareall -F nfs If the directory is shared with the [access_list] option, make sure your NFS client is included in the [access_list], either individually or as a member of a netgroup. □ Enter the following commands on the NFS server to make sure your NFS client is listed in its hosts database: nslookup client_name nslookup client_IP_address “Permission Denied” Message □ Check the mount options in the /etc/fstab file on the NFS client.
/usr/sbin/fuser -ck local_mount_point 3. Try again to unmount the directory. □ Verify that the filesystem you are trying to unmount is not a mount-point of another filesystem. □ Verify that the filesystem is not exported. In HP-UX 11i v3, an exported filesystem keeps the filesystem busy. “Stale File Handle” Message A “stale file handle” occurs when one client removes an NFS-mounted file or directory that another client is accessing.
3. Warn any users to cd out of the directory, and kill any processes that are using the directory, or wait until the processes terminate. Enter the following command to kill all processes using the directory: /usr/sbin/fuser -ck local_mount_point 4. Enter the following command on the client to unmount all NFS-mounted directories: /usr/sbin/umount -aF nfs 5. Enter the following commands to restart the NFS client: /sbin/init.d/nfs.client stop /sbin/init.d/nfs.
Data is Lost Between the Client and the Server □ Make sure that the directory is not exported from the server with the async option. If the directory is exported with the async option, the NFS server will acknowledge NFS writes before actually writing data to disk. □ If users or applications are writing to the NFS-mounted directory, make sure it is mounted with the hard option (the default), rather than the soft option.
# # Kerberos configuration # This krb5.conf file is intended as an example only. # see krb5.conf(4) for more details # hostname is the fully qualified hostname(FQDN) of host on which kdc is running # domain_name is the fully qualified name of your domain [libdefaults] default_realm = krbhost.anyrealm.com default_tkt_enctypes = DES-CBC-CRC default_tgs_enctypes = DES-CBC-CRC ccache_type = 2 [realms] krbhost.anyrealm.com = { kdc = krbhost.anyrealm.com:88 admin_server = krbhost.anyrealm.com } [domain_realm] .
See Installing and Administering LAN/9000 Software for information on troubleshooting LAN problems. 3. If the timeout and badxid values displayed by nfsstat -rc are of the same magnitude, your server is probably slow. Client RPC requests are timing out and being retransmitted before the NFS server has a chance to respond to them. Try doubling the value of the timeo mount option on the NFS clients. See “Changing the Default Mount Options” (page 38).
□ If you frequently see the following message when attempting access to a soft-mounted directory, NFS operation failed for server servername: Timed out try increasing the value of the retrans mount option in the /etc/fstab file on the NFS clients. Or, change the soft mount to an interruptible hard mount, by specifying the hard and intr options (the defaults).
To Control the Size of LogFiles Logfiles grow without bound, using up disk space. You might want to create a cron job to truncate your logfiles regularly. Following is an example crontab entry that empties the logfile at 1:00 AM every Monday, Wednesday, and Friday: 0 1 * * 1,3,5 cat /dev/null > log_file For more information, type man 1M cron or man 1 crontab at the HP-UX prompt. To Configure Logging for the Other NFS Services 1. Add the -l logfile option to the lines in /etc/inetd.
If nettl is not running, enter the following command to start it:
/usr/sbin/nettl -start
2. Enter the following command to start tracing:
/usr/sbin/nettl -tn pduin pduout loopback -e all -s 1024 \
-f tracefile
3. Recreate the event you want to trace.
4. Enter the following command to turn tracing off:
/usr/sbin/nettl -tf -e all
5.