NFS Services Administrator's Guide HP-UX 11i version 3 HP Part Number: B1031-90067 Published: July 2008
© Copyright 2008 Hewlett-Packard Development Company, L.P. Legal Notices Confidential computer software. Valid license required from HP for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
Table of Contents Preface: About This Document ......................................................................................................11 Intended Audience.............................................................................................................11 Publishing History..............................................................................................................11 What's in This document............................................................................
Examples for Securely Sharing Directories........................................................40 Accessing Shared NFS Directories across a Firewall....................................................42 Sharing directories across a firewall without fixed port numbers (NFSv2 and NFSv3)......................................................................................................................42 Sharing directories across a firewall using fixed port numbers in the nfs file........
AutoFS Filesystem.........................................................................................................68 The automount Command..........................................................................................68 The automountd Daemon ..........................................................................................68 Maps Overview..................................................................................................................69 Features ...................
Example of using the /etc/netmasks File...........................................................98 Notes on Configuring Replicated Servers...............................................................99 Including a Map in Another Map.................................................................................99 Creating a Hierarchy of AutoFS Maps........................................................................100 Sample Map Hierarchy.............................................................
Displaying CacheFS Statistics......................................................................................124 Viewing the CacheFS Statistics..............................................................................124 Viewing the Working Set (Cache) Size........................................................................125 Common Problems While Using CacheFS.......................................................................126 5 Troubleshooting NFS Services.............................
List of Figures 1-1 2-1 2-2 2-3 2-4 3-1 3-2 3-3 3-4 3-5 3-6 3-7 3-8 3-9 4-1 8 Server View of the Shared Directories........................................................................18 Symbolic Links in NFS Mounts...................................................................................32 WebNFS Session..........................................................................................................45 NFS Mount of manpages..............................................................
List of Tables 1 2-1 2-2 2-3 2-4 2-5 2-6 2-7 2-8 3-1 3-2 3-3 3-4 4-1 4-2 4-3 4-4 4-5 5-1 Publishing History Details..........................................................................................11 NFS Server Configuration Files..................................................................................28 NFS Server Daemons..................................................................................................28 Security Modes of the share command..................................
Preface: About This Document The latest version of this document can be found online at: http://www.docs.hp.com This document describes how to configure and troubleshoot NFS Services on HP-UX 11i v3. The document printing date and part number indicate the document's current edition. The printing date will change when a new edition is printed. Minor changes may be made at reprint without changing the printing date. The document part number will change when extensive changes are made.
What's in This Document This manual describes how to install, configure, and troubleshoot the NFS Services product. The manual is organized as follows: Chapter 1 Introduction Describes the Network File System (NFS) services, such as NFS, AutoFS, and CacheFS. It also describes new features available with HP-UX 11i v3. Chapter 2 Configuring and Administering NFS Describes how to configure and administer NFS services.
1 Introduction This chapter introduces the Open Network Computing (ONC) services, such as NFS, AutoFS, and CacheFS. This chapter addresses the following topics: • • • “ONC Services Overview” (page 13) “Network File System (NFS)” (page 14) “New Features in NFS” (page 15) ONC Services Overview Open Network Computing (ONC) services is a technology that consists of core services which enable you to implement distributed applications in a heterogeneous, distributed computing environment.
implements a lock recovery service used by KLM. It enables the rpc.lockd daemon to recover locks after the NFS service restarts. Files can be locked using the lockf() or fcntl() system calls. For more information on the daemons and system calls that enable you to lock and synchronize your files, see lockd(1M), statd(1M), lockf(2), and fcntl(2). • Remote Procedure Call (RPC) is a mechanism that enables a client application to communicate with a server application.
NFS Servers and Clients In the NFS context, a system that shares its filesystems over a network is known as a server, and a system that mounts and accesses these shared filesystems is known as a client. The NFS service enables a system to access a filesystem located on a remote system. Once the filesystem is shared by a server, it can be accessed by a client. Clients access files on the server by mounting the shared filesystem. For users, these mounted filesystems appear as a part of the local filesystem.
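The server and client roles described above can be sketched with the share and mount commands covered later in this guide. The hostnames and paths below are hypothetical:

```shell
# Hypothetical hosts: "basil" is the NFS server, "sage" the client.

# On the server, share a local directory read-only
# (the share command is described in Chapter 2):
share -F nfs -o ro /export/docs

# On the client, mount the shared directory onto a local
# mount-point; it then appears as part of the client's
# local filesystem:
mkdir -p /docs
mount basil:/export/docs /docs
```

Once mounted, users on sage read files under /docs without knowing the data resides on basil.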
The NFSv4 protocol design enables NFS developers to add new operations that are based on IETF specifications. • Delegation In NFSv4, the server can delegate certain responsibilities to the client. Delegation enables a client to locally service operations, such as OPEN, CLOSE, LOCK, LOCKU, READ, and WRITE, without immediate interactions with the server. The server grants either a READ or a WRITE delegation to a client at OPEN. After the delegation is granted, the client can perform all operations locally.
NOTE: In NFSv2 and NFSv3, ACLs are manipulated using NFSACL protocol. If systems in your environment do not support the NFSACL protocol, then ACLs cannot be manipulated using this feature. • File Handle Types File handles are created on the server and contain information that uniquely identify files and directories. Following are the different file handle types: — ROOT The ROOT file handle represents the conceptual root of the file system namespace on an NFS server.
filesystems without having to mount them, to access the shared points from a single common root. The NFSv4 specification does not require a client to traverse the NFS server’s namespace. For example, a server shares the /opt/dce, /opt/hpsmh, and /doc/archives directories. In this list, the hierarchy of shared directories is not connected. The /doc/archives directory is neither a parent nor a child directory of /opt. Figure 1-1 shows the server view of the shared directories.
However, UNIX systems use integers to represent users and groups in the underlying filesystems stored on the disk. As a result, using string identifiers requires mapping of string names to integers and back. The nfsmapid daemon is used to map the owner and owner_group identification attributes with the local user identification (UID) and group identification (GID) numbers, which are used by both the NFSv4 server and the NFSv4 client.
In earlier versions of HP-UX, the exportfs command was used to export directories and files to other systems over a network. Users and programs accessed the exported files on remote systems as if they were part of the local filesystem. Exporting of a directory was disabled using the -u option of the exportfs command. For information on how to share directories with the NFS clients, see “Sharing Directories with NFS Clients” (page 30).
Mounting and Unmounting Directories NFS clients can mount any filesystem or a part of a filesystem that is shared by the NFS server. Filesystems can be mounted automatically when the system boots, from the command line, or through the automounter. The different ways to mount a filesystem are as follows: • Mounting a filesystem at boot time and using the mount command For information on how to mount a filesystem at boot time, see “Mounting a Remote Directory on an NFS client” (page 51).
Secure Sharing of Directories In earlier versions of HP-UX, NFS used AUTH_SYS authentication, which uses UNIX-style credentials (uid/gid) to allow access to the shared files. It is fairly simple to develop an application or server that can masquerade as a user, because the uid/gid ownership of a file can be viewed. The AUTH_DH authentication method was introduced to address the vulnerabilities of the AUTH_SYS authentication method.
Replicated Filesystems A replicated filesystem contains the same directory structure and identical files as the original. A replica (identical copy) of a filesystem consists of files of the same size and same file type as the original filesystem. HP recommends that you create these replicated filesystems using the rdist utility. The rdist utility enables you to maintain identical copies of files on multiple hosts.
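As a sketch of how rdist might be driven, the Distfile below pushes a master directory to two replica servers. The hostnames and path are hypothetical, and the install option letters vary between rdist versions, so check rdist(1) on your system:

```shell
# Hypothetical: push /export/project from this host to the replica
# servers onc23 and onc25. "install -R" asks classic rdist to remove
# files on the replicas that no longer exist on the master.
cat > Distfile <<'EOF'
HOSTS = ( onc23 onc25 )
FILES = ( /export/project )
${FILES} -> ${HOSTS}
	install -R;
EOF
rdist -f Distfile
```

Running this periodically (for example, from cron) keeps the replicas identical, which is what NFS client failover requires.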
2 Configuring and Administering NFS Services This chapter describes how to configure and administer an HP-UX system as an NFS server or an NFS client, using the command-line interface. An NFS server exports or shares its local filesystems and directories with NFS clients. An NFS client mounts the files and directories exported or shared by the NFS servers. NFS-mounted directories and filesystems appear as a part of the NFS client’s local filesystem.
NOTE: NFSv4 uses string identifiers that map to user IDs and group IDs in the standard integer format. For more information on string identifiers supported on NFSv4, see “New Features in NFS” (page 15). Consider the following points when you set user IDs and group IDs:
• Each user must have the same user ID on all systems where that user has an account.
• Each group has the same group ID on all systems where that group exists.
• No two users on the network have the same user ID.
use the following procedures depending on the user and group configuration method you use: • • • Using the HP-UX System Files Using NIS Using LDAP Using the HP-UX System Files If you are using HP-UX system files to manage your group database, follow these steps: 1. To identify the number of groups that the user belongs to, enter the following command for each user on your system: /usr/bin/grep -x -c username /etc/group This command returns the number of occurrences of username in the /etc/group file. 2.
2. filesystem. The decision of sharing directories by an NFS server is driven by the applications running on NFS clients that require access to those directories. Specify access restrictions and security modes for the shared directories. For example, you can use Kerberos (a security product) that is already configured on your system to specify access restrictions and security modes for the NFS shared directories.
Table 2-2 NFS Server Daemons (continued) Daemon Name Function nfslogkd Flushes nfslog information from the kernel to a file. nfsmapid Maps to and from NFSv4 owner and owner group identification attributes to local UID and GID numbers used by both NFSv4 client and server. nfs4srvkd Supports server side delegation. rpc.lockd Supports record lock and share lock operations on the NFS files. rpc.
NFS_SERVER=1 START_MOUNTD=1 2. Enter the following command to verify whether rpcbind daemon is running: ps -ae | grep rpcbind If the daemon is running, an output similar to the following is displayed: 778 ? 0:04 rpcbind No message is displayed if the daemon is not running. To start the rpcbind daemon, enter the following command: /sbin/init.d/nfs.core start 3. Enter the following commands to verify whether the lockd and statd daemons are running: ps -ae | grep rpc.lockd ps -ae | grep rpc.
NOTE: The exportfs command, used to export directories in versions prior to HP-UX 11i v3, is now a script that calls the share command. HP provides a new exportfs script for backward compatibility to enable you to continue using exportfs with the functionality supported in earlier versions of HP-UX. To use any of the new features provided in HP-UX 11i v3 you must use the share command.
NOTE: Use the bdf command to determine whether your filesystems are on different disks or logical volumes. Each entry in the bdf output represents a separate disk or volume that requires its own entry in the /etc/dfs/dfstab file, if shared. For more information on the bdf command, see bdf(1M). • When you share a directory, the share options that restrict access to a shared directory are applied, in addition to the regular HP-UX permissions on that directory.
Sharing a directory with NFS Clients Before you share your filesystem or directory, determine whether you want the sharing to be automatic or manual. To share a directory with NFS clients, select one of the following methods: • • Automatic Share Manual Share Automatic Share To share your directories automatically, follow these steps: 1. Add an entry to the /etc/dfs/dfstab file for each directory you want to share with the NFS clients.
share -F nfs directory_name 2. Enter the following command to verify if your filesystem is shared: share An output similar to the following output is displayed:
/tmp rw=hpdfs001.cup.hp.com ""
/mail rw ""
/var rw ""
The directory that you have shared must be present in this list. For more information on the share command and a list of share options, see share_nfs(1M) and share(1M). Examples for Sharing directories This section discusses different examples for sharing directories.
In this example, the /var/mail/Red directory is shared. Only the superuser on client Red is granted root access to the directory. All other users on client Red have read-write access if they are provided read-write access by the regular HP-UX permissions. Users on other clients have read-only access if they are allowed read access through the HP-UX permissions.
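Share commands like these can be made persistent by placing them in the /etc/dfs/dfstab file, where the NFS server startup script runs them at boot. A sketch, with hypothetical client names and paths:

```shell
# Hypothetical /etc/dfs/dfstab entries; each line is a share(1M)
# command executed automatically when the NFS server starts.
share -F nfs -o ro /export/docs
share -F nfs -o rw=sage:thyme /export/home
share -F nfs -o root=Red /var/mail/Red
```

Directories listed here are shared on every reboot without further operator action.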
Table 2-3 Security Modes of the share command (continued) Security Mode Description krb5i Uses Kerberos V5 authentication with integrity checking to verify that the data is not tampered with, while in transit between the NFS clients and servers. krb5p Uses Kerberos V5 authentication, integrity checking, and privacy protection (encryption) on the shared filesystems. none Uses NULL authentication (AUTH_NONE). NFS clients using AUTH_NONE are mapped to the anonymous user nobody by NFS.
To configure your secure NFS server, follow these steps: 1. Set up the host as a Kerberos client. For more information on setting up the NFS server as a Kerberos client, see Configuration Guide for Kerberos Client Products on HP-UX (5991-7718). NOTE: Add a principal for all machines that are going to use the NFS Service. Also, add a principal for all users who will access the data on the NFS server. For example, the sample/krbsrv39.anyrealm.
An output similar to the following output is displayed: krbcl145: #/hpsample/gss-client krbcl145 sample@krbsrv39 "hi" Sending init_sec_context token (size=541)...continue needed ...length = 106 context flag: GSS_C_MUTUAL_FLAG context flag: GSS_C_REPLAY_FLAG context flag: GSS_C_CONF_FLAG context flag: GSS_C_INTEG_FLAG "root/krbcl145.anyrealm.com@krbhost.anyrealm.com" to "sample/krbsrv39.anyrealm.com@krbhost.anyrealm.
An output similar to the following output is displayed: Keytab name: FILE:/etc/krb5.keytab KVNO Principal -------------------------------------------------------1 nfs/krbsrv39.anyrealm.com@krbhost.anyrealm.com If you did not add the NFS service principal with the fully qualified hostname, an error similar to the following error is displayed: share -o sec=krb5i /export_krb5 share_nfs: /export_krb5: Invalid argument 9. Modify the /etc/nfssec.conf file.
Examples for Securely Sharing Directories This section discusses different examples for sharing directories in a secure manner. • Granting access to shared directories only for AUTH_DES mode users share -F nfs -o sec=dh /var/casey In this example, only clients that use AUTH_DES security mode are granted access. • Sharing directories using a combination of security modes share -F nfs -o sec=dh,rw,sec=sys,rw=onc21 /var/Casey In this example, the security modes dh and sys are combined.
NOTE: Add a principal for all machines that are going to use the NFS Service. Also, add a principal for all users who will access the data on the NFS server. For example, the sample/krbsrv39.anyrealm.com principal should be added to the Kerberos database before running the sample applications. 2. 3. To get the initial TGT to request a service from the application server, enter the following command: # kinit username The password prompt is displayed.
mount -o sec=mode server:directory mountpoint

Where,
-o Enables you to use some of the specific options of the share command, such as sec, async, public, and others.
sec Enables you to specify the security mode to be used. Specify krb5 as the Kerberos protocol version.
server:directory Enables you to specify the location of the directory.
mountpoint Enables you to specify the mount-point location where the filesystem is mounted.
An output similar to the following output is displayed: program vers proto port 100024 1 udp 49157 100024 1 tcp 49152 100021 2 tcp 4045 100021 3 udp 4045 100005 3 udp 49417 100005 3 tcp 49259 100003 2 udp 2049 100003 3 tcp 2049 service status status nlockmgr nlockmgr mountd mountd nfs nfs Each time the rpc.statd and rpc.mountd daemons are stopped and restarted they may be assigned a different port from the anonymous port range. The firewall must be reconfigured each time the NFS service is restarted.
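If your HP-UX release supports fixing these daemon ports in the /etc/default/nfs file, the entries might look like the following sketch. The variable names and port values shown are assumptions, not confirmed syntax; verify them against the nfs file documentation on your system:

```shell
# ASSUMED variable names -- verify against your system documentation.
# Pinning the status monitor and lock manager to fixed ports means
# the firewall only needs to be configured once, instead of after
# every NFS service restart:
STATD_PORT=4047
LOCKD_PORT=4045
```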
/sbin/init.d/nfs.server stop /sbin/init.d/lockmgr stop /sbin/init.d/lockmgr start /sbin/init.d/nfs.server start 3. Configure the firewall based on the port numbers configured. Sharing directories across a firewall using the NFSv4 protocol NFSv4 is a single protocol that handles mounting and locking operations for NFS clients and servers. The NFSv4 protocol runs on port 2049, by default.
Figure 2-2 WebNFS Session Figure 2-2 depicts the following steps: 1. 2. 3. 4. An NFS client uses a LOOKUP request with a PUBLIC file handle to access the foo/index.html file. The NFS client bypasses the portmapper service and contacts the server on port 2049 (the default port). The NFS server responds with the file handle for the foo/index.html file. The NFS client sends a READ request to the server. The NFS server responds with the data.
uidrange 80-60005 • You want to provide PC users a different set of default print options. For example, add an entry to the /etc/pcnfsd.conf file which defines raw as a default print option for PC users submitting jobs to the printer lj3_2 as follows: printer lj3_2 lj3_2 lp -dlj3_2 -oraw The /etc/pcnfsd.conf file is read when the pcnfsd daemon starts. If you make any changes to /etc/pcnfsd.conf file while pcnfsd is running, you must restart pcnfsd before the changes take effect.
Automatic Unshare 1. 2. Use a text editor to comment or remove the entries in the /etc/dfs/dfstab file for each directory that you want to unshare. Users on clients cannot mount the unshared directory after the server is rebooted. To verify whether all the directories are unshared, enter the following command: share The directory that you have unshared should not be present in the list displayed. Manual Unshare 1.
how to automount a filesystem, see Chapter 3: “Configuring and Administering AutoFS” (page 67). NFS Client Configuration Files and Daemons This section describes the NFS client configuration files and daemons. Configuration Files Table 2-5 describes the NFS configuration files and their functions. Table 2-5 NFS client configuration files File Name Function /etc/mnttab Contains the list of filesystems that are currently mounted. /etc/dfs/fstypes Contains the default distributed filesystem type.
Configuring the NFSv4 Client Protocol Version IMPORTANT: The nfsmapid daemon must be running on both the NFS server and the client to use NFSv4. For more information on how to configure the NFSv4 server protocol version, see “Configuring the NFSv4 Server Protocol Version ” (page 29). By default, the version of the NFS protocol used between the client and the server is the highest one available on both systems. On HP-UX 11i v3, the default maximum protocol version of the NFS server and client is 3.
Table 2-7 Standard-Mounted Versus Automounted Directories Standard-Mounted Directory Automounted Directory (Using AutoFS) The directory stays mounted. You do not have Automounted directories stay mounted until they are to wait for it to be mounted after you issue a left idle for 10 minutes. The 10 minute time interval is read or write request. the default value and is configurable.
Mounting Remote Directories The mount command mounts a shared NFS directory from a remote system (NFS server). You can mount a filesystem using the following methods: • Automatic Mounting at System Boot time To set up a filesystem for automatic mounting at system boot time, you must configure it in the /etc/fstab file. All filesystems specified in the /etc/fstab file are mounted during system reboot.
/mnt/nfs149 from nfs149:/ Flags: vers=4,proto=tcp,sec=sys,hard,intr,link,symlink,devs,rsize= 32768,wsize=32768,retrans=5,timeo=600 Attr cache:acregmin=3,acregmax=60,acdirmin=30,acdirmax=60 The directory that you have mounted must be present in this list. Manual Mount To mount your directories manually, follow these steps: 1. To mount a remote directory manually, enter the following command: mount server:directory_name local_directory_name 2.
nfsstat -m An output similar to the following output is displayed: /Clay from onc21:/home/Casey,onc23:/home/Casey Flags: vers=3,proto=tcp,sec=sys,hard,intr,llock,link,symlink,acl, devs,rsize=32768,wsize=32768,retrans=5,timeo=600 Attr cache: acregmin=3,acregmax=60,acdirmin=30,acdirmax=60 Failover: noresponse=0,failover=0,remap=0,currserver=onc23 The Failover line in the above output indicates that the failover is working.
Figure 2-4 NFS Mount of Home Directories • Mounting an NFS Version 2 filesystem using the UDP Transport mount -o vers=2,proto=udp onc21:/var/mail /var/mail In this example, the NFS client mounts the /var/mail directory from the NFS server, onc21, using NFSv2 and the UDP protocol. • Mounting an NFS filesystem using an NFS URL mount nfs://onc31/Casey/mail /Casey/mail In this example, the NFS client mounts the /Casey/mail directory from NFS server, onc31, using the WebNFS protocol.
NFS client to failover to either server onc21, onc23, or onc25 if the current server has become unavailable. • Mounting replicated set of NFS file systems with different pathnames mount -r onc21:/Casey/Clay,onc23:/Var/Clay,nfs://srv-z/Clay /Casey/Clay In this example, the NFS client mounts a replicated set of NFS file systems with different pathnames. Secure Mounting of Directories The mount command enables you to specify the security mode for each NFS mount-point.
/usr/sbin/umount local_directory /usr/sbin/mount local_directory 2. If you change the mount options in the AutoFS master map, you must restart AutoFS for the changes to take effect. For information on restarting AutoFS, see “Restarting AutoFS” (page 103). For more information on the different caching mount options, see mount_nfs(1M). Unmounting (Removing) a Mounted Directory You can temporarily unmount a directory using the umount command.
Disabling NFS Client Capability To disable the NFS client, follow these steps: 1. On the NFS client, enter the following command to get a list of all the mounted NFS filesystems on the client /usr/sbin/nfsstat -m 2. For every NFS mounted directory listed by the nfsstat command, enter the following command to determine whether the directory is currently in use: /usr/sbin/fuser -cu local_mount_point This command lists the process IDs and user names of all processes currently using the mounted directory.
To enable support of 1MB transfers for TCP mounts, you must first modify the following tunables: • nfs3_bsize (for NFS version 3) This tunable controls the logical block size used by NFSv3 clients. The block size represents the amount of data the client attempts to read from or write to the server during an I/O operation. • nfs4_bsize (for NFS version 4) This tunable controls the logical block size used by NFSv4 clients.
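A sketch of raising both block-size tunables with the kctune command follows. The value shown (1 MB) is the transfer size discussed above; confirm the supported value range in the tunable manpages before applying it:

```shell
# Raise the NFSv3 and NFSv4 client logical block size to 1 MB
# (1048576 bytes) so the client can issue 1 MB READ/WRITE requests:
kctune nfs3_bsize=1048576
kctune nfs4_bsize=1048576

# Display the current values to verify the change:
kctune nfs3_bsize
kctune nfs4_bsize
```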
Changes to the NFS Server Daemon The NFS server daemon (nfsd) handles client filesystem requests. By default, nfsd starts over TCP and UDP for NFSv2 and NFSv3. If NFSv4 is enabled, the nfsd daemon is started to service all TCP and UDP requests. If you want to change startup parameters for nfsd, you must log in as superuser (root) and make changes to the /etc/default/nfs file or use the setoncenv command. The /etc/default/nfs file provides startup parameters for the nfsd daemon and rpc.lockd daemon.
A netgroup can be used in most NFS and NIS configuration files, instead of a host name or a user name. A netgroup does not create a relationship between users and hosts. When a netgroup is used in a configuration file, it represents either a group of hosts or a group of users, but never both. If you are using BIND (DNS) for hostname resolution, hosts must be specified as fully qualified domain names, for example: turtle.bio.nmt.edu.
this netgroup in an [access_list] argument in the /etc/dfs/dfstab file, any host can access the shared directory. If a netgroup is used strictly as a list of users, it is better to put a dash in the host field, as follows: administrators (-,jane, ) (-,art, ) (-,mel, ) The dash indicates that no hosts are included in the netgroup. The trusted_hosts and administrators netgroups can be used together in the /etc/hosts.
[access_list]=mail_clients The mail_clients netgroup is defined, as follows: mail_clients (cauliflower, , ) (broccoli, , ) (cabbage, , ) Only the host names from the netgroup are used. If the netgroup also contains user names, these are ignored. This netgroup is valid in any NIS domain, because the third field in each triple is left blank. Using Netgroups in the /etc/hosts.equiv or $HOME/.rhosts File In the /etc/hosts.equiv file, or in a .
The following sample entry from the /etc/passwd file indicates that users in the netgroup animals must be looked up in the NIS passwd database: +@animals The animals netgroup is defined in the /etc/netgroup file, as follows: animals (-,mickey, ) (-,daffy, ) (-,porky, ) (-,bugs, ) The /etc/passwd file is searched sequentially. As a result, user mickey, daffy, porky, or bugs appear before the animals netgroup in the /etc/passwd file. The NIS database is not consulted for information on that user.
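The (host,user,domain) triple format can be parsed mechanically. As an illustrative sketch (GNU or BSD grep -o is assumed; the animals netgroup line is the example from the text), the following extracts the user field from each triple:

```shell
# Parse a netgroup entry and print the user field of each
# (host,user,domain) triple, skipping empty and "-" entries.
entry='animals (-,mickey, ) (-,daffy, ) (-,porky, ) (-,bugs, )'
users=$(printf '%s\n' "$entry" |
  grep -o '([^)]*)' |                  # one triple per line
  awk -F, '{ gsub(/[( )]/, "")         # strip parens and blanks
             if ($2 != "" && $2 != "-") print $2 }')
printf '%s\n' "$users"
```

This mirrors how NFS and NIS read netgroups: only the relevant field of each triple is consulted, and the others are ignored.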
For information on the /etc/group file, see group(4). Configuring RPC-based Services This section describes the following tasks: • • “Enabling Other RPC Services” “Restricting Access to RPC-based Services” Enabling Other RPC Services 1. In the /etc/inetd.conf file, use a text editor to uncomment the entries that begin with “rpc” . Following is the list of entries in an /etc/inetd.conf file: #rpc xit tcp nowait root /usr/sbin/rpc.rexd 100017 1 rpc.rexd #rpc dgram udp wait root /usr/lib/netsvc/rstat/rpc.
Table 2-8 RPC Services managed by inetd RPC Service Description rexd The rpc.rexd program is the server for the on command, which starts the Remote Execution Facility (REX). The on command sends a command to be executed on a remote system. The rpc.rexd program on the remote system executes the command, simulating the environment of the user who issued the on command. For more information, see rexd (1M) and on (1). rstatd The rpc.
You can use HP SMH to modify the /var/adm/inetd.sec file. For more information, see inetd.conf (4) and inetd.sec (4). Examples from /var/adm/inetd.sec In the following example, only hosts on subnets 15.13.2.0 through 15.13.12.0 are allowed to use the spray command: sprayd allow 15.13.2-12.
3 Configuring and Administering AutoFS This chapter provides an overview of AutoFS and the AutoFS environment. It also describes how to configure and administer AutoFS on a system running HP-UX 11i v3.
Following sections describe the different components of AutoFS that work together to automatically mount and unmount filesystems, in detail. AutoFS Filesystem The AutoFS filesystem is a virtual filesystem that provides a directory structure to enable automatic mounting of filesystems. It includes autofskd, a kernel-based process that periodically cleans up mounts. The filesystem interacts with the automount command and the automountd daemon to mount filesystems automatically.
When AutoFS receives a request to mount a filesystem, it calls the automountd daemon, which mounts the requested filesystem. AutoFS mounts the filesystems at the configured mount-points. The automountd daemon is independent of the automount command. This separation enables you to add, delete, or modify the AutoFS map information, without stopping and restarting the automountd daemon.
Table 3-1 Types of AutoFS Maps (continued) Type of Map Description Executable Map An executable map is a map whose entries are generated dynamically by a program or a script. Local maps that have the execute bit set in their file permissions will be executed by the AutoFS daemon. A direct map cannot be made executable. Special Map Special maps are of two types, namely -hosts and -null. Included Map An included map is a map that is included within another map.
directory. If you are a user on any of the AutoFS clients, you can use maps to access the /export directory from the NFS server, Basil. AutoFS clients can access the exported filesystem using any one of the following map types: • • • Direct Map Indirect Map Special Map The AutoFS client, Sage, uses a direct map to access the /export directory. Sage includes an entry similar to the following in its map: /export Basil:/export Sage mounts the /export directory on the export mount-point.
# /etc/auto_direct file # local mount-point mount options remote server:directory /auto/project/specs -nosuid thyme:/export/project/specs /auto/project/specs/reqmnts -nosuid thyme:/export/projects/specs/reqmnts /auto/project/specs/design -nosuid thyme:/export/projects/specs/design A user on the NFS client, sage, enters the following command: cd /auto/project/specs Only the /auto/project/specs subdirectory is mounted.
Following are the contents of the indirect map, /etc/auto_indirect, which contains the local mount-points on the client and the references to the directories on the server: # /etc/auto_indirect file # local mount-point mount options /test /apps -nosuid remote server:directory -nosuid thyme:/export/project/test basil:/export/apps Enter the following commands to view the contents of the /nfs/desktop directory: cd /nfs/desktop ls The ls command displays the following: test apps The test and apps subdire
Reliable NFS Ping In a congested network, the default timeout for an NFS ping may be too short, possibly resulting in failed mounts. AutoFS supports the -retry= n mount option for an NFS map entry to configure the ping timeout value. Increasing the value raises the probability for the ping to succeed. Supported Filesystems AutoFS enables you to mount different types of filesystems. To mount the filesystems, use the fstype mount option, and specify the location field of the map entry.
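Putting both features together, direct-map entries might look like the following sketch. The server name, paths, and filesystem type are hypothetical; in particular, the CD-ROM filesystem type varies by platform, so check your system's supported fstype values:

```shell
# Hypothetical direct-map entries.
# Retry the NFS ping up to 10 times on a congested network:
/auto/data   -retry=10        thyme:/export/data
# Mount a local CD-ROM through AutoFS; the device path is preceded
# by a colon because the mount resource begins with "/":
/auto/cdrom  -fstype=hsfs,ro  :/dev/sr0
```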
For mount devices, if the mount resource begins with a “/”, it must be preceded by a colon. For instance, in the above section, the CD-ROM device, /dev/sr0, is preceded by a colon. Supported Backends (Map Locations) AutoFS maps can be located in the following: • Files: Local files that store the AutoFS map information for that individual system. An example of a map that can be kept on the local system is the master map. The AUTO_MASTER variable in /etc/rc.config.
Enabling LDAP Support for AutoFS Before you enable LDAP support for AutoFS, determine which schema will be used for storing the AutoFS maps. The LDAP administrator can provide this information. HP-UX supports the following two options for schemas:
• the older schema, which uses nisMap and nisObject objects to represent the maps
• the new schema, which uses automountMap and automount objects
To enable LDAP support for AutoFS using the older schema, follow these steps: 1.
the specific migration scripts, see LDAP-UX Client Services B.04.10 Administrator’s Guide (J4269-90067). 2. Import the LDIF files into the LDAP directory server using the ldapmodify tool. For information on importing the LDIF files, see the LDAP-UX Client Services B.04.10 Administrator’s Guide (J4269-90067). 3. Enter the following command to run the AutoFS shutdown script: /sbin/init.d/autofs stop 4. Enter the following command to run the AutoFS startup script: /sbin/init.
Starting with HP-UX 11i v2, Automounter is no longer supported. HP recommends that you update to AutoFS. Updating to AutoFS offers the following advantages:
• AutoFS can be used to mount any type of filesystem.
• Automounter mounted directories under /tmp_mnt and created symbolic links from the configured mount-points to the actual ones under /tmp_mnt. In AutoFS, the configured mount-points are the actual mount-points.
• Automounter had to be killed and restarted when you wanted to modify maps.
Table 3-2 Old Automount Command-Line Options Used By AutoFS (continued)
-tm interval: Obsolete with AutoFS. Specifies the interval between mount attempts.
-tw interval: Obsolete with AutoFS. Specifies the interval between unmount attempts.
-v: Equivalent AutoFS options are automount -v and automountd -v. Specifies verbose mode.
3. Scripts that kill and restart automount are modified.
Table 3-3 AutoFS Configuration Variables Variable Name Description AUTOFS Specifies if the system uses AutoFS. Set the value to 1 to specify that this system uses AutoFS. Set the value to 0 to specify that this system does not use AutoFS. The default value of AUTOFS is 1. NOTE: If you set the value of AUTOFS to 1, the NFS_CORE core configuration variable must also be set to 1. AUTOMOUNT_OPTIONS Specifies a set of options to be passed to the automount command when it is run. The default value is "" (an empty string).
Configuring AutoFS Using the /etc/default/autofs File You can also use the /etc/default/autofs file to configure your AutoFS environment. The /etc/default/autofs file contains parameters for the automountd daemon and the automount command. Initially, the parameters in the /etc/default/autofs file are commented. You must uncomment a parameter to make the value for that parameter take effect. Changes made to the /etc/default/autofs file can be overridden if you modify the AutoFS values from the command line.
ensure that the AUTO_MASTER variable in /etc/rc.config.d/nfsconf is set to the name of the master map, as follows: AUTO_MASTER="/etc/auto_master" If the -f option is not specified for the automount command, then the Name Service Switch (NSS) is used to get the master map. If the master map is not found in any of the backends, then it tries the default master map name, /etc/auto_master. If the -f option is specified for the automount command, then it uses the specified file, regardless of the backend configuration.
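The -f behavior described above can be sketched as follows; the path is the conventional one from this chapter, and this is an illustrative invocation rather than a required procedure:

```
# Run automount against an explicit master map file, bypassing the
# Name Service Switch lookup described in the text.
/usr/sbin/automount -f /etc/auto_master
```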
# Master map for AutoFS
#
# auto_master
# mount-point   map-name             [mount-options]
/net            -hosts               -nosuid,nobrowse   # Special Map
/home           auto_home            -nobrowse          # Indirect Map
/-              /etc/auto_direct                        # Direct Map
/nfs            /etc/auto_indirect                      # Indirect Map
The master map is a primary map that defines all other maps. In this sample, auto_master, the master map, contains a single entry each for a direct map and a special map. It also contains a single entry for each top-level indirect map.
An indirect map uses the key to establish a connection between a mount-point on the client and a directory on the server. Indirect maps are useful for accessing specific file systems, such as home directories. The automounts configured in an indirect map are mounted under the same local parent directory. Figure 3-4 shows the difference between direct mounts and indirect mounts on an NFS client.
/- direct_map_name [mount_options]
2. If you are using local files for maps, use an editor to open or create a direct map in the /etc directory. The direct map is commonly called /etc/auto_direct.
3. Add an entry to the direct map with the following syntax:
local_directory [mount_options] server:remote_directory
4. If you are using NIS or LDAP to manage maps, add an entry to the direct map on the NIS master server or the LDAP directory.
You cannot use the bg option for an automounted directory. The mount options configured in the direct map override the ones in the master map, if there is a conflict. You can configure all the direct automounts in the same map. Most users use the file name, /etc/auto_direct, for their direct map.
Figure 3-5 How AutoFS Sets Up Direct Mounts Automounting a Remote Directory Using an Indirect Map This section describes how to automount a remote directory using an indirect map. To automount a remote directory using an indirect map, follow these steps: 1. If you are using local files for maps, use an editor to edit the master map in the /etc directory. The master map is commonly called /etc/auto_master. If you are using NIS, open the master map on the corresponding master server.
IMPORTANT: Ensure that local_parent_directory and local_subdirectory are not already created. AutoFS creates them when it mounts the remote directory. If these directories already exist, the files and directories in them are hidden when the remote directory is mounted. Ensure that the local mount-point specified in the AutoFS map entry is different from the exported directory on the NFS server.
# /etc/auto_master file
# local mount-point   map name            mount options
/-                    /etc/auto_direct
/nfs/desktop          /etc/auto_desktop
The local_parent_directory specified in the master map consists of all directories in the local directory pathname except the lowest-level subdirectory. For example, if you are mounting a remote directory on /nfs/apps/draw, /nfs/apps is the local_parent_directory specified in the master map.
or an indirect AutoFS map anywhere except the first field, which specifies the local mount-point. IMPORTANT: You cannot use environment variables in the AutoFS master map. In this example, the NFS server basil contains subdirectories in its /export/private_files directory, which are named after the hosts in its network. Every host in the network can use the same AutoFS map and the same AUTOMOUNTD_OPTIONS definition to mount its private files from basil.
Consider the following guidelines while using wildcard characters as shortcuts: • Use an asterisk (*) as a wildcard character in an indirect map, to represent the local subdirectory if you want the local subdirectory to be the same as the remote system name, or the remote subdirectory. You cannot use the asterisk (*) wildcard in a direct map. • Use an ampersand (&) in a direct or an indirect map as the remote system name or the remote subdirectory.
If the home directory of the user terry is configured in the /etc/passwd file as /home/basil/terry, AutoFS mounts the remote directory /export/home/basil from the server, basil, on the local directory /home/basil when Terry logs in. The line with the asterisk must be the last line in an indirect map. AutoFS reads the lines in the indirect map sequentially until it finds a match for the requested local subdirectory. Because the asterisk (*) matches any subdirectory, AutoFS stops reading at the line with the asterisk.
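The wildcard guidelines above can be sketched in a hypothetical indirect map; the server names and paths are illustrative, and explicit entries precede the wildcard entry because AutoFS stops reading at the asterisk:

```
# Hypothetical indirect map. The * key matches any requested
# subdirectory, and & substitutes the matched key, so a request for
# <key> mounts basil:/export/home/<key>.
apps    thyme:/export/apps
*       basil:/export/home/&
```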
/usr/sbin/automount Example of Automounting a User’s Home Directory User Howard’s home directory is located on the NFS server, basil, where it is called /export/home/howard.
map entry. For information on how to include a map in another map, see “Including a Map in Another Map” (page 99). Automounting All Exported Directories from Any Host Using the -hosts Map To automount all exported directories using the -hosts map, follow these steps: 1.
cd /net/basil/opt/frame the subdirectory, /basil is created under /net, and /opt is mounted under /basil. Figure 3-8 shows the automounted file structure after the user enters the command. Figure 3-8 Automounted Directories from the -hosts Map—One Server In the following example, the server thyme exports the directory /exports/proj1, and a user enters the following command: more /net/thyme/exports/proj1/readme The subdirectory /thyme is created under /net, and /exports/proj1 is mounted under /thyme.
This enables AutoFS to ignore the map entry that does not apply to your host. Notes on the -null Map The -null map is used to ignore mapping entries that do not apply to your host, but which would otherwise be inherited from the NIS or LDAP maps. The -null option causes AutoFS to ignore AutoFS map entries that affect the specified directory.
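As a sketch of the -null usage described above, a local master map entry like the following masks a /home entry that would otherwise be inherited (the mount-point is illustrative):

```
# Hypothetical local auto_master entry: cancel any /home entry
# inherited from the NIS or LDAP auto_master map.
/home   -null
```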
Advanced AutoFS Administration This section presents advanced AutoFS concepts that enable you to improve mounting efficiency and also help make map building easier.
The lower the weight factor specified for a server, the more likely that server is to be selected. Servers that have no weight factor specified have a default weight of zero and are the most likely to be selected. man -ro broccoli(1),cabbage(2),cauliflower(3):/usr/share/man However, server proximity is more important than the weighting factor you assign. A server on the same network segment as the client is more likely to be selected, than a server on another network segment, regardless of the weight you assign.
15.43.234.210 255.255.248.0 AutoFS uses the /etc/netmasks file to determine that the masked value for the subnet of basil and the network number is the same (15.43.232.0). This shows that the client is on the same network as basil. AutoFS then mounts /nfs/mount from basil on the local subnet. Notes on Configuring Replicated Servers Directories with multiple servers must be mounted as read-only to ensure that the versions remain the same on all servers.
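The masking step can be sketched in shell: AND each octet of the client's address with the corresponding octet of the netmask to obtain the network number. The address and mask are the example values above; the arithmetic is plain POSIX shell, not an HP-UX-specific tool.

```shell
# Compute the network number by ANDing each octet of the address
# with the corresponding octet of the netmask.
ip="15.43.234.210"
mask="255.255.248.0"

IFS=. read -r i1 i2 i3 i4 <<EOF
$ip
EOF
IFS=. read -r m1 m2 m3 m4 <<EOF
$mask
EOF

network="$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
echo "$network"
```

For the example address and mask, this prints 15.43.232.0, the masked value AutoFS compares against the network number.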
If a user, whose home directory is in /home/basil, logs in, AutoFS mounts the /export/home/basil directory, from the host, basil. If a user, whose home directory is in /home/sage, /home/thyme, or any subdirectory of /home other than basil, logs in, AutoFS consults the auto_home map for information on mounting the user’s home directory. The plus (+) sign instructs AutoFS to look up a different map for the information it needs to mount the directory.
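The lookup behavior described above can be sketched as a local auto_home map; the hostname and paths are this chapter's examples:

```
# Hypothetical /etc/auto_home: the explicit basil entry is matched
# locally; the + entry tells AutoFS to consult the auto_home map in
# NIS or LDAP for any key not matched above it.
basil        basil:/export/home/basil
+auto_home
```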
Starting with the AutoFS mount at /org, the evaluation of this path dynamically creates additional AutoFS mounts at /org/eng and /org/eng/projects. No action is required for the changes to take effect on the user's system because the AutoFS mounts are created only when required. You need to run the automount command only when you make changes to the master map or to a direct map.
local_directory is the configured mount-point in the AutoFS map. 2. Enter the following command to verify that the contents of the remote directory are mounted under the local mount-point: /usr/bin/ls If the directory is configured in an indirect map, entering the ls command from the parent directory displays potential mount-points (browsability). Changing to a subdirectory configured in the indirect map or entering the command, ls subdirectory, mounts the subdirectory.
IMPORTANT: Do not disable AutoFS by terminating the automountd daemon with the kill command. It does not unmount AutoFS mount-points before it terminates. Use the autofs stop command instead. Restarting AutoFS AutoFS rarely needs to be restarted. In case you do need to restart it, follow these steps: 1.
Troubleshooting AutoFS This section describes the tools and procedures for troubleshooting AutoFS. AutoFS Logging AutoFS logs messages through /usr/sbin/syslogd. By default, syslogd writes messages to the file /var/adm/syslog/syslog.log. For more information on the syslog daemon, see syslogd(1M). To Start AutoFS Logging To start AutoFS logging, follow these steps: 1. Log in as superuser to the NFS client. 2. Add the "-v" option to the AUTOMOUNTD_OPTIONS variable in the /etc/rc.config.d/nfsconf file, and restart AutoFS.
To Stop AutoFS Logging To stop AutoFS logging, stop AutoFS and restart it after removing the "-v" option from AUTOMOUNTD_OPTIONS.
AutoFS Tracing AutoFS supports the following trace levels:
• Detailed (level 3): Includes traces of all the AutoFS requests and replies, mount attempts, timeouts, and unmount attempts. You can start level 3 tracing while AutoFS is running.
• Basic (level 1): Includes traces of all the AutoFS requests and replies. You must restart AutoFS to start level 1 tracing.
3. To find a list of all the automounted directories on the client, run the following script:
for FS in $(grep autofs /etc/mnttab | awk '{print $2}')
do
    grep 'nfs' /etc/mnttab | awk '{print $2}' | grep "^${FS}"
done
4. For each automounted directory listed by the grep command, enter the following command to determine whether the directory is currently in use:
/usr/sbin/fuser -cu local_mount_point
This command lists the process IDs and user names of all users who are using the mounted directory. 5.
where: = the key value from the map = subdirectory (may be blank)
May 13 18:46:27 t1 Port match
May 13 18:46:27 t1 nfsunmount: umount /n2ktmp_8264/nfs127/tmp OK
May 13 18:46:27 t1 unmount /n2ktmp_8264/nfs127/tmp OK
May 13 18:46:27 t1 UNMOUNT REPLY: status=0
4 Configuring and Administering a Cache Filesystem This chapter introduces the Cache Filesystem (CacheFS) and the CacheFS environment. It also describes how to configure and administer CacheFS on a system running HP-UX 11i v3.
NOTE: CacheFS cannot be used with NFSv4. CacheFS Terms The following CacheFS terms are used in this chapter: back filesystem A filesystem that is cached is called a back filesystem. HP-UX currently supports only NFS as the back filesystem. front filesystem A local filesystem that is used to store the cached data is called a front filesystem. HFS and VxFS are the only supported front filesystem types.
Figure 4-1 Sample CacheFS Network In the figure, cachefs1, cachefs2, and cachefs3 are CacheFS clients. The figure also displays an NFSv4 client which is not a CacheFS client because CacheFS cannot be used with NFSv4. The NFS server, NFSServer, shares files and the CacheFS clients mount these files as the back filesystem. When a user on any of the CacheFS clients, say cachefs1, accesses a file that is part of the back filesystem, portions of the file that were accessed are placed in the local cache.
Features of CacheFS This section discusses the features that CacheFS supports on systems running HP-UX 11i v3. • Cache Pre-Loading via the “cachefspack” Command The cachefspack command enables you to pre-load or pack specific files and directories in the cache, thereby improving the effectiveness of CacheFS. It also ensures that current copies of these files are always available in the cache. Packing files and directories in the cache enables you to have greater control over the cache contents.
NOTE: For information on how to force a cache consistency check, see “Forcing a Cache Consistency Check” (page 120).
— noconst: Disables consistency checking. Use this option only if you know that the contents of the back filesystem are rarely modified.
— weakconst: Verifies cache consistency with the NFS client's copy of the file attributes.
NOTE: Consistency is not checked at file open time.
Switching Mount Options You can also switch between mount options without deleting or rebuilding the cache.
Configuring and Administering CacheFS You can use CacheFS to cache both manually mounted NFS filesystems or automatically mounted NFS filesystems. All CacheFS operations, except displaying CacheFS statistics, require superuser permissions.
Consider the following example, where the /opt/frame directory is NFS-mounted from the NFS server nfsserver on the local /opt/cframe directory. To mount the example NFS filesystem using CacheFS manually, enter the following command on an NFS client system:
mount -F cachefs -o backfstype=nfs,cachedir=/disk2/cache \
nfsserver:/opt/frame /opt/cframe
The /opt/frame directory can now be accessed like any other mounted filesystem.
umount /mnt1 mount -F cachefs -o backfstype=nfs,cachedir=/cache CFS1:/tmp /mnt1 To change the mount option from default to weakconst after unmounting, enter the following command: mount -F cachefs -o backfstype=nfs,cachedir=/cache,weakconst CFS2:/tmp /mnt1 For more information on the various mount options of the CacheFS filesystem, see mount_cachefs(1M). Automounting a Filesystem Using CacheFS This section describes how to automount a filesystem using CacheFS.
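The automounting section above can be sketched as an AutoFS direct map entry. This reuses the options from the manual-mount example and assumes that fstype=cachefs is accepted in the map entry's mount options; verify the exact syntax against automount(1M) before use:

```
# Hypothetical direct map entry: automount /opt/frame from nfsserver
# through CacheFS, caching into /disk2/cache.
/opt/cframe   -fstype=cachefs,backfstype=nfs,cachedir=/disk2/cache   nfsserver:/opt/frame
```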
Enabling Logging in CacheFS This section describes how you can enable logging in CacheFS. You can use the cachefslog command to enable logging for a CacheFS mount-point. Enabling the logging functionality may have a performance impact on the operations performed for all the CacheFS mount-points that are using the same cache directory. To enable CacheFS logging, follow these steps: 1. 2. Log in as superuser.
not logged: /cfs_mnt1 Caching a Complete Binary CacheFS is designed to work best with NFS filesystems that contain stable read-only data. One of the most common uses of CacheFS is managing application binaries. These are typically read-only and are rarely ever modified. They are modified when new versions of the application are installed or a patch containing a modified binary is installed. The rpages mount option enables you to cache a complete binary file.
NOTE: When you pack a directory, all files in that directory, subdirectories, and files in the subdirectories are packed. For instance, consider a directory /dir1 that contains two subdirectories /subdir1, and /subdir2, as well as two files test1, and test2. When you pack /dir1, /subdir1, /subdir2, test1, and test2 are packed. • Using the packing-list file A packing-list file is an ASCII file that contains a list of files and directories that are to be pre-packed in the cache.
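A minimal packing-list file might look like the following sketch, assuming the BASE and LIST keywords described in packingrules(4); the paths reuse the /dir1 example above:

```
# Hypothetical packing list: pack test1 and the whole subdir1 tree,
# both relative to the BASE directory.
BASE /dir1
LIST test1
LIST subdir1
```

Such a file would then be passed to cachefspack with something like cachefspack -f listfile; check cachefspack(1M) for the exact option.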
You can unpack files that you no longer require, using one of the following methods:
• Using the -u option
To unpack a specific packed file or files from the cache directory, enter the following command:
cachefspack -u filename
where:
-u        Specifies that certain files are to be unpacked.
filename  Specifies the file to unpack.
IMPORTANT: The -s option works only if CacheFS is mounted with the demandconst option. For information and an example on how to switch between mount options, see “Switching Mount Options” (page 115) Forcing a Consistency Check for all the Mount-Points To request for a consistency check on all the mount-points, enter the following command: cfsadmin -s all For information on cfsadmin options, see cfsadmin(1M).
/cfs_mnt1 /cfs_mnt2 You must now unmount these mount-points before you check the integrity of a cache. For information on how to unmount a cache mount-point, see “Unmounting a Cache Filesystem” (page 121). For more information on the fsck_cachefs command of CacheFS, see fsck_cachefs(1M). Updating Resource Parameters Each cache has a set of parameters that determines its structure and how it behaves. When a cache directory is created, it gets created with default values for the resource parameters.
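For example, assuming cfsadmin accepts the -u option with -o parameter assignments as described in cfsadmin(1M), updating a resource parameter might look like the following sketch (the parameter value and cache directory are illustrative):

```
# Hypothetical: raise maxblocks for the cache in /disk2/cache.
# The cached filesystems must be unmounted before updating parameters.
cfsadmin -u -o maxblocks=80 /disk2/cache
```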
where: cacheID Specifies the name of the cache filesystem. all Specifies that all cached filesystems in the cache-directory are to be deleted. cache-directory Specifies the name of the cache directory where the cache resides. NOTE: The cache directory must not be in use when attempting to delete a cached filesystem or the cache directory. To delete the cache directory, follow these steps: 1.
4. To delete the CacheFS filesystem corresponding to the Cache ID from the specified cache directory, enter the following command: cfsadmin -d CacheID cache-directory 5.
An output similar to the following is displayed: /home/smh cache hit rate: 20% (2543 hits, 9774 misses) consistency checks: 47842 (47825 pass, 17 fail) modifies: 85727 garbage collection: 0 You can run the cachefsstat command with the -z option to reinitialize CacheFS statistics. You must be a superuser to use the -z option. NOTE: If you do not specify the CacheFS mount-point, statistics for all CacheFS mount-points are displayed. For more information about the cachefsstat command, see cachefsstat(1M).
Common Problems While Using CacheFS This section discusses the common problems you may encounter while using CacheFS. It also describes steps to overcome these problems. The following tables list the error messages, their cause, and how to resolve these errors. Table 4-2 Common Error Messages encountered while using the cfsadmin command Error Message Possible Causes Resolution “cfsadmin: Cannot create lock file /test/mnt/c/.cfs_lock” Indicates that you do not have 1. Delete the cache.
Table 4-4 Common Error Messages encountered while using the mount command
Error Message: “mount -F cachefs: /test/c/.cfs_mnt_points is not a valid cache”
Possible Causes: The /c directory may not be a valid cache directory.
Resolution: 1. Delete the cache. 2. Recreate the cache directory using the cfsadmin command.
5 Troubleshooting NFS Services This chapter describes tools and procedures for troubleshooting the NFS Services. This chapter addresses the following topics: • “Common Problems with NFS” (page 129) • “Performance Tuning” (page 137) • “Logging and Tracing of NFS Services” (page 140) Common Problems with NFS This section lists the following common problems encountered with NFS and suggests ways to correct them.
– status
– nlockmgr
On the NFS server, check if the following processes are running:
– nfsd
– rpc.mountd
– rpc.statd
– rpc.lockd
If any of these processes is not running, follow these steps:
1. Make sure the /etc/rc.config.d/nfsconf file on the NFS server contains the following lines:
NFS_SERVER=1
START_MOUNTD=1
2. Enter the following command on the NFS server to start all the necessary NFS processes:
/sbin/init.d/nfs.server start
□ Enter the following command on the NFS client to make sure the rpc.
on BIND or /etc/hosts, see Installing and Administering Internet Services (B2355-91060). □ If you are using AutoFS, enter the ps -ef command to make sure the automountd process is running on your NFS client. If it is not, follow these steps: 1. Make sure the AUTOFS variable is set to 1 in the /etc/rc.config.d/nfsconf file on the NFS client. AUTOFS=1 2. Enter the following command on the NFS client to start the AutoFS: /sbin/init.
the remount mount option to mount the directory read/write without unmounting it. See “Changing the Default Mount Options” (page 55). If you are logged in as root to the NFS client, check the share permissions to determine whether root access to the directory is granted to your NFS client.
□ □ Verify that the filesystem you are trying to unmount is not a mount-point of another filesystem. Verify that the filesystem is not exported. In HP-UX 11i v3, an exported filesystem keeps the filesystem busy. “Stale File Handle” Message A “stale file handle” occurs when one client removes an NFS-mounted file or directory that another client is accessing.
directory, so one user cannot remove files another user is accessing. For more information on the source code control system, see rcsintro(5). □ If someone has restored the server’s file systems from backup or entered the fsirand command on the server, follow these steps on each of the NFS clients to prevent stale file handles by restarting NFS: 1. Enter the mount(1M) command with no options, to get a list of all the mounted file systems on the client: /usr/sbin/mount 2.
If any of these commands return RPC_TIMED_OUT, the rpc.statd or rpc.lockd process may be hung. Follow these steps to restart rpc.statd and rpc.lockd daemons: 1. Enter the following commands, on both the NFS client and the NFS server, to kill rpc.statd and rpc.lockd (PID is a process ID returned by the ps command): /usr/bin/ps -ef | /usr/bin/grep rpc.statd /usr/bin/kill PID /usr/bin/ps -ef | /usr/bin/grep rpc.lockd /usr/bin/kill PID 2. Enter the following commands to restart rpc.statd and rpc.
□ has been sent to the NFS server and acknowledged. The O_SYNC flag degrades write performance for applications that use it. If multiple NFS users are writing to the same file, add the lockf() call to your applications to lock the file so that only one user may modify it at a time. If multiple users on different NFS clients are writing to the file, you must also turn off attribute caching on those clients by mounting the file with the noac mount option.
# # Kerberos configuration # This krb5.conf file is intended as an example only. # see krb5.conf(4) for more details # hostname is the fully qualified hostname(FQDN) of host on which kdc is running # domain_name is the fully qualified name of your domain [libdefaults] default_realm = krbhost.anyrealm.com default_tkt_enctypes = DES-CBC-CRC default_tgs_enctypes = DES-CBC-CRC ccache_type = 2 [realms] krbhost.anyrealm.com = { kdc = krbhost.anyrealm.com:88 admin_server = krbhost.anyrealm.com } [domain_realm] .
nfsstat -rc 2. If the timeout and retrans values displayed by nfsstat -rc are high, but the badxid value is close to zero, packets are being dropped before they get to the NFS server. Try decreasing the values of the wsize and rsize mount options to 4096 or 2048 on the NFS clients. See “Changing the Default Mount Options” (page 55)“Changing the Default Mount Options” on page 51 . See Installing and Administering LAN/9000 Software for information on troubleshooting LAN problems. 3.
nfsstat -s If the number of readlink calls is of the same magnitude as the number of lookup calls, you have a symbolic link in a filesystem that is frequently traversed by NFS clients. On the NFS clients that require access to the linked directory, mount the target of the link. Then, remove the link from the exported directory on the server.
On the NFS clients, set the wsize and rsize mount options to the bsize value displayed by tunefs. □ On the NFS clients, look in the /etc/fstab file for “stepping-stone” mounts (hierarchical mounts), as in the following example: thyme:/usr /usr nfs defaults 0 0 basil:/usr/share /usr/share nfs defaults 0 0 sage:/usr/share/lib /usr/share/lib nfs defaults 0 0 Wherever possible, change these “stepping-stone” mounts so that whole directories are mounted from a single NFS server.
To Control the Size of LogFiles Logfiles grow without bound, using up disk space. You might want to create a cron job to truncate your logfiles regularly. Following is an example crontab entry that empties the logfile at 1:00 AM every Monday, Wednesday, and Friday: 0 1 * * 1,3,5 cat /dev/null > log_file For more information, type man 1M cron or man 1 crontab at the HP-UX prompt. To Configure Logging for the Other NFS Services 1. Add the -l logfile option to the lines in /etc/inetd.
/usr/sbin/netfmt -lN -f /var/adm/nettl.LOG00 > formatted_file where formatted_file is the name of the file where you want the formatted output from netfmt. The default logfile, /var/adm/nettl.LOGnn, is specified in the nettl configuration file, /etc/nettlgen.conf. If the file /var/adm/nettl.LOG00 does not exist on your system, the default logfile may have been changed in /etc/nettlgen.conf. For more information, see nettl(1M) and netfmt(1M). Tracing With nettl and netfmt 1.
Index Symbols + (plus sign) in AutoFS maps, 99 in group file, 63 in passwd file, 62 -hosts map, 50, 94 examples, 95 -null map, 95 32k transfer size, 57 A access denied, NFS, 131 attribute caching, 136, 139 auto_master map, 84, 87, 94 AUTO_OPTIONS variable, 104, 105 AutoFS -hosts map, 50, 94 -null map, 95 direct vs.
hung program, 134 hung system, 50 I included files, in AutoFS maps, 99 indirect map, 87 advantages, 83 environment variables in, 90 examples, 89 wildcards in, 91, 92 inetd.conf file, 64, 130, 141 inetd.
Revision Control System see RCS, 134 rexd logging, 141 root access to exported directories, 132 RPC, 14 authentication error, 26 rpc file, 65 rpc.