NFS Services Administrator's Guide HP-UX 11i version 3 HP Part Number: B1031-90068 Published: March 2009
© Copyright 2009 Hewlett-Packard Development Company, L.P. Legal Notices © Copyright 2009 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license required from HP for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
Table of Contents
Preface: About This Document
1 Introduction
2 Configuring and Administering NFS Services
3 Configuring and Administering AutoFS
4 Configuring and Administering a Cache Filesystem
5 Troubleshooting NFS Services
Index
Preface: About This Document The latest version of this document can be found online at: http://www.docs.hp.com This document describes how to configure and troubleshoot the NFS Services on HP-UX 11i v3. The document printing date and part number indicate the document's current edition. The printing date will change when a new edition is printed. Minor changes may be made at reprint without changing the printing date. The document part number will change when extensive changes are made.
What's in This Document This manual describes how to install, configure, and troubleshoot the NFS Services product. The manual is organized as follows: Chapter 1 Introduction Describes the Network File System (NFS) services, such as NFS, AutoFS, and CacheFS. It also describes new features available with HP-UX 11i v3. Chapter 2 Configuring and Administering NFS Describes how to configure and administer NFS services.
1 Introduction This chapter introduces the Open Network Computing (ONC) services, such as NFS, AutoFS, and CacheFS. This chapter addresses the following topics:
• “ONC Services Overview” (page 13)
• “Network File System (NFS)” (page 14)
• “New Features in NFS” (page 15)
ONC Services Overview Open Network Computing (ONC) services is a technology that consists of core services which enable you to implement distributed applications in a heterogeneous, distributed computing environment.
implements a lock recovery service used by KLM. It enables the rpc.lockd daemon to recover locks after the NFS service restarts. Files can be locked using the lockf() or fcntl() system calls. For more information on daemons and system calls that enable you to lock and synchronize your files, see lockd(1M), statd(1M), lockf(2), and fcntl(2). • Remote Procedure Call (RPC) is a mechanism that enables a client application to communicate with a server application.
NFS Servers and Clients In the NFS context, a system that shares its filesystems over a network is known as a server, and a system that mounts and accesses these shared filesystems is known as a client. The NFS service enables a system to access a filesystem located on a remote system. Once the filesystem is shared by a server, it can be accessed by a client. Clients access files on the server by mounting the shared filesystem. For users, these mounted filesystems appear as a part of the local filesystem.
The NFSv4 protocol design enables NFS developers to add new operations that are based on IETF specifications. • Delegation In NFSv4, the server can delegate certain responsibilities to the client. Delegation enables a client to locally service operations, such as OPEN, CLOSE, LOCK, LOCKU, READ, and WRITE, without immediate interactions with the server. The server grants either a READ or a WRITE delegation to a client at OPEN. After the delegation is granted, the client can perform all operations locally.
NOTE: In NFSv2 and NFSv3, ACLs are manipulated using NFSACL protocol. If systems in your environment do not support the NFSACL protocol, then ACLs cannot be manipulated using this feature. • File Handle Types File handles are created on the server and contain information that uniquely identify files and directories. Following are the different file handle types: — ROOT The ROOT file handle represents the conceptual root of the file system namespace on an NFS server.
filesystems without having to mount them, to access the shared points from a single common root. The NFSv4 specification does not require a client to traverse the NFS server’s namespace. For example, a server shares /opt/dce, /opt/hpsmh, and /doc/archives directories. In this list, the shared hierarchy is not connected. The /doc/archives directory is neither a parent nor a child directory of /opt. Figure 1-1 shows the server view of the shared directories.
However, UNIX systems use integers to represent users and groups in the underlying filesystems stored on the disk. As a result, using string identifiers requires mapping of string names to integers and back. The nfsmapid daemon is used to map the owner and owner_group identification attributes with the local user identification (UID) and group identification (GID) numbers, which are used by both the NFSv4 server and the NFSv4 client.
In earlier versions of HP-UX, the exportfs command was used to export directories and files to other systems over a network. Users and programs accessed the exported files on remote systems as if they were part of the local filesystem. Exporting of a directory could be disabled by using the -u option of the exportfs command. For information on how to share directories with the NFS clients, see “Sharing Directories with NFS Clients” (page 30).
Mounting and Unmounting Directories NFS clients can mount any filesystem or a part of a filesystem that is shared by the NFS server. Filesystems can be mounted automatically when the system boots, from the command line, or through the automounter. The different ways to mount a filesystem are as follows: • Mounting a filesystem at boot time and using the mount command For information on how to mount a filesystem at boot time, see “Mounting a Remote Directory on an NFS client” (page 49).
Secure Sharing of Directories In earlier versions of HP-UX, NFS used the AUTH_SYS authentication, which uses UNIX-style authentication (uid/gid) to allow access to the shared files. It is fairly simple to develop an application or server that can masquerade as a user, because the uid/gid ownership of a file can be viewed. The AUTH_DH authentication method was introduced to address the vulnerabilities of the AUTH_SYS authentication method.
Replicated Filesystems A replicated filesystem contains the corresponding directory structures and identical files. A replica (identical copy) of a filesystem consists of files of the same size and same file type as the original filesystem. HP recommends that you create these replicated filesystems using the rdist utility. The rdist utility enables you to maintain identical copies of files on multiple hosts.
2 Configuring and Administering NFS Services This chapter describes how to configure and administer an HP-UX system as an NFS server or an NFS client, using the command-line interface. An NFS server exports or shares its local filesystems and directories with NFS clients. An NFS client mounts the files and directories exported or shared by the NFS servers. NFS-mounted directories and filesystems appear as a part of the NFS client’s local filesystem.
NOTE: NFSv4 uses string identifiers that map to user IDs and group IDs in the standard integer format. For more information on string identifiers supported on NFSv4, see “New Features in NFS” (page 15). Consider the following points when you set user IDs and group IDs:
• Each user must have the same user ID on all systems where that user has an account.
• Each group must have the same group ID on all systems where that group exists.
• No two users on the network have the same user ID.
• No two groups on the network have the same group ID.
use the following procedures depending on the user and group configuration method you use:
• Using the HP-UX System Files
• Using NIS
• Using LDAP
Using the HP-UX System Files If you are using HP-UX system files to manage your group database, follow these steps:
1. To identify the number of groups that the user belongs to, enter the following command for each user on your system:
/usr/bin/grep -x -c username /etc/group
This command returns the number of occurrences of username in the /etc/group file.
2.
filesystem. The decision of sharing directories by an NFS server is driven by the applications running on NFS clients that require access to those directories.
2. Specify access restrictions and security modes for the shared directories. For example, you can use Kerberos (a security product) that is already configured on your system to specify access restrictions and security modes for the NFS shared directories.
Table 2-2 NFS Server Daemons (continued)
Daemon Name Function
nfslogkd Flushes nfslog information from the kernel to a file.
nfsmapid Maps to and from NFSv4 owner and owner group identification attributes to local UID and GID numbers used by both NFSv4 client and server.
nfs4srvkd Supports server side delegation.
rpc.lockd Supports record lock and share lock operations on the NFS files.
rpc.
NFS_SERVER=1
START_MOUNTD=1
2. Enter the following command to verify whether the rpcbind daemon is running:
ps -ae | grep rpcbind
If the daemon is running, an output similar to the following is displayed:
778 ? 0:04 rpcbind
No message is displayed if the daemon is not running. To start the rpcbind daemon, enter the following command:
/sbin/init.d/nfs.core start
3. Enter the following commands to verify whether the lockd and statd daemons are running:
ps -ae | grep rpc.lockd
ps -ae | grep rpc.
NOTE: The exportfs command, used to export directories in versions prior to HP-UX 11i v3, is now a script that calls the share command. HP provides a new exportfs script for backward compatibility to enable you to continue using exportfs with the functionality supported in earlier versions of HP-UX. To use any of the new features provided in HP-UX 11i v3 you must use the share command.
NOTE: Use the bdf command to determine whether your filesystems are on different disks or logical volumes. Each entry in the bdf output represents a separate disk or volume that requires its own entry in the /etc/dfs/dfstab file, if shared. For more information on the bdf command, see bdf(1M). • When you share a directory, the share options that restrict access to a shared directory are applied, in addition to the regular HP-UX permissions on that directory.
Sharing a directory with NFS Clients Before you share your filesystem or directory, determine whether you want the sharing to be automatic or manual. To share a directory with NFS clients, select one of the following methods:
• Automatic Share
• Manual Share
Automatic Share To share your directories automatically, follow these steps:
1. Add an entry to the /etc/dfs/dfstab file for each directory you want to share with the NFS clients.
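Each line you add to the /etc/dfs/dfstab file is itself a share command line that is executed when the NFS server starts or when you run shareall. A minimal sketch of such an entry (the directory and client names are hypothetical):
# /etc/dfs/dfstab (sample entry; directory and client names are placeholders)
share -F nfs -o ro=client1:client2 /export/tools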
share -F nfs directory_name
2. Enter the following command to verify if your filesystem is shared:
share
Output similar to the following is displayed:
/tmp rw=hpdfs001.cup.hp.com ""
/mail rw ""
/var rw ""
The directory that you have shared must be present in this list. For more information on the share command and a list of share options, see share_nfs(1M) and share(1M). Examples for Sharing directories This section discusses different examples for sharing directories.
In this example, the /var/mail/Red directory is shared. Only the superuser on client Red is granted root access to the directory. All other users on client Red have read-write access if they are provided read-write access by the regular HP-UX permissions. Users on other clients have read-only access if they are allowed read access through the HP-UX permissions.
Table 2-3 Security Modes of the share command (continued)
Security Mode Description
krb5i Uses Kerberos V5 authentication with integrity checking to verify that the data is not tampered with, while in transit between the NFS clients and servers.
krb5p Uses Kerberos V5 authentication, integrity checking, and privacy protection (encryption) on the shared filesystems.
none Uses NULL authentication (AUTH_NONE). NFS clients using AUTH_NONE are mapped to the anonymous user nobody by NFS.
1. Set up the host as a Kerberos client. For more information on setting up the NFS server as a Kerberos client, see Configuration Guide for Kerberos Client Products on HP-UX (5991-7685).
NOTE: Throughout this section, the following systems are used as examples:
Kerberos Server: onc52.ind.hp.com
NFS Server: onc20.ind.hp.com
NFS Client: onc36.ind.hp.com
2. Synchronize the date and time of the server nodes with the Kerberos server.
onc20# klist -k Keytab name: FILE:/etc/krb5.keytab KVNO Principal ---- -------------------------------------------------------------------------1 nfs/onc20.ind.hp.com@ONC52.IND.HP.COM 7. Edit the /etc/nfssec.conf file and uncomment the entries for krb5, krb5i, or krb5p based on the security protocol you want to choose. onc20# cat /etc/nfssec.conf | grep krb5 krb5 390003 krb5_mech default krb5i 390004 krb5_mech default integrity krb5p 390005 krb5_mech default privacy 8.
Enter policy name (Press enter key to apply default policy) : Principal added.
3. Copy the /etc/krb5.conf file from the Kerberos server to the NFS client.
onc52# rcp /etc/krb5.conf onc36:/etc/
Perform the following steps on the NFS client:
1. To get the initial TGT to request a service from the application server, enter the following command:
onc36# kinit root
Password for root@ONC52.IND.HP.COM:
The password prompt is displayed.
Enables you to specify the location of the directory.
Enables you to specify the mount-point location where the filesystem is mounted.
An initial ticket grant is carried out when the user accesses the mounted filesystem.
Example
onc36# mount -F nfs -o sec=krb5 onc36:/export_krb5 /aaa
1.
Sharing directories across a firewall without fixed port numbers (NFSv2 and NFSv3) This is the default method of sharing directories across a firewall. In this method, the rpc.statd and rpc.mountd daemons do not run on fixed ports. The ports used by these daemons are assigned from the anonymous port range. By default, the anonymous port range is configured between 49152 and 65535. The rpc.lockd daemon runs at port 4045 and is not configurable. To determine the port numbers currently used by rpc.
1. Assign values to the variables, STATD_PORT and MOUNTD_PORT, as follows:
STATD_PORT = port_number
MOUNTD_PORT = port_number
Where:
port_number The port number on which the daemon runs. It can be set to any unique value between 1024 and 65535.
STATD_PORT The port on which the rpc.statd daemon runs.
MOUNTD_PORT The port on which the rpc.mountd daemon runs.
2. Activate the changes made to the /etc/default/nfs file by restarting the lock manager and NFS server daemons as follows: /sbin/init.d/nfs.
Table 2-4 NFS Session Versus WebNFS Session
How NFS works across LANs: NFS servers must register their port assignments with the portmapper service that is registered on port 111, although the NFS server uses 2049 as the destination port. The MOUNT service is not registered on a specific port.
How WebNFS works across WANs: NFS servers register on port 2049. WebNFS clients contact the WebNFS server on port 2049. A WebNFS client can use the PUBLIC file handle as
Configuring an NFS Server for use by a PC NFS client PC NFS is a protocol designed to perform the following functions:
• Allow PC users who do not have UNIX style credentials to authenticate to a UNIX account
• Perform print spooling from a PC onto a UNIX server
Once a PC client has successfully authenticated itself on the NFS server, the PC uses the MOUNT and NFS protocols to mount the filesystem and to read and write to a file. You may want to create the /etc/pcnfsd.
Unsharing (Removing) a Shared Directory NOTE: Before you unshare a directory, run the showmount -a command to verify whether any clients are accessing the shared directory. If users are accessing the shared directories, they must exit the directories before you unshare the directory. A directory that is shared can be unshared. You can temporarily unshare a directory using the unshare command.
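For example, to temporarily stop sharing a directory (the path here is hypothetical), you could enter:
unshare -F nfs /export/tools
Because the unshare command takes effect only until the next reboot, the directory is shared again at startup unless you also remove its entry from the /etc/dfs/dfstab file.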
unshareall -F nfs 2. Enter the following command to disable NFS server capability: /sbin/init.d/nfs.server stop 3. On the NFS server, edit the /etc/rc.config.d/nfsconf file to set the NFS_SERVER variable to 0, as follows: NFS_SERVER=0 This prevents the NFS server daemons from starting when the system reboots. For more information about forced unmount, unmounting and unsharing, see mount_nfs (1M), unshare(1M), and umount(1M).
Table 2-6 NFS client daemons
Daemon Name Function
rpc.lockd Supports record lock and share lock operations on the NFS files.
rpc.statd Maintains a list of clients that have performed the file locking operation over NFS against the server. These clients are monitored and notified in the event of a system crash.
Following are the tasks involved in configuring and administering an NFS client.
Deciding Between Standard-Mounted Directories and Automounted Directories Before you mount any remote directories on a local system, decide whether you want each directory to be standard-mounted or automounted. You can automount directories using AutoFS. For more information on AutoFS, see Chapter 3: “Configuring and Administering AutoFS” (page 65). Table 2-7 lists the differences between the Standard-Mounted and the Automounted directories.
NFS_CLIENT=1
2. Enter the following command to run the NFS client startup script:
/sbin/init.d/nfs.client start
The NFS client startup script starts the necessary NFS client daemons, and mounts the remote directories configured in the /etc/fstab file. Mounting Remote Directories The mount command mounts a shared NFS directory from a remote system (NFS server).
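For example, a shared directory can be mounted from the command line as shown below (the server and directory names are hypothetical):
mount -F nfs -o nosuid basil:/export/home /home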
server:remote_directory local_directory nfs option[,option...] 0 0
2. Mount all the NFS file systems specified in the /etc/fstab file by entering the following command:
/usr/sbin/mount -a -F nfs
3.
mount -o ro server:dir_name,server:dir_name dir_name
If the first NFS server is down, the client accesses the second NFS server. For example, to mount the Casey directory to replicated servers, enter the following command:
mount -o ro onc21:/home/Casey,onc23:/home/Casey /Clay
If the NFS server onc21 is down, the client accesses NFS server onc23.
Figure 2-4 illustrates this example. Figure 2-4 NFS Mount of Home Directories
• Mounting an NFS Version 2 filesystem using the UDP Transport
mount -o vers=2,proto=udp onc21:/var/mail /var/mail
In this example, the NFS client mounts the /var/mail directory from the NFS server, onc21, using NFSv2 and the UDP protocol.
NFS client to failover to either server onc21, onc23, or onc25 if the current server has become unavailable. • Mounting replicated set of NFS file systems with different pathnames mount -r onc21:/Casey/Clay,onc23:/Var/Clay,nfs://srv-z/Clay /Casey/Clay In this example, the NFS client mounts a replicated set of NFS file systems with different pathnames. Secure Mounting of Directories The mount command enables you to specify the security mode for each NFS mount-point.
/usr/sbin/umount local_directory /usr/sbin/mount local_directory 2. If you change the mount options in the AutoFS master map, you must restart AutoFS for the changes to take effect. For information on restarting AutoFS, see “Restarting AutoFS” (page 100). For more information on the different caching mount options, see mount_nfs(1M). Unmounting (Removing) a Mounted Directory You can temporarily unmount a directory using the umount command.
Disabling NFS Client Capability To disable the NFS client, follow these steps:
1. On the NFS client, enter the following command to get a list of all the mounted NFS filesystems on the client:
/usr/sbin/nfsstat -m
2. For every NFS mounted directory listed by the nfsstat command, enter the following command to determine whether the directory is currently in use:
/usr/sbin/fuser -cu local_mount_point
This command lists the process IDs and user names of all processes currently using the mounted directory.
To enable support of 1MB transfers for TCP mounts, you must first modify the following tunables:
• nfs3_bsize (for NFS version 3): This tunable controls the logical block size used by NFSv3 clients. The block size represents the amount of data the client attempts to read from or write to the server during an I/O operation.
• nfs4_bsize (for NFS version 4): This tunable controls the logical block size used by NFSv4 clients.
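On HP-UX 11i v3, kernel tunables are typically displayed and set with the kctune command. Assuming nfs3_bsize and nfs4_bsize are managed as kctune tunables on your system, a sketch of raising the block size to 1 MB might look like this:
kctune nfs3_bsize            # display the current NFSv3 block size
kctune nfs3_bsize=1048576    # set the NFSv3 block size to 1 MB
kctune nfs4_bsize=1048576    # set the NFSv4 block size to 1 MB
Verify the tunable names and legal values on your system with kctune and the relevant manpages before changing them.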
Changes to the NFS Server Daemon The NFS server daemon (nfsd) handles client filesystem requests. By default, nfsd starts over TCP and UDP for NFSv2 and NFSv3. If NFSv4 is enabled, the nfsd daemon is started to service all TCP and UDP requests. If you want to change startup parameters for nfsd, you must login as superuser (root) and make changes to the /etc/default/nfs file or use the setoncenv command. The /etc/default/nfs file provides startup parameters for the nfsd daemon and rpc.lockd daemon.
A netgroup can be used in most NFS and NIS configuration files, instead of a host name or a user name. A netgroup does not create a relationship between users and hosts. When a netgroup is used in a configuration file, it represents either a group of hosts or a group of users, but never both. If you are using BIND (DNS) for hostname resolution, hosts must be specified as fully qualified domain names, for example: turtle.bio.nmt.edu.
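As a sketch (the host names are hypothetical), a netgroup of NFS client hosts can be defined in the /etc/netgroup file and then referenced in the /etc/dfs/dfstab file in place of a host list:
# /etc/netgroup
trusted_hosts (turtle.bio.nmt.edu, , ) (shrew.bio.nmt.edu, , )
# /etc/dfs/dfstab
share -F nfs -o rw=trusted_hosts /export/data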
this netgroup in an [access_list] argument in the /etc/dfs/dfstab file, any host can access the shared directory. If a netgroup is used strictly as a list of users, it is better to put a dash in the host field, as follows: administrators (-,jane, ) (-,art, ) (-,mel, ) The dash indicates that no hosts are included in the netgroup. The trusted_hosts and administrators netgroups can be used together in the /etc/hosts.
[access_list]=mail_clients The mail_clients netgroup is defined, as follows: mail_clients (cauliflower, , ) (broccoli, , ) (cabbage, , ) Only the host names from the netgroup are used. If the netgroup also contains user names, these are ignored. This netgroup is valid in any NIS domain, because the third field in each triple is left blank. Using Netgroups in the /etc/hosts.equiv or $HOME/.rhosts File In the /etc/hosts.equiv file, or in a .
The following sample entry from the /etc/passwd file indicates that users in the netgroup animals must be looked up in the NIS passwd database: +@animals The animals netgroup is defined in the /etc/netgroup file, as follows: animals (-,mickey, ) (-,daffy, ) (-,porky, ) (-,bugs, ) The /etc/passwd file is searched sequentially. As a result, if an entry for user mickey, daffy, porky, or bugs appears before the animals netgroup in the /etc/passwd file, the NIS database is not consulted for information on that user.
For information on the /etc/group file, see group(4). Configuring RPC-based Services This section describes the following tasks:
• “Enabling Other RPC Services”
• “Restricting Access to RPC-based Services”
Enabling Other RPC Services 1. In the /etc/inetd.conf file, use a text editor to uncomment the entries that begin with “rpc”. Following is the list of entries in an /etc/inetd.conf file:
#rpc xti tcp nowait root /usr/sbin/rpc.rexd 100017 1 rpc.rexd
#rpc dgram udp wait root /usr/lib/netsvc/rstat/rpc.
Table 2-8 RPC Services managed by inetd
RPC Service Description
rexd The rpc.rexd program is the server for the on command, which starts the Remote Execution Facility (REX). The on command sends a command to be executed on a remote system. The rpc.rexd program on the remote system executes the command, simulating the environment of the user who issued the on command. For more information, see rexd(1M) and on(1).
rstatd The rpc.
You can use HP SMH to modify the /var/adm/inetd.sec file. For more information, see inetd.conf (4) and inetd.sec (4). Examples from /var/adm/inetd.sec In the following example, only hosts on subnets 15.13.2.0 through 15.13.12.0 are allowed to use the spray command: sprayd allow 15.13.2-12.
3 Configuring and Administering AutoFS This chapter provides an overview of AutoFS and the AutoFS environment. It also describes how to configure and administer AutoFS on a system running HP-UX 11i v3.
The following sections describe in detail the different components of AutoFS that work together to automatically mount and unmount filesystems. AutoFS Filesystem The AutoFS filesystem is a virtual filesystem that provides a directory structure to enable automatic mounting of filesystems. It includes autofskd, a kernel-based process that periodically cleans up mounts. The filesystem interacts with the automount command and the automountd daemon to mount filesystems automatically.
When AutoFS receives a request to mount a filesystem, it calls the automountd daemon, which mounts the requested filesystem. AutoFS mounts the filesystems at the configured mount-points. The automountd daemon is independent of the automount command. This separation enables you to add, delete, or modify the AutoFS map information, without stopping and restarting the automountd daemon.
Table 3-1 Types of AutoFS Maps (continued)
Type of Map Description
Special Map Special maps are of two types, namely -hosts and -null.
Included Map An included map is a map that is included within another map. The entries of the included map are read as if they are part of the existing map.
Figure 3-2 “AutoFS” displays a sample AutoFS network. Figure 3-2 AutoFS In this figure, AFS1, AFS2, and Sage are AutoFS clients. Thyme and Basil are the NFS servers. The NFS servers export directories.
AutoFS clients can access the exported filesystem using any one of the following map types:
• Direct Map
• Indirect Map
• Special Map
The AutoFS client, Sage, uses a direct map to access the /export directory. Sage includes an entry similar to the following in its map:
/export Basil:/export
Sage mounts the /export directory on the export mount-point. The AFS1 client uses an indirect map to access the /export directory.
/auto/project/specs -nosuid thyme:/export/project/specs /auto/project/specs/reqmnts -nosuid thyme:/export/projects/specs/reqmnts /auto/project/specs/design -nosuid thyme:/export/projects/specs/design A user on the NFS client, sage, enters the following command: cd /auto/project/specs Only the /auto/project/specs subdirectory is mounted.
/test -nosuid thyme:/export/project/test
/apps -nosuid basil:/export/apps
Enter the following commands to view the contents of the /nfs/desktop directory:
cd /nfs/desktop
ls
The ls command displays the following:
test apps
The test and apps subdirectories are the potential mount-points. However, they are not currently mounted.
Supported Filesystems AutoFS enables you to mount different types of filesystems. To mount the filesystems, use the fstype mount option, and specify the location field of the map entry. Following is a list of supported filesystems and the appropriate map entry: AutoFS mount-point -fstype=autofs autofs_map_name NOTE: You can specify another AutoFS map name in the location field of the map-entry. This would enable AutoFS to trigger other AutoFS mounts.
Supported Backends (Map Locations) AutoFS maps can be located in the following: • Files: Local files that store the AutoFS map information for that individual system. An example of a map that can be kept on the local system is the master map. The AUTO_MASTER variable in /etc/rc.config.d/nfsconf is set to the name of the master map. The default master map name is /etc/auto_master. NOTE: To modify the master map name, use the -f option of the automount command and specify the filename.
1. If the AutoFS maps are not already migrated, migrate your AutoFS maps to LDAP Data Interchange Format (LDIF) files using the migration scripts. The migrated maps can also be used if you have chosen the older schema. For information on the specific migration scripts, see LDAP-UX Client Services B.04.10 Administrator’s Guide (J4269-90067).
2. Import the LDIF files into the LDAP directory server using the ldapmodify tool.
/sbin/init.d/autofs start
AutoFS Configuration Prerequisites Consider the following points before configuring AutoFS:
• Ensure that the local directory you configure as the mount-point is either empty or non-existent. If the local directory you configured as the mount-point is not empty, the local files or directories in it are hidden and become inaccessible if the automounted filesystem is mounted over the local directory.
1. In the /etc/rc.config.d/nfsconf file, the AUTOFS variable is set to 1.
2. Any options you had specified in the AUTO_OPTIONS variable are copied to either the AUTO_OPTIONS or the AUTOMOUNTD_OPTIONS variable. Obsolete options are removed.
Table 3-2 lists the options of the old automount command and the equivalent AutoFS command options. It also indicates which automount options are not supported in AutoFS. Table 3-2 Old Automount Command-Line Options Used By AutoFS Old automount Option 3.
Configuring AutoFS Using the nfsconf File You can use the /etc/rc.config.d/nfsconf file to configure your AutoFS environment. The /etc/rc.config.d/nfsconf file is the NFS configuration file. This file consists of the following sets of variables or parameters:
For more information on configuring the NSS, see nsswitch.conf(4) and automount(1M). To configure AutoFS using the /etc/rc.config.d/nfsconf file, follow these steps:
1. Log in as superuser.
2. Edit the /etc/rc.config.d/nfsconf file. For instance, to change the default time for which a filesystem remains mounted when not in use, modify the AUTOMOUNT_OPTIONS variable, as follows:
AUTOMOUNT_OPTIONS="-t 720"
3. Enter the following command to start AutoFS.
/sbin/init.
NOTE: Values modified using the /etc/default/autofs file are updated when the command or daemon is started. This update is made irrespective of whether the command or daemon is started by the AutoFS startup script or from the command line. Enabling AutoFS To enable AutoFS, follow these steps:
1. In the /etc/rc.config.d/nfsconf file, set the value of the AUTOFS variable to 1, as follows:
AUTOFS=1
2. Configure AutoFS using either the /etc/rc.config.d/nfsconf or the /etc/default/autofs configuration files.
mount-options Options that apply to the maps specified by map-name. If the first field specifies the directory as /-, then the second field is the name of the direct map. The master map file, like any other map file, may be distributed by NIS or LDAP by modifying the appropriate configuration files and removing any existing /etc/auto_master master map file. NOTE: If the same mount-point is used in two entries, the first entry is used by the automount command. The second entry is ignored.
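A small master map that combines these entry types might look like the following sketch (the map names under /etc are illustrative):
# /etc/auto_master
# local mount-point    map name           mount options
/net                   -hosts             -nosuid
/home                  /etc/auto_home
/-                     /etc/auto_direct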
Table 3-4 Direct Versus Indirect AutoFS Map Types (continued)
Direct Map. Disadvantage: If you add or remove mounts in a direct map, or if you change the local mount-point for an existing mount in a direct map, you must force AutoFS to reread its maps.
Indirect Map. Advantage: If you modify an indirect map, AutoFS will view the changes the next time it mounts the directory. You need not force AutoFS to reread its maps.
In the Mounts in a Direct Map figure, mounts are configured in various places in the local filesystem and not located under the same parent directory. In the Mounts in an Indirect Map figure, all the mounts are configured under the same parent directory. CAUTION: Any filesystems that are being managed by AutoFS should never be manually mounted or unmounted.
IMPORTANT: Do not automount a remote directory on a local directory, which is a symbolic link. Ensure that the local mount-point specified in the AutoFS map entry is different from the exported directory on the NFS server. If it is the same, and the NFS server also acts as an NFS client and uses AutoFS with these map entries, the exported directory can attempt to mount over itself. This can result in unexpected behavior.
If the direct map name in the master map begins with a slash (/), AutoFS considers it to be a local file. If it does not contain a slash, AutoFS uses the NSS to determine whether it is a file, LDAP, or an NIS map. For more information on using NSS, see nsswitch.conf(4).
1. If you are using local files for maps, use an editor to edit the master map in the /etc directory. The master map is commonly called /etc/auto_master. If you are using NIS, open the master map on the corresponding master server. If you are using LDAP, the map must be modified on the LDAP server. For information on modifying the map, see the LDAP-UX Client Services B.04.00 Administrator’s Guide (J4269-90064).
directory. The mount options configured in the indirect map override the ones in the master map if there is a conflict. Indirect maps are usually called /etc/auto_name, where name helps you remember what is configured in the map. Following is the syntax of the indirect map:
local mount-point mount options remote server:directory
where:
local mount-point Simple name in the indirect map
mount options Options you want to apply to this mount.
draw -nosuid thyme:/export/apps/draw
write -nosuid basil:/export/write
Figure 3-6 illustrates how AutoFS sets up indirect mounts. Figure 3-6 How AutoFS Sets Up NFS Indirect Mounts
Using Environment Variables as Shortcuts in AutoFS Maps This section describes how to use environment variables as shortcuts in AutoFS maps using an example. You can reference an environment variable in a map by prefixing its name with a dollar sign ($), or by enclosing it in braces ({}).
In the direct map entry, HOST is the environment variable.
2. Add the -D option to the AUTOMOUNTD_OPTIONS variable in the /etc/rc.config.d/nfsconf file to assign a value to the variable, as follows:
AUTOMOUNTD_OPTIONS="-D HOST='hostname'"
NOTE: You can also use the /etc/default/autofs file to modify the value assigned to the variable. You can use any environment variable that is set to a value in an AutoFS map. If you do not set the variable either with the -D option in /etc/rc.config.
The following line from the /etc/auto_home indirect map mounts the user's home directories on demand: # /etc/auto_home file # local mount-point mount options remote server:directory * basil:/export/home/& The user's home directory is configured in the /etc/passwd file as /home/username. For example, the home directory of the user terry is /home/terry. When Terry logs in, AutoFS looks up the /etc/auto_home map and substitutes terry for both the asterisk and the ampersand.
configuring a system as an NFS server, see “Configuring and Administering an NFS Server” (page 27).
3. In the /etc/passwd file on the NFS clients, configure the home directory of each user as the NFS mount-point, where the user’s home directory is mounted. For example, if home directories are mounted under /home, Claire’s home directory will be configured as /home/claire in the /etc/passwd file.
Figure 3-7 Home Directories Automounted with Wildcards Special Maps There are two types of special maps: -hosts and -null. By default, the -hosts map is used with the /net directory and assumes that the map entry is the hostname of the NFS server. The automountd daemon dynamically creates a map entry from the server's list of exported filesystems. For example, a reference to /net/Casey/usr initiates an automatic mount of all the exported filesystems from Casey which can be mounted by the client.
Notes on the -hosts Map The -hosts map is a built-in AutoFS map. It enables AutoFS to mount exported directories from any NFS server found in the hosts database, whenever a user or a process requests access to one of the exported directories from that server. CAUTION: You may inadvertently cause an NFS mount over X.25 or SLIP, which is unsupported, or through a slow router or gateway, because the -hosts map allows NFS access to any reachable remote system.
Figure 3-9 shows the automounted directory structure after the user enters the second command. Figure 3-9 Automounted Directories from the -hosts Map—Two Servers
Turning Off an AutoFS Map Using the -null Map To turn off a map using the -null map, follow these steps:
1. Add a line with the following syntax in the AutoFS master map:
local_directory -null
2.
NOTE: The -null entry must precede the included map entry to be effective. Using Executable Maps An executable map is a map whose entries are generated dynamically by a program or a script. AutoFS determines whether a map is executable, by checking whether the execute bit is set in its permissions string. If a map is not executable, ensure that its execute bit is not set. An executable map is an indirect map.
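The following sketch shows how such a script can work; automountd passes the lookup key (the name referenced under the mount-point) as the first argument and reads a map entry from the script's standard output. The server and path are hypothetical:
#!/usr/bin/sh
# /etc/auto_exec - a hypothetical executable indirect map
key=$1
# Print a map entry for the requested key; printing nothing
# indicates that the key has no entry in this map.
echo "-nosuid basil:/export/${key}"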
Adding these map entries does not automatically mount them. The listed remote directories are mounted only when referenced. For example, the following entry from a direct map mounts the source code and the data files for a project whenever anyone requests access to both of them: /our_project /source -ro basil:/opt/proj1/src \ /datafiles thyme:/opt/proj1/samples/data The following is another example from an indirect map.
To configure multiple replicated servers for a directory, follow these steps: 1. Create and configure the /etc/netmasks file. AutoFS requires the /etc/netmasks file to determine the subnets of local clients in a replicated multiple server environment. The /etc/netmasks file contains IP address masks with IP network numbers. It supports both standard subnetting as specified in RFC-950, and variable-length subnetting as specified in RFC-1519.
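A sketch of /etc/netmasks entries (the network numbers are hypothetical); each line pairs a network number with its netmask:
# network number    netmask
15.13.0.0           255.255.248.0
192.168.10.0        255.255.255.0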
the same side of the gateway as the NFS client is always preferred. For multiple servers outside the local network, and with no weighting factors assigned, the server with the lowest response time is used for the mount. Multiple servers provide users with reliable access to a mounted directory. If one server is down, the directory can be mounted from another.
Creating a Hierarchy of AutoFS Maps Hierarchical AutoFS maps provide a framework that enables you to organize large exported filesystems. Together with NIS, which allows you to share information across administrative domains, hierarchical maps enable you to decentralize the maintenance of a shared namespace. Sample Map Hierarchy In the following example, an organization consisting of many departments, wants to organize a shared automounted directory structure. The shared top-level directory is called /org.
This command lists the process IDs and user names of everyone using the mounted directory. 2. Warn all users to exit the directory. Terminate processes that are using the directory, or wait until the processes terminate. Enter the following command to kill all the processes that are using the mounted directory: /usr/sbin/fuser -ck local_mount_point 3. 4. 5. Use an editor to modify the direct or indirect map.
/nfs/desktop /etc/auto_desktop
# /etc/auto_desktop file
# local mount-point mount options remote server:directory
draw -nosuid thyme:/export/apps/draw
write -nosuid basil:/export/write
Enter the following commands:
cd /nfs/desktop
ls
The ls command displays the following output:
draw write
The draw and write subdirectories are the potential mount-points (browsability), but are not currently mounted.
grep 'nfs' /etc/mnttab | awk '{print $2}' | grep ^${FS}
done
2. To determine whether each automounted directory returned by the grep command is currently in use, enter the following command:
/usr/sbin/fuser -cu local_mount_point
This command lists the process IDs and user names of all the users who are using the mounted directory.
3. Warn any users to exit the directory, and terminate any processes that are using the directory, or wait until the processes terminate.
for FS in $(grep autofs /etc/mnttab | awk '{print $2}')
do
grep 'nfs' /etc/mnttab | awk '{print $2}' | grep ^${FS}
done
3. For every automounted directory listed by the grep command, enter the following command to determine whether the directory is currently in use:
/usr/sbin/fuser -cu local_mount_point
This command lists the process IDs and user names of all the users who are using the mounted directory.
4.
1. Log in as superuser to the NFS client.
2. Enter the following commands:
ps -ef | grep automountd
kill -SIGUSR2 PID
where:
PID Process ID returned by the ps command.
Level 3 tracing is appended to the /var/adm/automount.log file.
NOTE: The command, kill -SIGUSR2 PID, works only if tracing is not already on. To stop level 3 tracing, enter the same commands listed above to send the SIGUSR2 signal to automountd. The SIGUSR2 signal is a toggle that turns tracing on or off depending on its current state.
/usr/sbin/fuser -ck local_mount_point 6. Enter the following command to stop AutoFS: /sbin/init.d/autofs stop CAUTION: Do not kill the automountd daemon with the kill command. It does not unmount AutoFS mount-points before it dies. 7. Enter the following command to start AutoFS with tracing enabled: /sbin/init.d/autofs start To Stop AutoFS Basic Tracing To stop AutoFS tracing, kill AutoFS and restart it only after removing -T from AUTOMOUNTD_OPTIONS.
May 13 18:45:09 t5 do_mount1: (nfs,nfs) penalty=0
May 13 18:45:09 t5 nfsmount: input: hpnfs127[other]
May 13 18:45:09 t5 nfsmount: standard mount on /n2ktmp_8264/nfs127/tmp : hpnfs127:/tmp
May 13 18:45:09 t5 nfsmount: v3=1[0],v2=0[0] => v3.
4 Configuring and Administering a Cache Filesystem This chapter introduces the Cache Filesystem (CacheFS) and the CacheFS environment. It also describes how to configure and administer CacheFS on a system running HP-UX 11i v3.
NOTE: CacheFS cannot be used with NFSv4. CacheFS Terms The following CacheFS terms are used in this chapter: back filesystem A filesystem that is cached is called a back filesystem. HP-UX currently supports only NFS as the back filesystem. front filesystem A local filesystem that is used to store the cached data is called a front filesystem. HFS and VxFS are the only supported front filesystem types.
Figure 4-1 Sample CacheFS Network In the figure, cachefs1, cachefs2, and cachefs3 are CacheFS clients. The figure also displays an NFSv4 client which is not a CacheFS client because CacheFS cannot be used with NFSv4. The NFS server, NFSServer, shares files and the CacheFS clients mount these files as the back filesystem. When a user on any of the CacheFS clients, say cachefs1, accesses a file that is part of the back filesystem, portions of the file that were accessed are placed in the local cache.
Features of CacheFS This section discusses the features that CacheFS supports on systems running HP-UX 11i v3. • Cache Pre-Loading via the “cachefspack” Command The cachefspack command enables you to pre-load or pack specific files and directories in the cache, thereby improving the effectiveness of CacheFS. It also ensures that current copies of these files are always available in the cache. Packing files and directories in the cache enables you to have greater control over the cache contents.
NOTE: For information on how to force a cache consistency check, see “Forcing a Cache Consistency Check” (page 118).
— noconst Disables consistency checking. Use this option only if you know that the contents of the back filesystem are rarely modified.
— weakconst Verifies cache consistency with the NFS client's copy of the file attributes.
NOTE: Consistency is not checked at file open time.
• Switching Mount Options You can also switch between mount options without deleting or rebuilding the cache.
Configuring and Administering CacheFS You can use CacheFS to cache both manually mounted NFS filesystems or automatically mounted NFS filesystems. All CacheFS operations, except displaying CacheFS statistics, require superuser permissions.
Consider the following example where the /opt/frame directory is going to be NFS-mounted from the NFS server nfsserver to the local /opt/cframe directory. To mount the example NFS filesystem using CacheFS manually, enter the following command on an NFS client system:
mount -F cachefs -o backfstype=nfs,cachedir=/disk2/cache \
nfsserver:/opt/frame /opt/cframe
The /opt/frame directory can now be accessed like any other mounted filesystem.
umount /mnt1
mount -F cachefs -o backfstype=nfs,cachedir=/cache CFS1:/tmp /mnt1
To change the mount option from default to weakconst after unmounting, enter the following command:
mount -F cachefs -o backfstype=nfs,cachedir=/cache,weakconst CFS2:/tmp /mnt1
For more information on the various mount options of the CacheFS filesystem, see mount_cachefs(1M). Automounting a Filesystem Using CacheFS This section describes how to automount a filesystem using CacheFS.
Enabling Logging in CacheFS This section describes how you can enable logging in CacheFS. You can use the cachefslog command to enable logging for a CacheFS mount-point. Enabling the logging functionality may have a performance impact on the operations performed for all the CacheFS mount-points that are using the same cache directory. To enable CacheFS logging, follow these steps:
1. Log in as superuser.
2. Enable logging for the mount-point with the cachefslog command.
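A sketch of typical invocations, assuming the -f (specify log file) and -h (halt logging) options described in cachefslog(1M); the log file path and mount-point are hypothetical:
cachefslog -f /var/adm/cachefs_log /cfs_mnt1
cachefslog /cfs_mnt1
cachefslog -h /cfs_mnt1
The first command starts logging for the mount-point, the second displays the current logging status, and the third halts logging.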
not logged: /cfs_mnt1 Caching a Complete Binary CacheFS is designed to work best with NFS filesystems that contain stable read-only data. One of the most common uses of CacheFS is managing application binaries. These are typically read-only and are rarely ever modified. They are modified when new versions of the application are installed or a patch containing a modified binary is installed. The rpages mount option enables you to cache a complete binary file.
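For example, a CacheFS mount that caches complete binaries might add rpages to the option list (the server name and paths are hypothetical):
mount -F cachefs -o backfstype=nfs,cachedir=/disk2/cache,rpages \
nfsserver:/opt/apps /opt/apps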
NOTE: When you pack a directory, all files in that directory, its subdirectories, and the files in those subdirectories are packed. For instance, consider a directory /dir1 that contains two subdirectories, /subdir1 and /subdir2, as well as two files, test1 and test2. When you pack /dir1, /subdir1, /subdir2, test1, and test2 are packed.
• Using the packing-list file
A packing-list file is an ASCII file that contains a list of files and directories that are to be pre-packed in the cache.
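A sketch of a packing-list file and of packing its contents with the -f option, assuming the BASE and LIST keywords of the packing rules format described in cachefspack(1M); all paths are hypothetical:
# /var/tmp/plist - hypothetical packing-list file
BASE /opt/frame
LIST bin
LIST fminit
cachefspack -f /var/tmp/plist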
You can unpack files that you no longer require, using one of the following methods:
• Using the -u option
To unpack a specific packed file or files from the cache directory, enter the following command:
cachefspack -u filename
where:
-u Specifies that certain files are to be unpacked.
filename Specifies the file to unpack.
IMPORTANT: The -s option works only if CacheFS is mounted with the demandconst option. For information and an example on how to switch between mount options, see “Switching Mount Options” (page 113) Forcing a Consistency Check for all the Mount-Points To request for a consistency check on all the mount-points, enter the following command: cfsadmin -s all For information on cfsadmin options, see cfsadmin(1M).
/cfs_mnt1 /cfs_mnt2 You must now unmount these mount-points before you check the integrity of a cache. For information on how to unmount a cache mount-point, see “Unmounting a Cache Filesystem” (page 119). For more information on the fsck_cachefs command of CacheFS, see fsck_cachefs(1M). Updating Resource Parameters Each cache has a set of parameters that determines its structure and how it behaves. When a cache directory is created, it gets created with default values for the resource parameters.
where: cacheID Specifies the name of the cache filesystem. all Specifies that all cached filesystems in the cache-directory are to be deleted. cache-directory Specifies the name of the cache directory where the cache resides. NOTE: The cache directory must not be in use when attempting to delete a cached filesystem or the cache directory. To delete the cache directory, follow these steps: 1.
4. To delete the CacheFS filesystem corresponding to the Cache ID from the specified cache directory, enter the following command: cfsadmin -d CacheID cache-directory 5.
An output similar to the following is displayed: /home/smh cache hit rate: 20% (2543 hits, 9774 misses) consistency checks: 47842 (47825 pass, 17 fail) modifies: 85727 garbage collection: 0 You can run the cachefsstat command with the -z option to reinitialize CacheFS statistics. You must be a superuser to use the -z option. NOTE: If you do not specify the CacheFS mount-point, statistics for all CacheFS mount-points are displayed. For more information about the cachefsstat command, see cachefsstat(1M).
Common Problems While Using CacheFS This section discusses the common problems you may encounter while using CacheFS. It also describes steps to overcome these problems. The following tables list the error messages, their cause, and how to resolve these errors.
Table 4-2 Common Error Messages encountered while using the cfsadmin command
Error Message: “cfsadmin: Cannot create lock file /test/mnt/c/.cfs_lock”
Possible Causes: Indicates that you do not have
Resolution: 1. Delete the cache.
Table 4-4 Common Error Messages encountered while using the mount command
Error Message: “mount -F cachefs: /test/c/.cfs_mnt_points is not a valid cache”
Possible Causes: The /c directory may not be a valid cache directory.
Resolution: 1. Delete the cache. 2. Recreate the cache directory using the cfsadmin command.
5 Troubleshooting NFS Services This chapter describes tools and procedures for troubleshooting the NFS Services. This chapter addresses the following topics: • “Common Problems with NFS” (page 127) • “Performance Tuning” (page 135) • “Logging and Tracing of NFS Services” (page 138) Common Problems with NFS This section lists the following common problems encountered with NFS and suggests ways to correct them.
– status
– nlockmgr
On the NFS server, check if the following processes are running:
– nfsd
– rpc.mountd
– rpc.statd
– rpc.lockd
If any of these processes is not running, follow these steps:
1. Make sure the /etc/rc.config.d/nfsconf file on the NFS server contains the following lines:
NFS_SERVER=1
START_MOUNTD=1
2. Enter the following command on the NFS server to start all the necessary NFS processes:
/sbin/init.d/nfs.server start
□ Enter the following command on the NFS client to make sure the rpc.
on BIND or /etc/hosts, see Installing and Administering Internet Services (B2355-91060). □ If you are using AutoFS, enter the ps -ef command to make sure the automountd process is running on your NFS client. If it is not, follow these steps: 1. Make sure the AUTOFS variable is set to 1 in the /etc/rc.config.d/nfsconf file on the NFS client. AUTOFS=1 2. Enter the following command on the NFS client to start the AutoFS: /sbin/init.
the remount mount option to mount the directory read/write without unmounting it. See “Changing the Default Mount Options” (page 53). If you are logged in as root to the NFS client, check the share permissions to determine whether root access to the directory is granted to your NFS client.
□ □ Verify that the filesystem you are trying to unmount is not a mount-point of another filesystem. Verify that the filesystem is not exported. In HP-UX 11i v3, an exported filesystem keeps the filesystem busy. “Stale File Handle” Message A “stale file handle” occurs when one client removes an NFS-mounted file or directory that another client is accessing.
directory, so one user cannot remove files another user is accessing. For more information on the source code control system, see rcsintro(5). □ If someone has restored the server’s file systems from backup or entered the fsirand command on the server, follow these steps on each of the NFS clients to prevent stale file handles by restarting NFS: 1. Enter the mount(1M) command with no options, to get a list of all the mounted file systems on the client: /usr/sbin/mount 2.
If any of these commands return RPC_TIMED_OUT, the rpc.statd or rpc.lockd process may be hung. Follow these steps to restart the rpc.statd and rpc.lockd daemons:
1. Enter the following commands, on both the NFS client and the NFS server, to kill rpc.statd and rpc.lockd (PID is a process ID returned by the ps command):
/usr/bin/ps -ef | /usr/bin/grep rpc.statd
/usr/bin/kill PID
/usr/bin/ps -ef | /usr/bin/grep rpc.lockd
/usr/bin/kill PID
2. Enter the following commands to restart rpc.statd and rpc.
□ has been sent to the NFS server and acknowledged. The O_SYNC flag degrades write performance for applications that use it. If multiple NFS users are writing to the same file, add the lockf() call to your applications to lock the file so that only one user may modify it at a time. If multiple users on different NFS clients are writing to the file, you must also turn off attribute caching on those clients by mounting the file with the noac mount option.
# # Kerberos configuration # This krb5.conf file is intended as an example only. # see krb5.conf(4) for more details # hostname is the fully qualified hostname(FQDN) of host on which kdc is running # domain_name is the fully qualified name of your domain [libdefaults] default_realm = krbhost.anyrealm.com default_tkt_enctypes = DES-CBC-CRC default_tgs_enctypes = DES-CBC-CRC ccache_type = 2 [realms] krbhost.anyrealm.com = { kdc = krbhost.anyrealm.com:88 admin_server = krbhost.anyrealm.com } [domain_realm] .
nfsstat -rc
2. If the timeout and retrans values displayed by nfsstat -rc are high, but the badxid value is close to zero, packets are being dropped before they get to the NFS server. Try decreasing the values of the wsize and rsize mount options to 4096 or 2048 on the NFS clients. See “Changing the Default Mount Options” (page 53). See Installing and Administering LAN/9000 Software for information on troubleshooting LAN problems.
3.
nfsstat -s If the number of readlink calls is of the same magnitude as the number of lookup calls, you have a symbolic link in a filesystem that is frequently traversed by NFS clients. On the NFS clients that require access to the linked directory, mount the target of the link. Then, remove the link from the exported directory on the server.
On the NFS clients, set the wsize and rsize mount options to the bsize value displayed by tunefs.
□ On the NFS clients, look in the /etc/fstab file for “stepping-stone” mounts (hierarchical mounts), as in the following example:
thyme:/usr /usr nfs defaults 0 0
basil:/usr/share /usr/share nfs defaults 0 0
sage:/usr/share/lib /usr/share/lib nfs defaults 0 0
Wherever possible, change these “stepping-stone” mounts so that whole directories are mounted from a single NFS server.
To Control the Size of LogFiles Logfiles grow without bound, using up disk space. You might want to create a cron job to truncate your logfiles regularly. Following is an example crontab entry that empties the logfile at 1:00 AM every Monday, Wednesday, and Friday: 0 1 * * 1,3,5 cat /dev/null > log_file For more information, type man 1M cron or man 1 crontab at the HP-UX prompt. To Configure Logging for the Other NFS Services 1. Add the -l logfile option to the lines in /etc/inetd.
/usr/sbin/netfmt -lN -f /var/adm/nettl.LOG00 > formatted_file where formatted_file is the name of the file where you want the formatted output from netfmt. The default logfile, /var/adm/nettl.LOGnn, is specified in the nettl configuration file, /etc/nettlgen.conf. If the file /var/adm/nettl.LOG00 does not exist on your system, the default logfile may have been changed in /etc/nettlgen.conf. For more information, see nettl(1M) and netfmt(1M). Tracing With nettl and netfmt 1.