VII. Using Secure NFS with Serviceguard (Optional)
From a Secure NFS perspective, there are very few differences between a standalone NFS server and a
Serviceguard NFS Toolkit package. In both cases an NFS service principal must be added to the
Kerberos realm using the hostname of the NFS server. In the Serviceguard NFS Toolkit case, this
hostname would be one that maps to the virtual IP address used by the Serviceguard package running
on the NFS cluster nodes. Just as with a standalone server, this NFS service principal must be
extracted from the Kerberos database and stored in the /etc/krb5.keytab file on any NFS servers
in the Serviceguard cluster where this package could potentially run.
The examples shown throughout this paper have illustrated these steps. In my test environment, the
hostname used by my Serviceguard NFS Toolkit package is “nfs-pkg1.rose.hp.com” and it is this
hostname that NFS clients will mount Highly Available NFS filesystems from. Figure 7 on page 8
shows how a Kerberos credential for the principal “nfs/nfs-pkg1.rose.hp.com” was added to the
database. Figure 14 on page 12 shows this key being extracted from the Kerberos database and
stored in the /etc/krb5.keytab file on my two Serviceguard cluster nodes atcux12.rose.hp.com
and atcux13.rose.hp.com. This allows either of these nodes to run this Serviceguard package and
use the service key stored in the /etc/krb5.keytab file to help authenticate NFS client requests.
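For reference, the following is a minimal sketch of these two steps using MIT-style kadmin syntax; the
actual commands used in my environment appear in Figures 7 and 14, and the realm name EXAMPLE.COM
below is only a placeholder:

    # On the KDC, create the NFS service principal for the package's virtual hostname
    kadmin: addprinc -randkey nfs/nfs-pkg1.rose.hp.com@EXAMPLE.COM

    # On each cluster node that can run the package (atcux12 and atcux13),
    # extract the key into the local keytab file
    kadmin: ktadd -k /etc/krb5.keytab nfs/nfs-pkg1.rose.hp.com@EXAMPLE.COM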
The only other difference between a Serviceguard NFS Toolkit package and a standalone NFS server
from a Secure NFS perspective is how filesystems are shared. On standalone servers this is done via
the /etc/dfs/dfstab file or manually via the share(1M) command. In a Serviceguard NFS
Toolkit package, the shared filesystems are configured in the hanfs.sh script. Each NFS package will
use its own hanfs.sh script, so it is important to modify each script where Secure NFS semantics are
desired.
For example, in my test environment I am sharing the /memfs filesystem via my Serviceguard NFS
Toolkit package. As before, I want only NFS client “atcux10.rose.hp.com” to be able to mount this
filesystem with read/write access, the client must authenticate with Kerberos v5, and the
root user on the client should have root privileges on the NFS server for this filesystem. My sample
hanfs.sh script would therefore contain the following line:
XFS[0]="-o sec=krb5,rw=atcux10.rose.hp.com,root=atcux10.rose.hp.com /memfs"
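When the package starts, the NFS Toolkit scripts effectively pass each XFS[] entry as the argument list
to a share(1M) command, so the entry above is equivalent to running the following on whichever node
currently owns the package:

    share -F nfs -o sec=krb5,rw=atcux10.rose.hp.com,root=atcux10.rose.hp.com /memfs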
This change must be propagated to the copy of the hanfs.sh script on every NFS server node in the
cluster so that the /memfs filesystem is shared with the same options regardless of which node runs
the package.
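One straightforward way to do this is to copy the modified script from one node to the others; the
package directory /etc/cmcluster/nfs-pkg1 shown below is only an assumed path and should be replaced
with the actual location of the package's configuration files:

    # run on atcux12.rose.hp.com; the destination path is an assumption for illustration
    rcp /etc/cmcluster/nfs-pkg1/hanfs.sh atcux13:/etc/cmcluster/nfs-pkg1/hanfs.sh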
Figure 18 shows an example of NFS client atcux10.rose.hp.com mounting a Secure NFS filesystem
from the hostname associated with the Serviceguard NFS Toolkit package “nfs-pkg1.rose.hp.com.”
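Although the exact command is shown in that figure, a mount of this type would look similar to the
following sketch, where /nfs_mnt is only an example mount point on the client:

    mount -F nfs -o sec=krb5 nfs-pkg1.rose.hp.com:/memfs /nfs_mnt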