Application Use Cases for the HP Serviceguard Storage Management Suite, November 2009

Figure 1: Implementing node-specific files within the CFS name space

[Figure: CFS storage mounted at /cfsmnt01 contains the shared apache directory with htdocs and logs. A symbolic link, created once on CFS (/cfsmnt01/apache/logs/ --> /var/opt/apache/logs), points to a directory on each node's local root disk (labeled /var/opt/hpws/logs on Node A and Node B), so node-specific log files are stored in the local file system, which is not part of CFS.]
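The setup in the figure can be sketched with a few shell commands. This is a minimal illustration using the paths shown in Figure 1 (adjust them to your environment); the local directory must be created on every node, while the symbolic link is created only once because it lives on the shared CFS:

```shell
# Paths as shown in Figure 1; adjust to your environment.
CFS_DIR=${CFS_DIR:-/cfsmnt01/apache}            # shared directory on CFS
LOCAL_LOGS=${LOCAL_LOGS:-/var/opt/apache/logs}  # node-local log directory

# On every node: create the log directory on the local root disk
# (this path is NOT part of CFS).
mkdir -p "$LOCAL_LOGS"

# Once, on any single node: create the symbolic link on CFS.
# Every node sees the same link but resolves it to its own
# local file system.
mkdir -p "$CFS_DIR"
ln -s "$LOCAL_LOGS" "$CFS_DIR/logs"
```

After this, an application writing to /cfsmnt01/apache/logs on any node actually writes to that node's local file system.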
Cluster-wide files are best placed on CFS; application data files, for example, usually belong there. Other cluster-wide files, such as application configuration files, could instead be stored locally on node-specific file systems, but they would then need to be kept in sync manually. Because such files change frequently, storing them on CFS is usually the better choice.
For application executables, the decision often favors local storage because executables are updated less frequently than configuration files. Storing application executables locally on each node also increases application availability slightly. Independent software vendor (ISV) applications may provide installation utilities that take these considerations into account and offer cluster-wide installation options.
Synchronizing data updates on CFS
Multi-instance applications that access the shared data on CFS from multiple nodes concurrently must synchronize their write access to the data. All three CFS bundles provide a single file system schema that is cache-coherent and Portable Operating System Interface for UNIX® (POSIX)-compliant, which means that different processes running on different nodes of the cluster can access the CFS concurrently.
The lockf system call is available for applications to synchronize concurrent access to data within a file between multiple processes. With CFS, this lockf functionality extends from processes on a single node to processes on all cluster nodes that have the CFS mounted. According to the POSIX standard, these locks are advisory only. The HP-UX proprietary S_ENFMT option to enforce locking is not supported on a CFS. With lockf, a range of a file can be locked with the granularity of a page (greater than or equal to 4 KB) to increase parallel write access to the same file.