VERITAS Storage Foundation 4.1 Cluster File System HP Serviceguard Storage Management Suite Extracts, December 2005

VERITAS Cluster File System Architecture
Master/Slave File System Design
The VERITAS Cluster File System uses a master/slave, or primary/secondary,
architecture to manage file system metadata on shared disk storage. The first server to
mount each cluster file system becomes its primary; all other nodes in the cluster become
secondaries. Applications access the user data in files directly from the server on which
they are running. A CFS file system’s metadata, however, is updated only by its CFS
primary node (the first node to mount the file system). The CFS primary node is
responsible for making all metadata updates and for maintaining the file system’s
metadata update intent log. Other servers update file system metadata (for example, to
allocate new files or to delete old ones) by sending requests to the primary, which
performs the actual updates and responds to the requesting server. This guarantees
consistency of
file system metadata and the intent log used to recover from system failures.
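The request-forwarding protocol described above can be sketched in miniature. This is an illustrative Python model, not VERITAS code: the `FileSystem`, `Node`, and `update_metadata` names are hypothetical, and the intent log is simplified to an in-memory list. The key behaviors it mirrors are that the first node to mount becomes primary, that secondaries forward metadata operations rather than applying them, and that the primary records each update in the intent log before applying it.

```python
# Illustrative sketch only: names and structures are hypothetical,
# not part of any VERITAS CFS API.

class FileSystem:
    def __init__(self):
        self.metadata = {}      # e.g. file name -> inode-like record
        self.intent_log = []    # ordered log of metadata updates
        self.primary = None     # first node to mount becomes primary

class Node:
    def __init__(self, name, fs):
        self.name = name
        self.fs = fs
        # The first server to mount the file system becomes its primary.
        if fs.primary is None:
            fs.primary = self

    def update_metadata(self, op, path):
        """Secondaries forward metadata updates; only the primary applies them."""
        if self is not self.fs.primary:
            # Secondary: send the request to the primary and await the reply.
            return self.fs.primary.update_metadata(op, path)
        # Primary: record the intent first, then apply the change.
        self.fs.intent_log.append((op, path))
        if op == "create":
            self.fs.metadata[path] = {"owner": self.name}
        elif op == "delete":
            self.fs.metadata.pop(path, None)
        return True

fs = FileSystem()
n1 = Node("node1", fs)                    # mounts first -> primary
n2 = Node("node2", fs)                    # secondary
n2.update_metadata("create", "/data/a")   # forwarded to node1
```

Because every metadata change funnels through one node and is logged before it is applied, no two nodes can write conflicting metadata, which is the consistency guarantee the text describes.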
CFS Failover
If the server on which the CFS primary is running fails, the remaining cluster nodes elect a
new primary. The new primary reads the file system intent log and completes any
metadata updates that were in process at the time of the failure.
Because nodes using a cluster file system in secondary mode do not update file system
metadata directly, failure of a secondary node does not require any metadata repair. CFS
recovery from secondary node failure is therefore faster than recovery from primary node
failure.
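The failover sequence can also be sketched: surviving nodes elect a new primary, which then replays the intent log so that any metadata update in flight at the time of the failure is completed. Again this is a hypothetical illustration; the election rule (lowest node name wins) and the `replay_intent_log` / `elect_primary` helpers are invented for the sketch, not drawn from CFS.

```python
# Illustrative sketch only: the election rule and helper names are
# hypothetical, not part of any VERITAS CFS interface.

def elect_primary(nodes, failed):
    """Pick a new primary from the surviving nodes (lowest name here)."""
    survivors = [n for n in nodes if n != failed]
    return min(survivors)

def replay_intent_log(metadata, intent_log):
    """Re-apply logged metadata operations to finish interrupted updates."""
    for op, path in intent_log:
        if op == "create":
            metadata.setdefault(path, {})
        elif op == "delete":
            metadata.pop(path, None)
    return metadata

nodes = ["node1", "node2", "node3"]
# node1 (the primary) fails mid-update: "create /data/b" was logged
# but the corresponding metadata change never completed.
intent_log = [("create", "/data/a"), ("create", "/data/b")]
metadata = {"/data/a": {}}              # /data/b is missing

new_primary = elect_primary(nodes, failed="node1")
metadata = replay_intent_log(metadata, intent_log)
```

After replay, `/data/b` exists and the metadata matches the log, which is why only primary failure requires this recovery step: secondaries never wrote metadata directly, so their failure leaves nothing to repair.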