Veritas Storage Foundation 5.1 SP1 Cluster File System Installation Guide (5900-1510, April 2011)
■ Coordination points—Act as a global lock during membership changes
See “About coordination points” on page 84.
About data disks
Data disks are standard disk devices for data storage and are either physical disks
or RAID Logical Units (LUNs).
These disks must support SCSI-3 PR and must be part of standard VxVM or CVM
disk groups. CVM is responsible for fencing data disks on a disk group basis. Disks
that are added to a disk group and new paths that are discovered for a device are
automatically fenced.
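Before you place data disks in a disk group, you can verify that they support SCSI-3 PR with the vxfentsthdw utility that is installed with the fencing packages. The following is a sketch only; the utility path and the device names are placeholders that vary by environment and release, so consult the fencing configuration chapter for the exact procedure.

```shell
# Test whether a shared disk supports SCSI-3 Persistent Reservations.
# WARNING: vxfentsthdw overwrites and destroys any data on the disk
# that it tests. Run it only on disks that hold no data.
#
# The -m option tests one disk that is shared between two nodes;
# the utility prompts for the node names and the disk device to test.
/opt/VRTSvcs/vxfen/bin/vxfentsthdw -m
```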
About coordination points
Coordination points provide a lock mechanism to determine which nodes get to
fence off data drives from other nodes. A node must eject a peer from the
coordination points before it can fence the peer from the data drives. Racing for
control of the coordination points to fence data disks is the key to understanding
how fencing prevents split-brain.
Note: Typically, a fencing configuration for a cluster must have three coordination
points. Symantec also supports server-based fencing with a single CP server as
its only coordination point with a caveat that this CP server becomes a single
point of failure.
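On a cluster where fencing is already configured, you can check the fencing mode and membership that are in use with the vxfenadm administration utility. This is a sketch; the exact options and output format can vary by release.

```shell
# Display the current I/O fencing mode (disk-based or server-based)
# and the fencing cluster membership information.
vxfenadm -d
```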
The coordination points can be disks, servers, or both.
■ Coordinator disks
Disks that act as coordination points are called coordinator disks. Coordinator
disks are three standard disks or LUNs set aside for I/O fencing during cluster
reconfiguration. Coordinator disks do not serve any other storage purpose in
the SFCFS configuration.
Dynamic Multi-pathing (DMP) allows coordinator disks to take advantage of
DMP's path failover and dynamic disk addition and removal capabilities.
On cluster nodes with HP-UX 11i v3, you must use DMP devices or iSCSI devices
for I/O fencing. The following changes in HP-UX 11i v3 prevent you from using
raw devices for I/O fencing:
■ HP-UX 11i v3 provides native multipathing support
■ HP-UX 11i v3 does not provide access to individual paths through the device file entries
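As an illustration of how three disks are typically set aside as coordinator disks, the following sketch initializes a coordinator disk group and records its name for the fencing driver. The device names and the disk group name vxfencoorddg are placeholder examples; see the fencing configuration chapter for the exact procedure for your release.

```shell
# Initialize three DMP disks for VxVM use (device names are placeholders).
vxdisksetup -i c1t1d0
vxdisksetup -i c2t1d0
vxdisksetup -i c3t1d0

# Create the coordinator disk group and mark it coordinator-only,
# which prevents the disks from being used for any data storage.
vxdg init vxfencoorddg c1t1d0 c2t1d0 c3t1d0
vxdg -g vxfencoorddg set coordinator=on

# Deport the group and record its name for the vxfen driver to use.
vxdg deport vxfencoorddg
echo "vxfencoorddg" > /etc/vxfendg
```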