READ ME before using the Veritas Storage Foundation™ 5.1 SP1 for Oracle RAC Administrator's Guide (April 2011)

CVM does not impose any write locking between nodes. Each node is free to update any area of
the storage; data integrity is entirely the responsibility of the application running above CVM.
From an application perspective, logical volumes are accessed on a CVM system exactly as they
are on a standalone system.
CVM imposes a “Uniform Shared Storage” model. All nodes must connect to the same disk sets
for a given disk group. Any node unable to detect the entire set of physical disks for a given disk
group cannot import the group. If a node loses contact with a specific disk, CVM excludes the
node from participating in the use of that disk.
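For example, a disk group is imported in shared mode with the -s option of vxdg; the import fails
on any node that cannot see every disk in the group. A minimal sketch, run on the CVM master node
(the disk group name oradatadg is illustrative):

    # Import the disk group in shared (cluster) mode
    vxdg -s import oradatadg

    # Confirm the disk group shows the shared flag on each node
    vxdg list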
CVM Communication
CVM communication involves various GAB ports for different types of communication.
Port w
Most CVM communication takes place over port w, which carries vxconfigd traffic. During any
change in volume configuration, such as volume creation, plex attachment or detachment, or
volume resizing, vxconfigd on the master node uses port w to share the change with the slave
nodes.
Once all slaves have acknowledged over port w that the new configuration is the next active
configuration, the master writes the record to the disk headers in the VxVM private region for
the disk group as the next configuration.
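The master and slave roles can be checked with vxdctl, and a configuration change such as a
resize then flows through the master's vxconfigd as described above. A hedged sketch; the disk
group and volume names are illustrative:

    # Report whether this node is the CVM master or a slave
    vxdctl -c mode

    # Grow a volume to 20 GB; vxconfigd on the master propagates
    # the new configuration to the slaves over port w
    vxassist -g oradatadg growto oravol01 20g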
Port v
CVM uses port v for kernel-to-kernel communication. Certain configuration events require
coordinated action across all nodes. A resize operation is one example of an event that must be
synchronized: CVM must ensure that every node sees either the old size or the new size, but never
a mix of sizes among members.
CVM also uses this port to obtain cluster membership from GAB and determine the status of other
CVM members in the cluster.
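GAB port membership, including ports v and w, can be inspected with gabconfig. The output below
is only illustrative of a two-node cluster; generation numbers and the exact set of ports will
vary:

    gabconfig -a

    GAB Port Memberships
    ===============================================
    Port a gen   7e2d05 membership 01
    Port v gen   7e2d09 membership 01
    Port w gen   7e2d0b membership 01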
Cluster File System
CFS enables you to mount the same file system on multiple nodes simultaneously and is an
extension of the industry-standard Veritas File System. Unlike file systems that send data
through another node to reach the storage, CFS is a true SAN file system. All data traffic takes
place over the storage area network (SAN), and only the metadata traverses the cluster
interconnect.
In addition to using the SAN fabric for reading and writing data, CFS offers storage checkpoints
and rollback for backup and recovery.
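Storage Checkpoints are managed with the fsckptadm command. A minimal sketch; the checkpoint
name and mount point are illustrative:

    # Create a Storage Checkpoint of the file system mounted at /oradata
    fsckptadm create tue_2000 /oradata

    # List the Storage Checkpoints that exist for the file system
    fsckptadm list /oradata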
Access to cluster storage in typical SGeRAC configurations uses CFS. Raw access to CVM volumes
is also possible but is not part of a common configuration.
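A CFS mount is an ordinary VxFS mount with the cluster option, issued on every node that needs
the file system. A sketch assuming an HP-UX host (on Linux the file system type is given with -t
instead of -F); the device and mount point names are illustrative:

    # Mount the shared volume as a cluster file system on this node
    mount -F vxfs -o cluster /dev/vx/dsk/oradatadg/oravol01 /oradata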
CFS Architecture
SGeRAC uses CFS to manage a file system in a large database environment. Since CFS is an
extension of VxFS, it operates in a similar fashion and caches metadata and data in memory
(typically called buffer cache or vnode cache). CFS uses a distributed locking mechanism called
Global Lock Manager (GLM) to ensure all nodes have a consistent view of the file system. GLM
provides metadata and cache coherency across multiple nodes by coordinating access to file
system metadata, such as inodes and free lists. The role of GLM is set on a per-file-system
basis to enable load balancing.
CFS uses a primary/secondary architecture. One node in the cluster is the primary node for each
file system. Though any node can initiate an operation to create, delete, or resize data, the
GLM master node carries out the actual operation. After a file is created, the GLM master node
grants locks to maintain data coherency across nodes. For example, if a node tries to modify a
block in a file, it must first obtain an exclusive lock, which ensures that any other nodes
caching that file invalidate their cached copies.
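The primary role described above can be queried and reassigned per file system with fsclustadm,
which is how load can be balanced across nodes. The mount point name is illustrative:

    # Show which node is currently the primary for this file system
    fsclustadm -v showprimary /oradata

    # Make the local node the primary for this file system
    fsclustadm -v setprimary /oradata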