HP Serviceguard Version A.11.20 Release Notes, April 2011

Because DSF names may be duplicated from one host to another, different storage devices can
have the same name on different nodes in a cluster, and the same piece of storage can be
addressed by different names on different nodes. The Serviceguard A.11.20 September 2010
patch (PHSS_41225) and later support cluster-wide device special files (cDSFs), which ensure that
each storage device used by the cluster has a unique device file name. cDSFs are available on
HP-UX as of the September 2010 Fusion Release.
HP recommends that you use cDSFs for the storage devices in the cluster because this makes it
simpler to deploy and maintain a cluster, and removes a potential source of configuration errors.
Using cDSFs with Easy Deployment (page 16) further simplifies the configuration of storage for the
cluster and packages. See “Creating Cluster-wide Device Special Files (cDSFs)” and “Using Easy
Deployment” in chapter 5 of Managing Serviceguard for instructions.
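As a sketch of how a cDSF group might be created for a two-node cluster, using the cmsetdsfgroup command described in Managing Serviceguard (the node names node1 and node2 are examples):

```shell
# Create a cDSF group covering both cluster nodes (node names are examples).
# Run once, from a node on which Serviceguard A.11.20 and the required patch
# are installed; cDSFs are then generated under /dev/cdisk and /dev/rcdisk.
cmsetdsfgroup -n node1 -n node2
```

Normally the group should name every node in the cluster, as noted below.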
Points To Note
•   cDSFs can be created for any group of nodes that you specify, provided that Serviceguard
    A.11.20 and the required patch are installed on each node.
    Normally, the group should comprise the entire cluster.
•   cDSFs apply only to shared storage; they will not be generated for local storage, such as root,
    boot, and swap devices.
•   Once you have created cDSFs for the cluster, HP-UX automatically creates new cDSFs when
    you add shared storage.
Where cDSFs Reside
cDSFs reside in two new HP-UX directories, /dev/cdisk for cluster-wide block device files and
/dev/rcdisk for cluster-wide character device files. Persistent DSFs that are not cDSFs continue
to reside in /dev/disk and /dev/rdisk, and legacy DSFs (DSFs using the naming convention
that was standard before HP-UX 11i v3) in /dev/dsk and /dev/rdsk. A storage device on an
11i v3 system could therefore be addressed by DSFs of all three types, but if you are using
cDSFs, you should use them exclusively as far as possible.
NOTE: Software that assumes DSFs reside only in /dev/disk and /dev/rdisk will not find
cDSFs and may not work properly as a result; as of the date of this document, this was true of the
Veritas Volume Manager, VxVM.
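To see which names a given device answers to, you can compare the cDSF directories with the node-local ones; a brief sketch (device names will vary by system):

```shell
# Cluster-wide DSFs (present only after cDSFs have been created)
ls /dev/cdisk /dev/rcdisk
# Node-local persistent DSFs
ls /dev/disk /dev/rdisk
# Show the mapping between persistent and legacy DSFs for each device
ioscan -m dsf
```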
Limitations of cDSFs
•   cDSFs are supported only within a single cluster; you cannot define a cDSF group that crosses
    cluster boundaries.
•   A node can belong to only one cDSF group.
•   cDSFs are not supported by VxVM, CVM, CFS, or any other application that assumes DSFs
    reside only in /dev/disk and /dev/rdisk.
•   Oracle ASM cannot detect cDSFs created after ASM is installed.
•   cDSFs do not support disk partitions.
    Such partitions can be addressed by a device file that uses the agile addressing scheme, but
    not by a cDSF.
•   cDSFs are not supported by Ignite-UX in a Serviceguard cluster environment, and recovery of
    such a configuration is not supported. If you require support for recovery archives in a
    Serviceguard environment, do not use Ignite-UX with cDSFs.
LVM Commands and cDSFs
Some HP-UX commands have new options and behavior to support cDSFs, specifically:
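For example, an LVM volume group intended for shared cluster storage can be built directly on cDSF names; a hedged sketch, assuming a shared disk visible as disk4 and a volume group name vgcdata of your choosing:

```shell
# Prepare the shared disk as an LVM physical volume via its character cDSF
pvcreate -f /dev/rcdisk/disk4
# Create the volume group directory and group file (minor number is an example)
mkdir /dev/vgcdata
mknod /dev/vgcdata/group c 64 0x010000
# Create the volume group on the block cDSF, so every node sees the same name
vgcreate /dev/vgcdata /dev/cdisk/disk4
```

Because the cDSF name is identical on every node, the volume group can be imported on the other cluster nodes without reconciling per-node device names.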