
vgcreate -g bus0 /dev/vgdatabase /dev/dsk/c1t2d0
vgextend -g bus1 /dev/vgdatabase /dev/dsk/c0t2d0
CAUTION: Volume groups used by Serviceguard must have names no longer
than 35 characters (that is, the name that follows /dev/, in this example
vgdatabase, must be at most 35 characters long).
The first command creates the volume group and adds a physical volume to it in
a physical volume group called bus0. The second command adds the second drive
to the volume group, locating it in a different physical volume group named bus1.
Using physical volume groups enables PVG-strict mirroring of disks; the group
definitions are recorded in /etc/lvmpvg, as shown in the sketch after these steps.
4. Repeat this procedure for additional volume groups.
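The physical volume group definitions created with the -g option are recorded in
the file /etc/lvmpvg. For the example above, its entry would look roughly like
this (a sketch; the device file names depend on your hardware):
VG /dev/vgdatabase
PVG bus0
/dev/dsk/c1t2d0
PVG bus1
/dev/dsk/c0t2d0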
Creating Logical Volumes
Use the following command to create logical volumes (the example is for
/dev/vgdatabase):
lvcreate -L 120 -m 1 -s g /dev/vgdatabase
This command creates a 120 MB mirrored volume named lvol1. The name is supplied
by default, since no name is specified in the command. The -s g option means that
mirroring is PVG-strict, that is, the mirror copies of data will be in different physical
volume groups.
NOTE: If you are using disk arrays in RAID 1 or RAID 5 mode, omit the -m 1 and
-s g options.
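By default, logical volumes are named lvolN in the order created. If you prefer
an explicit name, lvcreate accepts the -n option; in the following sketch,
lvdata is a hypothetical name and the other options are the same as above:
lvcreate -L 120 -m 1 -s g -n lvdata /dev/vgdatabase
This creates the block and raw device files /dev/vgdatabase/lvdata and
/dev/vgdatabase/rlvdata.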
Creating File Systems
If your installation uses file systems, create them next. Use the following
commands to create a file system for mounting on the logical volume just created:
1. Create the filesystem on the newly created logical volume:
newfs -F vxfs /dev/vgdatabase/rlvol1
Note the use of the raw device file for the logical volume.
2. Create a directory to mount the disk:
mkdir /mnt1
3. Mount the disk to verify your work:
mount /dev/vgdatabase/lvol1 /mnt1
Note that the mount command uses the block device file for the logical volume.
4. Verify the configuration:
vgdisplay -v /dev/vgdatabase
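The output should list the volume group as available, together with the logical
volume just created and both physical volumes. Abridged, it looks roughly like
the following (a sketch; field values and spacing will differ on your system):
--- Volume groups ---
VG Name                     /dev/vgdatabase
VG Status                   available
...
   --- Logical volumes ---
   LV Name                  /dev/vgdatabase/lvol1
   LV Status                available/syncd
   LV Size (Mbytes)         120
...
   --- Physical volumes ---
   PV Name                  /dev/dsk/c1t2d0
   PV Status                available
...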
Distributing Volume Groups to Other Nodes
After creating volume groups for cluster data, you must make them available to
any cluster node that will need to activate the volume group. The cluster lock
volume group must be made available to all nodes.
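For example, to make vgdatabase available on a second node (a sketch; ftsys10
is a hypothetical node name, /tmp/vgdatabase.map is an arbitrary map file path,
and hh is a hexadecimal minor number you must choose to be unique on that node):
First, on the node where the volume group was created, unmount the file system,
deactivate the volume group, and write a map file. The -p option previews the
export without actually removing the volume group, and -s records the volume
group ID so other nodes can find the disks by scanning:
umount /mnt1
vgchange -a n /dev/vgdatabase
vgexport -p -s -m /tmp/vgdatabase.map /dev/vgdatabase
Then copy the map file to ftsys10 and, on that node, create the volume group
directory and group file and import the volume group:
mkdir /dev/vgdatabase
mknod /dev/vgdatabase/group c 64 0xhh0000
vgimport -s -m /tmp/vgdatabase.map /dev/vgdatabase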