Using Serviceguard Extension for RAC, 10th Edition, April 2013
• Extend each LV to the maximum size possible on that PV (the number of extents available
in a PV can be determined via vgdisplay -v <vgname>)
• Configure LV timeouts, based on the PV timeout and number of physical paths, as
described in the previous section. If a PV timeout has been explicitly set, its value can be
displayed via pvdisplay -v. If not, pvdisplay will show a value of default, indicating
that the timeout is determined by the underlying disk driver. For SCSI, in HP-UX 11i v3,
the default timeout is 30 seconds.
• Null out the initial part of each LV so that ASM will accept the LV as an ASM disk group
member. Note that this zeroes the beginning of the LV data area, clearing any stale ASM
metadata stored there; the LVM metadata of the volume is not touched.
# lvcreate -n lvol1 vgora_asm
# lvcreate -n lvol2 vgora_asm
# lvchange -C y /dev/vgora_asm/lvol1
# lvchange -C y /dev/vgora_asm/lvol2
# Assume vgdisplay shows each PV has 2900 extents in our example
# lvextend -l 2900 /dev/vgora_asm/lvol1 /dev/disk/disk1
# lvextend -l 2900 /dev/vgora_asm/lvol2 /dev/disk/disk2
# Assume a PV timeout of 30 seconds.
# There are 2 paths to each PV, so the LV timeout value is 60 seconds
# lvchange -t 60 /dev/vgora_asm/lvol1
# lvchange -t 60 /dev/vgora_asm/lvol2
# dd if=/dev/zero of=/dev/vgora_asm/rlvol1 bs=8192 count=12800
# dd if=/dev/zero of=/dev/vgora_asm/rlvol2 bs=8192 count=12800
3. Export the volume group across the SGeRAC cluster and mark it as shared, as specified in the
SGeRAC documentation. On each node, assign the ownership and access rights that Oracle
requires to the raw logical volumes (oracle:dba and 0660, respectively).
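As an illustration only, the export-and-share step might look as follows for the two-node example above; the map file path is an assumption, and the exact SLVM options can vary by release, so consult the SGeRAC documentation for the authoritative procedure:

```shell
# On the node where vgora_asm was created: deactivate it, then
# generate a map file with a preview export (-p leaves the VG intact).
vgchange -a n vgora_asm
vgexport -p -s -m /tmp/vgora_asm.map vgora_asm

# Copy the map file to the other node(s) and import the VG there.
vgimport -s -m /tmp/vgora_asm.map vgora_asm

# Mark the VG cluster-aware and shared, then activate it in shared mode.
vgchange -c y -S y vgora_asm
vgchange -a s vgora_asm

# On each node, give Oracle the required ownership and permissions.
chown oracle:dba /dev/vgora_asm/rlvol1 /dev/vgora_asm/rlvol2
chmod 660 /dev/vgora_asm/rlvol1 /dev/vgora_asm/rlvol2
```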
We can now use the raw logical volume device names as disk group members when configuring
ASM disk groups with the Oracle database management utilities. Oracle ASM documentation
describes several ways of doing this, including the dbca database creation wizard and
sqlplus.
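For example, a disk group could be created over the raw logical volumes from sqlplus roughly as sketched below. The disk group name DATA, the redundancy setting, and the ASM instance SID are illustrative assumptions, not values from the SGeRAC documentation; depending on the Oracle release, the connection may require SYSASM rather than SYSDBA privileges:

```shell
# Connect to the ASM instance (SID is an assumption for this example)
# and create a disk group over the two raw logical volumes prepared above.
export ORACLE_SID=+ASM1
sqlplus / as sysdba <<'EOF'
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK '/dev/vgora_asm/rlvol1',
       '/dev/vgora_asm/rlvol2';
EOF
```

Note that the ASM instance's disk discovery string (ASM_DISKSTRING) must include the raw logical volume paths for the disks to be visible to ASM.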
The same command sequence, with some modifications, can be used to add new disks to an
existing volume group that ASM is already using to store one or more RAC databases.
If the database(s) must remain up and running during the operation, we use the Single Node
Online Volume Reconfiguration (SNOR) feature of SLVM.
Step 1 of the above sequence is modified as follows:
• First, deactivate the volume group vgora_asm on all nodes but one, say node A. This requires
shutting down ASM, and the database(s) using ASM-managed storage, on all nodes but
node A beforehand. See the section to understand why it is not sufficient to shut down only
the database(s) using the volume group being reconfigured, and why ASM itself, and
therefore all database(s) using ASM-managed storage, must be shut down on all nodes but node A.
• Next, on node A, switch the volume group to exclusive mode, using SNOR.
• Initialize the disks to be added with pvcreate, and then extend the volume group with
vgextend.
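Under those assumptions, the modified step 1 might be sketched as follows for a two-node cluster; the new disk's device path is illustrative:

```shell
# On all nodes except node A (after stopping ASM and the databases
# there): deactivate the shared volume group.
vgchange -a n vgora_asm

# On node A: switch the active VG from shared to exclusive mode (SNOR).
vgchange -a e vgora_asm

# On node A: initialize the new disk and add it to the volume group.
pvcreate /dev/rdisk/disk3
vgextend vgora_asm /dev/disk/disk3
```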
Step 2 remains the same. Logical volumes are prepared for the new disks in the same way.
In step 3, switch the volume group back to shared mode, using SNOR, and export the VG across
the cluster, ensuring that the right ownership and access rights are assigned to the raw logical
volumes. Activate the volume group, and restart ASM and the database(s) using ASM-managed
storage on all nodes (they are already active on node A).
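Under the same illustrative assumptions, the modified step 3 could be sketched as:

```shell
# On node A: switch the VG back from exclusive to shared mode (SNOR).
vgchange -a s vgora_asm

# On node A: re-export the updated VG configuration to a map file,
# then re-import it and activate in shared mode on each other node.
vgexport -p -s -m /tmp/vgora_asm.map vgora_asm
vgimport -s -m /tmp/vgora_asm.map vgora_asm
vgchange -a s vgora_asm

# On each node: restore ownership and permissions on the raw LVs,
# then restart ASM and the database(s).
chown oracle:dba /dev/vgora_asm/rlvol1 /dev/vgora_asm/rlvol2
chmod 660 /dev/vgora_asm/rlvol1 /dev/vgora_asm/rlvol2
```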
ASM over Raw Disk
As mentioned above, HP-UX 11i v3 includes a new I/O infrastructure with native built-in
multipathing. This feature provides continuous I/O access to a LUN or disk if any of its
paths fails. This newly added functionality enables SGeRAC (11.17.01