Veritas Storage Foundation™ 5.0.
Legal Notices © Copyright 2005-2009 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s standard commercial license. The information contained herein is subject to change without notice.
Preface The Veritas Storage Foundation™ 5.0.1 for Oracle RAC Installation, Configuration, and Administrator's Guide Extracts for the HP Serviceguard Storage Management Suite on HP-UX 11i v3 contain relevant information extracted from Veritas™ partner documentation and supplemented with HP specific content. Publication History The latest publication date and part number indicate the current edition.
1 Introducing Serviceguard Extension for RAC

This chapter contains the following topics:
• “About Serviceguard Extension for RAC” (page 11)
• “How Serviceguard Extension for RAC Works (High-Level Perspective)” (page 12)
• “Component Products and Processes of SG SMS Serviceguard Cluster File System for RAC” (page 13)
• “Communication Infrastructure” (page 14)
• “Cluster Interconnect Communication Channel” (page 15)
• “Low-level Communication: Port Relationship Between GAB and Processes” (page 16)
• “Cluster
• Share all types of files, in addition to Oracle database files, across nodes.
• Increase availability and performance with dynamic multipathing (DMP), which provides wide storage array support for protection from failures and performance bottlenecks in the HBAs and SAN switches.
• Optimize I/O performance through storage mapping technologies and tunable attributes.
Figure 1-1 Serviceguard Extension for RAC Architecture

Component Products and Processes of SG SMS Serviceguard Cluster File System for RAC
To understand how the SG SMS Serviceguard Cluster File System for RAC product manages database instances running in parallel on multiple nodes, review the architecture and communication mechanisms that provide the infrastructure for Oracle RAC. Table 1-1 highlights the SG SMS Serviceguard Cluster File System for RAC component products.
Table 1-1 SG SMS Serviceguard Cluster File System for RAC Bundle Component Products (continued)

Component Product                   Description
Serviceguard                        Manages Oracle RAC databases and infrastructure components.
RAC Extensions (Serviceguard eRAC)  Manage cluster membership and communications between cluster nodes.

Communication Infrastructure
To understand the communication infrastructure, review the data flow and communication requirements.
Figure 1-3 Communication Stack (RAC cache fusion/lock management, ODM data file management, CFS file system metadata, CVM volume management, and SG/SGeRAC core cluster state, all carried between nodes over GAB and LLT)

Cluster Interconnect Communication Channel
The cluster interconnect provides the communication channel for all system-to-system communication, in addition to one-node communication between modules.
Figure 1-4 Cluster Communication (GAB messaging between server NICs carries cluster membership/state, datafile management, file system metadata, and volume management traffic)

Cluster Membership
At a high level, all nodes configured by the installer can operate as a cluster; these nodes form a cluster membership. In SGeRAC, a cluster membership specifically refers to all systems configured with the same cluster ID communicating by way of a redundant cluster interconnect.
in the cluster to manage all storage. All other nodes immediately recognize any changes in disk group and volume configuration with no interaction. CVM Architecture CVM is designed with a “master and slave” architecture. One node in the cluster acts as the configuration master for logical volume management, and all other nodes are slaves. Any node can take over as master if the existing master fails. The CVM master exists on a per-cluster basis and uses GAB and LLT to transport its configuration data.
Access to cluster storage in typical SGeRAC configurations uses CFS. Raw access to CVM volumes is also possible but not part of a common configuration.

CFS Architecture
SGeRAC uses CFS to manage a file system in a large database environment. Since CFS is an extension of VxFS, it operates in a similar fashion and caches metadata and data in memory (typically called buffer cache or vnode cache).
Databases using file systems typically incur additional overhead:
• Extra CPU and memory usage to read data from underlying disks to the file system cache. This scenario requires copying data from the file system cache to the Oracle cache.
• File locking that allows for only a single writer at a time. Allowing Oracle to perform locking allows for finer granularity of locking at the row level.
• File systems generally go through a standard Sync I/O library when performing I/O.
2 Planning SGeRAC Installation and Configuration

This chapter contains the following topics:
• “Installation Requirements” (page 21)
• “About CVM and CFS in an SGeRAC Environment” (page 23)
• “Overview of SGeRAC Installation and Configuration Tasks” (page 27)

Installation Requirements
Make sure each node on which you want to install or upgrade SGeRAC meets the installation requirements.
Table 2-2 Disk Space Requirements

Directory   Required depot space
/opt        1.5 GB
/usr        225 MB
/var        32 MB
/           230 MB
/tmp        512 MB
/var/tmp    600 MB
Total       3.1 GB

Supported Software
Software versions that Storage Foundation 5.0.1 supports include:

Table 2-3 Supported Software

Oracle RAC               Oracle 10g Release 2; Oracle 11g Release 1
HP-UX operating system   11i v3
VxVM, VxFS               Use only versions of VxVM and VxFS provided on the software disc.
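Before installation, the figures in Table 2-2 can be checked mechanically. The following sketch restates the table's thresholds in a script; it uses the portable `df -Pk` output (field 4 is available kilobytes) and is not part of the product:

```shell
#!/bin/sh
# Sketch: verify free space against the Table 2-2 requirements.
# check_space MOUNT REQUIRED_KB prints OK/LOW and returns 0/1.
check_space() {
    mount_point=$1
    required_kb=$2
    # df -P prints one record per filesystem; field 4 is available KB.
    avail_kb=$(df -Pk "$mount_point" | awk 'NR==2 {print $4}')
    if [ "${avail_kb:-0}" -ge "$required_kb" ]; then
        echo "OK  $mount_point ($avail_kb KB free, need $required_kb KB)"
    else
        echo "LOW $mount_point ($avail_kb KB free, need $required_kb KB)"
        return 1
    fi
}

# Requirements from Table 2-2, converted to KB.
for entry in "/opt 1572864" "/usr 230400" "/var 32768" \
             "/ 235520" "/tmp 524288" "/var/tmp 614400"; do
    set -- $entry
    [ -d "$1" ] && check_space "$1" "$2"
done
```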
Identifying the Operating Environment
To identify the OE currently installed on a system, run the swlist command. The output includes a line that identifies the installed OE. For example:

# swlist | grep HPUX | grep OE
HPUX11i-HA-OE B.11.31.0909 HP-UX High Availability Operating Environment

Required HP Patches
SGeRAC requires HP-UX depots and patches on each node before installation.
Adding nodes to the cluster can also result in the log becoming too small. In this situation, VxVM marks the log invalid and performs a full volume recovery instead of using DRL.

About CFS
Review CFS file system benefits, CFS configuration differences from VxFS, and CFS recovery operations.

CFS File System Benefits
Many features available in VxFS do not come into play in an SGeRAC environment because ODM handles such features.
Coordinating CVM and CFS Configurations
After installing SGeRAC, a VCS cluster attribute (HacliUserLevel) is set to give root the ability to run commands on remote systems by way of the cluster interconnect. CFS takes advantage of this mechanism: file system operations that must run on the primary node can be initiated from secondary nodes and are carried out on the primary node transparently.
# vxdg -g shared_disk_group set activation=sw • On the slave nodes, enter: # vxdg -g shared_disk_group set activation=sw Refer to the description of disk group activation modes in the Veritas Volume Manager Administrator's Guide for more information. Deporting and Importing Shared Disk Groups Shared disk groups in an SGeRAC environment are configured for “Autoimport” at the time of CVM startup. If the user manually deports the shared disk group on the CVM master, the disk group is deported on all nodes.
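Because the same activation command must be run on the master and every slave node, a small wrapper can reduce the chance of a node being missed. This is only a sketch: the node list and the `remsh` transport are assumptions about your environment, and the command variables are left overridable so the script can be dry-run without VxVM installed:

```shell
#!/bin/sh
# Sketch: set shared-write (sw) activation on a shared disk group
# across all cluster nodes. NODES and the remote shell are assumptions.
VXDG=${VXDG:-vxdg}      # override with VXDG=echo for a dry run
REMSH=${REMSH:-remsh}   # HP-UX remote shell; ssh also works

set_shared_activation() {
    dg=$1; shift
    # Local node first (run this on the CVM master).
    $VXDG -g "$dg" set activation=sw
    # Then every remaining (slave) node.
    for node in "$@"; do
        $REMSH "$node" "$VXDG -g $dg set activation=sw"
    done
}

# Example: set_shared_activation ora_dg nodeB nodeC
```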
Overview of SGeRAC Installation and Configuration Tasks
Phases involved in installing and configuring SF 5.0 for Oracle RAC include:
• Preparing to install and configure SF Oracle RAC
• Installing SF Oracle RAC and configuring its components
• Installing Oracle RAC and creating the Oracle RAC database
• Setting up disaster recovery in an SF Oracle RAC environment (optional)
• Setting up the backup and recovery feature for SF Oracle RAC (optional)
Figure 2-9 describes a high-level flow of the SF 5.
See “Using Storage Checkpoints and Storage Rollback” on page 339.
• Database FlashSnap: Allows you to create a point-in-time copy of an Oracle RAC database for backup and off-host processing.
See Chapter 5: “Using FlashSnap for Backup and Recovery” (page 49).
• Storage Mapping: Allows you to evaluate or troubleshoot I/O performance. You can access mapping information that allows for a detailed understanding of the storage hierarchy in which files reside.
3 Configuring the Repository Database for Oracle

This chapter contains the following topics:
• “About Configuring the Repository Database for Oracle” (page 29)
• “Creating and Configuring the Repository Database for Oracle” (page 29)

About Configuring the Repository Database for Oracle
After installing SF Oracle RAC, you can create and configure the repository database using the sfua_db_config script.
* A public NIC used by each system in the cluster.
* A Virtual IP address and netmask.
Press enter to continue.
Enter Veritas filesystem mount point for SFORA repository: /sfua_rep
Enter the NIC for system galaxy for HA Repository configuration: lan0
Enter the NIC for system nebula for HA Repository configuration: lan0
Enter the Virtual IP address for repository failover: 10.182.186.249
Enter the netmask for public NIC interface: 255.255.0.
4 Using Storage Checkpoints and Storage Rollback

This chapter contains the following topics:
• “About Storage Checkpoints and Storage Rollback in SGeRAC” (page 31)
• “Using Storage Checkpoints and Storage Rollback for Backup and Restore” (page 31)
• “Determining Space Requirements for Storage Checkpoints” (page 32)
• “Performance of Storage Checkpoints” (page 33)
• “Backing up and Recovering the Database Using Storage Checkpoints” (page 34)
• “Guidelines for Oracle Recovery” (page 36)
• “Using the Storage C
A Storage Checkpoint initially satisfies read requests by finding the data on the primary file system, using its block map copy, and returning the data to the requesting process. When a write operation changes a data block n in the primary file system, the old data is first copied to the Storage Checkpoint, and then the primary file system is updated with the new data. The Storage Checkpoint maintains the exact view of the primary file system at the time the Storage Checkpoint was taken.
allocated to store the original block content. Subsequent changes to the same block require no overhead or block allocation. If a file system that has Storage Checkpoints runs out of space, by default VxFS removes the oldest Storage Checkpoint automatically instead of returning an ENOSPC error code (UNIX errno 28- No space left on device), which can cause the Oracle instance to fail.
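The copy-on-write behavior described above, in which only the first change to a block consumes checkpoint space, can be illustrated with a toy model: "blocks" are files under a primary directory, and the checkpoint directory holds only preserved originals. This is purely illustrative shell, not how VxFS implements Storage Checkpoints:

```shell
#!/bin/sh
# Toy model of Storage Checkpoint copy-on-write (illustration only,
# not the VxFS implementation). Blocks are files in two directories.
PRIMARY=$(mktemp -d)
CKPT=$(mktemp -d)          # checkpoint starts empty: a block map, no data
printf 'AAA' > "$PRIMARY/0"
printf 'BBB' > "$PRIMARY/1"

# Writing block n: the first change copies the old content into the
# checkpoint; later changes to the same block need no extra copy.
write_block() {
    n=$1 new=$2
    [ -e "$CKPT/$n" ] || cp "$PRIMARY/$n" "$CKPT/$n"
    printf '%s' "$new" > "$PRIMARY/$n"
}

# Reading a block through the checkpoint: preserved copy if the block
# changed since the checkpoint, otherwise the primary's current block.
read_ckpt_block() {
    n=$1
    if [ -e "$CKPT/$n" ]; then cat "$CKPT/$n"; else cat "$PRIMARY/$n"; fi
}

write_block 1 XXX
write_block 1 YYY       # second write: checkpoint copy already exists
echo "primary: $(cat "$PRIMARY/1"), checkpoint view: $(read_ckpt_block 1)"
```

After the two writes, the primary holds the newest data while the checkpoint still returns the original block content, which is exactly the "exact view at the time the checkpoint was taken" property.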
Backing up and Recovering the Database Using Storage Checkpoints Storage Checkpoints can be created by specifying one of the following options: online, offline, or instant. To create a Storage Checkpoint with the online option, the database should be online and you must enable ARCHIVELOG mode for the database. For the offline option, the database should be offline. During the creation of the Storage Checkpoint, the tablespaces are placed in backup mode.
Storage Checkpoints can only be used to restore from logical errors (for example, a human error). Because all the data blocks are on the same physical device, Storage Checkpoints cannot be used to restore files due to a media failure. A media failure requires a database restore from a tape backup or a copy of the database files kept on a separate medium.
NOTE: Some database changes made after a Storage Checkpoint was taken may make it impossible to perform an incomplete recovery of the databases after Storage Rollback of an online or offline Storage Checkpoint using the current control files. For example, you cannot perform incomplete recovery of the database to the point right before the control files have recorded the addition or removal of datafiles.
redo logs were restored to an alternate location, use the ALTER DATABASE RECOVER ... FROM statement during media recovery. • After storage rollback, perform Oracle recovery, applying some or all of the archived redo logs. NOTE: After rolling back the database (including control files and redo logs) to a Storage Checkpoint, you need to recover the Oracle database instance.
NOTE: The SGeRAC command line interface depends on certain tablespace and container information that is collected and stored in a repository. Some CLI commands update the repository by default. It is also important to regularly ensure the repository is up-to-date by using the dbed_update command. NOTE: For SGeRAC database, when you issue the commands, replace $ORACLE_SID with $ORACLE_SID=instance_name and provide the instance name on which the instance is running.
Prerequisites
• You must log in as the database administrator to use the following CLI commands:
— dbed_update
— dbed_ckptcreate
— dbed_clonedb
• You can log in as the database administrator (typically, the user ID oracle) or superuser to use the following CLI commands:
— dbed_ckptdisplay
— dbed_ckptmount
— dbed_ckptumount
— dbed_ckptrollback
— dbed_ckptremove

Creating or Updating the Repository Using dbed_update
You can use the dbed_update command to create or update the repository.
Table 4-5 Create Storage Checkpoint Notes Prerequisites You must be logged on as the database administrator (typically, the user ID oracle). For best recoverability, always keep ARCHIVELOG mode enabled when you create Storage Checkpoints. Usage notes dbed_ckptcreate stores Storage Checkpoint information under the following directory: /etc/vx/vxdbed/$ORACLE_SID/checkpoint_dir See the dbed_ckptcreate(1M) manual page for more information.
Table 4-6 Display Storage Checkpoints Notes Prerequisites You may be logged in as either the database administrator or superuser. Usage Notes In addition to displaying the Storage Checkpoints created by SGeRAC, dbed_ckptdisplay also displays other Storage Checkpoints (for example, Storage Checkpoints created by the Capacity Planning Utility and NetBackup).
To display Storage Checkpoints created by HP Storage Management suite for Oracle • Use the dbed_ckptdisplay command as follows to display information for Storage Checkpoints created by SGeRAC: # /opt/VRTS/bin/dbed_ckptdisplay -S PROD -H /oracle/product/10g Checkpoint_975876659 Sun Apr 3 12:50:59 2005 P+R+IN Checkpoint_974424522_wr001 Thu May 16 17:28:42 2005 C+R+ON Checkpoint_974424522 Thu May 16 17:28:42 2004 P+R+ON To display other Storage Checkpoints • Use the dbed_ckptdisplay command as follows: # /o
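The status field in the sample listing earlier in this section (for example, P+R+IN) is a `+`-separated flag string. Saved dbed_ckptdisplay output can be filtered with an ordinary awk one-liner; the field layout (checkpoint name first, flag string last) and the flag meanings are assumptions read off the sample listing, not a documented interface:

```shell
#!/bin/sh
# Sketch: filter saved dbed_ckptdisplay output by status flag.
# Assumes records of the form: name  day mon dd hh:mm:ss yyyy  flags
ckpt_with_flag() {
    flag=$1
    # Match the flag as a whole "+"-separated token in the last field.
    awk -v f="$flag" '$NF ~ ("(^|[+])" f "([+]|$)") { print $1 }'
}

# Example: list only checkpoints carrying the IN flag from the
# listing shown earlier in this section.
ckpt_with_flag IN <<'EOF'
Checkpoint_975876659 Sun Apr 3 12:50:59 2005 P+R+IN
Checkpoint_974424522_wr001 Thu May 16 17:28:42 2005 C+R+ON
Checkpoint_974424522 Thu May 16 17:28:42 2004 P+R+ON
EOF
```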
You can use the dbed_ckptcreate command to schedule Storage Checkpoint creation in a cron job or other administrative script. Before scheduling Storage Checkpoints, the following conditions must be met: Table 4-7 Scheduling Storage Checkpoints Notes Prerequisites You must be logged on as the database administrator (typically, the user ID oracle).
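A cron entry typically wraps dbed_ckptcreate in a small script that sets the Oracle environment and logs the result. The sketch below uses the -S, -H, and -o options shown elsewhere in this chapter; the SID, ORACLE_HOME, and log path are site-specific placeholders, and the command variable is overridable so the wrapper can be exercised without VxDBA installed:

```shell
#!/bin/sh
# Sketch: nightly online Storage Checkpoint wrapper for cron.
# Example crontab entry (02:00 daily):
#   0 2 * * * /usr/local/bin/nightly_ckpt.sh
ORACLE_SID=${ORACLE_SID:-PROD}
ORACLE_HOME=${ORACLE_HOME:-/oracle/product/10g}
CKPTCREATE=${CKPTCREATE:-/opt/VRTS/bin/dbed_ckptcreate}
LOG=${LOG:-/var/adm/ckpt_nightly.log}

run_nightly_ckpt() {
    # ARCHIVELOG mode must be enabled for an online checkpoint.
    if $CKPTCREATE -S "$ORACLE_SID" -H "$ORACLE_HOME" -o online; then
        echo "$(date) checkpoint created for $ORACLE_SID" >> "$LOG"
    else
        echo "$(date) checkpoint FAILED for $ORACLE_SID" >> "$LOG"
        return 1
    fi
}
```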
Table 4-8 Mounting Storage Checkpoints Notes Prerequisites You may be logged in as either the database administrator or superuser. Usage notes The dbed_ckptmount command is used to mount a Storage Checkpoint into the file system namespace. Mounted Storage Checkpoints appear as any other file system on the machine and can be accessed using all normal file system based commands. Storage Checkpoints can be mounted as read-only or read-write. By default, Storage Checkpoints are mounted as read-only.
Table 4-10 Perform Storage Rollback Notes Prerequisites You may be logged in as either the database administrator or superuser. Usage notes The dbed_ckptrollback command rolls an Oracle database back to a specified Storage Checkpoint. You can perform a Storage Rollback for the entire database, a specific tablespace, or list of datafiles. Database rollback for the entire database requires that the database be inactive before Storage Rollback commences.
To remove Storage Checkpoints • Use the dbed_ckptremove command as follows: # /opt/VRTS/bin/dbed_ckptremove -S PROD -c Checkpoint_971672042_wr001 Cloning the Oracle Instance Using dbed_clonedb You can use the dbed_clonedb command to clone an Oracle instance using a Storage Checkpoint. Cloning an existing database using a Storage Checkpoint must be done on the same host.
To clone an Oracle instance with manual Oracle recovery
• Use the dbed_clonedb command as follows:
# /opt/VRTS/bin/dbed_clonedb -S NEW9 -m /local/oracle9/1 -c Checkpoint_988813047 -i
Primary Oracle SID is TEST10g
New Oracle SID is NEW9
Checkpoint_988813047 not mounted at /local/oracle9/1
Mounting Checkpoint_988813047 at /local/oracle9/1
Using environment-specified parameter file /local/oracle9/links/dbs/initTEST10g.ora
Default Oracle parameter file found: /local/oracle9/links/dbs/initTEST10g.
• Use the dbed_clonedb command as follows:
# /opt/VRTS/bin/dbed_clonedb -S NEW9 -m /local/oracle9/1 -c Checkpoint_988813047
Primary Oracle SID is TEST10g
New Oracle SID is NEW9
Checkpoint_988813047 not mounted at /local/oracle9/1
Mounting Checkpoint_988813047 at /local/oracle9/1
Using environment-specified parameter file /local/oracle9/links/dbs/initTEST10g.ora
Default Oracle parameter file found: /local/oracle9/links/dbs/initTEST10g.ora
Copying /local/oracle9/links/dbs/initTEST10g.
5 Using FlashSnap for Backup and Recovery

This chapter contains the following topics:
• “About Veritas Database FlashSnap” (page 49)
• “Planning to Use Database FlashSnap” (page 52)
• “Preparing Hosts and Storage for Database FlashSnap” (page 53)
• “Summary of Database Snapshot Steps” (page 60)
• “Creating a Snapplan (dbed_vmchecksnap)” (page 65)
• “Validating a Snapplan (dbed_vmchecksnap)” (page 69)
• “Displaying, Copying, and Removing a Snapplan (dbed_vmchecksnap)” (page 71)
• “Creating a Snapshot (dbed_
NOTE: For SGeRAC database, when you issue the commands, replace $ORACLE_SID with $ORACLE_SID=instance_name and provide the instance name on which the instance is running.
storage is the only aspect of using Database FlashSnap that requires the system administrator’s participation. To use Database FlashSnap, a database administrator must first define their snapshot requirements. For example, they need to determine whether off-host processing is required and, if it is, which host should be used for it. In addition, it is also important to consider how much database downtime can be tolerated. Database snapshot requirements are defined in a file called a snapplan.
NOTE: dbed_vmclonedb does not support instant snapshot for database cloning. All of these commands can be executed by the Oracle database administrator and do not require superuser (root) privileges. NOTE: Database FlashSnap operations can be executed from the command-line. Using Database FlashSnap Options Database FlashSnap offers three options for creating database snapshots. The option you choose is specified in the snapplan.
Preparing Hosts and Storage for Database FlashSnap This section describes the following: • • • “Setting Up Hosts” (page 53) “Creating a Snapshot Mirror of a Volume or Volume Set Used by the Database” (page 54) “Upgrading Existing Volumes to Use Veritas Volume Manager 5.0” (page 57) Setting Up Hosts Database FlashSnap requires sufficient Veritas Volume Manager disk space, and can be used on the same host that the database resides on (the primary host) or on a secondary host.
Figure 5-2 Example of an Off-host Database FlashSnap Solution (primary and secondary hosts on a network, with SCSI or Fibre Channel connectivity to the disks containing the primary volumes used to hold production databases and the disks containing the snapshot volumes)

Host and Storage Requirements
Before using Database FlashSnap, ensure that:
• All files are on VxFS file systems over VxVM volumes.
• Raw devices are not supported.
• Symbolic links to datafiles are not supported.
• ORACLE_HOME is on a separate file system.
Table 5-1 vxsnap Command for Snapshot Mirror Notes

Prerequisites
• You must be logged in as superuser (root).
• The disk group must be version 110 or later. For more information on disk group versions, see the vxdg(1M) online manual page.
• Be sure that a data change object (DCO) and a DCO log volume are associated with the volume for which you are creating the snapshot.

Usage Notes
4. Create a mirror of a volume:
# vxsnap -g diskgroup addmir volume_name alloc=diskname
There is no option for creating multiple mirrors at the same time; only one mirror can be created at a time.
5. List the available mirrors:
# vxprint -g diskgroup -F%name -e"pl_v_name in \"volume_name\""
The following two steps enable Database FlashSnap to locate the correct mirror plexes when creating snapshots.
6. Set the dbed_flashsnap tag for the data plex you want to use for breaking off the mirror.
v  data_vol          fsgen           ENABLED   4194304  -  ACTIVE    -
pl data_vol-01       data_vol        ENABLED   4194304  -  ACTIVE    -
sd PRODdg03-01       data_vol-01     ENABLED   4194304  0  -         -
pl data_vol-02       data_vol        ENABLED   4194304  -  SNAPDONE  -
sd PRODdg02-01       data_vol-02     ENABLED   4194304  0  -         -
dc data_vol_dco      data_vol        -         -        -  -         -
v  data_vol_dcl      gen             ENABLED   560      -  ACTIVE    -
pl data_vol_dcl-01   data_vol_dcl    ENABLED   560      -  ACTIVE    -
sd PRODdg01-01       data_vol_dcl-01 ENABLED   560      0  -         -
pl data_vol_dcl-02   data_vol_dcl    DISABLED  560      -  DCOSNP    -
sd PRODdg02-02       data_vol_dcl-02 ENABLED   560      0  -         -
5. Use the following command to dissociate a DCO object from an earlier version of VxVM, the DCO volume, and snap objects from the volume:
# vxassist [-g diskgroup] remove log volume logtype=dco
6. Use the following command on the volume to upgrade it:
# vxsnap [-g diskgroup] prepare volume alloc="disk_name1,disk_name2"
Provide two disk names to avoid overlapping the storage of the snapshot DCO plex with any other non-moving data or DCO plexes.
TY  NAME      ASSOC   KSTATE  LENGTH    PLOFFS  STATE  TUTIL0
dg  PRODdg    PRODdg  -       -         -       -      -
dm  PRODdg01  Disk_1  -       71117760  -       -      -
dm  PRODdg02  Disk_2  -       71117760  -       -      -
dm  PRODdg03  Disk_3  -       71117760  -       -      -

v   data_vol          fsgen         ENABLED  4194304
pl  data_vol-01       data_vol      ENABLED  4194304
sd  PRODdg01-01       data_vol-01   ENABLED  4194304  0
pl  data_vol-04       data_vol      ENABLED  4194304
sd  PRODdg02-03       data_vol-04   ENABLED  4194304  0
dc  data_vol_dco      data_vol
v   data_vol_dcl      gen           ENABLED  560
pl  data_vol_dcl-01   data_vol_dcl  ENABLED  560
sd  PRODdg01-02       data_vol_dcl-01 ENABLED 560     0
TY  NAME      ASSOC   KSTATE  LENGTH    PLOFFS
dg  PRODdg    PRODdg  -       -         -
dm  PRODdg01  Disk_1  -       71117760  -
dm  PRODdg02  Disk_2  -       71117760  -
dm  PRODdg03  Disk_3  -       71117760  -

pl  data_vol-03                     DISABLED  4194304
sd  PRODdg02-01       data_vol-03   ENABLED   4194304
v   data_vol          fsgen         ENABLED   4194304
pl  data_vol-01       data_vol      ENABLED   4194304
sd  PRODdg01-01       data_vol-01   ENABLED   4194304
pl  data_vol-04       data_vol      ENABLED   4194304
sd  PRODdg02-03       data_vol-04   ENABLED   4194304
dc  data_vol_dco      data_vol
v   data_vol_dcl      gen           ENABLED   560
pl  data_vol_dcl-01   data_vol_dcl  ENAB
taken out of backup mode, the log files are switched to ensure that the extra redo logs are archived, and a snapshot of the archive logs is created. If the SNAPSHOT_MODE is set to offline, the database must be shut down before the snapshot is created. Online redo logs and control files are required and will be used to ensure a full database recovery. If the SNAPSHOT_MODE is set to instant, tablespaces are not put into and out of backup mode.
the -o mountdb option to perform your own point-in-time recovery and bring up the database manually. For a point-in-time recovery, the snapshot mode must be online. You can also create a clone on the primary host. Your snapplan settings specify whether a clone should be created on the primary or secondary host.
5. You can now use the clone database to perform database backup and other off-host processing work.
Figure 5-3 Prerequisites for Creating a Snapshot of Your Database There are many actions you can take after creating a snapshot of your database using Database FlashSnap. You can create a clone of the database for backup and off-host processing purposes. You can resynchronize the snapshot volumes with the primary database. In the event of primary database failure, you can recover it by reverse resynchronizing the snapshot volumes.
Figure 5-4 is a flow chart depicting the actions that you can perform after creating a snapshot of your database using Database FlashSnap.
Creating a Snapplan (dbed_vmchecksnap) The dbed_vmchecksnap command creates a snapplan that dbed_vmsnap uses to create a snapshot of an Oracle database. The snapplan specifies snapshot scenarios (such as online, offline, or instant). You can name a snapplan file whatever you choose. Each entry in the snapplan file is a line in parameter=argument format.
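Because each snapplan entry is an ordinary parameter=argument line, a starting file can be generated and sanity-checked with plain shell before it is handed to dbed_vmchecksnap. In this sketch, the key names follow the parameters discussed in this chapter, but the exact set your release requires is an assumption; always run dbed_vmchecksnap -o validate afterwards:

```shell
#!/bin/sh
# Sketch: write a minimal snapplan and check it for required keys.
# The key list and example values are assumptions for illustration.
write_snapplan() {
    cat > "$1" <<'EOF'
SNAPSHOT_VERSION=5.0
PRIMARY_HOST=host1
SECONDARY_HOST=host1
PRIMARY_DG=PRODdg
SNAPSHOT_DG=SNAP_PRODdg
ORACLE_SID=PROD
ARCHIVELOG_DEST=/prod_ar
SNAPSHOT_ARCHIVE_LOG=yes
SNAPSHOT_MODE=online
SNAPSHOT_PLAN_FOR=database
SNAPSHOT_PLEX_TAG=dbed_flashsnap
SNAPSHOT_VOL_PREFIX=SNAP_
ALLOW_REVERSE_RESYNC=no
SNAPSHOT_MIRROR=1
EOF
}

# Fail (return 1) if any of a few essential keys is absent.
check_snapplan() {
    plan=$1; missing=0
    for key in PRIMARY_HOST ORACLE_SID SNAPSHOT_MODE; do
        grep -q "^$key=" "$plan" || { echo "missing: $key"; missing=1; }
    done
    return $missing
}

# Usage: write_snapplan snap1 && check_snapplan snap1
```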
Table 5-3 Parameter Values for dbed_vmchecksnap (continued) Parameter Value SNAPSHOT_PLEX_TAG Specifies the snapshot plex tag. Use this variable to specify a tag for the plexes to be snapshot. The maximum length of the plex_tag is 15 characters. The default plex tag is dbed_flashsnap. SNAPSHOT_VOL_PREFIX Specifies the snapshot volume prefix. Use this variable to specify a prefix for the snapshot volumes split from the primary disk group. A volume name cannot be more than 32 characters.
1. Change directories to the working directory you want to store your snapplan in. # cd /working_directory 2. Create a snapplan with default values using the dbed_vmchecksnap command: # /opt/VRTS/bin/dbed_vmchecksnap -S ORACLE_SID -H ORACLE_HOME -f SNAPPLAN -o setdefaults -t host_name [-p PLEX_TAG] 3. Open the snapplan file in a text editor and modify it as needed. • In this example, a snapplan, snap1, is created for a snapshot image in a single-host configuration and default values are set.
By default, a snapplan’s SNAPSHOT_PLEX_TAG value is set as dbed_flashsnap. You can use the -p option to assign a different tag name. Make use of the -p option when creating the snapplan with the setdefaults option. • In the following example, the -p option is used with setdefaults to assign my_tag as the SNAPSHOT_PLEX_TAG value.
ORACLE_SID=PROD
ARCHIVELOG_DEST=/prod_ar
SNAPSHOT_ARCHIVE_LOG=yes
SNAPSHOT_MODE=online
SNAPSHOT_PLAN_FOR=database
SNAPSHOT_PLEX_TAG=dbed_flashsnap
SNAPSHOT_VOL_PREFIX=SNAP_
ALLOW_REVERSE_RESYNC=no
SNAPSHOT_MIRROR=2

Establishing a mandatory archive destination
When cloning a database using Database FlashSnap, the Oracle database must have at least one mandatory archive destination.
NOTE: In an HA environment, you must modify the default snapplan, use the virtual host name defined for the resource group for the PRIMARY_HOST and/or SECONDARY_HOST, and run validation.
• In the following example, a snapplan, snap1, is validated for a snapshot image in a single-host configuration. The primary host is host1 and the working directory is /export/snap_dir.
Displaying, Copying, and Removing a Snapplan (dbed_vmchecksnap) Consider these notes before listing all snapplans for a specific Oracle database, displaying a snapplan file, or copying and removing snapplans. Table 5-6 Check Snapplan Notes Usage Notes • If the local snapplan is updated or modified, you must revalidate it. • If the database schema or disk group is modified, you must revalidate it after running dbed_update.
STATUS_INFO
SNAP_STATUS=init_full
DB_STATUS=init

Copying a Snapplan
If you want to create a snapplan similar to an existing snapplan, you can simply create a copy of the existing snapplan and modify it. To copy a snapplan from the VxDBA repository to your current directory, the snapplan must not already be present in the current directory.
NOTE: You cannot access Database FlashSnap commands (dbed_vmchecksnap, dbed_vmsnap, and dbed_vmclonedb) with the VxDBA menu utility. Table 5-7 Create Snapshot Notes Prerequisites • You must be logged in as the Oracle database administrator. • You must create and validate a snapplan using dbed_vmchecksnap before you can create a snapshot image with dbed_vmsnap. Usage Notes • The dbed_vmsnap command can only be used on the primary host.
If -r is used in dbed_vmclonedb, make sure is created and owned by Oracle DBA. Otherwise, the following mount points need to be created and owned by Oracle DBA: /prod_db. /prod_ar. dbed_vmsnap ended at 2006-03-02 14:16:11 • In this example, a snapshot image of the primary database, PROD, is created for a two-host configuration. In this case, the SECONDARY_HOST parameter specifies a different host name than the PRIMARY_HOST parameter in the snapplan.
Figure 5-6 shows a typical configuration when snapshot volumes are used on a secondary host. Figure 5-6 Example System Configuration for Database Backup on a Secondary Host Table 5-8 Backup Snapshot Notes Prerequisites • You must be logged in as the Oracle database administrator to use dbed_vmclonedb command. • Before you can use the dbed_vmclonedb command, you must validate a snapplan and create a snapshot. • The volume snapshot must contain the entire database.
Mounting /clone/single/prod_ar on /dev/vx/dsk/SNAP_PRODdg/SNAP_prod_ar.
dbed_vmclonedb ended at 2006-03-02 15:35:50

To mount a Storage Checkpoint carried over from the snapshot volumes to a secondary host
1. On the secondary host, list the Storage Checkpoints carried over from the primary database using:
# /opt/VRTS/bin/dbed_ckptdisplay -S ORACLE_SID -n
2.
systems. When creating or unmounting the clone database in a single-host configuration, -r relocate_path is required so that the clone database’s file systems use different mount points than those used by the primary database. When used in a two-host configuration, the dbed_vmclonedb command imports the snapshot disk group SNAP_dg, mounts the file systems on the snapshot volumes, and starts a clone database.
dbed_vmclonedb started at 2006-03-02 15:34:41
Mounting /clone/prod_db on /dev/vx/dsk/SNAP_PRODdg/SNAP_prod_db.
Mounting /clone/prod_ar on /dev/vx/dsk/SNAP_PRODdg/SNAP_prod_ar.
All redo-log files found.
Altering instance_name parameter in initabc.ora.
Altering instance_number parameter in initabc.ora.
Altering thread parameter in initabc.ora.
Starting automatic database recovery.
Database NEWPROD (SID=NEWPROD) is in recovery mode.
To clone the database automatically
• Use the dbed_vmclonedb command as follows:
# /opt/VRTS/bin/dbed_vmclonedb -S ORACLE_SID -g snap_dg -o recoverdb,new_sid=new_sid[,vxdbavol=vol_name] -f SNAPPLAN [-H ORACLE_HOME] [-r relocate_path]
Where:

Table 5-10 dbed_vmclonedb command options
ORACLE_SID Represents the name of the Oracle database used to create the snapshot.
snap_dg Represents the name of the disk group that contains all the snapshot volumes.
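As an illustration of the syntax above, the invocation below reuses the names that appear in the earlier example output (PROD, SNAP_PRODdg, NEWPROD); the snapplan name snap1 and relocate path /clone are assumptions for illustration, not values from a real snapplan. It is shown as a dry run (echo) because the command requires the Veritas SFDB tools to be installed:

```shell
# Dry-run sketch: print the dbed_vmclonedb invocation that would clone
# PROD as NEWPROD from disk group SNAP_PRODdg. The snapplan name "snap1"
# and relocate path /clone are illustrative assumptions.
echo /opt/VRTS/bin/dbed_vmclonedb -S PROD -g SNAP_PRODdg \
    -o recoverdb,new_sid=NEWPROD -f snap1 -r /clone
```

On a host with the SFDB tools installed, remove the echo to run the command as the Oracle database administrator.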
Shutting Down the Clone Database and Unmounting File Systems

When you are done using the clone database, you can shut it down and unmount all snapshot file systems with the dbed_vmclonedb -o umount command. If the clone database is used on a secondary host that has shared disks with the primary host, the -o umount option also deports the snapshot disk group.

NOTE: Any mounted Storage Checkpoints need to be unmounted before running the dbed_vmclonedb -o umount command.
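A hedged sketch of the shutdown step follows; the exact option string is assumed to mirror the clone-creation syntax (umount plus the clone's new_sid), and the names PROD, NEWPROD, snap1, and /clone are illustrative. It is shown as a dry run (echo) because the command requires the SFDB tools:

```shell
# Dry-run sketch: print the invocation that would shut down the NEWPROD
# clone and unmount its snapshot file systems. Option syntax and names
# are assumptions for illustration.
echo /opt/VRTS/bin/dbed_vmclonedb -S PROD \
    -o umount,new_sid=NEWPROD -f snap1 -r /clone
```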
Recreating Oracle Tempfiles After a clone database is created and opened, the tempfiles are added if they were residing on the snapshot volumes. If the tempfiles were not residing on the same file systems as the datafiles, dbed_vmsnap does not include the underlying volumes in the snapshot. In this situation, dbed_vmclonedb issues a warning message and you can then recreate any needed tempfiles on the clone database as described in the following procedure. To recreate the Oracle tempfiles: 1.
Table 5-11 Resynchronize Snapshot Notes

Prerequisites
• You must be logged in as the Oracle database administrator.
• Before you can resynchronize the snapshot image, you must validate a snapplan and create a snapshot.
• If a clone database has been created, shut it down and unmount the file systems using the dbed_vmclonedb -o umount command. This command also deports the disk group if the primary and secondary hosts are different.
• The Oracle database must have at least one mandatory archive destination.
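Once the prerequisites above are met, resynchronization is driven by dbed_vmsnap against the same snapplan. The sketch below assumes the -o resync keyword and reuses the illustrative names PROD and snap1; it is shown as a dry run (echo) because the command requires the SFDB tools:

```shell
# Dry-run sketch: print the dbed_vmsnap invocation that would
# resynchronize the snapshot image with the primary database.
# The -o resync keyword and the names PROD/snap1 are assumptions.
echo /opt/VRTS/bin/dbed_vmsnap -S PROD -f snap1 -o resync
```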
For example, the following commands will remove a snapshot volume from disk group PRODdg:
# vxsnap -g PRODdg dis snap_v1
# vxvol -g PRODdg stop snap_v1
# vxedit -g PRODdg -rf rm snap_v1
6 Investigating I/O Performance for SGeRAC: Storage Mapping

This chapter contains the following topics:
• “About Storage Mapping in SGeRAC” (page 85)
• “Understanding Storage Mapping” (page 85)
• “Verifying Veritas Storage Mapping Setup” (page 86)
• “Using vxstorage_stats” (page 86)
• “Displaying I/O Statistics Information” (page 87)
• “Using dbed_analyzer” (page 88)
• “Oracle File Mapping (ORAMAP)” (page 89)

About Storage Mapping in SGeRAC

The storage mapping feature available with SGeRAC enables you to m
NOTE: For an SGeRAC database, when you issue the commands, replace $ORACLE_SID with the name of the instance (instance_name) running on that node. You can also use the Oracle Enterprise Manager GUI to display storage mapping information after file mapping has occurred; however, Oracle Enterprise Manager does not display I/O statistics information.
Displaying Storage Mapping Information

To display storage mapping information
• Use the vxstorage_stats command with the -m option to display storage mapping information:
# /opt/VRTSdbed/bin/vxstorage_stats -m -f file_name
For example:
# /opt/VRTSdbed/bin/vxstorage_stats -m -f /oradata/system01.dbf
Output similar to the following is displayed:
TY NAME NSUB DESCRIPTION
fi /oradata/system01.
• Use the vxstorage_stats command with the -i interval and -c count options. The -i interval option specifies the interval frequency for displaying updated I/O statistics and the -c count option specifies the number of times to display statistics:
# /opt/VRTSdbed/bin/vxstorage_stats [-m] [-s] [-i interval -c count] -f file_name
For example, type the following command to display statistics two times with a time interval of two seconds:
# /opt/VRTSdbed/bin/vxstorage_stats -s -i2 -c2 -f /data/system01.
Obtaining Storage Mapping Information for a List of Tablespaces

To obtain storage mapping information sorted by tablespace
• Use the dbed_analyzer command with the -f filename and -o sort=tbs options:
# /opt/VRTSdbed/bin/dbed_analyzer -S $ORACLE_SID -H $ORACLE_HOME -o sort=tbs -f filename
For example:
# /opt/VRTSdbed/bin/dbed_analyzer -S PROD -H /usr1/oracle -o sort=tbs -f /tmp/tbsfile
Output similar to the following is displayed in the file tbsfile:
TBSNAME SYSTEM TEMP TEMP SYSAUX ITEM ITM_IDX PRODID_IDX
These two libraries provide a mapping interface to Oracle 10g release 2 or a later release. They serve as a bridge between Oracle's set of storage APIs (ORAMAP) and Veritas Federated Mapping Service (VxMS), a library that assists in the development of distributed SAN applications that must share information about the physical location of files and volumes on a disk.
Storage Mapping Views

The mapping information that is captured is presented in Oracle's dynamic performance views. Table 6-4 provides brief descriptions of these views. For more detailed information, refer to your Oracle documentation.

Table 6-4 Storage Mapping Views
V$MAP_LIBRARY: Contains a list of all the mapping libraries that have been dynamically loaded by the external process.
V$MAP_FILE: Contains a list of all the file mapping structures in the shared memory of the instance.
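A minimal query against one of these views can be sketched as follows; the column names file_name and file_map_idx are assumptions based on the join used later in this chapter, not verified output from a live instance. The statement is printed here rather than executed, since it needs a running Oracle instance:

```shell
# Sketch: print a minimal query against V$MAP_FILE. Column names are
# assumed (file_name, file_map_idx); feed the statement to SQL*Plus
# while connected as a privileged user to run it for real.
cat <<'EOF'
SELECT file_name, file_map_idx
  FROM v$map_file;
EOF
```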
lib=VERITAS:/opt/VRTSdbed/lib/libvxoramap_64.sl
2. After verifying that the system is using the Veritas library for Oracle storage mapping, set the file_mapping initialization parameter to true:
SQL> alter system set file_mapping=true;
The file_mapping initialization parameter is set to false by default. You do not need to shut down the instance to set this parameter. Setting file_mapping=true starts the FMON background process.
15 where fv.file_map_idx = fe.file_map_idx)
16 connect by prior sb.child_idx = sb.parent_idx;

Using Oracle Enterprise Manager

Oracle Enterprise Manager is a web-based GUI for managing Oracle databases. You can use this GUI to perform a variety of administrative tasks such as creating tablespaces, tables, and indexes; managing user security; and backing up and recovering your database. You can also use Oracle Enterprise Manager to view performance and status information about your database instance.
mapping services and performance statistics for supported storage arrays, you must install both VAIL and Veritas Mapping Services (VxMS). You will need to install required third-party array CLIs and APIs on the host where you are going to install VAIL. If you install any required CLI or API after you install VAIL, rescan the arrays so that SGeRAC can discover them. For details on supported array models, see the Veritas Array Integration Layer Array Configuration Guide.
A Troubleshooting SGeRAC

This appendix contains the following topics:
• “About Troubleshooting SGeRAC” (page 95)
• “Troubleshooting Tips” (page 95)
• “Troubleshooting Oracle” (page 95)
• “Troubleshooting SGeRAC Checkpoint” (page 96)

About Troubleshooting SGeRAC

Review the troubleshooting options, known problems, and their solutions.

Running Script for Engineering Support Analysis

You can use a script to gather information about the configuration and status of your cluster and its various modules.
This file contains the logs pertaining to the CRS resources, such as the virtual IP, Listener, and database instances. Errors reported here typically indicate configuration errors or Oracle problems, because CRS does not directly interact with any of the Veritas components.
• To check for core dumps
— For Oracle 10g R1: $ORA_CRS_HOME/crs/init
— For Oracle 10g R2: $ORA_CRS_HOME/log//crsd/
Core dumps for the crsd.bin daemon are written here. Use these locations for further debugging.
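For the 10g R2 layout, the directory between log/ and crsd/ is the node's hostname. A hedged sketch for listing the most recent files there (hostname component filled in with $(hostname), which is an assumption about your directory layout):

```shell
# Sketch: list the newest files in the 10g R2 crsd log directory to spot
# core dumps. Assumes ORA_CRS_HOME is set and that the path embeds the
# node's hostname; the command prints nothing if the directory is absent.
ls -lt "${ORA_CRS_HOME}/log/$(hostname)/crsd/" 2>/dev/null | head
```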