Using Serviceguard Extension for RAC Manufacturing Part Number : T1859-90038 May 2006 Update © Copyright 2006 Hewlett-Packard Development Company, L.P. All rights reserved.
Legal Notices © Copyright 2003-2006 Hewlett-Packard Development Company, L.P. Publication Dates: June 2003, June 2004, February 2005, December 2005, March 2006, May 2006 Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
Contents
1. Introduction to Serviceguard Extension for RAC
What is a Serviceguard Extension for RAC Cluster?
Group Membership
Using Packages in a Cluster
Serviceguard Extension for RAC Architecture
...
Storage Planning with CFS
Volume Planning with CVM
Installing Serviceguard Extension for RAC
Configuration File Parameters
Creating a Storage Infrastructure with LVM
...
Verify that Oracle Disk Manager is Running
Configuring Oracle to Stop Using Oracle Disk Manager Library
Using Serviceguard Packages to Synchronize with Oracle 10g RAC
Preparing Oracle Cluster Software for Serviceguard Packages
Configure Serviceguard Packages
...
Create Database with Oracle Tools
Verify that Oracle Disk Manager is Configured
Configure Oracle to use Oracle Disk Manager Library
Verify Oracle Disk Manager is Running
Configuring Oracle to Stop using Oracle Disk Manager Library
...
Replacing a Lock Disk
On-line Hardware Maintenance with In-line SCSI Terminator
Replacement of I/O Cards
Replacement of LAN Cards
Off-Line Replacement
...
Printing History
Table 1 Document Edition and Printing Date

Printing Date   Part Number   Edition                               Distribution
June 2003       T1859-90006   First Edition                         Print, CD-ROM (Instant Information), and Web (http://www.docs.hp.com/)
June 2004       T1859-90017   Second Edition                        Print, CD-ROM (Instant Information), and Web (http://www.docs.hp.com/)
February 2005   T1859-90017   Second Edition, February 2005 Update  Web (http://www.docs.hp.com/)
October 2005    T1859-90033   Third Edition                         Print, CD-ROM (Instant Information), and Web (http://www.docs.hp.com/)
The printing date changes when a new edition is printed. (Minor corrections and updates which are incorporated at reprint do not cause the date to change.) The part number is revised when extensive technical changes are incorporated. New editions of this manual will incorporate all material updated since the previous edition. To ensure that you receive the new editions, you should subscribe to the appropriate product support service. See your HP sales representative for details.
Preface The May 2006 update includes a new appendix on software upgrade procedures for SGeRAC clusters. This guide describes how to use Serviceguard Extension for RAC (Oracle Real Application Cluster) to configure Serviceguard clusters for use with Oracle Real Application Cluster software on HP high availability clusters running the HP-UX operating system.
• Using High Availability Monitors (B5736-90046)
• Using the Event Monitoring Service (B7612-90015)
• Using Advanced Tape Services (B3936-90032)
• Designing Disaster Tolerant High Availability Clusters (B7660-90017)
• Managing Serviceguard Extension for SAP (T2803-90002)
• Managing Systems and Workgroups (5990-8172)
• Managing Serviceguard NFS (B5140-90017)
• HP Auto Port Aggregation Release Notes
Before attempting to use VxVM storage with Serviceguard, please refer to the VERITAS Volume Manager documentation.
Conventions
We use the following typographical conventions.
audit (5)    An HP-UX manpage. audit is the name and 5 is the section in the HP-UX Reference. On the web and on the Instant Information CD, it may be a hot link to the manpage itself. From the HP-UX command line, you can enter "man audit" or "man 5 audit" to view the manpage. See man (1).
Book Title    The title of a book. On the web and on the Instant Information CD, it may be a hot link to the book itself.
KeyCap    The name of a keyboard key.
Introduction to Serviceguard Extension for RAC 1 Introduction to Serviceguard Extension for RAC Serviceguard Extension for RAC (SGeRAC) enables the Oracle Real Application Cluster (RAC), formerly known as Oracle Parallel Server RDBMS, to run on HP high availability clusters under the HP-UX operating system. This chapter introduces Serviceguard Extension for RAC and shows where to find different kinds of information in this book.
Introduction to Serviceguard Extension for RAC
What is a Serviceguard Extension for RAC Cluster?
A high availability cluster is a grouping of HP servers with sufficient redundancy of software and hardware components that a single point of failure will not disrupt the availability of computer services. High availability clusters configured with Oracle Real Application Cluster software are known as RAC clusters.
Introduction to Serviceguard Extension for RAC What is a Serviceguard Extension for RAC Cluster? RAC on HP-UX lets you maintain a single database image that is accessed by the HP servers in parallel, thereby gaining added processing power without the need to administer separate databases. Further, when properly configured, Serviceguard Extension for RAC provides a highly available database that continues to operate even if one hardware component should fail.
Introduction to Serviceguard Extension for RAC
What is a Serviceguard Extension for RAC Cluster?
Figure 1-2 Group Membership Services
Using Packages in a Cluster
In order to make other important applications highly available (in addition to the Oracle Real Application Cluster), you can configure your RAC cluster to use packages.
Introduction to Serviceguard Extension for RAC
Serviceguard Extension for RAC Architecture
This section discusses the main software components used by Serviceguard Extension for RAC in some detail.
Introduction to Serviceguard Extension for RAC
Overview of SGeRAC and Cluster File System (CFS)/Cluster Volume Manager (CVM)
SGeRAC supports Cluster File System (CFS) through Serviceguard. For more detailed information on CFS support, refer to the Managing Serviceguard Twelfth Edition user's guide.
Introduction to Serviceguard Extension for RAC Overview of SGeRAC and Cluster File System (CFS)/Cluster Volume Manager (CVM) Oracle RAC data files can be created on a CFS, allowing the database administrator or Oracle software to create additional data files without the need for root system administrator privileges. The archive area can now be on a CFS. Oracle instances on any cluster node can access the archive area when database recovery requires the archive logs.
Introduction to Serviceguard Extension for RAC
Overview of SGeRAC and Oracle 10g RAC
Starting with Oracle 10g RAC, Oracle has bundled its own cluster software. The initial release is called Oracle Cluster Ready Service (CRS). CRS is used both as a generic term referring to the Oracle cluster software and as a specific term referring to a component within the Oracle cluster software. In subsequent releases, the generic CRS is renamed Oracle Clusterware.
Introduction to Serviceguard Extension for RAC Overview of SGeRAC and Oracle 9i RAC Overview of SGeRAC and Oracle 9i RAC How Serviceguard Works with Oracle 9i RAC Serviceguard provides the cluster framework for Oracle, a relational database product in which multiple database instances run on different cluster nodes. A central component of Real Application Clusters is the distributed lock manager (DLM), which provides parallel cache management for database instances.
Introduction to Serviceguard Extension for RAC
Configuring Packages for Oracle RAC Instances
Oracle instances can be configured as packages with a single node in their node list.
NOTE: Packages that start and halt Oracle instances (called instance packages) do not fail over from one node to another; they are single-node packages. You should include only one NODE_NAME in the package ASCII configuration file.
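For illustration, a minimal instance package configuration might contain entries like the following sketch (the package name and script paths are hypothetical):
PACKAGE_NAME ops1_pkg
NODE_NAME node1
RUN_SCRIPT /etc/cmcluster/ops1/ops1.cntl
HALT_SCRIPT /etc/cmcluster/ops1/ops1.cntl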
Introduction to Serviceguard Extension for RAC Configuring Packages for Oracle Listeners Configuring Packages for Oracle Listeners Oracle listeners can be configured as packages within the cluster (called listener packages). Each node with a RAC instance can be configured with a listener package. Listener packages are configured to automatically fail over from the original node to an adoptive node. When the original node is restored, the listener package automatically fails back to the original node.
Introduction to Serviceguard Extension for RAC Node Failure Node Failure RAC cluster configuration is designed so that in the event of a node failure, another node with a separate instance of Oracle can continue processing transactions. Figure 1-3 shows a typical cluster with instances running on both nodes. Figure 1-3 Before Node Failure Figure 1-4 shows the condition where Node 1 has failed and Package 1 has been transferred to Node 2.
Introduction to Serviceguard Extension for RAC Node Failure
Package 1 remains available and is now running on Node 2. Also note that Node 2 can now access both Package 1's disk and Package 2's disk. Oracle instance 2 now handles all database access, since instance 1 has gone down.
Figure 1-4 After Node Failure
In the above figure, pkg1 and pkg2 are not instance packages. They are shown to illustrate the movement of packages in general.
Introduction to Serviceguard Extension for RAC Larger Clusters
Larger Clusters
Serviceguard Extension for RAC supports clusters of up to 16 nodes. The actual cluster size is limited by the type of storage and the type of volume manager used.
Up to Four Nodes with SCSI Storage
You can configure up to four nodes using a shared F/W SCSI bus; for more than four nodes, FibreChannel must be used. An example of a four-node RAC cluster appears in the following figure.
Introduction to Serviceguard Extension for RAC Larger Clusters
Figure 1-5 Four-Node RAC Cluster
Introduction to Serviceguard Extension for RAC Larger Clusters In this type of configuration, each node runs a separate instance of RAC and may run one or more high availability packages as well. The figure shows a dual Ethernet configuration with all four nodes connected to a disk array (the details of the connections depend on the type of disk array). In addition, each node has a mirrored root disk (R and R').
Introduction to Serviceguard Extension for RAC Larger Clusters
Figure 1-6 Eight-Node Cluster with XP or EMC Disk Array
FibreChannel switched configurations are also supported using either an arbitrated loop or fabric login topology. For additional information about supported cluster configurations, refer to the HP 9000 Servers Configuration Guide, available through your HP representative.
Introduction to Serviceguard Extension for RAC Extended Distance Cluster Using Serviceguard Extension for RAC Extended Distance Cluster Using Serviceguard Extension for RAC Basic Serviceguard clusters are usually configured in a single data center, often in a single room, to provide protection against failures in CPUs, interface cards, and software.
Serviceguard Configuration for Oracle 10g RAC 2 Serviceguard Configuration for Oracle 10g RAC This chapter shows the additional planning and configuration that is needed to use Oracle Real Application Clusters 10g with Serviceguard.
Serviceguard Configuration for Oracle 10g RAC Interface Areas Interface Areas This section documents interface areas where there is expected interaction between SGeRAC and Oracle 10g Cluster Software and RAC. Group Membership API (NMAPI2) The NMAPI2 client links with the SGeRAC provided NMAPI2 library for group membership service. The group membership is layered on top of the SGeRAC cluster membership where all the primary group members are processes within cluster nodes.
Serviceguard Configuration for Oracle 10g RAC Interface Areas CSS Timeout When SGeRAC is on the same cluster as Oracle Cluster Software, the CSS timeout is set to a default value of 600 seconds (10 minutes) at Oracle software installation. This timeout is configurable with Oracle tools and should not be changed without ensuring that the CSS timeout allows enough time for Serviceguard Extension for RAC (SGeRAC) reconfiguration and to allow multipath (if configured) reconfiguration to complete.
Serviceguard Configuration for Oracle 10g RAC Interface Areas
Automated Oracle Cluster Software Startup and Shutdown
The preferred mechanism for Serviceguard to notify Oracle Cluster Software to start, and to request that it shut down, is the use of Serviceguard packages.
Monitoring
Oracle Cluster Software daemon monitoring is performed through programs initiated by the HP-UX init process.
Serviceguard Configuration for Oracle 10g RAC Interface Areas
Mirroring and Resilvering
On node and cluster-wide failures, when SLVM mirroring is used and Oracle resilvering is available, the recommended logical volume mirror recovery policy is full mirror resynchronization (NOMWC) for control and redo files, and no mirror resynchronization (NONE) for the datafiles, since Oracle performs resilvering on the datafiles based on the redo log.
Serviceguard Configuration for Oracle 10g RAC Interface Areas
Network Monitoring
SGeRAC clusters provide network monitoring. For networks that are redundant and monitored by the Serviceguard cluster, Serviceguard provides local failover capability between local network interfaces (LANs) that is transparent to applications utilizing User Datagram Protocol (UDP) and Transmission Control Protocol (TCP).
Serviceguard Configuration for Oracle 10g RAC Interface Areas (either a local failover or an instance package shutdown, or both) if the RAC cluster interconnect fails. Serviceguard does not monitor Hyperfabric networks directly (integration of Serviceguard and HF/EMS monitor is supported). Public Client Access When the client connection endpoint (virtual or floating IP address) is configured using Serviceguard packages, Serviceguard provides monitoring, local failover, and remote failover capabilities.
Serviceguard Configuration for Oracle 10g RAC RAC Instances
RAC Instances
Automated Startup and Shutdown
CRS can be configured to automatically start, monitor, restart, and halt RAC instances. If CRS is not configured to automatically start the RAC instance at Oracle Cluster Software startup, the RAC instance startup can be automated through scripts using supported commands, such as srvctl or sqlplus, in a SGeRAC package to start and halt RAC instances.
NOTE: srvctl and sqlplus are Oracle commands.
Serviceguard Configuration for Oracle 10g RAC Planning Storage for Oracle Cluster Software
Planning Storage for Oracle Cluster Software
Oracle Cluster Software requires shared storage for the Oracle Cluster Registry (OCR) and a vote device. Automatic Storage Management cannot be used for the OCR and vote device, since these files must be accessible before Oracle Cluster Software starts. The minimum required size for the OCR is 100 MB and for the vote disk is 20 MB.
Serviceguard Configuration for Oracle 10g RAC Planning Storage for Oracle 10g RAC Planning Storage for Oracle 10g RAC Volume Planning with SLVM Storage capacity for the Oracle database must be provided in the form of logical volumes located in shared volume groups. The Oracle software requires at least two log files for each Oracle instance, several Oracle control files and data files for the database itself.
Serviceguard Configuration for Oracle 10g RAC Planning Storage for Oracle 10g RAC
ORACLE LOGICAL VOLUME WORKSHEET FOR LVM    Page ___ of ____
===============================================================================
                                      RAW LOGICAL VOLUME NAME      SIZE (MB)
Oracle Cluster Registry
(once per cluster):                   /dev/vg_ops/rora_ocr         100
Oracle Cluster Vote Disk
(once per cluster):                   /dev/vg_ops/rora_vote        20
Oracle Control File:                  /dev/vg_ops/ropsctl1.ctl
Serviceguard Configuration for Oracle 10g RAC Planning Storage for Oracle 10g RAC Volume Planning with CVM Storage capacity for the Oracle database must be provided in the form of volumes located in shared disk groups. The Oracle software requires at least two log files for each Oracle instance, several Oracle control files and data files for the database itself.
Serviceguard Configuration for Oracle 10g RAC Planning Storage for Oracle 10g RAC
ORACLE LOGICAL VOLUME WORKSHEET FOR CVM    Page ___ of ____
===============================================================================
                                      RAW VOLUME NAME                       SIZE (MB)
Oracle Cluster Registry
(once per cluster):                   /dev/vx/rdsk/ops_dg/ora_ocr           100
Oracle Cluster Vote Disk
(once per cluster):                   /dev/vx/rdsk/ops_dg/ora_vote          20
Oracle Control File:                  /dev/vx/rdsk/ops_dg/opsctl1.ctl
Serviceguard Configuration for Oracle 10g RAC Installing Serviceguard Extension for RAC Installing Serviceguard Extension for RAC Installing Serviceguard Extension for RAC includes updating the software and rebuilding the kernel to support high availability cluster operation for Oracle Real Application Clusters.
Serviceguard Configuration for Oracle 10g RAC Configuration File Parameters Configuration File Parameters You need to code specific entries for all the storage groups that you want to use in an Oracle RAC configuration. If you are using LVM, the OPS_VOLUME_GROUP parameter is included in the cluster ASCII file. If you are using VERITAS CVM, the STORAGE_GROUP parameter is included in the package ASCII file.
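For example (the volume group and disk group names are illustrative), the cluster ASCII file would carry the SLVM entry and the package ASCII file the CVM entry:
OPS_VOLUME_GROUP /dev/vg_ops
STORAGE_GROUP ops_dg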
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with LVM Creating a Storage Infrastructure with LVM In addition to configuring the cluster, you create the appropriate logical volume infrastructure to provide access to data from different nodes. This is done with Logical Volume Manager (LVM), VERITAS Cluster Volume Manager (CVM), or VERITAS Volume Manager (VxVM).
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with LVM Creating Volume Groups and Logical Volumes If your volume groups have not been set up, use the procedure in the next sections. If you have already done LVM configuration, skip ahead to the section “Installing Oracle Real Application Clusters.” Selecting Disks for the Volume Group Obtain a list of the disks on both nodes and identify which device files are used for the same disk on both.
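For example, the following commands list the disks known to a node; run them on each node and match the device files by hardware path:
# ioscan -fnC disk
# lssf /dev/dsk/*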
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with LVM
where hh must be unique to the volume group you are creating. Use the next hexadecimal number that is available on your system, after the volume groups that are already configured. Use the following command to display a list of existing volume groups:
# ls -l /dev/*/group
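For example, the volume group and a mirrored redo log logical volume might be created as follows (the device file names and the physical volume group layout are assumptions):
# pvcreate -f /dev/rdsk/c1t2d0
# pvcreate -f /dev/rdsk/c2t2d0
# mkdir /dev/vg_ops
# mknod /dev/vg_ops/group c 64 0x030000
# vgcreate -g bus0 /dev/vg_ops /dev/dsk/c1t2d0
# vgextend -g bus1 /dev/vg_ops /dev/dsk/c2t2d0
# lvcreate -m 1 -M n -c y -s g -n redo1.log -L 28 /dev/vg_ops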
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with LVM
The -m 1 option creates a single mirror copy, and the -s g option makes the mirror allocation PVG-strict, that is, it occurs between different physical volume groups; the -n redo1.log option lets you specify the name of the logical volume; and the -L 28 option allocates 28 megabytes.
NOTE: It is important to use the -M n and -c y options for both redo logs and control files. These options allow the redo log files to be resynchronized by SLVM following a system crash before Oracle recovery proceeds.
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with LVM If Oracle performs resilvering of RAC data files that are mirrored logical volumes, choose a mirror consistency policy of “NONE” by disabling both mirror write caching and mirror consistency recovery. With a mirror consistency policy of “NONE”, SLVM does not perform the resynchronization.
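For example, a datafile logical volume with the "NONE" policy could be created as follows (the name and size are illustrative):
# lvcreate -m 1 -M n -c n -s g -n system.dbf -L 408 /dev/vg_ops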
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with LVM On your disk arrays, you should use redundant I/O channels from each node, connecting them to separate controllers on the array. Then you can define alternate links to the LUNs or logical disks you have defined on the array. If you are using SAM, choose the type of disk array you wish to configure, and follow the menus to define alternate links. If you are using LVM commands, specify the links on the command line.
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with LVM
where hh must be unique to the volume group you are creating. Use the next hexadecimal number that is available on your system, after the volume groups that are already configured. Use the following command to display a list of existing volume groups:
# ls -l /dev/*/group
3. Use the pvcreate command on one of the device files associated with the LUN to define the LUN to LVM as a physical volume.
# pvcreate -f /dev/rdsk/c0t15d0
It is only necessary to do this with one of the device file names for the LUN. The -f option is only necessary if the physical volume was previously used in some other volume group.
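Step 4 would then create the volume group using both device files, so that the second path becomes an alternate link to the same LUN; a minimal sketch, with assumed device names:
# mkdir /dev/vg_ops
# mknod /dev/vg_ops/group c 64 0x040000
# vgcreate /dev/vg_ops /dev/dsk/c0t15d0
# vgextend /dev/vg_ops /dev/dsk/c1t3d0
Here /dev/dsk/c1t3d0 is assumed to be a second physical path to the same LUN; LVM records it as an alternate link.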
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with LVM
# lvcreate -n ops1log1.log -L 4 /dev/vg_ops
# lvcreate -n opsctl1.ctl -L 4 /dev/vg_ops
# lvcreate -n system.dbf -L 28 /dev/vg_ops
# lvcreate -n opsdata1.dbf -L 1000 /dev/vg_ops
Oracle Demo Database Files
The following set of files is required for the Oracle demo database which you can create during the installation process.
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with LVM
Table 2-1 Required Oracle File Names for Demo Database (Continued)

Logical Volume Name   LV Size (MB)   Raw Logical Volume Path Name     Oracle File Size (MB)*
opsspfile1.ora        5              /dev/vg_ops/ropsspfile1.ora      5
pwdfile.ora           5              /dev/vg_ops/rpwdfile.ora         5
opsundotbs1.dbf       508            /dev/vg_ops/ropsundotbs1.log     500
opsundotbs2.dbf       508            /dev/vg_ops/ropsundotbs2.log     500
example1.dbf          168            /dev/vg_ops/ropsexample1.dbf     160
Serviceguard Configuration for Oracle 10g RAC Displaying the Logical Volume Infrastructure Displaying the Logical Volume Infrastructure To display the volume group, use the vgdisplay command: # vgdisplay -v /dev/vg_ops Exporting the Logical Volume Infrastructure Before the Oracle volume groups can be shared, their configuration data must be exported to other nodes in the cluster. This is done either in Serviceguard Manager or by using HP-UX commands, as shown in the following sections.
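A minimal sketch of the export from node ftsys9 using HP-UX commands, assuming a map file named /tmp/vg_ops.map:
# vgexport -p -s -m /tmp/vg_ops.map /dev/vg_ops
# rcp /tmp/vg_ops.map ftsys10:/tmp/vg_ops.map
The -p option previews the export without removing the volume group from ftsys9, and -s writes the sharable volume group ID into the map file.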
Serviceguard Configuration for Oracle 10g RAC Displaying the Logical Volume Infrastructure where hh must be unique to the volume group you are creating. If possible, use the same number as on ftsys9. Use the following command to display a list of existing volume groups: # ls -l /dev/*/group 4. Import the volume group data using the map file from node ftsys9. On node ftsys10 (and other nodes, as necessary), enter: # vgimport -s -m /tmp/vg_ops.
Serviceguard Configuration for Oracle 10g RAC Installing Oracle Real Application Clusters
Installing Oracle Real Application Clusters
NOTE: Some versions of Oracle RAC require installation of additional software. Refer to your version of Oracle for specific requirements.
Before installing the Oracle Real Application Cluster software, make sure the storage cluster is running.
Serviceguard Configuration for Oracle 10g RAC Cluster Configuration ASCII File Cluster Configuration ASCII File The following is an example of an ASCII configuration file generated with the cmquerycl command using the -w full option on a system with Serviceguard Extension for RAC. The OPS_VOLUME_GROUP parameters appear at the end of the file.
Serviceguard Configuration for Oracle 10g RAC Cluster Configuration ASCII File
# and QS_TIMEOUT_EXTENSION parameters to define a quorum server.
# The QS_HOST is the host name or IP address of the system that is
# running the quorum server process. The QS_POLLING_INTERVAL
# (microseconds) is the interval at which Serviceguard checks to make
# sure the quorum server is running.
Serviceguard Configuration for Oracle 10g RAC Cluster Configuration ASCII File
NODE_NAME ever3a
NETWORK_INTERFACE lan0
STATIONARY_IP 15.244.64.140
NETWORK_INTERFACE lan1
HEARTBEAT_IP 192.77.1.1
NETWORK_INTERFACE lan2
# List of serial device file names
# For example:
# SERIAL_DEVICE_FILE /dev/tty0p0
# Primary Network Interfaces on Bridged Net 1: lan0.
# Warning: There are no standby network interfaces on bridged net 1.
# Primary Network Interfaces on Bridged Net 2: lan1.
Serviceguard Configuration for Oracle 10g RAC Cluster Configuration ASCII File
# stop increasing before the card is considered down.
NETWORK_FAILURE_DETECTION INOUT
# Package Configuration Parameters.
# Enter the maximum number of packages which will be configured in the cluster.
# You can not add packages beyond this limit.
# This parameter is required.
MAX_CONFIGURED_PACKAGES 150
# Access Control Policy Parameters.
Serviceguard Configuration for Oracle 10g RAC Cluster Configuration ASCII File
# Example: to configure a role for user john from node noir to
# administer a cluster and all its packages, enter:
# USER_NAME john
# USER_HOST noir
# USER_ROLE FULL_ADMIN
# List of cluster aware LVM Volume Groups. These volume groups will
# be used by package applications via the vgchange -a e command.
# Neither CVM or VxVM Disk Groups should be used here.
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CFS
Creating a Storage Infrastructure with CFS
Creating a SGeRAC Cluster with CFS 4.1 for Oracle 10g
With CFS, the database software and database files (control, redo, data files), and archive logs may reside on a cluster file system, which is visible to all nodes.
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CFS
NOTE: CVM 4.1 does not require rootdg.
2. Create the Cluster ASCII file:
# cd /etc/cmcluster
# cmquerycl -C clm.asc -n ever3a -n ever3b
Edit the cluster file.
3. Create the Cluster:
# cmapplyconf -C clm.asc
4. Start the Cluster:
# cmruncl
# cmviewcl
The following output will be displayed:
CLUSTER         STATUS
ever3_cluster   up
NODE      STATUS    STATE
ever3a    up        running
ever3b    up        running
5.
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CFS
# vxdctl -c mode
The following output will be displayed:
mode: enabled: cluster active - SLAVE
master: ever3b
or
mode: enabled: cluster active - MASTER
slave: ever3b
6. Converting Disks from LVM to CVM
You can use the vxvmconvert utility to convert LVM volume groups into CVM disk groups. Before you can do this, the volume group must be deactivated, which means that any package that uses the volume group must be halted.
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CFS
Package name "SG-CFS-DG-1" was generated to control the resource.
Shared disk group "cfsdg1" is associated with the cluster.
10. Activate the Disk Group
# cfsdgadm activate cfsdg1
11.
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CFS
Package name "SG-CFS-MP-1" was generated to control the resource.
Mount point "/cfs/mnt1" was associated with the cluster.
# cfsmntadm add cfsdg1 vol2 /cfs/mnt2 all=rw
The following output will be displayed:
Package name "SG-CFS-MP-2" was generated to control the resource.
Mount point "/cfs/mnt2" was associated with the cluster.
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CFS
# cfsumount /cfs/mnt3
2.
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CFS The following output will be generated: Stopping CVM...
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CVM Creating a Storage Infrastructure with CVM In addition to configuring the cluster, you create the appropriate logical volume infrastructure to provide access to data from different nodes. This is done with Logical Volume Manager (LVM), VERITAS Volume Manager (VxVM), or VERITAS Cluster Volume Manager (CVM).
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CVM
Preparing the Cluster and the System Multi-node Package for use with CVM 4.x
1. Create the Cluster file:
# cd /etc/cmcluster
# cmquerycl -C clm.asc -n ever3a -n ever3b
Edit the cluster file.
2. Create the Cluster:
# cmapplyconf -C clm.asc
• Start the Cluster:
# cmruncl
# cmviewcl
The following output will be displayed:
CLUSTER         STATUS
ever3_cluster   up
NODE      STATUS    STATE
ever3a    up        running
ever3b    up        running
3.
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CVM
mode: enabled: cluster active - SLAVE
master: ever3b
or
mode: enabled: cluster active - MASTER
slave: ever3b
• Converting Disks from LVM to CVM
Use the vxvmconvert utility to convert LVM volume groups into CVM disk groups. Before you can do this, the volume group must be deactivated, which means that any package that uses the volume group must be halted.
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CVM
# cmviewcl
CLUSTER         STATUS
ever3_cluster   up
NODE      STATUS    STATE
ever3a    up        running
ever3b    up        running
MULTI_NODE_PACKAGES
PACKAGE       STATUS    STATE      AUTO_RUN    SYSTEM
SG-CFS-pkg    up        running    enabled     yes
IMPORTANT: After creating these files, use the vxedit command to change the ownership of the raw volume files to oracle and the group membership to dba, and to change the permissions to 660.
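A sketch of the vxedit invocation, assuming the ops_dg disk group and a control file volume named opsctl1.ctl:
# vxedit -g ops_dg set user=oracle group=dba mode=660 opsctl1.ctl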
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CVM
NOTE: The specific commands for creating mirrored and multi-path storage using CVM are described in the HP-UX documentation for the VERITAS Volume Manager.
Using CVM 3.x
This section has information on how to prepare the cluster and the system multi-node package with CVM 3.x.
Preparing the Cluster for Use with CVM 3.x
In order to use the VERITAS Cluster Volume Manager (CVM) version 3.5, the cluster must be running with a special CVM package.
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CVM
After the cluster is started, it runs with a special system multi-node package named VxVM-CVM-pkg, which is on all nodes.
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CVM
Creating Disk Groups for RAC
Use the vxdg command to create disk groups. Use the -s option to specify shared mode, as in the following example:
# vxdg -s init ops_dg c0t3d2
Verify the configuration with the following command:
# vxdg list
NAME      STATE              ID
rootdg    enabled            971995699.1025.node1
ops_dg    enabled,shared     972078742.1084.node2
Creating Volumes
Use the vxassist command to create logical volumes.
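An example invocation (the volume name and size are illustrative):
# vxassist -g ops_dg make log_files 1024m
This creates a 1024 MB volume named log_files, which can be referenced through the raw (character) device file /dev/vx/rdsk/ops_dg/log_files.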
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CVM Mirror Detachment Policies with CVM The required CVM disk mirror detachment policy is ‘global’, which means that as soon as one node cannot see a specific mirror copy (plex), all nodes cannot see it as well. The alternate policy is ‘local’, which means that if one node cannot see a specific mirror copy, then CVM will deactivate access to the volume for that node only.
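The policy can be reset on a disk group basis with the vxedit command; a sketch, assuming the ops_dg disk group:
# vxedit set diskdetpolicy=global ops_dg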
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CVM
Table 2-2 Required Oracle File Names for Demo Database (Continued)

Volume Name      Size (MB)   Raw Device File Name                  Oracle File Size (MB)
ops2log3.log     128         /dev/vx/rdsk/ops_dg/ops2log3.log      120
opssystem.dbf    508         /dev/vx/rdsk/ops_dg/opssystem.dbf     500
opssysaux.dbf    808         /dev/vx/rdsk/ops_dg/opssysaux.dbf     800
opstemp.dbf      258         /dev/vx/rdsk/ops_dg/opstemp.dbf       250
opsusers.dbf     128         /dev/vx/rdsk/ops_dg/opsusers.dbf      120
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CVM
1. Create an ASCII file, and define the path for each database object.
control1=/u01/ORACLE/db001/ctrl01_1.ctl
2. Set the following environment variable where filename is the name of the ASCII file created.
# export DBCA_RAW_CONFIG=/filename
Adding Disk Groups to the Cluster Configuration
For CVM 4.x, if the multi-node package was configured for disk group activation, the application package should be configured with package dependency to ensure the CVM disk group is active.
Serviceguard Configuration for Oracle 10g RAC Prerequisites for Oracle 10g (Sample Installation) Prerequisites for Oracle 10g (Sample Installation) The following sample steps prepare a SGeRAC cluster for Oracle 10g. Refer to the Oracle documentation for Oracle installation details. 1. Create Inventory Groups on each Node Create the Oracle Inventory group if one does not exist, create the OSDBA group, and create the Operator Group (optional).
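A minimal sketch of creating the groups (the names follow the common Oracle convention; the GIDs are arbitrary):
# groupadd -g 201 oinstall
# groupadd -g 202 dba
# groupadd -g 203 oper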
Serviceguard Configuration for Oracle 10g RAC Prerequisites for Oracle 10g (Sample Installation)
5. Enable Remote Access (ssh and remsh) for Oracle User on all Nodes
6. Create File System for Oracle Directories
In the following samples, /mnt/app is a mounted file system for Oracle software. Assume there is an 18 GB private disk c4t5d0 on all nodes. Create the local file system on each node.
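One way to create and mount such a file system, assuming the c4t5d0 disk and a local volume group named vglocal (all names and sizes are illustrative):
# pvcreate /dev/rdsk/c4t5d0
# mkdir /dev/vglocal
# mknod /dev/vglocal/group c 64 0x040000
# vgcreate /dev/vglocal /dev/dsk/c4t5d0
# lvcreate -L 17000 -n lvoracle /dev/vglocal
# newfs -F vxfs /dev/vglocal/rlvoracle
# mkdir -p /mnt/app
# mount /dev/vglocal/lvoracle /mnt/app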
Serviceguard Configuration for Oracle 10g RAC Prerequisites for Oracle 10g (Sample Installation)
# usermod -d /mnt/app/oracle oracle
9. Create Oracle Base Directory (For RAC Binaries on Cluster File System)
If installing RAC binaries on Cluster File System, create the oracle base directory once, since this is a CFS directory visible to all nodes. The CFS file system used is /cfs/mnt1.
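Since /cfs/mnt1 is visible cluster-wide, run the commands once from any node; a minimal sketch:
# mkdir -p /cfs/mnt1/oracle
# chown -R oracle:oinstall /cfs/mnt1/oracle
# chmod -R 775 /cfs/mnt1/oracle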
Serviceguard Configuration for Oracle 10g RAC Prerequisites for Oracle 10g (Sample Installation)
# ORACLE_BASE=/mnt/app/oracle; export ORACLE_BASE
# mkdir -p $ORACLE_BASE/oradata/ver10
# chown -R oracle:oinstall $ORACLE_BASE/oradata
# chmod -R 755 $ORACLE_BASE/oradata
The following is a sample of the mapping file for DBCA:
system=/dev/vg_ops/ropssystem.dbf
sysaux=/dev/vg_ops/ropssysaux.dbf
undotbs1=/dev/vg_ops/ropsundotbs01.dbf
undotbs2=/dev/vg_ops/ropsundotbs02.dbf
example=/dev/vg_ops/ropsexample1.dbf
Serviceguard Configuration for Oracle 10g RAC Prerequisites for Oracle 10g (Sample Installation)
# chmod 755 VOTE
# chown -R oracle:oinstall /cfs/mnt3
b. Create Directory for Oracle Demo Database on CFS
Create the CFS directory to store Oracle database files. Run the commands on one node only.
Serviceguard Configuration for Oracle 10g RAC Installing Oracle 10g Cluster Software
Installing Oracle 10g Cluster Software
The following are sample steps for installing Oracle 10g Cluster Software on a SGeRAC cluster. Refer to the Oracle documentation for Oracle installation details.
Installing on Local File System
Logon as the "oracle" user:
$ export DISPLAY={display}:0.0
$ cd <10g Cluster Software disk directory>
$ ./runInstaller
Use the following guidelines when installing on a local file system:
1.
Serviceguard Configuration for Oracle 10g RAC Installing Oracle 10g RAC Binaries
Installing Oracle 10g RAC Binaries
The following are sample steps for installing the Oracle 10g RAC binaries on a SGeRAC cluster. Refer to the Oracle documentation for Oracle installation details.
Installing RAC Binaries on a Local File System
Logon as the "oracle" user:
$ export ORACLE_BASE=/mnt/app/oracle
$ export DISPLAY={display}:0.0
$ cd <10g RAC installation disk directory>
$ ./runInstaller
Serviceguard Configuration for Oracle 10g RAC Creating a RAC Demo Database
Creating a RAC Demo Database
This section demonstrates the steps for creating a demo database with datafiles on raw volumes with SLVM or CVM, or with Cluster File System.
Creating a RAC Demo Database on SLVM or CVM
Export environment variables for the "oracle" user:
export ORACLE_BASE=/mnt/app/oracle
export DBCA_RAW_CONFIG=/mnt/app/oracle/oradata/ver10/ver10_raw.conf
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
Serviceguard Configuration for Oracle 10g RAC Creating a RAC Demo Database
a. In this sample, the database name and SID prefix are ver10.
b. Select the storage option for raw devices.
Creating a RAC Demo Database on CFS
Export environment variables for the "oracle" user:
export ORACLE_BASE=/cfs/mnt1/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=/mnt/app/crs/oracle/product/10.2.
Serviceguard Configuration for Oracle 10g RAC Verify that Oracle Disk Manager is Configured
Verify that Oracle Disk Manager is Configured
NOTE: The following steps are specific to CFS.
1. Check the license:
# /opt/VRTS/bin/vxlictest -n "VERITAS Storage Foundation for Oracle" -f "ODM"
Output:
Using VERITAS License Manager API Version 3.00, Build 2
ODM feature is licensed
2. Check that the VRTSodm package is installed:
# swlist VRTSodm
Output:
VRTSodm 4.1m
VRTSodm.ODM-KRN
VRTSodm.ODM-MAN
VRTSodm.ODM-RUN
Serviceguard Configuration for Oracle 10g RAC Configuring Oracle to Use Oracle Disk Manager Library
Configuring Oracle to Use Oracle Disk Manager Library
NOTE: The following steps are specific to CFS.
1. Login as Oracle user
2. Shutdown database
3. Link the Oracle Disk Manager library into Oracle home for Oracle 10g
For HP 9000 Systems:
$ rm ${ORACLE_HOME}/lib/libodm10.sl
$ ln -s /opt/VRTSodm/lib/libodm.sl ${ORACLE_HOME}/lib/libodm10.sl
For Integrity Systems:
$ rm ${ORACLE_HOME}/lib/libodm10.so
$ ln -s /opt/VRTSodm/lib/libodm.so ${ORACLE_HOME}/lib/libodm10.so
Serviceguard Configuration for Oracle 10g RAC Verify that Oracle Disk Manager is Running Verify that Oracle Disk Manager is Running NOTE The following steps are specific to CFS. 1. Start the cluster and Oracle database (if not already started) 2.
Serviceguard Configuration for Oracle 10g RAC Verify that Oracle Disk Manager is Running
# kcmodule -P state odm
Output:
state loaded
4. In the alert log, verify the Oracle instance is running. The log should contain output similar to the following:
Oracle instance running with ODM: VERITAS 4.1 ODM Library, Version 1.
Serviceguard Configuration for Oracle 10g RAC Configuring Oracle to Stop Using Oracle Disk Manager Library
Configuring Oracle to Stop Using Oracle Disk Manager Library
NOTE: The following steps are specific to CFS.
1. Login as Oracle user
2. Shutdown database
3. Change directories:
$ cd ${ORACLE_HOME}/lib
4. Remove the file linked to the ODM library
For HP 9000 Systems:
$ rm libodm10.sl
$ ln -s ${ORACLE_HOME}/lib/libodmd10.sl ${ORACLE_HOME}/lib/libodm10.sl
For Integrity Systems:
$ rm libodm10.so
$ ln -s ${ORACLE_HOME}/lib/libodmd10.so ${ORACLE_HOME}/lib/libodm10.so
Serviceguard Configuration for Oracle 10g RAC Using Serviceguard Packages to Synchronize with Oracle 10g RAC Using Serviceguard Packages to Synchronize with Oracle 10g RAC It is recommended to start and stop Oracle Cluster Software in a Serviceguard package, as that will ensure that Oracle Cluster Software will start after SGeRAC is started and will stop before SGeRAC is halted.
Serviceguard Configuration for Oracle 10g RAC Using Serviceguard Packages to Synchronize with Oracle 10g RAC When the Oracle Cluster Software required storage is configured on SLVM volume groups or CVM disk groups, the Serviceguard package should be configured to activate and deactivate the required storage in the package control script. As an example, modify the control script to activate the volume group in shared mode and set VG in the package control script for SLVM volume groups.
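For SLVM, the relevant control script entries might look like the following sketch (the volume group name is illustrative; vgchange -a s activates the group in shared mode):
VGCHANGE="vgchange -a s"
VG[0]="vg_ops"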
Serviceguard Configuration for Oracle 10g RAC Using Serviceguard Packages to Synchronize with Oracle 10g RAC
DEPENDENCY_NAME mp2
DEPENDENCY_CONDITION SG-CFS-MP-2=UP
DEPENDENCY_LOCATION SAME_NODE

DEPENDENCY_NAME mp3
DEPENDENCY_CONDITION SG-CFS-MP-3=UP
DEPENDENCY_LOCATION SAME_NODE
Starting and Stopping Oracle Cluster Software
In the Serviceguard package control script, configure the Oracle Cluster Software start in the customer_defined_run_cmds function.
For 10g 10.1.0.04 or later:
/sbin/init.d/init.crs start
Serviceguard Configuration for Oracle 9i RAC 3 Serviceguard Configuration for Oracle 9i RAC This chapter shows the additional planning and configuration that is needed to use Oracle Real Application Clusters 9i with Serviceguard.
Serviceguard Configuration for Oracle 9i RAC Planning Database Storage Planning Database Storage The files needed by the Oracle database must be placed on shared storage that is accessible to all RAC cluster nodes. This section shows how to plan the storage using SLVM, VERITAS CFS, or VERITAS CVM. Volume Planning with SLVM Storage capacity for the Oracle database must be provided in the form of logical volumes located in shared volume groups.
Serviceguard Configuration for Oracle 9i RAC Planning Database Storage
ORACLE LOGICAL VOLUME WORKSHEET FOR LVM    Page ___ of ____
===============================================================================
                           RAW LOGICAL VOLUME NAME       SIZE (MB)
Oracle Control File 1:     /dev/vg_ops/ropsctl1.ctl      108
Oracle Control File 2:     /dev/vg_ops/ropsctl2.ctl      108
Oracle Control File 3:     /dev/vg_ops/ropsctl3.ctl      104
Instance 1 Redo Log 1:     /dev/vg_ops/rops1log1.log
Serviceguard Configuration for Oracle 9i RAC Planning Database Storage
Storage Planning with CFS
With CFS, the database software and database files (control, redo, data files), and archive logs may reside on a cluster file system, which is visible to all nodes.
Serviceguard Configuration for Oracle 9i RAC Planning Database Storage
• Configuration 4 is for Oracle software and archive logs residing on a local file system, while the database files are on raw volumes, managed by either SLVM or CVM.
Serviceguard Configuration for Oracle 9i RAC Planning Database Storage
• Oracle creates database files
• Online changes (OMF - Oracle Managed Files) within CFS
• Better manageability
• Manual intervention when modifying volumes, DGs, disks
• Requires the SGeRAC and CFS software
CFS and SGeRAC are available in selected HP Serviceguard Storage Management Suite bundles. Refer to the HP Serviceguard Storage Management Suite Version A.01.00 Release Notes.
Serviceguard Configuration for Oracle 9i RAC Planning Database Storage
availability disk arrays in RAID modes. The logical units of storage on the arrays are accessed from each node through multiple physical volume links via DMP (Dynamic Multi-pathing), which provides redundant paths to each unit of storage. Fill out the VERITAS Volume worksheet to provide volume names for volumes that you will create using the VERITAS utilities.
Serviceguard Configuration for Oracle 9i RAC Planning Database Storage
ORACLE LOGICAL VOLUME WORKSHEET FOR CVM    Page ___ of ____
===============================================================================
                           RAW LOGICAL VOLUME NAME                SIZE (MB)
Oracle Control File 1:     /dev/vx/rdsk/ops_dg/opsctl1.ctl        100
Oracle Control File 2:     /dev/vx/rdsk/ops_dg/opsctl2.ctl        100
Oracle Control File 3:     /dev/vx/rdsk/ops_dg/opsctl3.ctl
Serviceguard Configuration for Oracle 9i RAC Installing Serviceguard Extension for RAC Installing Serviceguard Extension for RAC Installing Serviceguard Extension for RAC includes updating the software and rebuilding the kernel to support high availability cluster operation for Oracle Real Application Clusters.
Serviceguard Configuration for Oracle 9i RAC Configuration File Parameters Configuration File Parameters You need to code specific entries for all the storage groups that you want to use in an Oracle RAC configuration. If you are using LVM, the OPS_VOLUME_GROUP parameter is included in the cluster ASCII file. If you are using VERITAS CVM, the STORAGE_GROUP parameter is included in the package ASCII file.
Serviceguard Configuration for Oracle 9i RAC Operating System Parameters Operating System Parameters The maximum number of Oracle server processes cmgmsd can handle is 8192. When there are more than 8192 server processes connected to cmgmsd, then cmgmsd will start to reject new requests. Oracle foreground server processes are needed to handle the requests of the DB client connected to the DB instance. Each foreground server process can either be a “dedicated” or a “shared” server process.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with LVM Creating a Storage Infrastructure with LVM In addition to configuring the cluster, you create the appropriate logical volume infrastructure to provide access to data from different nodes. This is done with Logical Volume Manager (LVM), VERITAS Cluster Volume Manager (CVM), or VERITAS Volume Manager (VxVM).
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with LVM Creating Volume Groups and Logical Volumes If your volume groups have not been set up, use the procedure in the next sections. If you have already done LVM configuration, skip ahead to the section “Installing Oracle Real Application Clusters.” Selecting Disks for the Volume Group Obtain a list of the disks on both nodes and identify which device files are used for the same disk on both.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with LVM
where hh must be unique to the volume group you are creating. Use the next hexadecimal number that is available on your system, after the volume groups that are already configured. Use the following command to display a list of existing volume groups:
# ls -l /dev/*/group
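For example, the volume group and a mirrored redo log logical volume might be created as follows (the device file names and the physical volume group layout are assumptions):
# pvcreate -f /dev/rdsk/c1t2d0
# pvcreate -f /dev/rdsk/c2t2d0
# mkdir /dev/vg_ops
# mknod /dev/vg_ops/group c 64 0x030000
# vgcreate -g bus0 /dev/vg_ops /dev/dsk/c1t2d0
# vgextend -g bus1 /dev/vg_ops /dev/dsk/c2t2d0
# lvcreate -m 1 -M n -c y -s g -n redo1.log -L 28 /dev/vg_ops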
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with LVM
The -m 1 option creates a single mirror copy, and the -s g option makes the mirror allocation PVG-strict, that is, it occurs between different physical volume groups; the -n redo1.log option lets you specify the name of the logical volume; and the -L 28 option allocates 28 megabytes.
NOTE: It is important to use the -M n and -c y options for both redo logs and control files. These options allow the redo log files to be resynchronized by SLVM following a system crash before Oracle recovery proceeds.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with LVM If Oracle performs the resilvering of RAC data files that are mirrored logical volumes, choose a mirror consistency policy of “NONE” by disabling both mirror write caching and mirror consistency recovery. With a mirror consistency policy of “NONE”, SLVM does not perform the resynchronization.
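For example, a datafile logical volume with the "NONE" policy could be created as follows (the name and size are illustrative):
# lvcreate -m 1 -M n -c n -s g -n system.dbf -L 408 /dev/vg_ops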
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with LVM the array. If you are using SAM, choose the type of disk array you wish to configure, and follow the menus to define alternate links. If you are using LVM commands, specify the links on the command line. The following example shows how to configure alternate links using LVM commands. The following disk configuration is assumed: 8/0.15.0 8/0.15.1 8/0.15.2 8/0.15.3 8/0.15.4 8/0.15.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with LVM # ls -l /dev/*/group 3. Use the pvcreate command on one of the device files associated with the LUN to define the LUN to LVM as a physical volume. # pvcreate -f /dev/rdsk/c0t15d0 It is only necessary to do this with one of the device file names for the LUN. The -f option is only necessary if the physical volume was previously used in some other volume group. 4.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with LVM
Oracle Demo Database Files
The following set of files is required for the Oracle demo database which you can create during the installation process.
Table 3-2 Required Oracle File Names for Demo Database

Logical Volume Name   LV Size (MB)   Raw Logical Volume Path Name   Oracle File Size (MB)*
opsctl1.ctl           108            /dev/vg_ops/ropsctl1.ctl       100
opsctl2.ctl           108            /dev/vg_ops/ropsctl2.ctl       100
opsctl3.ctl           108            /dev/vg_ops/ropsctl3.ctl       100
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with LVM
Table 3-2 Required Oracle File Names for Demo Database (Continued)

Logical Volume Name   LV Size (MB)   Raw Logical Volume Path Name     Oracle File Size (MB)*
opsundotbs2.dbf       320            /dev/vg_ops/ropsundotbs2.dbf     312
opsexample1.dbf       168            /dev/vg_ops/ropsexample1.dbf     160
opscwmlite1.dbf       108            /dev/vg_ops/ropscwmlite1.dbf     100
opsindx1.dbf          78             /dev/vg_ops/ropsindx1.dbf        70
opsdrsys1.dbf         98             /dev/vg_ops/ropsdrsys1.dbf       90
Serviceguard Configuration for Oracle 9i RAC Displaying the Logical Volume Infrastructure Displaying the Logical Volume Infrastructure To display the volume group, use the vgdisplay command: # vgdisplay -v /dev/vg_ops Exporting the Logical Volume Infrastructure Before the Oracle volume groups can be shared, their configuration data must be exported to other nodes in the cluster. This is done either in Serviceguard Manager or by using HP-UX commands, as shown in the following sections.
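A minimal sketch of the export from node ftsys9 using HP-UX commands, assuming a map file named /tmp/vg_ops.map:
# vgexport -p -s -m /tmp/vg_ops.map /dev/vg_ops
# rcp /tmp/vg_ops.map ftsys10:/tmp/vg_ops.map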
Serviceguard Configuration for Oracle 9i RAC Displaying the Logical Volume Infrastructure where hh must be unique to the volume group you are creating. If possible, use the same number as on ftsys9. Use the following command to display a list of existing volume groups: # ls -l /dev/*/group 4. Import the volume group data using the map file from node ftsys9. On node ftsys10 (and other nodes, as necessary), enter: # vgimport -s -m /tmp/vg_ops.
Serviceguard Configuration for Oracle 9i RAC Installing Oracle Real Application Clusters
Installing Oracle Real Application Clusters
NOTE: Some versions of Oracle RAC require installation of additional software. Refer to your version of Oracle for specific requirements.
Before installing the Oracle Real Application Cluster software, make sure the cluster is running.
Serviceguard Configuration for Oracle 9i RAC Cluster Configuration ASCII File Cluster Configuration ASCII File The following is an example of an ASCII configuration file generated with the cmquerycl command using the -w full option on a system with Serviceguard Extension for RAC. The OPS_VOLUME_GROUP parameters appear at the end of the file.
Serviceguard Configuration for Oracle 9i RAC Cluster Configuration ASCII File
# and QS_TIMEOUT_EXTENSION parameters to define a quorum server.
# The QS_HOST is the host name or IP address of the system that is
# running the quorum server process. The QS_POLLING_INTERVAL
# (microseconds) is the interval at which Serviceguard checks to make
# sure the quorum server is running.
Serviceguard Configuration for Oracle 9i RAC Cluster Configuration ASCII File
NODE_NAME ever3a
NETWORK_INTERFACE lan0
STATIONARY_IP 15.244.64.140
NETWORK_INTERFACE lan1
HEARTBEAT_IP 192.77.1.1
NETWORK_INTERFACE lan2
# List of serial device file names
# For example:
# SERIAL_DEVICE_FILE /dev/tty0p0
# Primary Network Interfaces on Bridged Net 1: lan0.
# Warning: There are no standby network interfaces on bridged net 1.
# Primary Network Interfaces on Bridged Net 2: lan1.
Serviceguard Configuration for Oracle 9i RAC Cluster Configuration ASCII File
# stop increasing before the card is considered down.
NETWORK_FAILURE_DETECTION INOUT
# Package Configuration Parameters.
# Enter the maximum number of packages which will be configured in the cluster.
# You can not add packages beyond this limit.
# This parameter is required.
MAX_CONFIGURED_PACKAGES 150
# Access Control Policy Parameters.
Serviceguard Configuration for Oracle 9i RAC Cluster Configuration ASCII File
# Example: to configure a role for user john from node noir to
# administer a cluster and all its packages, enter:
# USER_NAME john
# USER_HOST noir
# USER_ROLE FULL_ADMIN
# List of cluster aware LVM Volume Groups. These volume groups will
# be used by package applications via the vgchange -a e command.
# Neither CVM or VxVM Disk Groups should be used here.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CFS Creating a Storage Infrastructure with CFS The following are example steps for creating a storage infrastructure for CFS. Creating a SGeRAC Cluster with CFS 4.1 for Oracle 9i The following software needs to be pre-installed in order to use this configuration: • SGeRAC and CFS are included with the HP Serviceguard Storage Management Suite bundle T2777BA or Mission Critical Operating Environment (MCOE) T2797BA.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CFS
NOTE: CVM 4.1 does not require rootdg.
2. Create the Cluster file:
# cd /etc/cmcluster
# cmquerycl -C clm.asc -n ever3a -n ever3b
Edit the cluster file.
3. Create the Cluster:
# cmapplyconf -C clm.asc
4. Start the Cluster:
# cmruncl
# cmviewcl
The following output will be displayed:
CLUSTER         STATUS
ever3_cluster   up
NODE      STATUS    STATE
ever3a    up        running
ever3b    up        running
5.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CFS
# vxdctl -c mode
The following output will be displayed:
mode: enabled: cluster active - SLAVE
master: ever3b
or
mode: enabled: cluster active - MASTER
slave: ever3b
6. Converting Disks from LVM to CVM
Use the vxvmconvert utility to convert LVM volume groups into CVM disk groups. Before you can do this, the volume group must be deactivated, which means that any package that uses the volume group must be halted.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CFS
Package name "SG-CFS-DG-1" was generated to control the resource.
Shared disk group "cfsdg1" is associated with the cluster.
10. Activate the Disk Group
# cfsdgadm activate cfsdg1
11.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CFS
Package name "SG-CFS-MP-1" was generated to control the resource.
Mount point "/cfs/mnt1" was associated with the cluster.
# cfsmntadm add cfsdg1 vol2 /cfs/mnt2 all=rw
The following output will be displayed:
Package name "SG-CFS-MP-2" was generated to control the resource.
Mount point "/cfs/mnt2" was associated with the cluster.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CFS
PACKAGE        STATUS    STATE      AUTO_RUN    SYSTEM
SG-CFS-pkg     up        running    enabled     yes
SG-CFS-DG-1    up        running    enabled     no
SG-CFS-MP-1    up        running    enabled     no
SG-CFS-MP-2    up        running    enabled     no
SG-CFS-MP-3    up        running    enabled     no
Deleting CFS from the Cluster
Halt the applications that are using CFS file systems.
1. Unmount CFS Mount Points
# cfsumount /cfs/mnt1
# cfsumount /cfs/mnt2
# cfsumount /cfs/cfssrvm
2.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CFS
3. Delete DG MNP
# cfsdgadm delete cfsdg1
The following output will be generated:
Shared disk group "cfsdg1" was disassociated from the cluster.
NOTE: "cfsmntadm delete" also deletes the disk group if there is no dependent package. To ensure the disk group deletion is complete, use the above command to delete the disk group package.
4.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CVM Creating a Storage Infrastructure with CVM In addition to configuring the cluster, you create the appropriate logical volume infrastructure to provide access to data from different nodes. This is done with Logical Volume Manager (LVM), VERITAS Volume Manager (VxVM), or VERITAS Cluster Volume Manager (CVM).
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CVM
For more detailed information on how to configure CVM 4.x, refer to the Managing Serviceguard Twelfth Edition user's guide.
Preparing the Cluster and the System Multi-node Package for use with CVM 4.x
1. Create the Cluster file:
# cd /etc/cmcluster
# cmquerycl -C clm.asc -n ever3a -n ever3b
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CVM
# cmrunpkg SG-CFS-pkg
When CVM starts up, it selects a master node, which is the node from which you must issue the disk group configuration commands.
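To identify the master node, run the following on any node:
# vxdctl -c mode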
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CVM
# vxassist -g ops_dg make vol2 10240m
# vxassist -g ops_dg make volsrvm 300m
5.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CVM NOTE The specific commands for creating mirrored and multi-path storage using CVM are described in the HP-UX documentation for the VERITAS Volume Manager. Using CVM 3.x This section has information on how to prepare the cluster with CVM 3.x. Preparing the Cluster for Use with CVM 3.x In order to use the VERITAS Cluster Volume Manager (CVM) version 3.5, the cluster must be running with a special CVM package.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CVM After the cluster is started, it runs with a special system multi-node package named VxVM-CVM-pkg, which runs on all nodes.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CVM Creating Disk Groups for RAC Use the vxdg command to create disk groups. Use the -s option to specify shared mode, as in the following example: # vxdg -s init ops_dg c0t3d2 Verify the configuration with the following command: # vxdg list
NAME     STATE            ID
rootdg   enabled          971995699.1025.node1
ops_dg   enabled,shared   972078742.1084.
Serviceguard Configuration for Oracle 9i RAC Creating Volumes Creating Volumes Use the vxassist command to create logical volumes. The following is an example: # vxassist -g ops_dg make log_files 1024m This command creates a 1024 MB volume named log_files in a disk group named ops_dg. The volume can be referenced with the block device file /dev/vx/dsk/ops_dg/log_files or the raw (character) device file /dev/vx/rdsk/ops_dg/log_files.
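To confirm the volume exists and has the expected size, one might run (a sketch using the same names):
# vxprint -g ops_dg log_files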
Serviceguard Configuration for Oracle 9i RAC Creating Volumes NOTE The specific commands for creating mirrored and multi-path storage using CVM are described in the HP-UX documentation for the VERITAS Volume Manager.
Serviceguard Configuration for Oracle 9i RAC Oracle Demo Database Files Oracle Demo Database Files The following set of volumes is required for the Oracle demo database which you can create during the installation process. Table 3-3 Required Oracle File Names for Demo Database
Volume Name   Size (MB)   Raw Device File Name              Oracle File Size (MB)
opsctl1.ctl   108         /dev/vx/rdsk/ops_dg/opsctl1.ctl   100
opsctl2.ctl   108         /dev/vx/rdsk/ops_dg/opsctl2.ctl   100
opsctl3.ctl   108         /dev/vx/rdsk/ops_dg/opsctl3.
Serviceguard Configuration for Oracle 9i RAC Oracle Demo Database Files Table 3-3 Required Oracle File Names for Demo Database (Continued)
Volume Name       Size (MB)   Raw Device File Name                  Oracle File Size (MB)
opsundotbs1.dbf   320         /dev/vx/rdsk/ops_dg/opsundotbs1.dbf   312
opsundotbs2.dbf   320         /dev/vx/rdsk/ops_dg/opsundotbs2.dbf   312
opsexample1.dbf   168         /dev/vx/rdsk/ops_dg/opsexample1.dbf   160
opscwmlite1.dbf   108         /dev/vx/rdsk/ops_dg/opscwmlite1.dbf   100
opsindx1.
Serviceguard Configuration for Oracle 9i RAC Adding Disk Groups to the Cluster Configuration Adding Disk Groups to the Cluster Configuration For CVM 4.x, if the multi-node package was configured for disk group activation, the application package should be configured with package dependency to ensure the CVM disk group is active. For CVM 3.5 and CVM 4.
Serviceguard Configuration for Oracle 9i RAC Installing Oracle 9i RAC Installing Oracle 9i RAC The following are sample steps for installing Oracle 9i RAC on an SGeRAC cluster. Refer to the Oracle documentation for Oracle installation details. Install Oracle Software into CFS Home Oracle RAC software is installed using the Oracle Universal Installer. This section describes installation of Oracle RAC software onto a CFS home. 1. Oracle Pre-installation Steps a. Create user accounts (a sketch follows below).
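A minimal sketch of step 1a, assuming the conventional oracle user and dba group (the IDs are illustrative); repeat on every node, keeping user and group IDs identical across the cluster:
# groupadd -g 201 dba
# useradd -u 201 -g dba -d /home/oracle -s /usr/bin/sh oracle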
Serviceguard Configuration for Oracle 9i RAC Installing Oracle 9i RAC # ll
total 0
drwxr-xr-x  2 root    root   96 Jun  3 11:43 lost+found
drwxr-xr-x  2 oracle  dba    96 Jun  3 13:45 oradat
d. Set up CFS directory for Server Management. Preallocate space for srvm (200MB) # prealloc /cfs/cfssrvm/ora_srvm 209715200 # chown oracle:dba /cfs/cfssrvm/ora_srvm 2. Install Oracle RAC Software a.
Serviceguard Configuration for Oracle 9i RAC Installing Oracle 9i RAC
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:$ORACLE_HOME/rdbms/lib
SHLIB_PATH=$ORACLE_HOME/lib32:$ORACLE_HOME/rdbms/lib32
export LD_LIBRARY_PATH SHLIB_PATH
export CLASSPATH=/opt/java1.3/lib
CLASSPATH=$CLASSPATH:$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export CLASSPATH
export DISPLAY={display}:0.0
2. Set up Listeners with Oracle Network Configuration Assistant $ netca 3.
Serviceguard Configuration for Oracle 9i RAC Verify that Oracle Disk Manager is Configured Verify that Oracle Disk Manager is Configured NOTE The following steps are specific to CFS. 1. Check the license: # /opt/VRTS/bin/vxlictest -n "VERITAS Storage Foundation for Oracle" -f "ODM" The following output will be displayed: Using VERITAS License Manager API Version 3.00, Build 2 ODM feature is licensed 2.
Serviceguard Configuration for Oracle 9i RAC Configure Oracle to use Oracle Disk Manager Library Configure Oracle to use Oracle Disk Manager Library NOTE The following steps are specific to CFS. 1. Log on as the Oracle user 2. Shut down the database 3. Link the Oracle Disk Manager library into Oracle home using the following commands: For HP 9000 systems: $ rm ${ORACLE_HOME}/lib/libodm9.sl $ ln -s /opt/VRTSodm/lib/libodm.sl \ ${ORACLE_HOME}/lib/libodm9.sl For Integrity systems: $ rm ${ORACLE_HOME}/lib/libodm9.
Serviceguard Configuration for Oracle 9i RAC Verify Oracle Disk Manager is Running Verify Oracle Disk Manager is Running NOTE The following steps are specific to CFS. 1. Start the cluster and Oracle database (if not already started) 2.
Serviceguard Configuration for Oracle 9i RAC Verify Oracle Disk Manager is Running 3. Verify that Oracle Disk Manager is loaded with the following command: # kcmodule -P state odm The following output will be displayed: state loaded 4. In the alert log, verify the Oracle instance is running. The log should contain output similar to the following: Oracle instance running with ODM: VERITAS 4.1 ODM Library, Version 1.
Serviceguard Configuration for Oracle 9i RAC Configuring Oracle to Stop using Oracle Disk Manager Library Configuring Oracle to Stop using Oracle Disk Manager Library NOTE The following steps are specific to CFS. 1. Log in as the Oracle user 2. Shut down the database 3. Change directories: $ cd ${ORACLE_HOME}/lib 4. Remove the file linked to the ODM library: For HP 9000 systems: $ rm libodm9.sl $ ln -s ${ORACLE_HOME}/lib/libodmd9.sl \ ${ORACLE_HOME}/lib/libodm9.sl For Integrity systems: $ rm libodm9.
Serviceguard Configuration for Oracle 9i RAC Using Packages to Configure Startup and Shutdown of RAC Instances Using Packages to Configure Startup and Shutdown of RAC Instances To automate the startup and shutdown of RAC instances on the nodes of the cluster, you can create packages which activate the appropriate volume groups and then run RAC. Refer to the section “Creating Packages to Launch Oracle RAC Instances.” NOTE The maximum number of RAC instances for Oracle 9i is 127 per cluster.
Serviceguard Configuration for Oracle 9i RAC Using Packages to Configure Startup and Shutdown of RAC Instances 1. Shut down the Oracle applications, if any. 2. Shut down Oracle. 3. Deactivate the database volume groups or disk groups. 4. Shut down the cluster (cmhaltnode or cmhaltcl). If the shutdown sequence described above is not followed, cmhaltcl or cmhaltnode may fail with a message that GMS clients (RAC 9i) are active or that shared volume groups are active.
Serviceguard Configuration for Oracle 9i RAC Using Packages to Configure Startup and Shutdown of RAC Instances If you are using CVM disk groups for the RAC database, be sure to include the name of each disk group on a separate STORAGE_GROUP line in the configuration file. If you are using CFS or CVM for RAC shared storage with multi-node packages, the package containing the RAC instance should be configured with package dependency to depend on the multi-node packages.
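A sketch of the relevant lines in a package configuration file (the package, disk group, and dependency names are hypothetical; the dependency parameter names follow the standard package ASCII template):
PACKAGE_NAME           pkg_rac1
STORAGE_GROUP          ops_dg
DEPENDENCY_NAME        mp1_dep
DEPENDENCY_CONDITION   SG-CFS-MP-1=UP
DEPENDENCY_LOCATION    SAME_NODE
Use STORAGE_GROUP lines for CVM disk groups, or the dependency parameters when the storage is managed by multi-node packages, as described above.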
Serviceguard Configuration for Oracle 9i RAC Using Packages to Configure Startup and Shutdown of RAC Instances own script, and copying it to all nodes that can run the package. This script should contain the cmmodpkg -e command and activate the package after RAC and the cluster manager have started. Adding or Removing Packages on a Running Cluster You can add or remove packages while the cluster is running, subject to the limit of MAX_CONFIGURED_PACKAGES.
Serviceguard Configuration for Oracle 9i RAC Using Packages to Configure Startup and Shutdown of RAC Instances First, generate a control script template: # cmmakepkg -s /etc/cmcluster/pkg1/control.sh You may customize the script, as described in the section, “Customizing the Package Control Script.” Customizing the Package Control Script Check the definitions and declarations at the beginning of the control script using the information in the Package Configuration worksheet.
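After customizing, one might distribute the script and apply the package configuration; a sketch assuming a configuration file pkg1.config in the same directory (both names hypothetical):
# rcp /etc/cmcluster/pkg1/control.sh node2:/etc/cmcluster/pkg1/control.sh
# cmcheckconf -P /etc/cmcluster/pkg1/pkg1.config
# cmapplyconf -P /etc/cmcluster/pkg1/pkg1.config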
Serviceguard Configuration for Oracle 9i RAC Using Packages to Configure Startup and Shutdown of RAC Instances • Add service command(s) • Add a service restart parameter, if desired. NOTE Use care in defining service run commands. Each run command is executed by the control script in the following way: • The cmrunserv command executes each run command and then monitors the process id of the process created by the run command.
Serviceguard Configuration for Oracle 9i RAC Using Packages to Configure Startup and Shutdown of RAC Instances Customizing the Control Script for RAC Instances Use the package control script to perform the following: • Activation and deactivation of RAC volume groups. • Startup and shutdown of the RAC instance. • Monitoring of the RAC instance. Set RAC environment variables in the package control script to define the correct execution environment for RAC.
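A minimal sketch of the RAC-related settings in a control script (the variable names are from the standard control script template; the volume group, service name, and monitor script are hypothetical):
VGCHANGE="vgchange -a s"                    # activate volume groups in shared mode
VG[0]="vg_ops"                              # shared volume group holding the RAC data
SERVICE_NAME[0]="rac1_mon"                  # service that monitors the RAC instance
SERVICE_CMD[0]="/etc/cmcluster/pkg1/monitor_rac.sh"
SERVICE_RESTART[0]=""                       # no automatic restart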
Serviceguard Configuration for Oracle 9i RAC Using Packages to Configure Startup and Shutdown of RAC Instances Using the Command Line to Configure an Oracle RAC Instance Package Serviceguard Manager provides a template to configure package behavior that is specific to an Oracle RAC Instance package. The RAC Instance package starts the Oracle RAC instance, monitors the Oracle processes, and stops the RAC instance.
Serviceguard Configuration for Oracle 9i RAC Using Packages to Configure Startup and Shutdown of RAC Instances 5. Copy the Oracle shell script templates from the ECMT default source directory to the package directory. # cd /etc/cmcluster/pkg/${SID_NAME} # cp -p /opt/cmcluster/toolkit/oracle/* . Example: # cd /etc/cmcluster/pkg/ORACLE_TEST0 # cp -p /opt/cmcluster/toolkit/oracle/* . Edit haoracle.conf as per the README. 6. Gather the package service name for monitoring Oracle instance processes.
Serviceguard Configuration for Oracle 9i RAC Using Packages to Configure Startup and Shutdown of RAC Instances Using Serviceguard Manager to Configure Oracle RAC Instance Package The following steps use the information from the example in section 2.2. It is assumed that the SGeRAC cluster environment is configured and the ECMT can be used to start the Oracle RAC database instance. 1. Start Serviceguard Manager and Connect to the cluster. Figure 3-1 shows a RAC Instance package for node sg21.
Serviceguard Configuration for Oracle 9i RAC Using Packages to Configure Startup and Shutdown of RAC Instances Next select the check box “Enable template(x)” to enable Package Template for Oracle RAC. The template defaults can be reset with the “Reset template defaults” push button. When enabling the template for Oracle RAC, the package can only be run on one node. 4. Select the Node tab and select the node to run this package. 5. Select Networks tab and add monitored subnet for package. 6.
Maintenance and Troubleshooting 4 Maintenance and Troubleshooting This chapter includes information about carrying out routine maintenance on a Real Application Cluster configuration. As presented here, these tasks differ in some details from the similar tasks described in the Managing Serviceguard user’s guide.
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command Reviewing Cluster and Package States with the cmviewcl Command A cluster or its component nodes may be in several different states at different points in time. Status information for clusters, packages and other cluster elements is shown in the output of the cmviewcl command and in some displays in Serviceguard Manager.
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command Examples of Cluster and Package States The following is an example of the output generated by the cmviewcl command:
CLUSTER      STATUS
cluster_mo   up

  NODE    STATUS
  minie   up

  Quorum_Server_Status:
    NAME    STATUS
    white   up

  Network_Parameters:
    INTERFACE   STATUS
    PRIMARY     up
    PRIMARY     up
    STANDBY     up

  NODE   STATUS
  mo     up

  Quorum_Server_Status:
    NAME    STATUS
    white   up

  Network_Parameters:
    INTERFACE   STATUS
    PRIMARY     up
    PRIMARY     up
    STANDBY
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command
  mo     up

  Script_Parameters:
    ITEM      STATUS   MAX_RESTARTS
    Service   up       0
    Service   up       5
    Service   up       5
    Service   up       0
    Service   up       0

PACKAGE       STATUS   STATE     AUTO_RUN   SYSTEM
SG-CFS-DG-1   up       running   enabled    no

  NODE_NAME   STATUS   SWITCHING
  minie       up       enabled

  Dependency_Parameters:
    DEPENDENCY_NAME   SATISFIED
    SG-CFS-pkg        yes

  NODE_NAME   STATUS   SWITCHING
  mo          up       enabled

  Dependency_Parameters:
    DEPENDENCY_NAME   SATISFIED
    SG-CFS-pkg        yes

PACKAGE       STATUS   STATE     AUTO_RUN   SYSTEM
SG-CFS-MP-1   up       running   enabled    no

  NODE_NAME   STATUS   SWITCHING
  minie       up       enabled

  Dependency_Parameters:
    DEPENDENCY_NAME   SATISFIED
    SG-CFS-DG-1       yes

  NODE_NAME   STATUS   SWITCHING
  mo          up       enabled

  Dependency_Parameters:
    DEPENDENCY_NAME   SATISFIED
    SG-CFS-DG-1       yes

PACKAGE       STATUS   STATE     AUTO_RUN   SYSTEM
SG-CFS-MP-3   up       running   enabled    no

  NODE_NAME   STATUS   SWITCHING
  minie       up       enabled

  Dependency_Parameters:
    DEPENDENCY_NAME   SATISFIED
    SG-CFS-DG-1       yes

  NODE_NAME   STATUS   SWITCHING
  mo          up       enabled

  Dependency_Parameters:
    DEPENDENCY_NAME   SATISFIED
    SG-CFS-DG-1       yes
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command Node Status and State The status of a node is either up (active as a member of the cluster) or down (inactive in the cluster), depending on whether its cluster daemon is running or not. Note that a node might be down from the cluster perspective, but still up and running HP-UX. A node may also be in one of the following states: • Failed. A node never sees itself in this state.
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command Package Switching Attributes Packages also have the following switching attributes: • Package Switching. Enabled means that the package can switch to another node in the event of failure. • Switching Enabled for a Node. Enabled means that the package can switch to the referenced node.
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command
MEMBER        the ID number of a member of a group
PID           the process ID of the group member
MEMBER_NODE   the node on which the group member is running
Service Status Services have only status, as follows: • Up. The service is being monitored. • Down. The service is not running. It may have halted or failed. • Uninitialized.
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command Failover and Failback Policies Packages can be configured with one of two values for the FAILOVER_POLICY parameter: • CONFIGURED_NODE. The package fails over to the next node in the node list in the package configuration file. • MIN_PACKAGE_NODE. The package fails over to the node in the cluster with the fewest running packages on it.
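In the package configuration file these policies appear as parameters; a sketch:
FAILOVER_POLICY   MIN_PACKAGE_NODE
FAILBACK_POLICY   MANUAL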
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command
    Policy_Parameters:
      POLICY_NAME   CONFIGURED_VALUE
      Start         configured_node
      Failback      manual

    Node_Switching_Parameters:
      NODE_TYPE   STATUS   SWITCHING   NAME
      Primary     up       enabled     ftsys9   (current)

  NODE      STATUS   STATE
  ftsys10   up       running

  Network_Parameters:
    INTERFACE   STATUS   PATH
    PRIMARY     up       28.1
    STANDBY     up       32.
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command
  Quorum Server Status:
    NAME    STATUS
    lp-qs   up
...
  NODE      STATUS   STATE
  ftsys10   up       running

  Quorum Server Status:
    NAME    STATUS   STATE
    lp-qs   up       running
CVM Package Status If the cluster is using the VERITAS Cluster Volume Manager for disk storage, the system multi-node package VxVM-CVM-pkg must be running on all active nodes for applications to be able to access CVM disk groups.
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command
    ITEM      STATUS   MAX_RESTARTS   RESTARTS   NAME
    Service   up       0              0          VxVM-CVM-pkg.srv
Status After Moving the Package to Another Node After issuing the following command: # cmrunpkg -n ftsys9 pkg2 the output of the cmviewcl -v command is as follows:
CLUSTER   STATUS
example   up

  NODE     STATUS   STATE
  ftsys9   up       running

  Network_Parameters:
    INTERFACE   STATUS   PATH
    PRIMARY     up       56/36.
    STANDBY     up
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command
    Policy_Parameters:
      POLICY_NAME   CONFIGURED_VALUE
      Failover      min_package_node
      Failback      manual

    Script_Parameters:
      ITEM      STATUS   NAME         MAX_RESTARTS
      Service   up       service2.1   0
      Subnet    up       15.13.168.
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command Both packages are now running on ftsys9, and pkg2 is enabled for switching. Node ftsys10 is running the Serviceguard daemon, but no packages are running on it.
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command
  NODE      STATUS   STATE
  ftsys10   up       running

  Network_Parameters:
    INTERFACE   STATUS   PATH
    PRIMARY     up       28.

  Serial_Heartbeat:
    DEVICE_FILE_NAME
    /dev/tty0p0
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command
UNOWNED_PACKAGES

PACKAGE   STATUS   STATE    AUTO_RUN   NODE
PKG3      down     halted   enabled    unowned

  Policy_Parameters:
    POLICY_NAME   CONFIGURED_VALUE
    Failover      min_package_node
    Failback      automatic

  Script_Parameters:
    ITEM       STATUS   NODE_NAME   NAME
    Resource   up       manx        /resource/random
    Subnet     up       manx        192.8.15.
    Resource   up       burmese     /resource/random
    Subnet     up       burmese     192.8.15.
    Resource   up       tabby       /resource/random
    Subnet     up       tabby       192.8.15.
    Resource   up       persian     /resource/random
    Subnet     up       persian     192.8.15.
Maintenance and Troubleshooting Online Reconfiguration Online Reconfiguration The online reconfiguration feature provides a method to make configuration changes online to a Serviceguard Extension for RAC (SGeRAC) cluster. Specifically, it provides the ability to add and/or delete nodes from a running SGeRAC cluster, and to reconfigure an SLVM VG while it is being accessed by only one node.
Maintenance and Troubleshooting Managing the Shared Storage Managing the Shared Storage Single Node Online volume Re-Configuration (SNOR) The SLVM Single Node Online volume Re-configuration (SNOR) feature provides a method for changing the configuration for an active shared volume group (VG) in a SGeRAC cluster. SLVM SNOR allows the reconfiguration of a shared volume group and of logical and physical volumes in the VG. This is done while keeping the VG active on a single node in exclusive mode.
Maintenance and Troubleshooting Managing the Shared Storage # vgchange -a e -x vg_shared NOTE Ensure that none of the mirrored logical volumes in this volume group have Consistency Recovery set to MWC (see lvdisplay(1M)). Changing the mode back to “shared” will not be allowed in that case, since Mirror Write Cache consistency recovery (MWC) is not valid in volume groups activated in shared mode. 5.
Maintenance and Troubleshooting Managing the Shared Storage The vgimport(1M)/vgexport(1M) sequence will not preserve the order of physical volumes in the /etc/lvmtab file. If the ordering is significant due to the presence of active-passive devices, or if the volume group has been configured to maximize throughput by ordering the paths accordingly, the ordering would need to be repeated. 7. Change the activation mode back to “shared” on the node in the cluster where the volume group vg_shared is active.
Maintenance and Troubleshooting Managing the Shared Storage This command is issued from the configuration node only, and the cluster must be running on all nodes for the command to succeed. Note that both the -S and the -c options are specified. The -S y option makes the volume group shareable, and the -c y option causes the cluster id to be written out to all the disks in the volume group.
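Combining both options, the command described here takes the following form (using the volume group name from this chapter's examples):
# vgchange -S y -c y /dev/vg_ops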
Maintenance and Troubleshooting Managing the Shared Storage NOTE Do not share volume groups that are not part of the RAC configuration unless shared access is controlled. Deactivating a Shared Volume Group Issue the following command from each node to deactivate the shared volume group: # vgchange -a n /dev/vg_ops Remember that volume groups remain shareable even when nodes enter and leave the cluster.
Maintenance and Troubleshooting Managing the Shared Storage 4. From node 1, use the vgchange command to deactivate the volume group: # vgchange -a n /dev/vg_ops 5. Use the vgchange command to mark the volume group as unshareable: # vgchange -S n -c n /dev/vg_ops 6. Prior to making configuration changes, activate the volume group in normal (non-shared) mode: # vgchange -a y /dev/vg_ops 7. Use normal LVM commands to make the needed changes.
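For example, step 7 might extend the volume group with a new disk and grow a logical volume (the disk name and size are hypothetical):
# vgextend /dev/vg_ops /dev/dsk/c2t4d0
# lvextend -L 1024 /dev/vg_ops/lvol1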
Maintenance and Troubleshooting Managing the Shared Storage 13. Use the vgimport command, specifying the map file you copied from the configuration node. In the following example, the vgimport command is issued on the second node for the same volume group that was modified on the first node: # vgimport -v -m /tmp/vg_ops.map /dev/vg_ops /dev/dsk/c0t2d0 /dev/dsk/c1t2d0 14.
Maintenance and Troubleshooting Managing the Shared Storage One node will identify itself as the master. Create disk groups from this node. Similarly, you can delete VxVM or CVM disk groups provided they are not being used by a cluster node at the time. NOTE For CVM without CFS, if you are adding a disk group to the cluster configuration, make sure you also modify any package or create the package control script that imports and deports this disk group.
Maintenance and Troubleshooting Removing Serviceguard Extension for RAC from a System Removing Serviceguard Extension for RAC from a System If you wish to remove a node from Serviceguard Extension for RAC operation, use the swremove command to delete the software. Note the following: NOTE • The cluster service should not be running on the node from which you will be deleting Serviceguard Extension for RAC.
Maintenance and Troubleshooting Monitoring Hardware Monitoring Hardware Good standard practice in managing a high availability system includes careful fault monitoring, both to prevent failures where possible and to react to them swiftly when they do occur.
Maintenance and Troubleshooting Adding Disk Hardware Adding Disk Hardware As your system expands, you may need to add disk hardware. This also means modifying the logical volume structure. Use the following general procedure: 1. Halt packages. 2. Ensure that the Oracle database is not active on any node. 3. Deactivate and mark as unshareable any shared volume groups. 4. Halt the cluster. 5. Deactivate automatic cluster startup. 6. Shut down and power off the system before installing new hardware. 7.
Maintenance and Troubleshooting Replacing Disks Replacing Disks The procedure for replacing a faulty disk mechanism depends on the type of disk configuration you are using and on the type of Volume Manager software. For a description of replacement procedures using VERITAS VxVM or CVM, refer to the chapter on “Administering Hot-Relocation” in the VERITAS Volume Manager Administrator’s Guide. Additional information is found in the VERITAS Volume Manager Troubleshooting Guide.
Maintenance and Troubleshooting Replacing Disks 1. Identify the physical volume name of the failed disk and the name of the volume group in which it was configured. In the following examples, the volume group name is shown as /dev/vg_sg01 and the physical volume name is shown as /dev/dsk/c2t3d0. Substitute the volume group and physical volume names that are correct for your system. 2. Identify the names of any logical volumes that have extents defined on the failed physical volume. 3.
Maintenance and Troubleshooting Replacing Disks 2. Halt all the applications using the SLVM VG on all the nodes but one. 3. Re-activate the volume group in exclusive mode on the remaining node of the cluster: # vgchange -a e -x vg_ops 4. Reconfigure the volume: vgextend, lvextend, disk addition, etc. 5.
Maintenance and Troubleshooting Replacing Disks This will synchronize the stale logical volume mirrors. This step can be time-consuming, depending on hardware characteristics and the amount of data. 6. Deactivate the volume group: # vgchange -a n vg_ops 7. Activate the volume group on all the nodes in shared mode using vgchange -a s: # vgchange -a s vg_ops Replacing a Lock Disk Replacing a failed lock disk mechanism is the same as replacing a data disk.
Maintenance and Troubleshooting Replacing Disks NOTE You cannot use inline terminators with internal FW/SCSI buses on D and K series systems, and you cannot use the inline terminator with single-ended SCSI buses. You must not use an inline terminator to connect a node to a Y cable. Figure 4-1 shows a three-node cluster with two F/W SCSI buses. The solid line and the dotted line represent different buses, both of which have inline terminators attached to nodes 1 and 3.
Maintenance and Troubleshooting Replacing Disks Use the following procedure to disconnect a node that is attached to the bus with an in-line SCSI terminator or with a Y cable: 1. Move any packages on the node that requires maintenance to a different node. 2. Halt the node that requires maintenance. The cluster will re-form, and activity will continue on other nodes. Packages on the halted node will switch to other available nodes if they are configured to switch. 3. Disconnect the power to the node. 4.
Maintenance and Troubleshooting Replacement of I/O Cards Replacement of I/O Cards After an I/O card failure, you can replace the card using the following steps. It is not necessary to bring the cluster down to do this if you are using SCSI inline terminators or Y cables at each node. 1. Halt the node by using Serviceguard Manager or the cmhaltnode command. Packages should fail over normally to other nodes. 2. Remove the I/O cable from the card.
Maintenance and Troubleshooting Replacement of LAN Cards Replacement of LAN Cards If a LAN card fails and must be replaced, you can replace it on-line or off-line depending on the type of hardware and operating system you are running. It is not necessary to bring the cluster down to do this. Off-Line Replacement The following steps show how to replace a LAN card off-line. These steps apply to both HP-UX 11.0 and 11i: 1. Halt the node by using the cmhaltnode command. 2.
Maintenance and Troubleshooting Replacement of LAN Cards After Replacing the Card After the on-line or off-line replacement of LAN cards has been done, Serviceguard will detect that the MAC address (LLA) of the card has changed from the value stored in the cluster binary configuration file, and it will notify the other nodes in the cluster of the new MAC address. The cluster will operate normally after this.
Maintenance and Troubleshooting Monitoring RAC Instances Monitoring RAC Instances The DB Provider provides the capability to monitor RAC databases. RBA (Role Based Access) enables a non-root user to monitor RAC instances using Serviceguard Manager.
Software Upgrades A Software Upgrades Serviceguard Extension for RAC (SGeRAC) software upgrades can be done in the following two ways: • rolling upgrade • non-rolling upgrade Instead of an upgrade, moving to a new version can be done with: • migration with cold install Rolling upgrade is a feature of SGeRAC that allows you to perform a software upgrade on a given node without bringing down the entire cluster. SGeRAC supports rolling upgrades on version A.11.
Software Upgrades One advantage of both rolling and non-rolling upgrades versus cold install is that upgrades retain the pre-existing operating system, software and data. Conversely, the cold install process erases the pre-existing system; you must re-install the operating system, software and data. For these reasons, a cold install may require more downtime.
Software Upgrades Rolling Software Upgrades Rolling Software Upgrades SGeRAC version A.11.15 and later allow you to roll forward to any higher revision provided all of the following conditions are met: • The upgrade must be done on systems of the same architecture (HP 9000 or Integrity Servers). • All nodes in the cluster must be running on the same version of HP-UX. • Each node must be running a version of HP-UX that supports the new SGeRAC version.
Software Upgrades Rolling Software Upgrades NOTE It is optional to set this parameter to “1”. If you want the node to join the cluster at boot time, set this parameter to “1”; otherwise, set it to “0” (see the sketch below). 6. Restart the cluster on the upgraded node (if desired). You can do this in Serviceguard Manager, or from the command line, issue the Serviceguard cmrunnode command. 7. Restart Oracle (RAC, CRS, Clusterware, OPS) software on the local node. 8.
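Regarding the NOTE above: if the parameter in question is the cluster autostart flag in /etc/rc.config.d/cmcluster (an assumption; confirm against your release's documentation), the setting would look like:
AUTOSTART_CMCLD=1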
Software Upgrades Rolling Software Upgrades Example of Rolling Upgrade The following example shows a simple rolling upgrade on two nodes, each running standard Serviceguard and RAC instance packages, as shown in Figure A-1. (This and the following figures show the starting point of the upgrade as SGeRAC A.11.15 for illustration only. A roll to SGeRAC version A.11.16 is shown.) SGeRAC rolling upgrade requires the same operating system version on all nodes.
Software Upgrades Rolling Software Upgrades Step 1. 1. Halt Oracle (RAC, CRS, Clusterware, OPS) software on node 1. 2. Halt node 1. This will cause the node’s packages to start up on an adoptive node. You can do this in Serviceguard Manager, or from the command line issue the following: # cmhaltnode -f node1 This will cause the failover package to be halted cleanly and moved to node 2. The Serviceguard daemon on node 1 is halted, and the result is shown in Figure A-2.
Software Upgrades Rolling Software Upgrades Step 2. Upgrade node 1 and install the new version of Serviceguard and SGeRAC (A.11.16), as shown in Figure A-3. NOTE If you install Serviceguard and SGeRAC separately, Serviceguard must be installed before installing SGeRAC. Figure A-3 Node 1 Upgraded to SG/SGeRAC 11.
Software Upgrades Rolling Software Upgrades Step 3. 1. Restart the cluster on the upgraded node (node 1) (if desired). You can do this in Serviceguard Manager, or from the command line issue the following: # cmrunnode node1 2. At this point, different versions of the Serviceguard daemon (cmcld) are running on the two nodes, as shown in Figure A-4. 3. Start Oracle (RAC, CRS, Clusterware, OPS) software on node 1.
Software Upgrades Rolling Software Upgrades Step 4. 1. Halt Oracle (RAC, CRS, Clusterware, OPS) software on node 2. 2. Halt node 2. You can do this in Serviceguard Manager, or from the command line issue the following: # cmhaltnode -f node2 This causes both packages to move to node 1; see Figure A-5. 3. Upgrade node 2 to Serviceguard and SGeRAC (A.11.16) as shown in Figure A-5. 4. When upgrading is finished, enter the following command on node 2 to restart the cluster on node 2: # cmrunnode node2 5.
Software Upgrades Rolling Software Upgrades Step 5. Move PKG2 back to its original node. Use the following commands: # cmhaltpkg pkg2 # cmrunpkg -n node2 pkg2 # cmmodpkg -e pkg2 The cmmodpkg command re-enables switching of the package, which is disabled by the cmhaltpkg command. The final running cluster is shown in Figure A-6.
Software Upgrades Rolling Software Upgrades Limitations of Rolling Upgrades The following limitations apply to rolling upgrades: • During a rolling upgrade, you should issue Serviceguard/SGeRAC commands (other than cmrunnode and cmhaltnode) only on a node containing the latest revision of the software. Performing tasks on a node containing an earlier revision of the software will not work or will cause inconsistent results.
Software Upgrades Non-Rolling Software Upgrades Non-Rolling Software Upgrades A non-rolling upgrade allows you to perform a software upgrade from any previous revision to any higher revision or between operating system versions. For example, you may do a non-rolling upgrade from SGeRAC A.11.14 on HP-UX 11i v1 to A.11.16 on HP-UX 11i v2, provided both systems are running the same architecture.
Software Upgrades Non-Rolling Software Upgrades Steps for Non-Rolling Upgrades Use the following steps for a non-rolling software upgrade: 1. Halt Oracle (RAC, CRS, Clusterware, OPS) software on all nodes in the cluster. 2. Halt all nodes in the cluster. # cmhaltcl -f 3. If necessary, upgrade all the nodes in the cluster to the new HP-UX release. 4. Upgrade all the nodes in the cluster to the new Serviceguard/SGeRAC release. 5. Restart the cluster. Use the following command: # cmruncl 6.
Software Upgrades Non-Rolling Software Upgrades Limitations of Non-Rolling Upgrades The following limitations apply to non-rolling upgrades: • Binary configuration files may be incompatible between releases of Serviceguard. Do not manually copy configuration files between nodes. • It is necessary to halt the entire cluster when performing a non-rolling upgrade.
Software Upgrades Non-Rolling Software Upgrades Migrating an SGeRAC Cluster with Cold Install There may be circumstances when you prefer a cold install of the HP-UX operating system rather than an upgrade. The cold install process erases the pre-existing operating system and data and then installs the new operating system and software; you must then restore the data. CAUTION The cold install process erases the pre-existing software, operating system, and data.
Software Upgrades Non-Rolling Software Upgrades 220 Appendix A
Blank Planning Worksheets B Blank Planning Worksheets This appendix reprints blank planning worksheets used in preparing the RAC cluster. You can duplicate any of these worksheets that you find useful and fill them in as a part of the planning process.
Blank Planning Worksheets LVM Volume Group and Physical Volume Worksheet LVM Volume Group and Physical Volume Worksheet VG and PHYSICAL VOLUME WORKSHEET Page ___ of ____ ========================================================================== Volume Group Name: ______________________________________________________ PV Link 1 PV Link2 Physical Volume Name:_____________________________________________________ Physical Volume Name:_____________________________________________________ Physical Volume Name:
Blank Planning Worksheets VxVM Disk Group and Disk Worksheet VxVM Disk Group and Disk Worksheet DISK GROUP WORKSHEET Page ___ of ____ =========================================================================== Disk Group Name: __________________________________________________________ Physical Volume Name:______________________________________________________ Physical Volume Name:______________________________________________________ Physical Volume Name:____________________________________________________
Blank Planning Worksheets Oracle Logical Volume Worksheet Oracle Logical Volume Worksheet NAME SIZE Oracle Control File 1: _____________________________________________________ Oracle Control File 2: _____________________________________________________ Oracle Control File 3: _____________________________________________________ Instance 1 Redo Log 1: _____________________________________________________ Instance 1 Redo Log 2: _____________________________________________________ Instance 1 Red
Index A activation of volume groups in shared mode, 187 adding packages on a running cluster, 159 administration cluster and package states, 168 array replacing a faulty mechanism, 195, 196, 197 AUTO_RUN parameter, 157 AUTO_START_TIMEOUT in sample configuration file, 60, 124 B building a cluster CVM infrastructure, 73, 136 building an RAC cluster displaying the logical volume infrastructure, 57, 121 logical volume infrastructure, 48, 112 building logical volumes for RAC, 54, 118 C CFS, 65, 70 creating stora
Index in sample package control script, 161 FS_MOUNT_OPT in sample package control script, 161 G GMS group membership services, 23 group membership services define, 23 H hardware adding disks, 194 monitoring, 193 heartbeat subnet address parameter in cluster manager configuration, 47, 110 HEARTBEAT_INTERVAL in sample configuration file, 60, 124 HEARTBEAT_IP in sample configuration file, 60, 124 high availability cluster defined, 16 I in-line terminator permitting online hardware maintenance, 198 installing
Index optimizing packages for large numbers of storage units, 161 Oracle demo database files, 55, 80, 119, 145 Oracle 10 RAC installing binaries, 89 Oracle 10g RAC introducing, 33 Oracle 9i RAC installing, 148 introducing, 101 Oracle Disk Manager configuring, 92 Oracle Parallel Server starting up instances, 156 Oracle RAC installing, 59, 123 Oracle10g installing, 88 P package basic concepts, 17, 18 moving status, 178 state, 175 status and state, 172 switching status, 179 package configuration service name p
Index Serviceguard Extension for RAC installing, 46, 109 introducing, 15 shared mode activation of volume groups, 187 deactivation of volume groups, 188 shared volume groups making volume groups shareable, 186 sharing volume groups, 57, 121 SLVM making volume groups shareable, 186 SNOR configuration, 184 software upgrades, 205 state cluster, 175 node, 172 of cluster and package, 168 package, 172, 175 status cluster, 171 halting node, 180 moving package, 178 network, 174 node, 172 normal running RAC, 176 of