Using Serviceguard Extension for RAC
Manufacturing Part Number: T1859-90048
June 2007
Legal Notices
© Copyright 2003-2007 Hewlett-Packard Development Company, L.P.
Publication Dates: June 2003, June 2004, February 2005, December 2005, March 2006, May 2006, February 2007, June 2007
Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
Printing History
Table 1  Document Edition and Printing Date
Printing Date   Part Number   Edition                               Formats
June 2003       T1859-90006   First Edition                         Print, CD-ROM (Instant Information), and Web (http://www.docs.hp.com/)
June 2004       T1859-90017   Second Edition                        Print, CD-ROM (Instant Information), and Web (http://www.docs.hp.com/)
February 2005   T1859-90017   Second Edition, February 2005 Update  Web (http://www.docs.hp.com/)
October 2005    T1859-90033   Third Edition                         Print, CD-ROM (Instant Information), and Web (http://www.docs.hp.com/)
The last printing date and part number indicate the current edition, which applies to the 11.18 version of Serviceguard Extension for RAC (Oracle Real Application Cluster). The printing date changes when a new edition is printed. (Minor corrections and updates which are incorporated at reprint do not cause the date to change.) The part number is revised when extensive technical changes are incorporated. New editions of this manual will incorporate all material updated since the previous edition.
Preface
This fifth printing of the manual has been updated for Serviceguard Extension for RAC (Oracle Real Application Cluster) Version A.11.18 on HP-UX 11i v2 and 11i v3, Veritas Cluster File System (CFS)/Cluster Volume Manager (CVM) from Symantec version 5.0, the Cluster Interconnect Subnet Monitoring feature, and the SGeRAC Toolkit.
Related Publications
The following documents contain additional useful information:
• Clusters for High Availability: a Primer of HP Solutions. Hewlett-Packard Professional Books: Prentice Hall PTR, 2001 (ISBN 0-13-089355-2)
• Serviceguard Extension for RAC Version A.11.18 Release Notes (T1907-90031)
• HP Serviceguard Version A.11.18 Release Notes (B3935-90108)
• Managing Serviceguard Fourteenth Edition (B3936-90117)
• HP Serviceguard Storage Management Suite Version A.01.01 Release Notes (CFS 4.1)
If you will be using Veritas Cluster Volume Manager (CVM) and Veritas Cluster File System (CFS) version 4.1 from Symantec with Serviceguard, refer to the HP Serviceguard Storage Management Suite Version A.01.01 Release Notes (T2771-90030). For CVM and CFS version 5.0, refer to the HP Serviceguard Storage Management Suite Version A.02.00 Release Notes (T2771-90036). These release notes describe suite bundles for the integration of HP Serviceguard A.11.
Command   A command name or qualified command phrase.
Variable  The name of a variable that you may replace in a command or function, or information in a display that represents several possible values.
[ ]       The contents are optional in formats and command descriptions. If the contents are a list separated by |, you must choose one of the items.
{ }       The contents are required in formats and command descriptions. If the contents are a list separated by |, you must choose one of the items.
...
1 Introduction to Serviceguard Extension for RAC
Serviceguard Extension for RAC (SGeRAC) enables the Oracle Real Application Cluster (RAC), formerly known as Oracle Parallel Server RDBMS, to run on HP high availability clusters under the HP-UX operating system. This chapter introduces Serviceguard Extension for RAC and shows where to find different kinds of information in this book.
What is a Serviceguard Extension for RAC Cluster?
A high availability cluster is a grouping of HP servers having sufficient redundancy of software and hardware components that a single point of failure will not disrupt the availability of computer services. High availability clusters configured with Oracle Real Application Cluster software are known as RAC clusters.
RAC on HP-UX lets you maintain a single database image that is accessed by the HP servers in parallel, thereby gaining added processing power without the need to administer separate databases. Further, when properly configured, Serviceguard Extension for RAC provides a highly available database that continues to operate even if one hardware component should fail.
Figure 1-2  Group Membership Services
Using Packages in a Cluster
In order to make other important applications highly available (in addition to the Oracle Real Application Cluster), you can configure your RAC cluster to use packages.
There are also packages that run on several cluster nodes at once and do not fail over. These are called system multi-node packages and multi-node packages. As of Serviceguard Extension for RAC A.11.
Serviceguard Extension for RAC Architecture
This section discusses the main software components used by Serviceguard Extension for RAC in some detail.
Overview of SGeRAC and Cluster File System (CFS)/Cluster Volume Manager (CVM)
SGeRAC supports Veritas Cluster File System (CFS) and Cluster Volume Manager (CVM) from Symantec through Serviceguard. CFS and CVM are not supported on all versions of HP-UX; see "About Veritas CFS and CVM from Symantec" on page 22 for the HP-UX releases that support them.
CFS provides SGeRAC with additional options, such as improved manageability. When planning a RAC cluster, application software can be installed once and made visible to all cluster nodes. A central location is available to store runtime logs, for example, RAC alert logs.
Overview of SGeRAC and Oracle 10g RAC
Starting with Oracle 10g RAC, Oracle bundles its own cluster software. The initial release is called Oracle Cluster Ready Services (CRS). CRS is used both as a generic term referring to the Oracle cluster software and as a specific term referring to a component within the Oracle cluster software. In subsequent releases, the generic CRS is renamed Oracle Clusterware.
NOTE: In this document, the generic terms "CRS" and "Oracle Clusterware" are subsequently referred to as "Oracle Cluster Software". The term CRS is still used when referring to a sub-component of Oracle Cluster Software. For more detailed information on Oracle 10g RAC, refer to Chapter 2, "Serviceguard Configuration for Oracle 10g RAC."
Overview of SGeRAC Cluster Interconnect Subnet Monitoring
In SGeRAC, the Cluster Interconnect Subnet Monitoring feature is used to monitor cluster communication subnets. This feature requires the use of a package configuration parameter known as CLUSTER_INTERCONNECT_SUBNET. It can be set up to monitor certain subnets used by applications that are configured as Serviceguard multi-node packages.
• Allows separation of SG-HB and Oracle RAC-IC traffic (recommended when RAC-IC traffic may interfere with SG-HB traffic).
How Cluster Interconnect Subnet Monitoring Works
The CLUSTER_INTERCONNECT_SUBNET parameter works similarly to the existing SUBNET package configuration parameter. The most notable difference is in the failure handling of the subnets monitored using these individual parameters.
the subnet on any other node does not affect the running instance. No attempt is made automatically to start the package instance on the restored node; the package instance starts on the restored node only if the user explicitly starts it.
Overview of SGeRAC and Oracle 9i RAC
This section describes some of the central components and functionality for SGeRAC and Oracle 9i RAC.
How Serviceguard Works with Oracle 9i RAC
Serviceguard provides the cluster framework for Oracle, a relational database product in which multiple database instances run on different cluster nodes.
Configuring Packages for Oracle RAC Instances
Oracle instances can be configured as packages with a single node in their node list.
NOTE: Packages that start and halt Oracle instances (called instance packages) do not fail over from one node to another; they are single-node packages. You should include only one NODE_NAME in the package ASCII configuration file.
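For illustration only, a minimal excerpt of such a package ASCII file might look like this (the package, node, and script names are hypothetical, not from the original manual):

PACKAGE_NAME ops1_instance_pkg
NODE_NAME node1
RUN_SCRIPT /etc/cmcluster/ops1/instance.cntl
HALT_SCRIPT /etc/cmcluster/ops1/instance.cntl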
Configuring Packages for Oracle Listeners
Oracle listeners can be configured as packages within the cluster (called listener packages). Each node with a RAC instance can be configured with a listener package. Listener packages are configured to automatically fail over from the original node to an adoptive node. When the original node is restored, the listener package automatically fails back to the original node.
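A minimal sketch of the failover-related parameters for a listener package (names are hypothetical; FAILBACK_POLICY AUTOMATIC produces the automatic failback described above):

PACKAGE_NAME listener1_pkg
FAILOVER_POLICY CONFIGURED_NODE
FAILBACK_POLICY AUTOMATIC
NODE_NAME node1
NODE_NAME node2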
Node Failure
RAC cluster configuration is designed so that in the event of a node failure, another node with a separate instance of Oracle can continue processing transactions. Figure 1-3 shows a typical cluster with instances running on both nodes.
Figure 1-3  Before Node Failure
Figure 1-4 shows the condition where Node 1 has failed and Package 1 has been transferred to Node 2.
and is now running on Node 2. Also note that Node 2 can now access both Package 1's disk and Package 2's disk. Oracle instance 2 now handles all database access, since instance 1 has gone down.
Figure 1-4  After Node Failure
In the above figure, pkg1 and pkg2 are not instance packages; they are shown to illustrate the movement of packages in general.
Larger Clusters
Serviceguard Extension for RAC supports clusters of up to 16 nodes. The actual cluster size is limited by the type of storage and the type of volume manager used.
Up to Four Nodes with SCSI Storage
You can configure up to four nodes using a shared F/W SCSI bus; for more than four nodes, FibreChannel must be used. An example of a four-node RAC cluster appears in the following figure.
Figure 1-5  Four-Node RAC Cluster
In this type of configuration, each node runs a separate instance of RAC and may run one or more high availability packages as well. The figure shows a dual Ethernet configuration with all four nodes connected to a disk array (the details of the connections depend on the type of disk array). In addition, each node has a mirrored root disk (R and R').
Figure 1-6  Eight-Node Cluster with XP or EMC Disk Array
FibreChannel switched configurations also are supported, using either an arbitrated loop or fabric login topology. For additional information about supported cluster configurations, refer to the HP 9000 Servers Configuration Guide, available through your HP representative.
Extended Distance Cluster Using Serviceguard Extension for RAC
Basic Serviceguard clusters are usually configured in a single data center, often in a single room, to provide protection against failures in CPUs, interface cards, and software.
2 Serviceguard Configuration for Oracle 10g RAC
This chapter shows the additional planning and configuration that is needed to use Oracle Real Application Clusters 10g with Serviceguard.
Interface Areas
This section documents interface areas where there is expected interaction between SGeRAC and Oracle 10g Cluster Software and RAC.
Group Membership API (NMAPI2)
The NMAPI2 client links with the SGeRAC-provided NMAPI2 library for group membership service. The group membership is layered on top of the SGeRAC cluster membership, where all the primary group members are processes within cluster nodes.
CSS Timeout
When SGeRAC is on the same cluster as Oracle Cluster Software, the CSS timeout is set to a default value of 600 seconds (10 minutes) at Oracle software installation. This timeout is configurable with Oracle tools and should not be changed without ensuring that it allows enough time for Serviceguard Extension for RAC (SGeRAC) reconfiguration and for multipath reconfiguration (if configured) to complete.
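If the timeout does need to be inspected or changed, Oracle's crsctl utility is the usual interface; a hedged example, assuming the Oracle Cluster Software home is in $ORA_CRS_HOME (value in seconds):

# $ORA_CRS_HOME/bin/crsctl get css misscount
# $ORA_CRS_HOME/bin/crsctl set css misscount 600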
Automated Oracle Cluster Software Startup and Shutdown
The preferred mechanism that allows Serviceguard to notify Oracle Cluster Software to start, and to request Oracle Cluster Software to shut down, is the use of Serviceguard packages.
Monitoring
Oracle Cluster Software daemon monitoring is performed through programs initiated by the HP-UX init process.
SGeRAC configurations, the CSS MISSCOUNT value is set to 600 seconds. Multipath failover time is typically between 30 and 120 seconds (for information on multipathing and HP-UX 11i v3, see "About Multipathing" on page 58).
OCR and Vote Device
Shared storage for the OCR and vote device should be on supported shared storage volume managers, with multipath configured and with either the correct multipath failover time or CSS timeout.
If CRS is not configured to start the listener automatically at Oracle Cluster Software startup, the listener startup can be automated with supported commands, such as srvctl and lsnrctl, through scripts or SGeRAC packages. If the SGeRAC package is configured to start the listener, the SGeRAC package would contain the virtual IP address required by the listener.
single point of failure, the CSS heartbeat network should be configured with redundant physical networks under SGeRAC monitoring. Since SGeRAC does not support heartbeat over Hyperfabric (HF) networks, the preferred configuration is for CSS and Serviceguard to share the same cluster interconnect.
RAC Instances
Automated Startup and Shutdown
CRS can be configured to automatically start, monitor, restart, and halt RAC instances. If CRS is not configured to automatically start the RAC instance at Oracle Cluster Software startup, the RAC instance startup can be automated through scripts using supported commands, such as srvctl or sqlplus, in a SGeRAC package to start and halt RAC instances.
NOTE: srvctl and sqlplus are Oracle commands.
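As an illustration, a package script could start and halt an instance with srvctl; a minimal sketch, assuming a database named ver10 with an instance named ver101 (names are hypothetical):

$ srvctl start instance -d ver10 -i ver101
$ srvctl stop instance -d ver10 -i ver101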
Network Planning for Cluster Communication
The network for cluster communication is used for private internal cluster communications. Although one network adapter per node is sufficient for the private network, a minimum of two network adapters for each network is recommended for higher availability.
• RAC GCS (cache fusion) traffic may be very high, so an additional dedicated heartbeat network for Serviceguard needs to be configured.
• Some networks, such as InfiniBand, are not supported by CFS/CVM, so the CSS-HB/RAC-IC traffic may need to be on a separate network that is different from the SG-HB network.
Planning Storage for Oracle Cluster Software
Oracle Cluster Software requires shared storage for the Oracle Cluster Registry (OCR) and a vote device. Automatic Storage Management cannot be used for the OCR and vote device, since these files must be accessible before Oracle Cluster Software starts. The minimum required size for the OCR is 100 MB and for the vote disk is 20 MB.
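On SLVM, for example, the OCR and vote device can be provided as raw logical volumes in a shared volume group; a minimal sketch, assuming the /dev/vg_ops volume group used elsewhere in this chapter, with sizes slightly above the required minimums:

# lvcreate -n ora_ocr -L 108 /dev/vg_ops
# lvcreate -n ora_vote -L 24 /dev/vg_ops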
Planning Storage for Oracle 10g RAC
Volume Planning with SLVM
Storage capacity for the Oracle database must be provided in the form of logical volumes located in shared volume groups. The Oracle software requires at least two log files for each Oracle instance, several Oracle control files, and data files for the database itself.
CFS and CVM are not supported on all versions of HP-UX; see "About Veritas CFS and CVM from Symantec" on page 22 for the HP-UX releases that support them.
NOTE: For specific CFS Serviceguard Storage Management Suite product information:
• For CFS 4.1, see the HP Serviceguard Storage Management Suite Version A.01.01 Release Notes.
• For CFS 5.0, see the HP Serviceguard Storage Management Suite Version A.02.00 Release Notes.
Serviceguard Configuration for Oracle 10g RAC Planning Storage for Oracle 10g RAC ORACLE LOGICAL VOLUME WORKSHEET FOR LVM Page ___ of ____ =============================================================================== RAW LOGICAL VOLUME NAME SIZE (MB) Oracle Cluster Registry: _____/dev/vg_ops/rora_ocr_____100___ (once per cluster) Oracle Cluster Vote Disk: ____/dev/vg_ops/rora_vote_____20___ (once per cluster) Oracle Control File: _____/dev/vg_ops/ropsctl1.
Volume Planning with CVM
Storage capacity for the Oracle database must be provided in the form of volumes located in shared disk groups. The Oracle software requires at least two log files for each Oracle instance, several Oracle control files, and data files for the database itself.
Serviceguard Configuration for Oracle 10g RAC Planning Storage for Oracle 10g RAC ORACLE LOGICAL VOLUME WORKSHEET FOR CVM Page ___ of ____ =============================================================================== RAW VOLUME NAME SIZE (MB) Oracle Cluster Registry: _____/dev/vx/rdsk/ops_dg/ora_ocr_____100___ (once per cluster) Oracle Cluster Vote Disk: ____/dev/vx/rdsk/ops_dg/ora_vote_____20___ (once per cluster) Oracle Control File: _____/dev/vx/rdsk/ops_dg/opsctl1.
Installing Serviceguard Extension for RAC
Installing Serviceguard Extension for RAC includes updating the software and rebuilding the kernel to support high availability cluster operation for Oracle Real Application Clusters.
Veritas Cluster Volume Manager (CVM) and Cluster File System (CFS)
You may choose to configure cluster storage with the Veritas Cluster Volume Manager (CVM) instead of the Volume Manager (VxVM).
Serviceguard Configuration for Oracle 10g RAC Veritas Cluster Volume Manager (CVM) and Cluster File System (CFS) NOTE Chapter 2 CVM (and CFS - Cluster File System) are supported on some, but not all, current releases of HP-UX. See the latest release notes for your version of Serviceguard at http://www.docs.hp.com -> High Availability - > Serviceguard.
Veritas Storage Management Products
Veritas Volume Manager (VxVM) 3.5 is not supported on HP-UX 11i v3. If you are running VxVM 3.5 as part of HP-UX (VxVM-Base), version 3.5 will be upgraded to version 4.1 when you upgrade to HP-UX 11i v3.
About Device Special Files
HP-UX releases up to and including 11i v2 use a naming convention for device files that encodes their hardware path. For example, a device file named /dev/dsk/c3t15d0 would indicate SCSI controller instance 3, SCSI target 15, and SCSI LUN 0. HP-UX 11i v3 introduces a new nomenclature for device files, known as agile addressing (sometimes also called persistent LUN binding).
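On an HP-UX 11i v3 system, the mapping between legacy device files and the new persistent device files can be listed with ioscan; a hedged example:

# ioscan -m dsf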
Support for the SGeRAC Toolkit
The SGeRAC Toolkit provides documentation and scripts to simplify the integration of SGeRAC and the Oracle 10g RAC stack. It also manages the dependency between Oracle Clusterware and Oracle RAC instances, with a full range of storage management options supported in the Serviceguard/SGeRAC environment.
Configuration File Parameters
You need to code specific entries for all the storage groups that you want to use in an Oracle RAC configuration. If you are using LVM, the OPS_VOLUME_GROUP parameter is included in the cluster ASCII file. If you are using Veritas CVM, the STORAGE_GROUP parameter is included in the package ASCII file.
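For example, the relevant entries might look like this (the volume group and disk group names follow the examples used later in this chapter):

In the cluster ASCII file (LVM):
OPS_VOLUME_GROUP /dev/vg_ops

In the package ASCII file (CVM):
STORAGE_GROUP ops_dg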
Cluster Communication Network Monitoring
This section describes the various network configurations for cluster communications in a SGeRAC/10g RAC cluster, and how the package configuration parameter CLUSTER_INTERCONNECT_SUBNET can be used to recover from Oracle cluster communication network failures in certain configurations.
Figure 2-1  Single Network for Cluster Communications
Alternate Configuration - Fast Reconfiguration with Low Node Timeout
High RAC-IC traffic may interfere with SG-HB traffic and cause unnecessary node timeouts if the Serviceguard cluster configuration parameter NODE_TIMEOUT is low. If NODE_TIMEOUT cannot be increased, using an additional network dedicated to SG-HB alone avoids unnecessary node timeouts when RAC-IC traffic is high.
Each primary and standby pair protects against a single failure. With the SG-HB on more than one subnet, a single subnet failure will not trigger a Serviceguard reconfiguration. If the subnet with CSS-HB fails, unless subnet monitoring is used, CSS will resolve the interconnect subnet failure with a CSS cluster reconfiguration.
Serviceguard Configuration for Oracle 10g RAC Cluster Communication Network Monitoring PACKAGE_NAME CRS_PACKAGE PACKAGE_TYPE MULTI_NODE LOCAL_LAN_FAILOVER_ALLOWED YES NODE_FAIL_FAST_ENABLED NO DEPENDENCY_NAME CI-PACKAGE DEPENDENCY_CONDITION CI-PACKAGE=UP DEPENDENCY_LOCATION SAME_NODE Oracle Cluster Interconnect Subnet Package: Package to monitor the CSS-HB subnet NOTE 66 PACKAGE_NAME CI-PACKAGE PACKAGE_TYPE MULTI_NODE LOCAL_LAN_FAILOVER_ALLOWED YES NODE_FAIL_FAST_ENABLED YES CLUSTER
Alternate Configuration - Multiple RAC Databases
When there are multiple independent RAC databases on the same cluster and there is insufficient bandwidth over a single network, a separate network can be used for each database's interconnect traffic. This avoids the RAC-IC traffic of one database interfering with that of another.
for the IMR time interval prior to resolving the subnet failure. In SGeRAC configurations, the default value of the IMR time interval may be as high as seventeen minutes. CLUSTER_INTERCONNECT_SUBNET can be configured for the RAC instance MNP to monitor a RAC-IC subnet that is different from the CSS-HB subnet.
Faster Failover Configuration (SGeFF and SGeRAC)
Faster Failover configurations with SGeFF require at least two separate SG-HB networks, and are restricted to two nodes.
Figure 2-4  Faster Failover Configuration
As shown in Figure 2-4, the Faster Failover configuration uses two SG-HBs on two primary networks with no standby, which enables the quickest determination of node failure and faster failover.
• First network with primary for SG-HB #1 (lan1).
• Second network with primary for SG-HB #2 (lan2).
• Third network with primary and standby for CSS-HB and RAC-IC (lan3/lan4).
• A single failure is protected by primary/standby.
If the subnet with CSS-HB fails, unless subnet monitoring is used, CSS will resolve the interconnect subnet failure with a CSS cluster reconfiguration.
The following is an example of the relevant package configuration parameters:

Oracle Clusterware Package:

PACKAGE_NAME CRS_PACKAGE
PACKAGE_TYPE MULTI_NODE
LOCAL_LAN_FAILOVER_ALLOWED YES
NODE_FAIL_FAST_ENABLED NO
DEPENDENCY_NAME CI-PACKAGE
DEPENDENCY_CONDITION CI-PACKAGE=UP
DEPENDENCY_LOCATION SAME_NODE

Oracle Cluster Interconnect Subnet Package: Package to monitor the CSS-HB subnet

PACKAGE_NAME CI-PACKAGE
Guidelines for Changing Cluster Parameters
This section describes the guidelines for changing the default values of certain Oracle Clusterware and Serviceguard cluster configuration parameters. The guidelines vary depending on whether cluster interconnect subnet monitoring is used to monitor the CSS-HB subnet or not.
Serviceguard Configuration for Oracle 10g RAC Cluster Communication Network Monitoring • Serviceguard cluster configuration parameter NODE_TIMEOUT Then the CSS misscount parameter should be the greater of either: • 195 seconds or • 25 times Serviceguard NODE_TIMEOUT + 15 seconds. Limitations of Cluster Communication Network Monitor The Cluster Interconnect Monitoring feature does not coordinate with any feature handling subnet failures (including self).
Cluster Interconnect Monitoring Restrictions
In addition to the above limitations, the Cluster Interconnect Monitoring feature has the following restrictions:
• A Cluster Lock device/Quorum Server/Lock LUN must be configured in the cluster.
• CLUSTER_INTERCONNECT_SUBNET can be used to monitor only IPv4 subnets.
Creating a Storage Infrastructure with LVM
In addition to configuring the cluster, you create the appropriate logical volume infrastructure to provide access to data from different nodes. This is done with Logical Volume Manager (LVM), Veritas Cluster Volume Manager (CVM), or Veritas Volume Manager (VxVM).
Creating Volume Groups and Logical Volumes
If your volume groups have not been set up, use the procedure in the next sections. If you have already done LVM configuration, skip ahead to the section "Installing Oracle Real Application Clusters."
Selecting Disks for the Volume Group
Obtain a list of the disks on both nodes and identify which device files are used for the same disk on both.
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with LVM where hh must be unique to the volume group you are creating. Use the next hexadecimal number that is available on your system, after the volume groups that are already configured. Use the following command to display a list of existing volume groups: # ls -l /dev/*/group 3.
PVG-strict, that is, it occurs between different physical volume groups; the -n redo1.log option lets you specify the name of the logical volume; and the -L 28 option allocates 28 megabytes.
NOTE: It is important to use the -M n and -c y options for both redo logs and control files. These options allow the redo log files to be resynchronized by SLVM following a system crash, before Oracle recovery proceeds.
PVG-strict, that is, it occurs between different physical volume groups; the -n system.dbf option lets you specify the name of the logical volume; and the -L 408 option allocates 408 megabytes. If Oracle performs resilvering of RAC data files that are mirrored logical volumes, choose a mirror consistency policy of "NONE" by disabling both mirror write caching and mirror consistency recovery.
Creating RAC Volume Groups on Disk Arrays
The procedure described in this section assumes that you are using RAID-protected disk arrays and LVM's physical volume links (PV links) to define redundant data paths from each node in the cluster to every logical unit on the array. On your disk arrays, you should use redundant I/O channels from each node, connecting them to separate controllers on the array.
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with LVM # mknod /dev/vg_ops/group c 64 0xhh0000 The major number is always 64, and the hexadecimal minor number has the form 0xhh0000 where hh must be unique to the volume group you are creating. Use the next hexadecimal number that is available on your system, after the volume groups that are already configured. Use the following command to display a list of existing volume groups: # ls -l /dev/*/group 3.
Creating Logical Volumes for RAC on Disk Arrays
After you create volume groups and add PV links to them, you define logical volumes for data, logs, and control files. The following are some examples:
# lvcreate -n ops1log1.log -L 4 /dev/vg_ops
# lvcreate -n opsctl1.ctl -L 4 /dev/vg_ops
# lvcreate -n system.dbf -L 28 /dev/vg_ops
# lvcreate -n opsdata1.
Table 2-1  Required Oracle File Names for Demo Database (Continued)
Logical Volume Name   LV Size (MB)   Raw Logical Volume Path Name       Oracle File Size (MB)*
opsusers.dbf          128            /dev/vg_ops/ropsusers.dbf          120
opsdata1.dbf          208            /dev/vg_ops/ropsdata1.dbf          200
opsdata2.dbf          208            /dev/vg_ops/ropsdata2.dbf          200
opsdata3.dbf          208            /dev/vg_ops/ropsdata3.dbf          200
opsspfile1.ora        5              /dev/vg_ops/ropsspfile1.ora        5
pwdfile.
Displaying the Logical Volume Infrastructure
To display the volume group, use the vgdisplay command:
# vgdisplay -v /dev/vg_ops
Exporting the Logical Volume Infrastructure
Before the Oracle volume groups can be shared, their configuration data must be exported to other nodes in the cluster. This is done either in Serviceguard Manager or by using HP-UX commands, as shown in the following sections.
# mkdir /dev/vg_ops
# mknod /dev/vg_ops/group c 64 0xhh0000
For the group file, the major number is always 64, and the hexadecimal minor number has the form 0xhh0000, where hh must be unique to the volume group you are creating. If possible, use the same number as on ftsys9. Use the following command to display a list of existing volume groups:
# ls -l /dev/*/group
4.
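Step 4 is truncated above; it typically imports the volume group using the map file created when the group was exported from the first node. A minimal sketch of the export/import pair, assuming node names ftsys9 and ftsys10:

On ftsys9:
# vgexport -p -s -m /tmp/vg_ops.map /dev/vg_ops

Copy /tmp/vg_ops.map to ftsys10, then on ftsys10:
# vgimport -s -m /tmp/vg_ops.map /dev/vg_ops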
Installing Oracle Real Application Clusters
NOTE: Some versions of Oracle RAC require installation of additional software. Refer to your version of Oracle for specific requirements.
Before installing the Oracle Real Application Cluster software, make sure the storage cluster is running.
Cluster Configuration ASCII File
The following is an example of an ASCII configuration file generated with the cmquerycl command using the -w full option on a system with Serviceguard Extension for RAC. The OPS_VOLUME_GROUP parameters appear at the end of the file.
Serviceguard Configuration for Oracle 10g RAC Cluster Configuration ASCII File # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # and QS_TIMEOUT_EXTENSION parameters to define a quorum server. The QS_HOST is the host name or IP address of the system that is running the quorum server process. The QS_POLLING_INTERVAL (microseconds) is the interval at which Serviceguard checks to make sure the quorum server is running.
Serviceguard Configuration for Oracle 10g RAC Cluster Configuration ASCII File NODE_NAME ever3a NETWORK_INTERFACE lan0 STATIONARY_IP15.244.64.140 NETWORK_INTERFACE lan1 HEARTBEAT_IP192.77.1.1 NETWORK_INTERFACE lan2 # List of serial device file names # For example: # SERIAL_DEVICE_FILE /dev/tty0p0 # Primary Network Interfaces on Bridged Net 1: lan0. # Warning: There are no standby network interfaces on bridged net 1. # Primary Network Interfaces on Bridged Net 2: lan1.
Serviceguard Configuration for Oracle 10g RAC Cluster Configuration ASCII File # stop increasing before the card is considered down. NETWORK_FAILURE_DETECTION INOUT # Package Configuration Parameters. # Enter the maximum number of packages which will be configured in the cluster. # You can not add packages beyond this limit. # This parameter is required. MAX_CONFIGURED_PACKAGES 150 # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Access Control Policy Parameters.
Serviceguard Configuration for Oracle 10g RAC Cluster Configuration ASCII File # # # # # # Example: to configure a role for user john from node noir to administer a cluster and all its packages, enter: USER_NAME john USER_HOST noir USER_ROLE FULL_ADMIN # # # # # # List of cluster aware LVM Volume Groups. These volume groups will be used by package applications via the vgchange -a e command. Neither CVM or VxVM Disk Groups should be used here.
Creating a Storage Infrastructure with CFS
In addition to configuring the cluster, you create the appropriate logical volume infrastructure to provide access to data from different nodes. This is done with Logical Volume Manager (LVM), Veritas Volume Manager (VxVM), or Veritas Cluster Volume Manager (CVM). You can also use a mixture of volume types, depending on your needs.
With CFS, the database software and database files (control, redo, data files), and archive logs may reside on a cluster file system, which is visible to all nodes. In the example below, both the Oracle RAC software and datafiles reside on CFS. There is a single Oracle home. Three CFS file systems are created: for the Oracle home, for the Oracle datafiles, and for the Oracle Cluster Registry (OCR) and vote device.
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CFS # cmviewcl The following output will be displayed: CLUSTER ever3_cluster NODE ever3a ever3b STATUS up STATUS up up STATE running running 5. Configure the Cluster Volume Manager (CVM) Configure the system multi-node package, SG-CFS-pkg, to configure and start the CVM/CFS stack. Unlike VxVM-CVM-pkg, the SG-CFS-pkg does not restrict heartbeat subnets to a single subnet and supports multiple subnets.
You can use the vxvmconvert utility to convert LVM volume groups into CVM disk groups. Before you can do this, the volume group must be deactivated, which means that any package that uses the volume group must be halted. This procedure is described in Appendix G of the Managing Serviceguard Fourteenth Edition user's guide.
7.
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CFS # newfs -F vxfs /dev/vx/rdsk/cfsdg1/vol1 The following output will be displayed: version 6 layout 10485760 sectors, 10485760 blocks of size 1024, log size 16384 blocks largefiles supported # newfs -F vxfs /dev/vx/rdsk/cfsdg1/vol2 The following output will be displayed: version 6 layout 10485760 sectors, 10485760 blocks of size 1024, log size 16384 blocks largefiles supported # newfs -F vxfs /dev/vx/rdsk/cfsdg1/vol3 The
Package name "SG-CFS-MP-3" was generated to control the resource. Mount point "/cfs/mnt3" was associated with the cluster.
NOTE: The disk group and mount point multi-node packages (SG-CFS-DG_ID# and SG-CFS-MP_ID#) do not monitor the health of the disk group and mount point. They check that the application packages that depend on them have access to the disk groups and mount points.
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CFS SG-CFS-DG-1 up SG-CFS-MP-1 up SG-CFS-MP-2 up SG-CFS-MP-3 up CAUTION running running running running enabled enabled enabled enabled no no no no Once you create the disk group and mount point packages, it is critical that you administer the cluster with the cfs commands, including cfsdgadm, cfsmntadm, cfsmount, and cfsumount.
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CFS Mount point “/cfs/mnt2” was disassociated from the cluster # cfsmntadm delete /cfs/mnt3 The following output will be generated: Mount point “/cfs/mnt3” was disassociated from the cluster Cleaning up resource controlling shared disk group “cfsdg1” Shared disk group “cfsdg1” was disassociated from the cluster. NOTE The disk group package is deleted if there is no dependency. 3.
Creating a Storage Infrastructure with CVM
In addition to configuring the cluster, you create the appropriate logical volume infrastructure to provide access to data from different nodes. This is done with Logical Volume Manager (LVM), Veritas Volume Manager (VxVM), or Veritas Cluster Volume Manager (CVM).
Using CVM 4.x or later
This section has information on how to set up the cluster and the system multi-node package with CVM (without the CFS file system), on HP-UX releases that support them; see "About Veritas CFS and CVM from Symantec" on page 22.
Preparing the Cluster and the System Multi-node Package for use with CVM 4.x
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CVM # cmrunpkg SG-CFS-pkg When CVM starts up, it selects a master node, which is the node from which you must issue the disk group configuration commands.
Serviceguard Configuration for Oracle 10g RAC Creating a Storage Infrastructure with CVM # vxdg -s init ops_dg c4t4d0 4. Creating Volumes and Adding a Cluster Filesystem # vxassist -g ops_dg make vol1 10240m # vxassist -g ops_dg make vol2 10240m # vxassist -g ops_dg make vol3 300m 5.
This policy can be re-set on a disk group basis by using the vxedit command, as follows:
# vxedit set diskdetpolicy=global
NOTE: The specific commands for creating mirrored and multi-path storage using CVM are described in the HP-UX documentation for the Veritas Volume Manager.
Using CVM 3.x
This section has information on how to prepare the cluster and the system multi-node package with CVM 3.x
Starting the Cluster and Identifying the Master Node
Run the cluster, which will activate the special CVM package:
# cmruncl
After the cluster is started, it will now run with a special system multi-node package named VxVM-CVM-pkg, which is on all nodes.
To initialize a disk for CVM, log on to the master node, then use the vxdiskadm program to initialize multiple disks, or use the vxdisksetup command to initialize one disk at a time, as in the following example:
# /usr/lib/vxvm/bin/vxdisksetup -i /dev/dsk/c0t3d2
Creating Disk Groups for RAC
Use the vxdg command to create disk groups.
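The disk group example itself is truncated above; a minimal sketch of creating a shared disk group from the disk initialized in the previous step (the disk group name follows the worksheet examples):

# vxdg -s init ops_dg c0t3d2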
Creating Volumes
Use the vxassist command to create logical volumes. The following is an example:
# vxassist -g ops_dg make log_files 1024m
This command creates a 1024 MB volume named log_files in a disk group named ops_dg. The volume can be referenced with the block device file /dev/vx/dsk/ops_dg/log_files or the raw (character) device file /dev/vx/rdsk/ops_dg/log_files.
Serviceguard Configuration for Oracle 10g RAC Creating Volumes NOTE 108 The specific commands for creating mirrored and multi-path storage using CVM are described in the HP-UX documentation for the Veritas Volume Manager.
Oracle Demo Database Files
The following set of volumes is required for the Oracle demo database, which you can create during the installation process.
Table 2-2  Required Oracle File Names for Demo Database
Volume Name   Size (MB)   Raw Device File Name               Oracle File Size (MB)
opsctl1.ctl   118         /dev/vx/rdsk/ops_dg/opsctl1.ctl    110
opsctl2.ctl   118         /dev/vx/rdsk/ops_dg/opsctl2.ctl    110
opsctl3.ctl   118         /dev/vx/rdsk/ops_dg/opsctl3.
Table 2-2  Required Oracle File Names for Demo Database (Continued)
Volume Name      Size (MB)   Raw Device File Name                    Oracle File Size (MB)
opsundotbs1.dbf  508         /dev/vx/rdsk/ops_dg/opsundotbs1.dbf     500
opsundotbs2.dbf  508         /dev/vx/rdsk/ops_dg/opsundotbs2.dbf     500
opsexample1.dbf  168         /dev/vx/rdsk/ops_dg/opsexample1.dbf     160
Create these files if you wish to build the demo database.
Adding Disk Groups to the Cluster Configuration
For CVM 4.x or later, if the multi-node package was configured for disk group activation, the application package should be configured with a package dependency to ensure the CVM disk group is active. For CVM 3.5 and CVM 4.
Prerequisites for Oracle 10g (Sample Installation)
The following sample steps prepare a SGeRAC cluster for Oracle 10g. Refer to the Oracle documentation for Oracle installation details.
1. Create Inventory Groups on each Node
Create the Oracle Inventory group if one does not exist, create the OSDBA group, and create the Operator group (optional).
# groupadd oinstall
# groupadd dba
# groupadd oper
2.
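The body of step 2 is truncated above; it typically creates the oracle user as a member of these groups. A minimal sketch (the UID and home directory are hypothetical):

# useradd -u 203 -g oinstall -G dba,oper -d /home/oracle oracle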
5. Enable Remote Access (ssh and remsh) for Oracle User on all Nodes
6. Create File System for Oracle Directories
In the following samples, /mnt/app is a mounted file system for Oracle software. Assume there is a private disk c4t5d0 of 18 GB on all nodes. Create the local file system on each node.
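The commands for this step are truncated above; a minimal sketch of creating and mounting the local file system, assuming disk c4t5d0 and a volume group named vg01:

# pvcreate /dev/rdsk/c4t5d0
# mkdir /dev/vg01
# mknod /dev/vg01/group c 64 0x010000
# vgcreate /dev/vg01 /dev/dsk/c4t5d0
# lvcreate -n lvol_app -L 16000 /dev/vg01
# newfs -F vxfs /dev/vg01/rlvol_app
# mkdir -p /mnt/app
# mount /dev/vg01/lvol_app /mnt/app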
Serviceguard Configuration for Oracle 10g RAC Prerequisites for Oracle 10g (Sample Installation) # usermod -d /mnt/app/oracle oracle 9. Create Oracle Base Directory (For RAC Binaries on Cluster File System) If installing RAC binaries on Cluster File System, create the oracle base directory once since this is CFS directory visible by all nodes. The CFS file system used is /cfs/mnt1.
Serviceguard Configuration for Oracle 10g RAC Prerequisites for Oracle 10g (Sample Installation) # ORACLE_BASE=/mnt/app/oracle; export ORACLE_BASE # mkdir -p $ORACLE_BASE/oradata/ver10 # chown -R oracle:oinstall $ORACLE_BASE/oradata # chmod -R 755 $ORACLE_BASE/oradata The following is a sample of the mapping file for DBCA: system=/dev/vg_ops/ropssystem.dbf sysaux=/dev/vg_ops/ropssysaux.dbf undotbs1=/dev/vg_ops/ropsundotbs01.dbf undotbs2=/dev/vg_ops/ropsundotbs02.dbf example=/dev/vg_ops/ropsexample1.
Serviceguard Configuration for Oracle 10g RAC Prerequisites for Oracle 10g (Sample Installation) # chmod 755 VOTE # chown -R oracle:oinstall /cfs/mnt3 b. Create Directory for Oracle Demo Database on CFS Create the CFS directory to store Oracle database files. Run commands only on one node.
Installing Oracle 10g Cluster Software
The following are sample steps for a SGeRAC cluster for Oracle 10g. Refer to the Oracle documentation for Oracle installation details.
Installing on a Local File System
Log on as the "oracle" user:
$ export DISPLAY={display}:0.0
$ cd <10g Cluster Software disk directory>
$ ./runInstaller
Use the following guidelines when installing on a local file system:
1.
Installing Oracle 10g RAC Binaries
The following are sample steps for a SGeRAC cluster for Oracle 10g. Refer to the Oracle documentation for Oracle installation details.
Installing RAC Binaries on a Local File System
Log on as the "oracle" user:
$ export ORACLE_BASE=/mnt/app/oracle
$ export DISPLAY={display}:0.0
$ cd <10g RAC installation disk directory>
$ ./runInstaller
Creating a RAC Demo Database
This section demonstrates the steps for creating a demo database with datafiles on raw volumes with SLVM or CVM, or with the Cluster File System, on HP-UX releases that support them (see "About Veritas CFS and CVM from Symantec" on page 22).
Serviceguard Configuration for Oracle 10g RAC Creating a RAC Demo Database Use following guidelines when installing on a local file system: a. In this sample, the database name and SID prefix are ver10. b. Select the storage option for raw devices. Creating a RAC Demo Database on CFS Export environment variables for “oracle” user: export ORACLE_BASE=/cfs/mnt1/oracle export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1 export ORA_CRS_HOME=/mnt/app/crs/oracle/product/10.2.
Verifying Oracle Disk Manager is Configured
NOTE: The following steps are specific to CFS 4.1 or later.
1. Check the license for CFS 4.1 or later:
# /opt/VRTS/bin/vxlictest -n "VERITAS Storage Foundation for Oracle" -f "ODM"
Output:
ODM feature is licensed
2. Check that the VRTSodm package is installed:
# swlist VRTSodm
Output for CFS 4.1:
VRTSodm 4.1m
VRTSodm.ODM-KRN 4.1m
VRTSodm.ODM-MAN 4.1m
VRTSodm.ODM-RUN 4.
Configuring Oracle to Use Oracle Disk Manager Library
NOTE: The following steps are specific to CFS 4.1 or later.
1. Log in as the Oracle user.
2. Shut down the database.
3. Link the Oracle Disk Manager library into the Oracle home for Oracle 10g.
For HP 9000 systems:
$ rm ${ORACLE_HOME}/lib/libodm10.sl
$ ln -s /opt/VRTSodm/lib/libodm.sl \
${ORACLE_HOME}/lib/libodm10.sl
Verify that Oracle Disk Manager is Running
NOTE: The following steps are specific to CFS 4.1 or later.
1. Start the cluster and Oracle database (if not already started).
2.
Serviceguard Configuration for Oracle 10g RAC Verify that Oracle Disk Manager is Running # kcmodule -P state odm Output: state loaded 4. In the alert log, verify the Oracle instance is running. The log should contain output similar to the following: For CFS 4.1: Oracle instance running with ODM: VERITAS 4.1 ODM Library, Version 1.1 For CFS 5.0: Oracle instance running with ODM: VERITAS 5.0 ODM Library, Version 1.
Configuring Oracle to Stop Using Oracle Disk Manager Library
NOTE: The following steps are specific to CFS 4.1 or later.
1. Log in as the Oracle user.
2. Shut down the database.
3. Change directories:
$ cd ${ORACLE_HOME}/lib
4. Remove the file linked to the ODM library.
For HP 9000 systems:
$ rm libodm10.sl
$ ln -s ${ORACLE_HOME}/lib/libodmd10.sl \
${ORACLE_HOME}/lib/libodm10.sl
Using Serviceguard Packages to Synchronize with Oracle 10g RAC
It is recommended to start and stop Oracle Cluster Software in a Serviceguard package, as this ensures that Oracle Cluster Software starts after SGeRAC is started and stops before SGeRAC is halted.
When the storage required by Oracle Cluster Software is configured on SLVM volume groups or CVM disk groups, the Serviceguard package should be configured to activate and deactivate the required storage in the package control script. As an example, modify the control script to activate the volume group in shared mode and set VG in the package control script for SLVM volume groups.
Serviceguard Configuration for Oracle 10g RAC Using Serviceguard Packages to Synchronize with Oracle 10g RAC • DEPENDENCY_NAME DEPENDENCY_CONDITION DEPENDENCY_LOCATION mp2 SG-CFS-MP-2=UP SAME_NODE DEPENDENCY_NAME DEPENDENCY_CONDITION DEPENDENCY_LOCATION mp3 SG-CFS-MP-3=UP SAME_NODE Starting and Stopping Oracle Cluster Software In the Serviceguard package control script, configure the Oracle Cluster Software start in the customer_defined_run_cmds function For 10g 10.1.0.04 or later: /sbin/init.d/init.
3 Serviceguard Configuration for Oracle 9i RAC
This chapter shows the additional planning and configuration that is needed to use Oracle Real Application Clusters 9i with Serviceguard.
Planning Database Storage
The files needed by the Oracle database must be placed on shared storage that is accessible to all RAC cluster nodes. This section shows how to plan the storage using SLVM, Veritas CFS, or Veritas CVM.
Volume Planning with SLVM
Storage capacity for the Oracle database must be provided in the form of logical volumes located in shared volume groups.
Serviceguard Configuration for Oracle 9i RAC Planning Database Storage ORACLE LOGICAL VOLUME WORKSHEET FOR LVM Page ___ of ____ =============================================================================== RAW LOGICAL VOLUME NAME SIZE (MB) Oracle Control File 1:_____/dev/vg_ops/ropsctl1.ctl_______108______ Oracle Control File 2: ___/dev/vg_ops/ropsctl2.ctl______108______ Oracle Control File 3: ___/dev/vg_ops/ropsctl3.ctl______104______ Instance 1 Redo Log 1: ___/dev/vg_ops/rops1log1.
Storage Planning with CFS
With CFS, the database software and database files (control, redo, data files), and archive logs may reside on a cluster file system, which is visible to all nodes. The following software needs to be installed in order to use this configuration:
• SGeRAC
• CFS
NOTE: For specific CFS Serviceguard Storage Management Suite product information, refer to the HP Serviceguard Storage Management Suite Version A.01.
Table 3-1  RAC Software, Archive, Datafiles, SRVM
Configuration   RAC Software, Archive   RAC Datafiles, SRVM
3               Local FS                CFS
4               Local FS                Raw (SLVM or CVM)
NOTE: Mixing CFS database files and raw volumes is allowable, but not recommended. RAC datafiles on CFS require Oracle Disk Manager (ODM).
CFS and SGeRAC are available in selected HP Serviceguard Storage Management Suite bundles. For CFS 4.1, refer to the HP Serviceguard Storage Management Suite Version A.01.01 Release Notes. For CFS 5.0, refer to the HP Serviceguard Storage Management Suite Version A.02.00 Release Notes.
Fill out the Veritas Volume worksheet to provide volume names for volumes that you will create using the Veritas utilities. The Oracle DBA and the HP-UX system administrator should prepare this worksheet together. Create entries for shared volumes only. For each volume, enter the full pathname of the raw volume device file, and be sure to include the desired size in MB. A sample filled-out worksheet follows.
Serviceguard Configuration for Oracle 9i RAC Planning Database Storage
ORACLE LOGICAL VOLUME WORKSHEET FOR CVM                       Page ___ of ____
===============================================================================
RAW LOGICAL VOLUME NAME                                       SIZE (MB)
Oracle Control File 1: ___/dev/vx/rdsk/ops_dg/opsctl1.ctl______100______
Oracle Control File 2: ___/dev/vx/rdsk/ops_dg/opsctl2.ctl______100______
Oracle Control File 3: ___/dev/vx/rdsk/ops_dg/opsctl3.
Serviceguard Configuration for Oracle 9i RAC Installing Serviceguard Extension for RAC Installing Serviceguard Extension for RAC Installing Serviceguard Extension for RAC includes updating the software and rebuilding the kernel to support high availability cluster operation for Oracle Real Application Clusters.
Serviceguard Configuration for Oracle 9i RAC Veritas Cluster Volume Manager (CVM) and Cluster File System (CFS) Veritas Cluster Volume Manager (CVM) and Cluster File System (CFS) You may choose to configure cluster storage with the Veritas Cluster Volume Manager (CVM) instead of the Volume Manager (VxVM).
Serviceguard Configuration for Oracle 9i RAC Veritas Cluster Volume Manager (CVM) and Cluster File System (CFS)
NOTE CVM (and CFS - Cluster File System) are supported on some, but not all, current releases of HP-UX. See the latest release notes for your version of Serviceguard at http://www.docs.hp.com -> High Availability -> Serviceguard.
Serviceguard Configuration for Oracle 9i RAC Veritas Storage Management Products Veritas Storage Management Products Veritas Volume Manager (VxVM) 3.5 is not supported on HP-UX 11i v3. If you are running VxVM 3.5 as part of HP-UX (VxVM-Base), version 3.5 will be upgraded to version 4.1 when you upgrade to HP-UX 11i v3.
Serviceguard Configuration for Oracle 9i RAC About Device Special Files About Device Special Files HP-UX releases up to and including 11i v2 use a naming convention for device files that encodes their hardware path. For example, a device file named /dev/dsk/c3t15d0 would indicate SCSI controller instance 3, SCSI target 15, and SCSI LUN 0. HP-UX 11i v3 introduces a new nomenclature for device files, known as agile addressing (sometimes also called persistent LUN binding).
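For example, on HP-UX 11i v3 you can map a legacy device file to its agile equivalent with ioscan; the output shown below is illustrative only:

# ioscan -m dsf /dev/dsk/c3t15d0
Persistent DSF        Legacy DSF(s)
/dev/disk/disk14      /dev/dsk/c3t15d0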
Serviceguard Configuration for Oracle 9i RAC Operating System Parameters
Operating System Parameters
The maximum number of Oracle server processes cmgmsd can handle is 8192. When more than 8192 server processes are connected to cmgmsd, cmgmsd starts to reject new requests. Oracle foreground server processes handle the requests of the DB clients connected to the DB instance. Each foreground server process can be either a “dedicated” or a “shared” server process.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with LVM Creating a Storage Infrastructure with LVM In addition to configuring the cluster, you create the appropriate logical volume infrastructure to provide access to data from different nodes. This is done with Logical Volume Manager (LVM), Veritas Cluster Volume Manager (CVM), or Veritas Volume Manager (VxVM).
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with LVM Creating Volume Groups and Logical Volumes If your volume groups have not been set up, use the procedure in the next sections. If you have already done LVM configuration, skip ahead to the section “Installing Oracle Real Application Clusters.” Selecting Disks for the Volume Group Obtain a list of the disks on both nodes and identify which device files are used for the same disk on both.
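One way to obtain such a list is to run ioscan on each node and compare the hardware paths and device files (a sketch):

# ioscan -funC disk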
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with LVM where hh must be unique to the volume group you are creating. Use the next hexadecimal number that is available on your system, after the volume groups that are already configured. Use the following command to display a list of existing volume groups: # ls -l /dev/*/group 3.
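A representative command for this step, using the volume group and logical volume names from this chapter's examples (a sketch; the sizes and names should match your own layout):

# lvcreate -m 1 -M n -c y -s g -n redo1.log -L 28 /dev/vg_ops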
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with LVM
Here the -s g option makes the mirror allocation PVG-strict, that is, mirroring occurs between different physical volume groups; the -n redo1.log option lets you specify the name of the logical volume; and the -L 28 option allocates 28 megabytes.
NOTE It is important to use the -M n and -c y options for both redo logs and control files. These options allow the redo log files to be resynchronized by SLVM following a system crash before Oracle recovery proceeds.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with LVM If Oracle performs the “resilvering” of RAC data files that are mirrored logical volumes, choose a mirror consistency policy of “NONE” by disabling both mirror write caching and mirror consistency recovery. With a mirror consistency policy of “NONE”, SLVM does not perform the resynchronization.
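If Oracle performs resilvering, the corresponding logical volumes would instead be created with both options disabled; a sketch under the same naming assumptions:

# lvcreate -m 1 -M n -c n -n system.dbf -L 408 /dev/vg_ops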
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with LVM
the array. If you are using SAM, choose the type of disk array you wish to configure, and follow the menus to define alternate links. If you are using LVM commands, specify the links on the command line. The following example shows how to configure alternate links using LVM commands. The following disk configuration is assumed:
8/0.15.0
8/0.15.1
8/0.15.2
8/0.15.3
8/0.15.4
8/0.15.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with LVM # ls -l /dev/*/group 3. Use the pvcreate command on one of the device files associated with the LUN to define the LUN to LVM as a physical volume. # pvcreate -f /dev/rdsk/c0t15d0 It is only necessary to do this with one of the device file names for the LUN. The -f option is only necessary if the physical volume was previously used in some other volume group. 4.
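Step 4 would then create the volume group on the primary path and add the alternate link; a sketch, assuming device files corresponding to the hardware paths listed above (the names are illustrative):

# vgcreate /dev/vg_ops /dev/dsk/c0t15d0
# vgextend /dev/vg_ops /dev/dsk/c1t15d0

LVM treats the second path to the same LUN as an alternate link and uses it if the primary link fails.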
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with LVM
Oracle Demo Database Files
The following set of files is required for the Oracle demo database, which you can create during the installation process.

Table 3-2 Required Oracle File Names for Demo Database

Logical Volume Name   LV Size (MB)   Raw Logical Volume Path Name   Oracle File Size (MB)*
opsctl1.ctl           108            /dev/vg_ops/ropsctl1.ctl       100
opsctl2.ctl           108            /dev/vg_ops/ropsctl2.ctl       100
opsctl3.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with LVM
Table 3-2 Required Oracle File Names for Demo Database (Continued)

Logical Volume Name   LV Size (MB)   Raw Logical Volume Path Name     Oracle File Size (MB)*
opsundotbs2.dbf       320            /dev/vg_ops/ropsundotbs2.dbf     312
opsexample1.dbf       168            /dev/vg_ops/ropsexample1.dbf     160
opscwmlite1.dbf       108            /dev/vg_ops/ropscwmlite1.dbf     100
opsindx1.dbf          78             /dev/vg_ops/ropsindx1.dbf        70
opsdrsys1.dbf         98             /dev/vg_ops/ropsdrsys1.
Serviceguard Configuration for Oracle 9i RAC Displaying the Logical Volume Infrastructure Displaying the Logical Volume Infrastructure To display the volume group, use the vgdisplay command: # vgdisplay -v /dev/vg_ops Exporting the Logical Volume Infrastructure Before the Oracle volume groups can be shared, their configuration data must be exported to other nodes in the cluster. This is done either in Serviceguard Manager or by using HP-UX commands, as shown in the following sections.
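On the configuration node, the volume group configuration is typically written to a map file and copied to the other nodes; a sketch, assuming a map file named /tmp/vg_ops.map and a second node named ftsys10 (the -p option previews the export without removing the volume group; -s records the volume group ID in the map file):

# vgexport -p -s -m /tmp/vg_ops.map /dev/vg_ops
# rcp /tmp/vg_ops.map ftsys10:/tmp/vg_ops.map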
Serviceguard Configuration for Oracle 9i RAC Displaying the Logical Volume Infrastructure # mkdir /dev/vg_ops # mknod /dev/vg_ops/group c 64 0xhh0000 For the group file, the major number is always 64, and the hexadecimal minor number has the format: 0xhh0000 where hh must be unique to the volume group you are creating. If possible, use the same number as on ftsys9. Use the following command to display a list of existing volume groups: # ls -l /dev/*/group 4.
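The import on the other node would then read the map file; a sketch (the -s option tells vgimport to scan the system's disks for the volume group ID recorded in the map file):

# vgimport -s -m /tmp/vg_ops.map /dev/vg_ops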
Serviceguard Configuration for Oracle 9i RAC Installing Oracle Real Application Clusters
Installing Oracle Real Application Clusters
NOTE Some versions of Oracle RAC require installation of additional software. Refer to the documentation for your version of Oracle for specific requirements.
Before installing the Oracle Real Application Cluster software, make sure the cluster is running.
Serviceguard Configuration for Oracle 9i RAC Cluster Configuration ASCII File Cluster Configuration ASCII File The following is an example of an ASCII configuration file generated with the cmquerycl command using the -w full option on a system with Serviceguard Extension for RAC. The OPS_VOLUME_GROUP parameters appear at the end of the file.
Serviceguard Configuration for Oracle 9i RAC Cluster Configuration ASCII File
# and QS_TIMEOUT_EXTENSION parameters to define a quorum server.
# The QS_HOST is the host name or IP address of the system that is
# running the quorum server process. The QS_POLLING_INTERVAL
# (microseconds) is the interval at which Serviceguard checks to make
# sure the quorum server is running.
Serviceguard Configuration for Oracle 9i RAC Cluster Configuration ASCII File
NODE_NAME ever3a
NETWORK_INTERFACE lan0
STATIONARY_IP 15.244.64.140
NETWORK_INTERFACE lan1
HEARTBEAT_IP 192.77.1.1
NETWORK_INTERFACE lan2
# List of serial device file names
# For example:
# SERIAL_DEVICE_FILE /dev/tty0p0
# Primary Network Interfaces on Bridged Net 1: lan0.
# Warning: There are no standby network interfaces on bridged net 1.
# Primary Network Interfaces on Bridged Net 2: lan1.
Serviceguard Configuration for Oracle 9i RAC Cluster Configuration ASCII File
# stop increasing before the card is considered down.
NETWORK_FAILURE_DETECTION INOUT

# Package Configuration Parameters.
# Enter the maximum number of packages which will be configured in the cluster.
# You can not add packages beyond this limit.
# This parameter is required.
MAX_CONFIGURED_PACKAGES 150

# Access Control Policy Parameters.
Serviceguard Configuration for Oracle 9i RAC Cluster Configuration ASCII File
# Example: to configure a role for user john from node noir to
# administer a cluster and all its packages, enter:
# USER_NAME john
# USER_HOST noir
# USER_ROLE FULL_ADMIN

# List of cluster aware LVM Volume Groups. These volume groups will
# be used by package applications via the vgchange -a e command.
# Neither CVM nor VxVM Disk Groups should be used here.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CFS Creating a Storage Infrastructure with CFS In addition to configuring the cluster, you create the appropriate logical volume infrastructure to provide access to data from different nodes. This is done with Logical Volume Manager (LVM), Veritas Volume Manager (VxVM), or Veritas Cluster Volume Manager (CVM). You can also use a mixture of volume types, depending on your needs.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CFS
With CFS, the database software, database files (control, redo, and data files), and archive logs may reside on a cluster file system that is visible to all nodes. In the following example, both the Oracle software and datafiles reside on CFS. There is a single Oracle home.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CFS
# cmviewcl
The following output will be displayed:

CLUSTER        STATUS
ever3_cluster  up

  NODE     STATUS   STATE
  ever3a   up       running
  ever3b   up       running

5. Configure the Cluster Volume Manager (CVM)
Configure the system multi-node package, SG-CFS-pkg, to configure and start the CVM/CFS stack. Unlike VxVM-CVM-pkg, the SG-CFS-pkg does not restrict heartbeat subnets to a single subnet and supports multiple subnets.
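With the CFS bundles installed, this step is typically performed with the cfscluster command; a sketch (the timeout value is an assumption to adjust for your cluster):

# cfscluster config -t 900 -s

The -s option starts the CVM/CFS stack after the SG-CFS-pkg package is configured.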
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CFS
group must be halted. This procedure is described in Appendix G of the Managing Serviceguard Fourteenth Edition user's guide for HP-UX 11i v2.
7. Initializing Disks for CVM/CFS
You need to initialize the physical disks that will be employed in CVM disk groups.
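For example, a disk can be initialized for CVM/CFS use as follows (the device name is an assumption):

# /usr/lib/vxvm/bin/vxdisksetup -i c4t4d0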
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CFS
version 6 layout
10485760 sectors, 10485760 blocks of size 1024, log size 16384 blocks
largefiles supported

# newfs -F vxfs /dev/vx/rdsk/cfsdg1/vol2
The following output will be displayed:
version 6 layout
10485760 sectors, 10485760 blocks of size 1024, log size 16384 blocks
largefiles supported

# newfs -F vxfs /dev/vx/rdsk/cfsdg1/volsrvm
The following output will be displayed:
version 6 layout
307200 sectors, 307200 blo
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CFS
13. Mount Cluster Filesystem
# cfsmount /cfs/mnt1
# cfsmount /cfs/mnt2
# cfsmount /cfs/cfssrvm
14. Check CFS Mount Points
# bdf | grep cfs
/dev/vx/dsk/cfsdg1/vol1      10485760   19651   9811985   0%   /cfs/mnt1
/dev/vx/dsk/cfsdg1/vol2      10485760   19651   9811985   0%   /cfs/mnt2
/dev/vx/dsk/cfsdg1/volsrvm     307200    1802    286318   1%   /cfs/cfssrvm
15.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CFS # cfsumount /cfs/cfssrvm 2.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CFS The following output will be generated: Stopping CVM...
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CVM Creating a Storage Infrastructure with CVM In addition to configuring the cluster, you create the appropriate logical volume infrastructure to provide access to data from different nodes. This is done with Logical Volume Manager (LVM), Veritas Volume Manager (VxVM), or Veritas Cluster Volume Manager (CVM).
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CVM
Using CVM 4.x or later
This section describes how to prepare the cluster and the system multi-node package with CVM 4.x or later only (without the CFS filesystem), on HP-UX releases that support them; see “About Veritas CFS and CVM from Symantec” on page 22. For more detailed information on how to configure CVM 4.x or later, refer to the Managing Serviceguard Fourteenth Edition user's guide.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CVM Configure the system multi-node package, SG-CFS-pkg, to configure and start the CVM stack. Unlike VxVM-CVM-pkg, the SG-CFS-pkg does not restrict heartbeat subnets to a single subnet and supports multiple subnets. # cmapplyconf -P /etc/cmcluster/cfs/SG-CFS-pkg.conf # cmrunpkg SG-CFS-pkg When CVM starts up, it selects a master node, which is the node from which you must issue the disk group configuration commands.
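To confirm which node is the CVM master, you can run vxdctl on any node; the output shown below is illustrative:

# vxdctl -c mode
mode: enabled: cluster active - MASTER
master: ever3a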
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CVM Use the vxdg command to create disk groups. Use the -s option to specify shared mode, as in the following example: # vxdg -s init ops_dg c4t4d0 4. Creating Volumes and Adding a Cluster Filesystem # vxassist -g ops_dg make vol1 10240m # vxassist -g ops_dg make vol2 10240m # vxassist -g ops_dg make volsrvm 300m 5.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CVM well. The alternate policy is ‘local’, which means that if one node cannot see a specific mirror copy, then CVM will deactivate access to the volume for that node only.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CVM After the above command completes, start the cluster and create disk groups for shared use as described in the following sections. Starting the Cluster and Identifying the Master Node Run the cluster, which will activate the special CVM package: # cmruncl After the cluster is started, it will now run with a special system multi-node package named VxVM-CVM-pkg, which is on all nodes.
Serviceguard Configuration for Oracle 9i RAC Creating a Storage Infrastructure with CVM Initializing Disks for CVM Initialize the physical disks that will be employed in CVM disk groups. If a physical disk has been previously used with LVM, you should use the pvremove command to delete the LVM header data from all the disks in the volume group (this is not necessary if you have not previously used the disk with LVM).
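A sketch of the initialization sequence, assuming a disk previously used with LVM (the device name is illustrative):

# pvremove /dev/rdsk/c4t4d0
# /usr/lib/vxvm/bin/vxdisksetup -i c4t4d0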
Serviceguard Configuration for Oracle 9i RAC Creating Volumes
Creating Volumes
Use the vxassist command to create logical volumes. For example:
# vxassist -g ops_dg make log_files 1024m
This command creates a 1024 MB volume named log_files in a disk group named ops_dg. The volume can be referenced with the block device file /dev/vx/dsk/ops_dg/log_files or the raw (character) device file /dev/vx/rdsk/ops_dg/log_files.
Serviceguard Configuration for Oracle 9i RAC Creating Volumes
NOTE The specific commands for creating mirrored and multi-path storage using CVM are described in the HP-UX documentation for the Veritas Volume Manager.
Serviceguard Configuration for Oracle 9i RAC Oracle Demo Database Files
Oracle Demo Database Files
The following set of volumes is required for the Oracle demo database, which you can create during the installation process.

Table 3-3 Required Oracle File Names for Demo Database

Volume Name   Size (MB)   Raw Device File Name              Oracle File Size (MB)
opsctl1.ctl   108         /dev/vx/rdsk/ops_dg/opsctl1.ctl   100
opsctl2.ctl   108         /dev/vx/rdsk/ops_dg/opsctl2.ctl   100
opsctl3.ctl   108         /dev/vx/rdsk/ops_dg/opsctl3.
Serviceguard Configuration for Oracle 9i RAC Oracle Demo Database Files
Table 3-3 Required Oracle File Names for Demo Database (Continued)

Volume Name       Size (MB)   Raw Device File Name                  Oracle File Size (MB)
opsundotbs1.dbf   320         /dev/vx/rdsk/ops_dg/opsundotbs1.dbf   312
opsundotbs2.dbf   320         /dev/vx/rdsk/ops_dg/opsundotbs2.dbf   312
opsexample1.dbf   168         /dev/vx/rdsk/ops_dg/opsexample1.dbf   160
opscwmlite1.dbf   108         /dev/vx/rdsk/ops_dg/opscwmlite1.dbf   100
opsindx1.
Serviceguard Configuration for Oracle 9i RAC Adding Disk Groups to the Cluster Configuration Adding Disk Groups to the Cluster Configuration For CVM 4.x or later, if the multi-node package was configured for disk group activation, the application package should be configured with package dependency to ensure the CVM disk group is active. For CVM 3.5 and CVM 4.
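For example, an application package configured to depend on a disk group multi-node package might contain entries of the following form (a sketch using the dependency syntax shown earlier in this guide; the names are assumptions):

DEPENDENCY_NAME      dg1
DEPENDENCY_CONDITION SG-CFS-DG-1=UP
DEPENDENCY_LOCATION  SAME_NODE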
Serviceguard Configuration for Oracle 9i RAC Installing Oracle 9i RAC
Installing Oracle 9i RAC
The following are sample steps for installing Oracle 9i on a SGeRAC cluster. Refer to the Oracle documentation for Oracle installation details.
Install Oracle Software into CFS Home
Oracle RAC software is installed using the Oracle Universal Installer. This section describes installation of Oracle RAC software onto a CFS home.
1. Oracle Pre-installation Steps
a. Create user accounts.
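For example, the group and user might be created as follows (a sketch; the home directory and shell are assumptions to adapt to your site):

# groupadd dba
# useradd -g dba -m -d /home/oracle -s /usr/bin/sh oracle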
Serviceguard Configuration for Oracle 9i RAC Installing Oracle 9i RAC
# ll
total 0
drwxr-xr-x   2 root     root      96 Jun  3 11:43 lost+found
drwxr-xr-x   2 oracle   dba       96 Jun  3 13:45 oradat
d. Set up CFS directory for Server Management. Preallocate space for srvm (200MB):
# prealloc /cfs/cfssrvm/ora_srvm 209715200
# chown oracle:dba /cfs/cfssrvm/ora_srvm
2. Install Oracle RAC Software
a.
Serviceguard Configuration for Oracle 9i RAC Installing Oracle 9i RAC
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:$ORACLE_HOME/rdbms/lib
SHLIB_PATH=$ORACLE_HOME/lib32:$ORACLE_HOME/rdbms/lib32
export LD_LIBRARY_PATH SHLIB_PATH
export CLASSPATH=/opt/java1.3/lib
CLASSPATH=$CLASSPATH:$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export CLASSPATH
export DISPLAY={display}:0.0
2. Set up Listeners with Oracle Network Configuration Assistant
$ netca
3.
Serviceguard Configuration for Oracle 9i RAC Verify that Oracle Disk Manager is Configured
Verify that Oracle Disk Manager is Configured
NOTE The following steps are specific to CFS 4.1 or later.
1. Check the license for CFS 4.1 or later.
# /opt/VRTS/bin/vxlictest -n "VERITAS Storage Foundation for Oracle" -f "ODM"
output:
ODM feature is licensed
2. Check that the VRTSodm package is installed:
# swlist VRTSodm
output for CFS 4.1:
VRTSodm           4.1m
VRTSodm.ODM-KRN   4.1m
VRTSodm.ODM-MAN   4.1m
VRTSodm.ODM-RUN   4.
Serviceguard Configuration for Oracle 9i RAC Configure Oracle to use Oracle Disk Manager Library
Configure Oracle to use Oracle Disk Manager Library
NOTE The following steps are specific to CFS 4.1 or later.
1. Log on as the Oracle user
2. Shut down the database
3. Link the Oracle Disk Manager library into the Oracle home using the following commands:
For HP 9000 systems:
$ rm ${ORACLE_HOME}/lib/libodm9.sl
$ ln -s /opt/VRTSodm/lib/libodm.sl \
${ORACLE_HOME}/lib/libodm9.sl
Serviceguard Configuration for Oracle 9i RAC Verify Oracle Disk Manager is Running Verify Oracle Disk Manager is Running NOTE The following steps are specific to CFS 4.1 or later. 1. Start the cluster and Oracle database (if not already started) 2.
Serviceguard Configuration for Oracle 9i RAC Verify Oracle Disk Manager is Running
3. Verify that Oracle Disk Manager is loaded with the following command:
# kcmodule -P state odm
The following output will be displayed:
state   loaded
4. In the alert log, verify the Oracle instance is running. The log should contain output similar to the following:
For CFS 4.1:
Oracle instance running with ODM: VERITAS 4.1 ODM Library, Version 1.1
For CFS 5.0:
Oracle instance running with ODM: VERITAS 5.
Serviceguard Configuration for Oracle 9i RAC Configuring Oracle to Stop using Oracle Disk Manager Library
Configuring Oracle to Stop using Oracle Disk Manager Library
NOTE The following steps are specific to CFS 4.1 or later.
1. Log in as the Oracle user
2. Shut down the database
3. Change directories:
$ cd ${ORACLE_HOME}/lib
4. Remove the file linked to the ODM library:
For HP 9000 systems:
$ rm libodm9.sl
$ ln -s ${ORACLE_HOME}/lib/libodmd9.sl \
${ORACLE_HOME}/lib/libodm9.sl
Serviceguard Configuration for Oracle 9i RAC Using Packages to Configure Startup and Shutdown of RAC Instances Using Packages to Configure Startup and Shutdown of RAC Instances To automate the startup and shutdown of RAC instances on the nodes of the cluster, you can create packages which activate the appropriate volume groups and then run RAC. Refer to the section “Creating Packages to Launch Oracle RAC Instances” NOTE The maximum number of RAC instances for Oracle 9i is 127 per cluster.
Serviceguard Configuration for Oracle 9i RAC Using Packages to Configure Startup and Shutdown of RAC Instances 1. Shut down the Oracle applications, if any. 2. Shut down Oracle. 3. Deactivate the database volume groups or disk groups. 4. Shut down the cluster (cmhaltnode or cmhaltcl). If the shutdown sequence described above is not followed, cmhaltcl or cmhaltnode may fail with a message that GMS clients (RAC 9i) are active or that shared volume groups are active.
Serviceguard Configuration for Oracle 9i RAC Using Packages to Configure Startup and Shutdown of RAC Instances If you are using CVM disk groups for the RAC database, be sure to include the name of each disk group on a separate STORAGE_GROUP line in the configuration file. If you are using CFS or CVM for RAC shared storage with multi-node packages, the package containing the RAC instance should be configured with package dependency to depend on the multi-node packages.
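For example, with a CVM disk group named ops_dg, the package configuration file would contain a line of the following form:

STORAGE_GROUP ops_dg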
Serviceguard Configuration for Oracle 9i RAC Using Packages to Configure Startup and Shutdown of RAC Instances own script, and copying it to all nodes that can run the package. This script should contain the cmmodpkg -e command and activate the package after RAC and the cluster manager have started. Adding or Removing Packages on a Running Cluster You can add or remove packages while the cluster is running, subject to the limit of MAX_CONFIGURED_PACKAGES.
Serviceguard Configuration for Oracle 9i RAC Using Packages to Configure Startup and Shutdown of RAC Instances First, generate a control script template: # cmmakepkg -s /etc/cmcluster/pkg1/control.sh You may customize the script, as described in the section, “Customizing the Package Control Script.” Customizing the Package Control Script Check the definitions and declarations at the beginning of the control script using the information in the Package Configuration worksheet.
Serviceguard Configuration for Oracle 9i RAC Using Packages to Configure Startup and Shutdown of RAC Instances
• Add service command(s)
• Add a service restart parameter, if desired.
NOTE Use care in defining service run commands. Each run command is executed by the control script in the following way:
• The cmrunserv command executes each run command and then monitors the process id of the process created by the run command.
Serviceguard Configuration for Oracle 9i RAC Using Packages to Configure Startup and Shutdown of RAC Instances Customizing the Control Script for RAC Instances Use the package control script to perform the following: • Activation and deactivation of RAC volume groups. • Startup and shutdown of the RAC instance. • Monitoring of the RAC instance. Set RAC environment variables in the package control script to define the correct execution environment for RAC.
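A minimal sketch of what the customer-defined functions in the control script might look like; the wrapper script names are hypothetical placeholders for your own instance startup and shutdown logic:

function customer_defined_run_cmds
{
# Assumption: site-specific script that starts the RAC instance as the oracle user
su - oracle -c /etc/cmcluster/pkg1/start_rac_instance.sh
test_return 51
}

function customer_defined_halt_cmds
{
# Assumption: site-specific script that shuts the RAC instance down
su - oracle -c /etc/cmcluster/pkg1/stop_rac_instance.sh
test_return 52
}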
Serviceguard Configuration for Oracle 9i RAC Using Packages to Configure Startup and Shutdown of RAC Instances Using the Command Line to Configure an Oracle RAC Instance Package Serviceguard Manager provides a template to configure package behavior that is specific to an Oracle RAC Instance package. The RAC Instance package starts the Oracle RAC instance, monitors the Oracle processes, and stops the RAC instance.
Serviceguard Configuration for Oracle 9i RAC Using Packages to Configure Startup and Shutdown of RAC Instances
5. Copy the Oracle shell script templates from the ECMT default source directory to the package directory.
# cd /etc/cmcluster/pkg/${SID_NAME}
# cp -p /opt/cmcluster/toolkit/oracle/* .
Example:
# cd /etc/cmcluster/pkg/ORACLE_TEST0
# cp -p /opt/cmcluster/toolkit/oracle/* .
Edit haoracle.conf as described in the README.
6. Gather the package service name for monitoring Oracle instance processes.
Serviceguard Configuration for Oracle 9i RAC Using Packages to Configure Startup and Shutdown of RAC Instances
Using Serviceguard Manager to Configure Oracle RAC Instance Package
Serviceguard Manager can be used to configure an Oracle RAC instance. Refer to the Serviceguard Manager documentation for specific configuration information.
NOTE Serviceguard Manager is the graphical user interface for Serviceguard. For version A.11.
Maintenance and Troubleshooting
4 Maintenance and Troubleshooting
This chapter includes information about carrying out routine maintenance on a Real Application Cluster configuration. As presented here, these tasks differ in some details from the similar tasks described in the Managing Serviceguard documentation.
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command Reviewing Cluster and Package States with the cmviewcl Command A cluster or its component nodes may be in several different states at different points in time. Status information for clusters, packages and other cluster elements is shown in the output of the cmviewcl command and in some displays in Serviceguard Manager.
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command
Examples of Cluster and Package States
The following is an example of the output generated by the cmviewcl command:

CLUSTER      STATUS
cluster_mo   up

  NODE    STATUS
  minie   up

  Quorum_Server_Status:
  NAME    STATUS
  white   up

  Network_Parameters:
  INTERFACE   STATUS
  PRIMARY     up
  PRIMARY     up
  STANDBY     up

  NODE    STATUS
  mo      up

  Quorum_Server_Status:
  NAME    STATUS
  white   up

  Network_Parameters:
  INTERFACE   STATUS
  PRIMARY     up
  PRIMARY     up
  STANDBY
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command
  mo      up

  Script_Parameters:
  ITEM      STATUS   MAX_RESTARTS
  Service   up       0
  Service   up       5
  Service   up       5
  Service   up       0
  Service   up       0

PACKAGE       STATUS
SG-CFS-DG-1   up

  NODE_NAME   STATUS   SWITCHING
  minie       up       enabled

  Dependency_Parameters:
  DEPENDENCY_NAME
  SG-CFS-pkg

  NODE_NAME   STATUS
  mo          up

  Dependency_Parameters:
  DEPENDENCY_NAME
  SG-CFS-pkg

PACKAGE       STATUS
SG-CFS-MP-1   up

  NODE_NAME   STATUS
  minie       up

  Dependency_Parameters:
  DEPENDENCY_NAME
  SG-CFS-DG-
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command
  Dependency_Parameters:
  DEPENDENCY_NAME   SATISFIED
  SG-CFS-DG-1       yes

  NODE_NAME   STATUS   STATE     SWITCHING
  mo          up       running   enabled

  Dependency_Parameters:
  DEPENDENCY_NAME   SATISFIED
  SG-CFS-DG-1       yes

PACKAGE       STATUS   STATE     AUTO_RUN   SYSTEM
SG-CFS-MP-3   up       running   enabled    no

  NODE_NAME   STATUS   STATE     SWITCHI
  minie       up       running

  Dependency_Parameters:
  DEPENDENCY_NAME
  SG-CFS-DG-1

  NODE_NAME   STATUS
  mo          up

  Dependency_Parameters:
  DEPENDENCY_NAME
  SG-CFS-DG-1
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command Node Status and State The status of a node is either up (active as a member of the cluster) or down (inactive in the cluster), depending on whether its cluster daemon is running or not. Note that a node might be down from the cluster perspective, but still up and running HP-UX. A node may also be in one of the following states: • Failed. A node never sees itself in this state.
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command Package Switching Attributes Packages also have the following switching attributes: • Package Switching. Enabled means that the package can switch to another node in the event of failure. • Switching Enabled for a Node. Enabled means that the package can switch to the referenced node.
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command
MEMBER        the ID number of a member of a group
PID           the Process ID of the group member
MEMBER_NODE   the node on which the group member is running

Service Status
Services have only status, as follows:
• Up. The service is being monitored.
• Down. The service is not running. It may have halted or failed.
• Uninitialized.
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command Failover and Failback Policies Packages can be configured with one of two values for the FAILOVER_POLICY parameter: • CONFIGURED_NODE. The package fails over to the next node in the node list in the package configuration file. • MIN_PACKAGE_NODE. The package fails over to the node in the cluster with the fewest running packages on it.
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command
Policy_Parameters:
POLICY_NAME   CONFIGURED_VALUE
Start         configured_node
Failback      manual

Node_Switching_Parameters:
NODE_TYPE   STATUS   SWITCHING   NAME
Primary     up       enabled     ftsys9 (current)

NODE      STATUS   STATE
ftsys10   up       running

Network_Parameters:
INTERFACE   STATUS   PATH
PRIMARY     up       28.1
STANDBY     up       32.
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command
Quorum Server Status:
NAME    STATUS   STATE
lp-qs   up       running
...

NODE      STATUS   STATE
ftsys10   up       running

Quorum Server Status:
NAME    STATUS   STATE
lp-qs   up       running

CVM Package Status
If the cluster is using the VERITAS Cluster Volume Manager for disk storage, the system multi-node package VxVM-CVM-pkg must be running on all active nodes for applications to be able to access CVM disk groups.
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command
ITEM      STATUS   MAX_RESTARTS   RESTARTS   NAME
Service   up       0              0          VxVM-CVM-pkg.srv

Status After Moving the Package to Another Node
After issuing the following command:
# cmrunpkg -n ftsys9 pkg2
the output of the cmviewcl -v command is as follows:

CLUSTER   STATUS
example   up

NODE     STATUS   STATE
ftsys9   up       running

Network_Parameters:
INTERFACE   STATUS   PATH
PRIMARY     up       56/36.
STANDBY     up
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command
Policy_Parameters:
POLICY_NAME   CONFIGURED_VALUE
Failover      min_package_node
Failback      manual

Script_Parameters:
ITEM      STATUS   NAME         MAX_RESTARTS
Service   up       service2.1   0
Subnet    up       15.13.168.
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command
Both packages are now running on ftsys9, and pkg2 is enabled for switching. Node ftsys10 is running the daemon, but no packages are running on it.
Maintenance and Troubleshooting Reviewing Cluster and Package States with the cmviewcl Command
ITEM       STATUS   NODE_NAME   NAME
Subnet     up       manx        192.8.15.0
Resource   up       burmese     /resource/random
Subnet     up       burmese     192.8.15.0
Resource   up       tabby       /resource/random
Subnet     up       tabby       192.8.15.0
Resource   up       persian     /resource/random
Subnet     up       persian     192.8.15.
Maintenance and Troubleshooting Online Reconfiguration
Online Reconfiguration
The online reconfiguration feature provides a method to make configuration changes online to a Serviceguard Extension for RAC (SGeRAC) cluster. Specifically, this provides the ability to add or delete nodes in a running SGeRAC cluster, and to reconfigure an SLVM volume group (VG) while it is being accessed by only one node.
Maintenance and Troubleshooting Managing the Shared Storage Managing the Shared Storage Single Node Online volume Re-Configuration (SNOR) The SLVM Single Node Online volume Re-configuration (SNOR) feature provides a method for changing the configuration for an active shared VG in a SGeRAC cluster. SLVM SNOR allows the reconfiguration of a shared volume group and of logical and physical volumes in the VG. This is done while keeping the VG active on a single node in exclusive mode.
Maintenance and Troubleshooting Managing the Shared Storage
# vgchange -a e -x vg_shared
NOTE Ensure that none of the mirrored logical volumes in this volume group have Consistency Recovery set to MWC (refer to lvdisplay(1M)). Changing the mode back to “shared” will not be allowed in that case, since Mirror Write Cache consistency recovery (MWC) is not valid in volume groups activated in shared mode.
5.
Maintenance and Troubleshooting Managing the Shared Storage The vgimport(1M)/vgexport(1M) sequence will not preserve the order of physical volumes in the /etc/lvmtab file. If the ordering is significant due to the presence of active-passive devices, or if the volume group has been configured to maximize throughput by ordering the paths accordingly, the ordering would need to be repeated. 7. Change the activation mode back to “shared” on the node in the cluster where the volume group vg_shared is active.
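Assuming the volume group name used in the steps above, step 7 would be, for example:

# vgchange -a s -x vg_shared

Making Volume Groups Shareable
A volume group is marked shareable with a command of the following form (a sketch, using the /dev/vg_ops volume group from this chapter's examples):

# vgchange -S y -c y /dev/vg_ops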
Maintenance and Troubleshooting Managing the Shared Storage This command is issued from the configuration node only, and the cluster must be running on all nodes for the command to succeed. Note that both the -S and the -c options are specified. The -S y option makes the volume group shareable, and the -c y option causes the cluster id to be written out to all the disks in the volume group.
Maintenance and Troubleshooting Managing the Shared Storage NOTE Do not share volume groups that are not part of the RAC configuration unless shared access is controlled. Deactivating a Shared Volume Group Issue the following command from each node to deactivate the shared volume group: # vgchange -a n /dev/vg_ops Remember that volume groups remain shareable even when nodes enter and leave the cluster.
Maintenance and Troubleshooting Managing the Shared Storage 4. From node 1, use the vgchange command to deactivate the volume group: # vgchange -a n /dev/vg_ops 5. Use the vgchange command to mark the volume group as unshareable: # vgchange -S n -c n /dev/vg_ops 6. Prior to making configuration changes, activate the volume group in normal (non-shared) mode: # vgchange -a y /dev/vg_ops 7. Use normal LVM commands to make the needed changes.
Maintenance and Troubleshooting Managing the Shared Storage
13. Use the vgimport command, specifying the map file you copied from the configuration node. In the following example, the vgimport command is issued on the second node for the same volume group that was modified on the first node:
# vgimport -v -m /tmp/vg_ops.map /dev/vg_ops /dev/dsk/c0t2d0 /dev/dsk/c1t2d0
14.
Maintenance and Troubleshooting Managing the Shared Storage One node will identify itself as the master. Create disk groups from this node. Similarly, you can delete VxVM or CVM disk groups provided they are not being used by a cluster node at the time. NOTE For CVM without CFS, if you are adding a disk group to the cluster configuration, make sure you also modify any package or create the package control script that imports and deports this disk group.
Maintenance and Troubleshooting Removing Serviceguard Extension for RAC from a System
Removing Serviceguard Extension for RAC from a System
If you wish to remove a node from Serviceguard Extension for RAC operation, use the swremove command to delete the software. Note the following:
NOTE
• The cluster service should not be running on the node from which you will be deleting Serviceguard Extension for RAC.
Maintenance and Troubleshooting Monitoring Hardware
Monitoring Hardware
Good standard practice in managing a high availability system includes careful fault monitoring, so as to prevent failures where possible, or at least to react to them swiftly when they occur.
Maintenance and Troubleshooting Adding Disk Hardware
Adding Disk Hardware
As your system expands, you may need to add disk hardware. This also means modifying the logical volume structure. Use the following general procedure:
1. Halt packages.
2. Ensure that the Oracle database is not active on either node.
3. Deactivate and mark as unshareable any shared volume groups.
4. Halt the cluster.
5. Deactivate automatic cluster startup.
6. Shut down and power off the system before installing new hardware.
7.
Maintenance and Troubleshooting Replacing Disks Replacing Disks The procedure for replacing a faulty disk mechanism depends on the type of disk configuration you are using and on the type of Volume Manager software. For a description of replacement procedures using VERITAS VxVM or CVM, refer to the chapter on “Administering Hot-Relocation” in the VERITAS Volume Manager Administrator’s Guide. Additional information is found in the VERITAS Volume Manager Troubleshooting Guide.
Maintenance and Troubleshooting Replacing Disks Replacing a Mechanism in an HA Enclosure Configured with Exclusive LVM Non-Oracle data that is used by packages may be configured in volume groups that use exclusive (one-node-at-a-time) activation. If you are using exclusive activation and software mirroring with MirrorDisk/UX and the mirrored disks are mounted in a high availability disk enclosure, you can use the following steps to hot plug a disk mechanism: 1.
Maintenance and Troubleshooting Replacing Disks Online Replacement of a Mechanism in an HA Enclosure Configured with Shared LVM (SLVM) If you are using software mirroring for shared concurrent activation of Oracle RAC data with MirrorDisk/UX and the mirrored disks are mounted in a high availability disk enclosure use the following LVM command options to change/replace disks via OLR (On Line Replacement): NOTE This procedure supports either LVM or SLVM VG and is “online” (activated), which uses an “online
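Step 1 detaches the failed disk from the volume group; a sketch, assuming the failed mechanism is /dev/dsk/c2t3d0 (verify the exact options against pvchange(1M) for your HP-UX version):

1. Detach the failed disk:
# pvchange -a N /dev/dsk/c2t3d0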
Maintenance and Troubleshooting Replacing Disks 2. Replace new disk. The new disk size needs to be of equal or greater size. This is required whether or not the disk replacement is online or offline. 3. Restore the LVM header to the new disk using the following command: # vgcfgrestore -n [vg name] [pv raw path] It is only necessary to perform the vgcfgrestore operation once from any node on the cluster. 4.
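Step 4 reattaches the new disk and resynchronizes its mirror copies; a sketch under the same assumptions:

4. Reattach the disk and resynchronize its mirrors:
# pvchange -a y /dev/dsk/c2t3d0
# vgsync vg_ops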
Maintenance and Troubleshooting Replacing Disks If you are using software mirroring for shared concurrent activation of Oracle RAC data with MirrorDisk/UX and the mirrored disks are mounted in a high availability disk enclosure, use the following steps to carry out offline replacement: 1. Make a note of the physical volume name of the failed mechanism (for example, /dev/dsk/c2t3d0). 2. Deactivate the volume group on all nodes of the cluster: # vgchange -a n vg_ops 3.
Maintenance and Troubleshooting Replacing Disks On-line Hardware Maintenance with In-line SCSI Terminator Serviceguard allows on-line SCSI disk controller hardware repairs to all cluster nodes if you use HP’s in-line terminator (C2980A) on nodes connected to the end of the shared FW/SCSI bus. The in-line terminator cable is a 0.5 meter extension cable with the terminator on the male end, which connects to the controller card for an external bus.
Maintenance and Troubleshooting Replacing Disks Figure 4-1 F/W SCSI Buses with In-line Terminators The use of in-line SCSI terminators allows you to do hardware maintenance on a given node by temporarily moving its packages to another node and then halting the original node while its hardware is serviced. Following the replacement, the packages can be moved back to the original node.
Maintenance and Troubleshooting Replacing Disks 4. Disconnect the node from the in-line terminator cable or Y cable if necessary. The other nodes accessing the bus will encounter no problems as long as the in-line terminator or Y cable remains connected to the bus. 5. Replace or upgrade hardware on the node, as needed. 6. Reconnect the node to the in-line terminator cable or Y cable if necessary. 7. Reconnect power and reboot the node. If AUTOSTART_CMCLD is set to 1 in the /etc/rc.config.
Maintenance and Troubleshooting Replacement of I/O Cards Replacement of I/O Cards After an I/O card failure, you can replace the card using the following steps. It is not necessary to bring the cluster down to do this if you are using SCSI inline terminators or Y cables at each node. 1. Halt the node by using Serviceguard Manager or the cmhaltnode command. Packages should fail over normally to other nodes. 2. Remove the I/O cable from the card.
Maintenance and Troubleshooting Replacement of LAN Cards
Replacement of LAN Cards
If a LAN card failure requires the card to be replaced, you can replace it on-line or off-line, depending on the type of hardware and operating system you are running. It is not necessary to bring the cluster down to do this.
Off-Line Replacement
The following steps show how to replace a LAN card off-line. These steps apply to both HP-UX 11.0 and 11i:
1. Halt the node by using the cmhaltnode command.
2.
Maintenance and Troubleshooting Replacement of LAN Cards After Replacing the Card After the on-line or off-line replacement of LAN cards has been done, Serviceguard will detect that the MAC address (LLA) of the card has changed from the value stored in the cluster binary configuration file, and it will notify the other nodes in the cluster of the new MAC address. The cluster will operate normally after this.
Maintenance and Troubleshooting Monitoring RAC Instances Monitoring RAC Instances The DB Provider provides the capability to monitor RAC databases. RBA (Role Based Access) enables a non-root user to have the capability to monitor RAC instances using Serviceguard Manager.
Software Upgrades
A Software Upgrades
Serviceguard Extension for RAC (SGeRAC) software upgrades can be done in the following two ways:
• rolling upgrade
• non-rolling upgrade
Instead of an upgrade, moving to a new version can be done with:
• migration with cold install
Rolling upgrade is a feature of SGeRAC that allows you to perform a software upgrade on a given node without bringing down the entire cluster. SGeRAC supports rolling upgrades on version A.11.
Software Upgrades One advantage of both rolling and non-rolling upgrades versus cold install is that upgrades retain the pre-existing operating system, software and data. Conversely, the cold install process erases the pre-existing system; you must re-install the operating system, software and data. For these reasons, a cold install may require more downtime.
Software Upgrades Rolling Software Upgrades Rolling Software Upgrades SGeRAC version A.11.15 and later allow you to roll forward to any higher revision provided all of the following conditions are met: • The upgrade must be done on systems of the same architecture (HP 9000 or Integrity Servers). • All nodes in the cluster must be running on the same version of HP-UX. • Each node must be running a version of HP-UX that supports the new SGeRAC version.
Software Upgrades Rolling Software Upgrades NOTE It is optional to set this parameter to “1”. If you want the node to join the cluster at boot time, set this parameter to “1”, otherwise set it to “0”. 6. Restart the cluster on the upgraded node (if desired). You can do this in Serviceguard Manager, or from the command line, issue the Serviceguard cmrunnode command. 7. Restart Oracle (RAC, CRS, Clusterware, OPS) software on the local node. 8.
Software Upgrades Rolling Software Upgrades Example of Rolling Upgrade The following example shows a simple rolling upgrade on two nodes, each running standard Serviceguard and RAC instance packages, as shown in Figure A-1. (This and the following figures show the starting point of the upgrade as SGeRAC A.11.15 for illustration only. A roll to SGeRAC version A.11.16 is shown.) SGeRAC rolling upgrade requires the same operating system version on all nodes.
Software Upgrades Rolling Software Upgrades Step 1. 1. Halt Oracle (RAC, CRS, Clusterware, OPS) software on node 1. 2. Halt node 1. This will cause the node’s packages to start up on an adoptive node. You can do this in Serviceguard Manager, or from the command line issue the following: # cmhaltnode -f node1 This will cause the failover package to be halted cleanly and moved to node 2. The Serviceguard daemon on node 1 is halted, and the result is shown in Figure A-2.
Software Upgrades Rolling Software Upgrades Step 2. Upgrade node 1 and install the new version of Serviceguard and SGeRAC (A.11.16), as shown in Figure A-3. NOTE If you install Serviceguard and SGeRAC separately, Serviceguard must be installed before installing SGeRAC. Figure A-3 Node 1 Upgraded to SG/SGeRAC 11.
Software Upgrades Rolling Software Upgrades Step 3. 1. Restart the cluster on the upgraded node (node 1) (if desired). You can do this in Serviceguard Manager, or from the command line issue the following: # cmrunnode node1 2. At this point, different versions of the Serviceguard daemon (cmcld) are running on the two nodes, as shown in Figure A-4. 3. Start Oracle (RAC, CRS, Clusterware, OPS) software on node 1.
Software Upgrades Rolling Software Upgrades Step 4. 1. Halt Oracle (RAC, CRS, Clusterware, OPS) software on node 2. 2. Halt node 2. You can do this in Serviceguard Manager, or from the command line issue the following: # cmhaltnode -f node2 This causes both packages to move to node 1; see Figure A-5. 3. Upgrade node 2 to Serviceguard and SGeRAC (A.11.16) as shown in Figure A-5. 4. When upgrading is finished, enter the following command on node 2 to restart the cluster on node 2: # cmrunnode node2 5.
Software Upgrades Rolling Software Upgrades Step 5. Move PKG2 back to its original node. Use the following commands: # cmhaltpkg pkg2 # cmrunpkg -n node2 pkg2 # cmmodpkg -e pkg2 The cmmodpkg command re-enables switching of the package, which is disabled by the cmhaltpkg command. The final running cluster is shown in Figure A-6.
Software Upgrades Rolling Software Upgrades Limitations of Rolling Upgrades The following limitations apply to rolling upgrades: • During a rolling upgrade, you should issue Serviceguard/SGeRAC commands (other than cmrunnode and cmhaltnode) only on a node containing the latest revision of the software. Performing tasks on a node containing an earlier revision of the software will not work or will cause inconsistent results.
Software Upgrades Non-Rolling Software Upgrades Non-Rolling Software Upgrades A non-rolling upgrade allows you to perform a software upgrade from any previous revision to any higher revision or between operating system versions. For example, you may do a non-rolling upgrade from SGeRAC A.11.14 on HP-UX 11i v1 to A.11.16 on HP-UX 11i v2, given both are running the same architecture.
Software Upgrades Non-Rolling Software Upgrades Steps for Non-Rolling Upgrades Use the following steps for a non-rolling software upgrade: 1. Halt Oracle (RAC, CRS, Clusterware, OPS) software on all nodes in the cluster. 2. Halt all nodes in the cluster. # cmhaltcl -f 3. If necessary, upgrade all the nodes in the cluster to the new HP-UX release. 4. Upgrade all the nodes in the cluster to the new Serviceguard/SGeRAC release. 5. Restart the cluster. Use the following command: # cmruncl 6.
Software Upgrades Non-Rolling Software Upgrades
Limitations of Non-Rolling Upgrades
The following limitations apply to non-rolling upgrades:
• Binary configuration files may be incompatible between releases of Serviceguard. Do not manually copy configuration files between nodes.
• It is necessary to halt the entire cluster when performing a non-rolling upgrade.
Software Upgrades Non-Rolling Software Upgrades Migrating a SGeRAC Cluster with Cold Install There may be circumstances when you prefer a cold install of the HP-UX operating system rather than an upgrade. The cold install process erases the pre-existing operating system and data and then installs the new operating system and software; you must then restore the data. CAUTION The cold install process erases the pre-existing software, operating system, and data.
Blank Planning Worksheets B Blank Planning Worksheets This appendix reprints blank planning worksheets used in preparing the RAC cluster. You can duplicate any of these worksheets that you find useful and fill them in as a part of the planning process.
Blank Planning Worksheets LVM Volume Group and Physical Volume Worksheet
LVM Volume Group and Physical Volume Worksheet

VG and PHYSICAL VOLUME WORKSHEET                              Page ___ of ____
==========================================================================
Volume Group Name: ______________________________________________________

PV Link 1                               PV Link 2
Physical Volume Name: _____________________________________________________
Physical Volume Name: _____________________________________________________
Physical Volume Name:
Blank Planning Worksheets VxVM Disk Group and Disk Worksheet
VxVM Disk Group and Disk Worksheet

DISK GROUP WORKSHEET                                          Page ___ of ____
===========================================================================
Disk Group Name: __________________________________________________________
Physical Volume Name: ______________________________________________________
Physical Volume Name: ______________________________________________________
Physical Volume Name: ____________________________________________________
Blank Planning Worksheets Oracle Logical Volume Worksheet
Oracle Logical Volume Worksheet

NAME                                                          SIZE
Oracle Control File 1: _____________________________________________________
Oracle Control File 2: _____________________________________________________
Oracle Control File 3: _____________________________________________________
Instance 1 Redo Log 1: _____________________________________________________
Instance 1 Redo Log 2: _____________________________________________________
Instance 1 Red
Index A activation of volume groups in shared mode, 218 adding packages on a running cluster, 191 administration cluster and package states, 200 array replacing a faulty mechanism, 226, 228, 229 AUTO_RUN parameter, 189 AUTO_START_TIMEOUT in sample configuration file, 87, 155 B building a cluster CVM infrastructure, 100, 168 building an RAC cluster displaying the logical volume infrastructure, 84, 152 logical volume infrastructure, 75, 143 building logical volumes for RAC, 82, 149 C CFS, 92, 98, 160 deleting
Index in sample package control script, 193 FS_MOUNT_OPT in sample package control script, 193 G GMS group membership services, 28 group membership services define, 28 H hardware adding disks, 225 monitoring, 224 heartbeat subnet address parameter in cluster manager configuration, 61 HEARTBEAT_INTERVAL in sample configuration file, 87, 155 HEARTBEAT_IP in sample configuration file, 87, 155 high availability cluster defined, 16 I in-line terminator permitting online hardware maintenance, 231 installing Orac
Index Oracle demo database files, 82, 109, 150, 177 optimizing packages for large numbers of storage units, 193 Oracle demo database files, 82, 109, 150, 177 Oracle 10 RAC installing binaries, 118 Oracle 10g RAC introducing, 39 Oracle 9i RAC installing, 180 introducing, 129 Oracle Disk Manager configuring, 121 Oracle Parallel Server starting up instances, 188 Oracle RAC installing, 86, 154 Oracle10g installing, 117 P package basic concepts, 17, 18 moving status, 210 state, 207 status and state, 204 switchi
Index SERVICE_RESTART in sample package control script, 193 Serviceguard Extension for RAC installing, 55, 137 introducing, 15 shared mode activation of volume groups, 218 deactivation of volume groups, 219 shared volume groups making volume groups shareable, 217 sharing volume groups, 84, 152 SLVM making volume groups shareable, 217 SNOR configuration, 215 software upgrades, 239 state cluster, 207 node, 204 of cluster and package, 200 package, 204, 207 status cluster, 203 halting node, 212 moving package,