HP Serviceguard Enterprise Cluster Master Toolkit User Guide HP Part Number: 5900-2145 Published: December 2012 Edition: 1
© Copyright 2012 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s standard commercial license. The information contained herein is subject to change without notice.
Contents
1 Introduction
    Installing and Uninstalling Enterprise Cluster Master Toolkit
2 Using the Oracle Toolkit in an HP Serviceguard Cluster
    Overview
    Supported Versions
3 Using the Sybase ASE Toolkit in a Serviceguard Cluster on HP-UX
    Overview
    Sybase Information
    Setting up the Application
    Toolkit user configuration
    Creating Serviceguard Package Using Modular Method
    Error Handling
    Maintaining the Apache Web Server
    Warranty information
    HP Authorized Resellers
    Documentation Feedback
    New and Changed Information in this Edition
1 Introduction The Enterprise Cluster Master Toolkit (ECMT) is a set of templates and scripts that allow the configuration of Serviceguard packages for internet servers and for third-party Database Management Systems. This unified set of high availability tools is being released on HP-UX 11i v2 and 11i v3. Each Toolkit is a set of scripts specifically for an individual application to start, stop, and monitor the application. This set also helps to integrate popular applications into a Serviceguard cluster.
Table 1 Toolkit Name/Application Extension and Application Name

Toolkit Name/Application Extension                            Application Name
Serviceguard Extension for RAC (SGeRAC)                       Oracle Real Application Clusters
Serviceguard Extension for SAP (SGeSAP)                       SAP
Serviceguard Extension for Oracle E-Business Suite (SGeEBS)   Oracle E-Business Suite
HP Serviceguard Toolkits for Database Replication Solutions   Oracle Data Guard and DB2 HADR
HA NFS Toolkit                                                Network File System (NFS)
HP Serviceguard Toolkit for Integrity Virtual Serv
2 Using the Oracle Toolkit in an HP Serviceguard Cluster
This chapter describes the High Availability Oracle Toolkit designed for use in an HP Serviceguard environment. It covers the basic steps for configuring an Oracle instance in an HP-UX cluster, and is intended for users who want to integrate an Oracle Database Server with Serviceguard. You must be familiar with Serviceguard installation and configuration procedures and with Oracle Database Server concepts.
package). For more information, see “Configuring and packaging Oracle single-instance database to co-exist with SGeRAC packages” (page 56).
Support for Oracle Database Without ASM
Setting up the application
To set up the application, install Oracle in /home/oracle on all the cluster nodes with Oracle as the database administrator, and then configure the shared storage.
1.
If you need help in creating, importing, or managing the volume group or disk group and filesystem, see Building an HA Cluster Configuration in the Serviceguard user manual available at http://www.hp.com/go/hpux-serviceguard-docs-> HP Serviceguard. • Configuring shared file system using CFS The shared file system can be a CFS mounted file system. To configure an Oracle package in a CFS environment, the Serviceguard CFS packages must be running so that the Oracle package can access CFS mounted file systems.
NOTE: If you decide to store the configuration information on a local disk and propagate the information to all nodes, ensure that pfile/spfile, the password file, and all control files and data files are on shared storage. For this set up, you must set up symbolic links to the pfile and the password file from /home/oracle/dbs as follows: ln -s /ORACLE_TEST0/dbs/initORACLE_TEST0.ora \ ${ORACLE_HOME}/dbs/initORACLE_TEST0.ora ln -s /ORACLE_TEST0/dbs/orapwORACLE_TEST0.
NOTE: This setup is not supported if Oracle 10g Release 1 is configured with LVM or VxVM. If Oracle 10g Release 1 is configured with LVM or VxVM, local configuration is recommended. The above configuration is supported in Oracle 10g Release 2 and Oracle 11g, only if Oracle's Automatic Storage Management (ASM) is not configured on that node.
Setting up the Toolkit
Toolkit Overview
Use swinstall to install both Serviceguard and the Enterprise Cluster Master Toolkit (referred to as ECMT), which includes the scripts for Oracle. After installing the toolkit, six scripts and a README file are created in the /opt/cmcluster/toolkit/oracle directory. Two more scripts and one file, used only for modular packages, are also installed.
Table 2 Legacy Package Scripts (continued)
Script Name Description
Alert Notification Script (SGAlert.sh) This script is used to send an email to the address specified by the value of the ALERT_MAIL_ID package attribute whenever there are critical problems with the package.
Interface Script (toolkit.sh) This script is the interface between the Serviceguard package control script and the Oracle toolkit.
Table 3 Variable or Parameter Name in haoracle.
Table 3 Variable or Parameter Name in haoracle.conf file (continued) NOTE: Setting MAINTENANCE_FLAG to "yes" and touching the oracle.debug file in the package directory puts the package in toolkit maintenance mode. The Serviceguard A.11.19 release introduced a feature that allows maintenance of individual components of the package while the package is still up. This feature is called Package Maintenance mode and is available only for modular packages.
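The mechanics of toolkit maintenance mode can be illustrated with a small shell sketch. The directory below is a stand-in for the real package directory (TKIT_DIR), not an actual cluster path:

```shell
# Illustrative sketch only: TKIT_DIR here is a demo stand-in for the
# package directory (e.g. /etc/cmcluster/pkg/<SID>) on a real cluster.
TKIT_DIR=/tmp/oracle_pkg_demo
mkdir -p "$TKIT_DIR"

# Enter toolkit maintenance mode: monitoring pauses while oracle.debug
# exists (MAINTENANCE_FLAG must also be "yes" in the package configuration).
touch "$TKIT_DIR/oracle.debug"
[ -f "$TKIT_DIR/oracle.debug" ] && echo "toolkit maintenance mode: ON"

# Leave maintenance mode: remove the file so monitoring resumes.
rm "$TKIT_DIR/oracle.debug"
[ -f "$TKIT_DIR/oracle.debug" ] || echo "toolkit maintenance mode: OFF"
```

On a real node, only the touch and rm of oracle.debug in the package directory are needed; the toolkit's monitor script detects the file on its next check.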
Table 3 Variable or Parameter Name in haoracle.conf file (continued)
package halt script from completing, therefore preventing the standby node from starting the instance. The value of TIME_OUT must be less than the HALT_SCRIPT_TIMEOUT value set in the package configuration file. If HALT_SCRIPT_TIMEOUT is not defined, it is the sum of all the SERVICE_HALT_TIMEOUT values defined in the package. This variable has no effect on the package failover times.
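The relationship between the two timeouts can be shown with hypothetical values (the numbers below are illustrative, not defaults):

```
# Package configuration file
HALT_SCRIPT_TIMEOUT   600

# haoracle.conf -- must be less than HALT_SCRIPT_TIMEOUT
TIME_OUT              300
```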
single service to monitor all the listeners, you must not pass the listener name to the service command. The service_cmd in the package configuration file appears as follows:
service_name oracle_listener_monitor
service_cmd “$SGCONF/scripts/ecmt/oracle/tkit_module.sh oracle_monitor_listener”
service_restart none
service_fail_fast_enabled no
service_halt_timeout 300
2. A separate service to monitor each listener: This service is recommended if listeners are critical.
NOTE: Configuring services using both approaches in a single package is not supported. Configure all the listeners using a single service, or use a separate service for each of them.
Ensure that the elements in the LISTENER_RESTART array and the LISTENER_PASS array correspond to those in the LISTENER_NAME array in the package configuration file. When some listeners do not require a restart value and others do not require a password, ordering the elements in the arrays becomes difficult.
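For example, with two hypothetical listeners where only the first has a password and only the second needs restart attempts, empty strings keep the three arrays aligned by index:

```shell
# Hypothetical listener names and values, for illustration only.
LISTENER_NAME[0]=LSNR_A
LISTENER_NAME[1]=LSNR_B
LISTENER_PASS[0]=secret
LISTENER_PASS[1]=""
LISTENER_RESTART[0]=""
LISTENER_RESTART[1]=2
```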
The DB_SERVICE attribute is used to start and stop the DB service through the DB package. This attribute is commented out by default. You can either set DB_SERVICE to all or specify the DB service names that need to be started and stopped by the DB package. To start and stop all DB services configured in the database managed by this package, set this attribute to all.
1. The PARENT_ENVIRONMENT attribute in the package configuration file must be set to “yes”.
2. The required environment variables must be defined in the file customer.conf and placed in the package directory (TKIT_DIR).
WARNING! HP recommends that you do not override the Oracle toolkit attributes defined in the package configuration through this file.
/dev/vx/dsk/DG0_ORACLE_TEST0/lvol1 (the logical volume)
/dev/vx/dsk/DG0_ORACLE_TEST0/lvol1 mounted at /ORACLE_TEST0 (the filesystem)
If you are using CFS
Make sure that the Serviceguard CFS packages are running so that the Oracle package can access CFS mounted file systems. For more information on how to configure Serviceguard CFS packages, see the Serviceguard manual. Create a directory /ORACLE_TEST0 on all cluster nodes. Mount the CFS file system on /ORACLE_TEST0 using the Serviceguard CFS packages.
There should be one set of configuration and control script files for each Oracle instance.
The Serviceguard package control script (ORACLE_TEST0.cntl). Below are examples of the modifications you need to make to the Serviceguard package control script to customize it for your environment.
If you are using LVM
VOLUME GROUPS Define the volume groups that are used by the Oracle instance.
The service name must be the same as defined in the package configuration file. Always call the Oracle toolkit script with the monitor option in the SERVICE_CMD definitions.
SERVICE_NAME[0]=ORACLE_${SID_NAME}
SERVICE_CMD[0]="/etc/cmcluster/pkg/${SID_NAME}/toolkit.sh monitor"
SERVICE_RESTART[0]="-r 2"
For example:
SERVICE_NAME[0]=ORACLE_TEST0
SERVICE_CMD[0]="/etc/cmcluster/pkg/ORACLE_TEST0/toolkit.
else reason="auto" fi /etc/cmcluster/pkg/ORACLE_TEST0/toolkit.sh stop $reason test_return 52 } • The Serviceguard package configuration file (ORACLE_TEST0.conf). The package configuration file is created with "cmmakepkg -p", and must be placed in the following location: '/etc/cmcluster/pkg/${SID_NAME}/${SID_NAME}.conf' For example: /etc/cmcluster/pkg/ORACLE_TEST0/ORACLE_TEST0.conf The configuration file should be edited according to the comments provided in that file.
DEPENDENCY_CONDITION
DEPENDENCY_LOCATION
For example:
DEPENDENCY_NAME Oracle_dependency
DEPENDENCY_CONDITION SG-CFS-MP-1 = up
DEPENDENCY_LOCATION same_node
NOTE: If the Oracle database is running in a cluster where SGeRAC packages are also running, the ECMT Oracle single-instance database package must be made dependent on the SGeRAC Oracle clusterware multi-node package (OC MNP), with the dependency condition '<OC MNP package name> = up' and DEPENDENCY_LOCATION same_node.
CLEANUP_BEFORE_STARTUP=no USER_SHUTDOWN_MODE=abort ALERT_MAIL_ID= The parameters ASM_DISKGROUP, ASM_VOLUME_GROUP, ASM_HOME, ASM_USER, ASM_SID, and KILL_ASM_FOREGROUNDS must be set only for a database package using ASM. For more information on ASM, see “Supporting Oracle ASM Instance and Oracle Database with ASM” (page 28). After the Serviceguard environment is set up, each clustered Oracle instance must have the following files in the related package directory.
Example 1 Another example to create the Serviceguard package using the modular method:
1. Create the modular package configuration file pkg.conf by including the ECMT Oracle toolkit module.
# cmmakepkg -m ecmt/oracle/oracle pkg.conf
where:
'ecmt/oracle/oracle' is the ECMT Oracle toolkit module name.
pkg.conf is the name of the package configuration file.
2. Configure the following Serviceguard parameters in the pkg.
This section discusses the use of the Oracle database server feature called Automatic Storage Management (ASM) in HP Serviceguard for single database instance failover. SGeRAC supports ASM with Oracle RAC on both ASM over raw devices and ASM over SLVM. For Oracle single-instance failover, Serviceguard support is for ASM over LVM where the members of the ASM disk groups are raw logical volumes managed by LVM.
NOTE: For more information on the proposed framework for ASM integration with Serviceguard, see the whitepaper High Availability Support for Oracle ASM with Serviceguard available at: www.hp.com/go/hpux-serviceguard-docs —> HP Enterprise Cluster Master Toolkit. The Oracle toolkit uses Multi-Node Package (MNP) and the package dependency feature to integrate Oracle ASM with HP Serviceguard.
Figure 1 Oracle database storage hierarchy without and with ASM Why ASM over LVM? As mentioned above, we require ASM disk group members in Serviceguard configurations to be raw logical volumes managed by LVM. We leverage existing HP-UX capabilities to provide multipathing for LVM logical volumes, using either the PV Links feature, or separate products such as HP StorageWorks Secure Path that provide multipathing for specific types of disk arrays.
Each LVM logical volume used as an ASM disk group member:
• should not span multiple PVs, and
• should not share a PV with other LVs.
The idea is that ASM provides the mirroring, striping, slicing, and dicing functionality as needed, and LVM supplies the multipathing functionality not provided by ASM. Figure 2 indicates this one-to-one mapping between LVM PVs and LVs used as ASM disk group members. Further, because of the default retry behavior of LVM, the I/O operation on an LVM LV might take an indefinitely long time.
Command sequence for configuring LVM volume groups:
1. Create the volume group with the two PVs, incorporating the two physical paths for each (create hh as the next hexadecimal number that is available on the system, after the volume groups that are already configured).
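The individual commands for this step do not survive in this extract. The following is a hedged sketch of a typical HP-UX sequence; the device files (c0t1d0, c1t1d0 and their alternate paths) and the volume group name vg_asm01 are illustrative only:

```shell
# Create physical volumes on the primary path of each disk
pvcreate /dev/rdsk/c0t1d0
pvcreate /dev/rdsk/c1t1d0

# Create the volume group directory and group device file
# (0xhh0000 uses the next free hexadecimal minor number hh)
mkdir /dev/vg_asm01
mknod /dev/vg_asm01/group c 64 0x080000

# Create the volume group with the primary paths, then add the
# alternate paths so LVM PV Links provides the multipathing
vgcreate /dev/vg_asm01 /dev/dsk/c0t1d0 /dev/dsk/c1t1d0
vgextend /dev/vg_asm01 /dev/dsk/c2t1d0 /dev/dsk/c3t1d0
```

These commands run only on HP-UX; consult the HP-UX LVM documentation for the exact syntax on your release.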
Serviceguard support for ASM on HP-UX 11i v3 onwards
This document describes how to configure an Oracle ASM database with Serviceguard for high availability using the ECMT Oracle toolkit. See the ECMT support matrix available at http://www.hp.com/go/hpux-serviceguard-docs -> HP Enterprise Cluster Master Toolkit for the supported versions of ECMT, Oracle, and Serviceguard. Note that for database failover, each database must store its data in its own disk group.
On ASM instance failure, all dependent database instances are brought down and are started on the adoptive node. How Toolkit Starts, Stops, and Monitors the Database instance The Toolkit failover package for the database instance provides start and stop functions for the database instance and has a service for checking the status of the database instance. There is a separate package for each database instance. Each database package has a simple dependency on the ASM package.
Figure 4 ASM environment when DB1 fails on node 2 Serviceguard Toolkit Internal File Structure HP provides a set of scripts for the framework proposed for ASM integration with Serviceguard. The ECMT Oracle scripts contain the instance specific logic to start/stop/monitor both the ASM and the database instance. These scripts support both legacy and the modular method of packaging.
Figure 5 Internal file structure for legacy packages Modular packages use the package configuration file for the ASM or database instance on the Serviceguard specific side. The package configuration parameters are stored in the Serviceguard configuration database at cmapplyconf time, and are used by the package manager in its actions on behalf of this package.
ASM File Descriptor Release
When an ASM disk group is dismounted on a node in the Serviceguard cluster, the ASM instance closes the related descriptors of files opened on the raw volumes underlying the members of that ASM disk group. Consider a configuration in which there are multiple databases using ASM to manage their storage in a Serviceguard cluster. Assume each database stores its data in its own exclusive set of ASM disk groups.
• To start the ASM instance on all the configured nodes, invoke cmrunpkg on the ASM MNP.
• Configure the database failover package using the ECMT Oracle scripts. You must follow the HP instructions in the README file.
• To start the database instance on the node, invoke cmrunpkg on the database package.
For troubleshooting, look at the subdirectories for database and ASM instances under $ORACLE_BASE/admin/, where log and trace files for these instances are deposited.
NOTE: If the Oracle database is running in a cluster where SGeRAC packages are also running, the Oracle database must be disabled from being started automatically by the Oracle Clusterware.
Table 6 Variables or Parameters in haoracle.conf file (continued) Parameter Name Description ASM_HOME The home directory where ASM is installed. This parameter must be set for both the ASM and the ASM database instance packages. ASM_USER The user name of the Oracle ASM administrator. This parameter must be set for both ASM instance and ASM database instance packages. ASM_SID The ASM session name. This uniquely identifies an ASM instance.
Table 6 Variables or Parameters in haoracle.conf file (continued) Parameter Name Description NOTE: If the Oracle database package is dependent on the SGeRAC clusterware multi-node package (OC MNP), the Oracle database package automatically goes into toolkit maintenance mode when the SGeRAC OC MNP is put into toolkit maintenance mode. To put the SGeRAC OC MNP into toolkit maintenance mode, its MAINTENANCE_FLAG attribute must be set to 'yes' and a file oc.
Oracle Legacy Package Configuration Example
ASM Package Configuration Example
• ASM Multi-Node Package Setup and Configuration
NOTE: This package must not be created if SGeRAC packages are created in the same cluster.
Configuring ASM package:
1. Create your own ASM package directory under /etc/cmcluster and copy over the scripts in the bundle.
2. Log in as root:
# mkdir /etc/cmcluster/asm_package_mnp
# cd /etc/cmcluster/asm_package_mnp
3. Copy the framework scripts to the following location:
# cp /opt/cmcluster/toolkit/oracle/* .
4. Edit the configuration file haoracle.conf.
7. Add in the customer_defined_run_cmds function: /etc/cmcluster/asm_package_mnp/toolkit.sh start 8. Add in the customer_defined_halt_cmds function: if [ $SG_HALT_REASON = "user_halt" ]; then reason="user" else reason="auto" fi /etc/cmcluster/asm_package_mnp/toolkit.
MONITOR_PROCESSES[5]=ora_reco_${SID_NAME} MONITOR_PROCESSES[6]=ora_rbal_${SID_NAME} MONITOR_PROCESSES[7]=ora_asmb_${SID_NAME} MAINTENANCE_FLAG=yes MONITOR_INTERVAL=30 TIME_OUT=30 KILL_ASM_FOREGROUNDS=yes PARENT_ENVIRONMENT=yes CLEANUP_BEFORE_STARTUP=no USER_SHUTDOWN_MODE=abort OC_TKIT_DIR=/etc/cmcluster/crs # This attribute is needed only when this toolkit is used in an SGeRAC cluster. 5. Generate the database package configuration file and the control script in the database package directory.
SERVICE_NAME[1]="ORACLE_LSNR_SRV" SERVICE_CMD[1]="/etc/cmcluster/db1_package/toolkit.sh monitor_listener" SERVICE_RESTART[1]="-r 2" 11. Configure the Package IP and the SUBNET. 12. Add in the customer_defined_run_cmds function: /etc/cmcluster/db1_package/toolkit.sh start 13. Add in the customer_defined_halt_cmds function: if [ $SG_HALT_REASON = "user_halt" ]; then reason="user" else reason="auto" fi /etc/cmcluster/db1_package/toolkit.
Configure the following Serviceguard parameters:
package_name - Set to any name desired.
package_type - Set to "multi_node".
Edit the service parameters if necessary. The service parameters are preset to:
service_name oracle_service
service_cmd "$SGCONF/scripts/ecmt/oracle/tkit_module.sh oracle_monitor"
service_restart none
service_fail_fast_enabled no
service_halt_timeout 300
service_name oracle_listener_service
service_cmd "$SGCONF/scripts/ecmt/oracle/tkit_module.
e. If SGeRAC packages are configured in the same cluster, the ASM MNP package should not be created, and the Oracle database package should depend on the SGeRAC Clusterware package instead of the ASM MNP. Use the following definition, substituting the SGeRAC Clusterware package name:
DEPENDENCY_NAME asm_dependency
DEPENDENCY_CONDITION <SGeRAC Clusterware package name> = up
DEPENDENCY_LOCATION same_node
Because LVM logical volumes are used in the disk groups, specify the names of the volume groups on which the ASM disk groups reside for the "vg" attribute.
f. g.
Edit the package configuration file pkg.conf and configure the following Serviceguard parameters: package_name - Set to any name desired. package_type - Set to "failover". Edit the service parameters if necessary. The service parameters are preset to: service_name oracle_service service_cmd "$SGCONF/scripts/ecmt/oracle/tkit_module.
6) For the ASM_SID attribute, specify "+ASM". ECMT will automatically discover the ASM SID on a node.
ASM_SID=+ASM
7) Apply the package configuration.
# cmapplyconf -P db1pkg.conf
This command applies the package configuration to the cluster. It also creates the toolkit configuration directory defined by the package attribute TKIT_DIR on all target nodes, if it is not already present, and then creates the toolkit configuration file in it with the values specified in the db1pkg.conf file.
6. Copy the framework scripts provided in the bundle to the database instance package directory.
7. Configure the haoracle.conf file as described in this document for the database package.
8. Do one of the following:
• Create a new package ASCII file and control script.
• Edit the existing package ASCII and control scripts in the database package directory.
• Configure the parameters as mentioned for the database package.
9.
For more information, see the whitepaper Migrating Packages from Legacy to Modular Style, October 2007, and also the whitepaper on modular package support in Serviceguard at http://www.hp.com/go/hpux-serviceguard-docs -> HP Enterprise Cluster Master Toolkit
8. To add the values from the older toolkit configuration file to this modular package configuration file, issue the following command:
# cmmakepkg -i modular.ascii -m ecmt/oracle/oracle -t modular1.ascii
9.
Error Handling
On startup, the Oracle shell script checks for the existence of the init${SID_NAME}.ora or spfile${SID_NAME}.ora file in the shared ${ORACLE_HOME}/dbs directory. If this file does not exist, the database cannot be started on any node until the situation is corrected. The Oracle shell script halts the package on that node and tries to start it on the standby node.
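The existence check can be sketched in portable shell. The paths and SID below are hypothetical demo values, not the toolkit's actual code:

```shell
# Simplified sketch of the startup check; ORACLE_HOME and SID_NAME are
# demo stand-ins for the values configured in haoracle.conf.
ORACLE_HOME=/tmp/oracle_demo
SID_NAME=TEST0
mkdir -p "${ORACLE_HOME}/dbs"

if [ -f "${ORACLE_HOME}/dbs/init${SID_NAME}.ora" ] || \
   [ -f "${ORACLE_HOME}/dbs/spfile${SID_NAME}.ora" ]; then
    echo "parameter file found: starting instance"
else
    # Neither pfile nor spfile exists: the package halts on this node
    # and Serviceguard attempts to start it on the standby node.
    echo "no pfile/spfile: halting package on this node"
fi
```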
NOTE: Plain-text listener passwords cannot contain spaces when used with the Oracle toolkit.
Database Maintenance
There might be situations when the Oracle database must be shut down for maintenance purposes, such as changing the configuration, without migrating the instance to the standby node.
NOTE: To use this maintenance feature, the MAINTENANCE_FLAG attribute must be set to "yes".
NOTE: • If the package fails during maintenance (example, the node crashes), the package does not automatically fail over to an adoptive node. You must start the package up on an adoptive node. For more information, see the latest Managing Serviceguard manual available at http://www.hp.com/go/hpux-serviceguard-docs—>HP Serviceguard .
Attributes newly added to the ECMT Oracle toolkit
The following attributes are added to the ECMT Oracle toolkit. These attributes must be populated only for coexistence in an SGeRAC cluster. When there are no SGeRAC packages configured in the same cluster, these attributes must be left empty.
• ORA_CRS_HOME: When using the ECMT Oracle toolkit in a coexistence environment, this attribute must be set to the Oracle CRS HOME.
service_name oracle_listener_service service_cmd "$SGCONF/scripts/ecmt/oracle/tkit_module.sh oracle_monitor_listener" service_restart none service_fail_fast_enabled no service_halt_timeout 300 service_name oracle_hang_service service_cmd "$SGCONF/scripts/ecmt/oracle/tkit_module.sh oracle_hang_monitor 30 failover" service_restart none service_fail_fast_enabled no service_halt_timeout 300 If the listener is configured, uncomment the second set of service parameters which are used to monitor the listener.
on all target nodes, if not already present and then creates the toolkit configuration file in it with the values specified in the pkg2.conf file. • Open the haoracle.conf file generated in the package directory (TKIT_DIR). Configuring a legacy failover package for an Oracle database using ASM in a Coexistence Environment To configure an ECMT legacy package for an Oracle database: 1.
# cmmakepkg -p db1pkg.conf # cmmakepkg -s db1pkg.cntl Edit the package configuration file db1pkg.conf: PACKAGE_NAME - Set to any name desired. PACKAGE_TYPE - Set to FAILOVER. RUN_SCRIPT /etc/cmcluster/db1_package/db1pkg.cntl HALT_SCRIPT /etc/cmcluster/db1_package/db1pkg.
# cmrunpkg
Check the package status using cmviewcl. Verify that the database instance is running. Repeat these steps for each database instance.
ECMT Oracle Toolkit Maintenance Mode
To place the single-instance ECMT Oracle toolkit package in maintenance mode, set the package attribute MAINTENANCE_FLAG to ‘yes’. Also, a file named ‘oracle.debug’ must exist in the package directory.
provides functionalities like managing disk groups, disk redundancies, and allows management of database objects without specifying mount points and filenames. Currently, with the Oracle EBS DB Tier Rapid installation you can save the database in either shared file system or local file system. ASM is currently not supported using EBS DB Tier Rapid Install. To support ASM for EBS DB Tier, migrate from non-ASM storage to ASM storage.
3 Using the Sybase ASE Toolkit in a Serviceguard Cluster on HP-UX
This chapter describes the High Availability Sybase Adaptive Server Enterprise (ASE) Toolkit designed for use in a Serviceguard environment. It covers the basic steps to configure a Sybase ASE instance in a Serviceguard cluster. With this configuration, you can integrate a Sybase ASE Server with Serviceguard in HP-UX environments.
HP recommends this configuration. Here, the configuration and database files are stored on shared disks (but the Sybase ASE binaries can be on the local storage). The configuration files and data files reside on a shared location, therefore, there is no additional work to ensure that all nodes have the same configuration at any point in time. Another alternative setup is to place the Sybase ASE binaries on the shared location along with the configuration and database files.
3. Create a volume group, logical volume, and file system to hold the necessary configuration information and symbolic links to the Sybase ASE executables. This file system is defined as SYBASE in the package configuration file and master control script.
• Sybase ASE stored procedures • Sybase ASE transaction log files After defining the shared volume groups/logical volumes/file systems for these entities, see Sybase ASE documentation to create the database. Setting up the Toolkit • Toolkit Setup After the toolkit installation is complete, the README and SGAlert.sh files are available in the /opt/cmcluster/toolkit/sybase directory. One script is present in the /etc/cmcluster/scripts/ecmt/sybase directory.
Table 7 Sybase ASE attributes (continued)
Sybase ASE Attributes Description
ASE_RUNSCRIPT Path of the script to start the ASE instance.
USER_SHUTDOWN_MODE The parameter used to specify the instance shutdown mode only when a shutdown is initiated by the user and not due to a failure of a service. This parameter can take the values "wait" or "nowait". If "wait" is specified, the instance is shut down using the wait option, which is a clean shutdown.
database is 'SYBASE0', follow the instructions in the section Building an HA Cluster Configuration in the latest Managing Serviceguard manual to create the following:
/dev/vg0_SYBASE0 (the volume group)
/dev/vg0_SYBASE0/lvol1 (the logical volume)
/dev/vg0_SYBASE0/lvol1 (the filesystem) mounted at /SYBASE0
Consider that Sybase ASE is installed in /home/sybase; create symbolic links to all subdirectories under /home/sybase.
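The symbolic-link setup can be sketched as follows. The demo paths stand in for the real local /home/sybase and the shared /SYBASE0 file system:

```shell
# Hedged sketch of the symbolic-link setup; demo paths only.
SHARED=/tmp/SYBASE0_demo      # stands in for the shared /SYBASE0 file system
LOCAL=/tmp/home_sybase_demo   # stands in for the local /home/sybase
mkdir -p "$SHARED/bin" "$SHARED/install" "$LOCAL"

# Link every shared subdirectory into the local Sybase home
# (-n avoids descending into an existing link on re-runs)
for d in "$SHARED"/*; do
    ln -sfn "$d" "$LOCAL/$(basename "$d")"
done
ls -l "$LOCAL"
```

Repeat the equivalent linking on every cluster node so that each node resolves the same Sybase directory layout against the shared storage.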
ecmt/sybase/sybase/TIME_OUT 30 ecmt/sybase/sybase/RECOVERY_TIMEOUT 30 ecmt/sybase/sybase/MONITOR_PROCESSES dataserver ecmt/sybase/sybase/MAINTENANCE_FLAG yes Define the volume groups that are used by the Sybase ASE instance. File systems associated with these volume groups are defined as follows: vg /dev/vg00_SYBASE0 vg /dev/vg01_SYBASE0 Define the file systems which are used by the Sybase ASE instance.
4. Edit the package configuration file. NOTE: Sybase toolkit configuration parameters in the package configuration file are prefixed by ecmt/sybase/sybase when used in Serviceguard A.11.19.00. For Example: /etc/cmcluster/pkg/sybase_pkg/pkg.conf The configuration file must be edited as indicated by the comments in that file. The package name must be unique within the cluster. For Example: PACKAGE_NAME sybase NODE_NAME node1 NODE_NAME node2 Set the TKIT_DIR variable as the path of .
Table 8 Parameters in USER_ROLE
User Role Description
MONITOR Can perform cluster and package view operations.
PACKAGE_ADMIN Can perform package administration, and use cluster and package view commands.
FULL_ADMIN Can perform cluster administration, package administration, and cluster and package view operations.
Assign any of these roles to users who are not configured as root users. Root users are usually given complete control of the cluster using the FULL_ADMIN value.
MIG29
master tcp ether 192.168.10.1 5000
query tcp ether 192.168.10.1 5000
If you require a re-locatable hostname to be associated with this re-locatable IP address, do the following:
1. Use the re-locatable hostname in the srvbuild.adaptive_server.rs and similar files for all the other 'server' processes, and use the srvbuildres utility to configure the servers.
2. Make the (relocatable_IP, relocatable_hostname) pair entries in /etc/hosts (and also in the nameserver, if necessary).
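The /etc/hosts entry described above pairs the relocatable package IP with its relocatable hostname. The address matches the interfaces example; the hostname is hypothetical:

```
192.168.10.1    mig29reloc    # relocatable package IP and its hostname
```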
$ export SG_PACKAGE=SYBASE0
$ $SGCONF/scripts/ecmt/sybase/tkit_module.sh stop
4. Perform maintenance actions (for example, change the configuration parameters in the parameter file of the Sybase ASE instance; if this file is changed, distribute the new file to all cluster nodes).
5. Start the Sybase ASE database instance again if it is stopped:
$ export SG_PACKAGE=SYBASE0
$ $SGCONF/scripts/ecmt/sybase/tkit_module.sh start
6.
4 Using the DB2 Database Toolkit in a Serviceguard Cluster in HP-UX
DB2 is an RDBMS product from IBM. This chapter describes the High Availability toolkit for DB2 V9.1, V9.5, and V9.7 designed to be used in a Serviceguard environment. This chapter covers the basic steps to configure DB2 instances in a Serviceguard cluster. For more information, see the compatibility matrix available at http://www.hp.com/go/hpux-serviceguard-docs —> HP Serviceguard.
Setting up the application as follows: 1. Create a volume group, a logical volume, and a file system to hold the database. The volume group, logical volume, and file system parameters have to be defined in the Serviceguard package configuration file. The volume group and file system must be uniquely named within the cluster, therefore, include the identity of the database instance in their names.
8. In a physical multi-partitioned database installation, edit the DB2 node configuration file db2nodes.cfg in the exported instance owner home directory to define the physical and logical database partitions that participate in a DB2 instance. For example:
[payroll_inst@node1 ~]> vi /home/payroll_inst/sqllib/db2nodes.cfg
0 node1 0 node1
1 node1 1 node1
2 node2 0 node2
3 node2 1 node2
9. To enable the communication between the partitions, edit the /etc/services, /etc/hosts, .
NOTE: • For multiple physical and logical partition configuration of DB2, the number of ports added in the services file must be sufficient for the number of partitions created in the current node and the number of partitions created on the other nodes. This ensures that enough ports are available for all partitions to startup on a single node, if all packages managing different partitions are started on that node.
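As a hedged illustration of this note, the /etc/services entries for a partitioned instance might look like the following sketch. The DB2_<instance> service-name convention is the usual one for DB2 inter-partition (FCM) ports, but the instance name and port numbers here are examples, not values shipped with the toolkit:

```
# Hypothetical /etc/services entries reserving four ports for the four
# partitions of instance payroll_inst (port numbers are examples)
DB2_payroll_inst      60000/tcp
DB2_payroll_inst_1    60001/tcp
DB2_payroll_inst_2    60002/tcp
DB2_payroll_inst_END  60003/tcp
```

Reserving at least as many ports as the total number of partitions lets all partitions start on a single node during failover.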
NOTE: In a Serviceguard cluster environment, the fault monitor facility provided by DB2 must be turned off. The fault monitor facility is a set of processes that work together to ensure that a DB2 instance is running. It is designed specifically for non-clustered environments and can be turned off when the DB2 instance runs in a cluster environment.
For legacy packages, there is one user configuration script (hadb2.conf) and three functional scripts (toolkit.sh, hadb2.sh, and hadb2.mon) which work with each other to integrate DB2 with the Serviceguard package control scripts. For modular packages, there is an Attribute Definition File (ADF), a Toolkit Module Script (tkit_module.sh), and a Toolkit Configuration File Generator Script (tkit_gen.sh) that work with the three scripts (toolkit.sh, hadb2.sh, and hadb2.mon).
Table 10 Variables in hadb2.conf File (continued) Variable Name Description MONITOR_INTERVAL The time interval in seconds between the checks to ensure that the DB2 database is running. Default value is 30 seconds. TIME_OUT The amount of time, in seconds, to wait for the DB2 shutdown to complete before killing the DB2 processes defined in MONITOR_PROCESSES.
After waiting for a few minutes, check for the existence of the DB2 processes, which are identified by "db2", using ps -ef | grep db2. Bring the database down, unmount the file system, and deactivate the volume group:
./toolkit.sh stop
umount /mnt/payroll
vgchange -a n /dev/vg0_payroll
Repeat this step on all other cluster nodes that must be configured to run the package and ensure that DB2 can be brought up and down successfully.
PARTITION_NUMBER[0]=0
PARTITION_NUMBER[1]=1
NOTE: Logical partitions of a database can be grouped to run in a single package, while physical partitions need a separate package each. Set the MONITOR_PROCESSES variable to the process names that must be monitored and cleaned up to ensure a fail-safe shutdown of the DB2 instance or partition. The actual monitoring is done by a command called db2gcf. MONITOR_PROCESSES is only used to kill the processes that remain alive after the shutdown of the database.
fs_umount_opt ""
fs_fsck_opt ""
The example above defines the variables for two database partitions, partition 0 (NODE0000) and partition 1 (NODE0001). The variables are named NODE0000 in accordance with the DB2 directory structure.
5. Use the cmcheckconf command to check the validity of the specified configuration. For example:
# cmcheckconf -P pkg.conf
6. If the cmcheckconf command does not report any errors, use the cmapplyconf command to add the package into the Serviceguard environment.
function customer_defined_run_cmds
{
    # Start the DB2 database.
    /etc/cmcluster/pkg/db2_pkg/toolkit.sh start
    test_return 51
}
5. Edit the customer_defined_halt_cmds function to run the toolkit.sh script with the stop option. For example:
function customer_defined_halt_cmds
{
    # Stop the DB2 database.
    /etc/cmcluster/pkg/db2_pkg/toolkit.sh stop
    test_return 52
}
6. Create the Serviceguard package configuration file (pkg.conf).
PARTITION_NUMBER[1]=1
MONITOR_PROCESSES[0]=db2sysc
MONITOR_PROCESSES[1]=db2acd
MAINTENANCE_FLAG=YES
MONITOR_INTERVAL=30
TIME_OUT=30
12. After setting up the Serviceguard environment, each clustered DB2 instance must have the following files in the related package directory. For example, the DB2 package, located at /etc/cmcluster/pkg/db2_pkg, must contain the files listed in Table 12.
Table 12 DB2 Package Files
File Name Description
$PKG.
$ cd /etc/cmcluster/pkg/db2_pkg/
$ $PWD/toolkit.sh stop
• Perform maintenance actions (for example, changing the configuration parameters in the parameter file of the DB2 instance; if this file is changed, you must distribute the new file to all cluster nodes).
• Start the DB2 database instance again if you have stopped it:
$ cd /etc/cmcluster/pkg/db2_pkg/
$ $PWD/toolkit.sh start
• To continue monitoring, enable the monitoring scripts by removing the db2.debug file:
$ rm -f /etc/cmcluster/pkg/db2_pkg/db2.debug
5 Using MySQL Toolkit in a HP Serviceguard Cluster
This chapter describes how to configure the MySQL Database Server application in an HP Serviceguard cluster environment using the MySQL Toolkit. This toolkit supports the Enterprise MySQL Database Server application 5.0.56 and later. You must be familiar with the HP Serviceguard cluster configuration and MySQL Database Server concepts, and their installation and configuration procedures.
The following Attribute Definition File (ADF) is installed in /etc/cmcluster/modules/ecmt/ mysql. Table 14 ADF File in Modular Package in MySQL File Name Description mysql.1 For every parameter in the legacy toolkit user configuration file, there is an attribute in the ADF. It also has an additional attribute TKIT_DIR which is analogous to the package directory in the legacy method of packaging. The ADF is used to generate a modular package ASCII template file.
To run MySQL in a HP Serviceguard environment:
1. Install the same version of MySQL Database Server software on each node.
2. Place the configuration files on shared storage that is accessible to all nodes. Each node that is configured to run the package must have access to the configuration files.
3. When multiple databases run in the same cluster, configure a unique configuration for each database.
3. Copy the configuration file /etc/my.cnf to /MySQL_1/my.cnf.
4. Modify /MySQL_1/my.cnf to configure the DB for your unique environment. Changes may include specific assignments to the following parameters:
[mysqld]
* datadir=/MySQL_1/mysql
* socket=/MySQL_1/mysql/mysql.sock
* port=
[mysqld_safe]
* err-log=/MySQL_1/mysql/mysqld.err
* pid-file=/etc/cmcluster/pkg/mysql1/mysqld.pid
Table 18 Parameters in MySQL Configuration File (my.cnf)
[mysqld]
datadir=/MySQL_1/mysql                         # Data directory for MySQL DB
socket=/MySQL_1/mysql/mysql.sock               # Socket file for client communication
port=3310                                      # Port number for client communication
[mysqld_safe]
err-log=/MySQL_1/mysql/mysqld.err              # safe_mysqld's error log file
pid-file=/etc/cmcluster/pkg/MySQL1/mysqld.pid  # pid file path
NOTE: mysqld_safe was previously called safe_mysqld.
Table 19 User Variables in hamysql.conf file (continued)
File name Description
MAINTENANCE_FLAG (for example, MAINTENANCE_FLAG="yes") This variable enables or disables maintenance mode for the MySQL package. By default, this is set to "yes". To disable this feature, MAINTENANCE_FLAG must be set to "no". When MySQL needs maintenance, the mysql.debug file must be created in the package directory. During this maintenance period, the MySQL process monitoring is paused.
Table 21 Package Control Script Parameters (continued)
FS_TYPE "ext2"           # FS type is "Extended 2"
FS_MOUNT_OPT "-o rw"     # Mount with read/write options
SUBNET "192.70.183.0"    # Package subnet
IP "192.70.183.171"      # Relocatable IP
# The service name must be the same as defined in the package
# configuration file.
SERVICE_NAME="mysql1_monitor"
SERVICE_CMD="/etc/cmcluster/pkg/MYSQL1/toolkit.sh monitor"
NOTE: MySQL toolkit configuration parameters in the package configuration file are prefixed by ecmt/mysql/mysql when used in Serviceguard A.11.19.00 or later. For example: /etc/cmcluster/pkg/mysql_pkg/pkg.conf
You must edit the configuration file as indicated by the comments in that file. The package name must be unique within the cluster. For example:
PACKAGE_NAME mysql
NODE_NAME node1
NODE_NAME node2
Set the TKIT_DIR variable to the path of the package directory.
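As a hedged illustration of the prefix convention described above, a modular package configuration file might contain entries such as the following. Only TKIT_DIR and MAINTENANCE_FLAG are attributes named in this guide; the values shown are examples, not shipped defaults:

```
# Hypothetical excerpt from a modular MySQL package configuration file
package_name                       mysql_1
node_name                          node1
node_name                          node2
ecmt/mysql/mysql/TKIT_DIR          /etc/cmcluster/pkg/mysql_pkg
ecmt/mysql/mysql/MAINTENANCE_FLAG  yes
```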
cmmodpkg -e -n node1 -n node2 mysql_1
cmmodpkg -e mysql_1
11. If the package does not start, run the cmrunpkg command to start it:
cmrunpkg mysql_1
Repeat these procedures to create multiple MySQL instances running in the Serviceguard environment.
Database Maintenance
At regular intervals, the MySQL database needs maintenance, such as configuration changes, without migrating the instance to a standby node.
NOTE: ◦ If the package fails during maintenance (for example, the node crashes), the package will not automatically fail over to an adoptive node. It is the responsibility of the user to start the package up on an adoptive node. For more information, see the latest Managing Serviceguard manual available at http://www.hp.com/go/hpux-serviceguard-docs —>HP Serviceguard . This feature is enabled only when the configuration variable MAINTENANCE_FLAG is set to "yes" in the MySQL toolkit configuration file.
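The maintenance-mode behavior described above can be sketched in shell. This is a minimal illustration only, assuming a package directory of /tmp/mysql_pkg for demonstration; the in_maintenance helper and the TKIT_DIR default are assumptions, not the shipped toolkit code:

```shell
#!/bin/sh
# Sketch of toolkit maintenance mode: monitoring is paused while the
# debug file exists in the package directory. TKIT_DIR and
# in_maintenance are illustrative assumptions, not toolkit internals.
TKIT_DIR=${TKIT_DIR:-/tmp/mysql_pkg}
DEBUG_FILE="$TKIT_DIR/mysql.debug"

in_maintenance() {
    # The monitor skips its checks while this file exists
    [ -f "$DEBUG_FILE" ]
}

mkdir -p "$TKIT_DIR"
touch "$DEBUG_FILE"              # operator enters maintenance mode
in_maintenance && echo "monitoring paused"
rm -f "$DEBUG_FILE"              # operator leaves maintenance mode
in_maintenance || echo "monitoring resumed"
```

In the real toolkit, the equivalent of removing the debug file resumes monitoring, and cmmodpkg re-enables failover afterwards.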
6 Using an Apache Toolkit in a HP Serviceguard Cluster
This chapter describes the HP Apache toolkit that runs in the HP Serviceguard environment. You can install, configure, and run the Apache web server application in a Serviceguard clustered environment. To use this toolkit, you must be familiar with Serviceguard and the Apache web server, including installation, configuration, and execution.
NOTE: • This toolkit supports HP Serviceguard versions:
◦ A.11.19
◦ A.11.20
Table 24 (page 98) lists the files that are installed for the modular method of packaging. apache.1 is an Attribute Definition File (ADF) that is installed in /etc/cmcluster/modules/ecmt/apache. Table 24 Files in Modular Method Packaging File Name Description tkit_module.sh This script is called by the Master Control Script and acts as an interface between the Master Control Script and the Toolkit interface script (toolkit.sh).
The following two methods can be used to configure the Apache Web server:
• Local configuration
• Shared configuration
Local configuration
In a local configuration, you must place the configuration and other web-site files on a single node, and then replicate the files to all other nodes in the cluster. Also, in a typical local configuration, nothing is shared between the nodes.
an Apache instance so that it listens to specific IP addresses, this directive must be changed to "Listen <relocatable-IP-address:port>".
NOTE: The default configuration file is available at: /opt/hpws22/apache/conf/httpd.conf
You must disable the automatic start of the standard Apache default installation if Apache is run from Serviceguard, so that no application is running on the server when the system starts up.
• You must create a separate, distinct SERVER_ROOT directory for each Apache Serviceguard package.
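To illustrate the per-package directives discussed above, a package-specific httpd.conf might contain entries such as the following. The directives (ServerRoot, Listen, ErrorLog, PidFile) are standard Apache directives, but the paths and the relocatable IP address shown are examples, not shipped defaults:

```
# Hypothetical excerpt from a per-package httpd.conf
ServerRoot "/shared/apache_1"
Listen 192.168.1.10:80
ErrorLog "/shared/apache_1/logs/error_log"
PidFile  "/shared/apache_1/logs/httpd.pid"
```

Binding Listen to the package's relocatable IP lets multiple Apache packages coexist on one node without port conflicts.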
1. Create a volume group "vg01" for shared storage.
2. Create a logical volume "lvol1" on the volume group "vg01".
3. Construct a new file system on the logical volume "lvol1".
4. Create a directory named /shared/apache_1 on a local disk.
5. Repeat this step on all nodes configured to run the package.
6. Mount device /dev/vg01/lvol1 on /shared/apache_1.
7. Copy all files from /opt/hpws22/apache/conf to /shared/apache_1/conf.
8. Create a directory "logs" under /shared/apache_1/.
In a shared configuration, you can choose to place the Apache binaries on the shared file system as well. To create a shared configuration for the Apache Web Server on the shared file system mounted at /mnt/apache, use one of the following methods:
Method #1
1. Create the shared storage to store the Apache files for all nodes configured to run the Apache package.
NOTE: To increase the number of packages that can be added to this cluster, modify the cluster configuration file and set the variable MAX_CONFIGURED_PACKAGES to reflect the number of packages to be added to the cluster. Once the cluster configuration file is edited, apply it with cmapplyconf -C cluster_config_file.
Before working on the package configuration, create a directory (for example, /etc/cmcluster/pkg/http_pkg1) for this package to run. This directory must belong to a single Apache package.
For LVM:                                 For VxVM:
VG[0]="vg01"                             VXVM_DG[0]="DG_00"
LV[0]="/dev/vg01/lvol1"                  LV[0]="/dev/vx/dsk/DG_00/LV_00"
FS[0]="/shared/apache1"                  FS[0]="/shared/apache1"
FS_TYPE[0]="vxfs"                        FS_TYPE[0]="vxfs"
FS_MOUNT_OPT[0]="-o rw"                  FS_MOUNT_OPT[0]="-o rw"
IP[0]="192.168.1"
SUBNET[0]="192.168.0.0"
SERVICE_NAME="http1_monitor"
SERVICE_CMD="/etc/cmcluster/pkg/http_pkg1/toolkit.sh monitor"
SERVICE_RESTART="-r 0"
Edit the customer_defined_run_cmds function to run the toolkit.sh script with the start option.
/etc/cmcluster/scripts/ecmt/apache directory. The third file is located in the /etc/cmcluster/modules/ecmt/apache directory.
For legacy packages, one user configuration script (hahttp.conf) and three functional scripts (toolkit.sh, hahttp.sh, and hahttp.mon) work together to integrate the Apache web server with the Serviceguard package control script.
Table 26 Configuration Variables (continued)
Configuration Variables Description
that the Apache instance is running properly after the maintenance phase.
NOTE: If you set MAINTENANCE_FLAG to "yes" and touch the apache.debug file in the package directory, the package is placed in the toolkit maintenance mode. The Serviceguard A.11.19 release has a new feature which enables individual components of the package to be maintained while the package is still up.
http_pkg.cntl  # Package control file
hahttp.conf    # Apache toolkit user configuration file
hahttp.mon     # Apache toolkit monitor program
hahttp.sh      # Apache toolkit main script
toolkit.sh     # Interface file between the package control file and the toolkit
3. Apply the Serviceguard package configuration using the command cmapplyconf -P http_pkg.conf.
Use the same procedures to create multiple Apache instances for Serviceguard packages that will be running on the cluster.
you must check the Apache error log files. The Apache log files can be located by the value set for the ErrorLog directive in the Apache instance configuration file httpd.conf. In general, the value is /logs/error_log.
Maintaining the Apache Web Server
At regular intervals, the Apache web server needs maintenance, such as configuration changes, without migrating the instance to a standby node.
7 Using Tomcat Toolkit in a HP Serviceguard Cluster
This chapter describes the toolkit that integrates and runs HP Tomcat in the HP Serviceguard environment. It is intended for users who want to install, configure, and execute the Tomcat servlet engine application in a Serviceguard clustered environment. To use this document, you must be familiar with Serviceguard and the Tomcat servlet engine, including installation, configuration, and execution. This toolkit supports HP Serviceguard versions:
• A.11.19
• A.11.20
Table 28 ADF File for Modular Method of Packaging File Name Description tomcat.1 For every parameter in the legacy toolkit user configuration file, there is an attribute in the ADF. It also has an additional attribute TKIT_DIR which is analogous to the package directory in the legacy method of packaging. The ADF is used to generate a modular package ASCII template file. After installation, the following files are located in /etc/cmcluster/scripts/ecmt/tomcat .
Tomcat Package Configuration Overview
To start up, Tomcat reads the server.xml file from the conf sub-directory of the CATALINA_BASE directory, which is configured in the toolkit user configuration file hatomcat.conf. The configuration rules include the following:
• Each node must have the same version of the HP-UX based Tomcat servlet engine.
• Each node must have the same CATALINA_BASE directory, where identical copies of the configuration file for each instance are placed.
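To illustrate the server.xml file mentioned above, a minimal per-instance configuration might look like the following sketch. The element structure is standard Tomcat; the port numbers are examples only, and each instance in the cluster needs its own unique ports:

```xml
<!-- Hypothetical excerpt from $CATALINA_BASE/conf/server.xml;
     port values are examples and must be unique per instance -->
<Server port="8005" shutdown="SHUTDOWN">
  <Service name="Catalina">
    <Connector port="8081" protocol="HTTP/1.1" />
    <Engine name="Catalina" defaultHost="localhost">
      <Host name="localhost" appBase="webapps" />
    </Engine>
  </Service>
</Server>
```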
Configuring the Tomcat Server with Serviceguard
To manage a Tomcat Server with Serviceguard, modify the default Tomcat configuration. Before you create and configure Serviceguard packages, make sure that the following configurations are complete for the Tomcat Server application on all cluster nodes: When the Tomcat Server is installed, the default instance may be automatically configured to start during system startup via the runlevel (rc) script "hpws22_tomcatconf" in the /etc/rc.config.d directory.
6. Copy all files from "/opt/hpws22/tomcat/conf" to "/shared/tomcat_1/conf".
7. Create a directory "logs" under "/shared/tomcat_1/".
8. Update the Tomcat configuration files present in the "/shared/tomcat_1/conf" directory and change the Tomcat instance configurations to suit your requirements.
4. On the other nodes in the cluster, remove or rename the /opt/hpws22/tomcat directory.
5. Configure the hatomcat.conf file as required for the Tomcat Server package on all nodes for legacy packages, and configure the Tomcat toolkit parameters in the package ASCII file for modular packages.
NOTE: The following sections describe the method for creating the Serviceguard package using the legacy method.
RUN_SCRIPT /etc/cmcluster/pkg/tomcat_pkg1/tomcat_pkg.cntl
HALT_SCRIPT /etc/cmcluster/pkg/tomcat_pkg1/tomcat_pkg.cntl
SERVICE_NAME tomcat1_monitor
2. Create a Serviceguard package control file with the command cmmakepkg -s tomcat_pkg.cntl. Edit the package control file according to the instructions provided in the file, and then customize it to your environment.
3. Configure the Tomcat user configuration file hatomcat.conf as explained in “Creating Serviceguard Package Using Modular Method” (page 116).
4. Copy this package configuration directory to all other package nodes.
Use the same procedure to create multiple Tomcat packages (multiple Tomcat instances) that will be running on the cluster.
Creating Serviceguard Package Using Modular Method
To create a Serviceguard package using the modular method:
1. Create a directory for the package.
Table 30 Legal Package Scripts Script Name Description User Configuration file (hatomcat.conf) This script contains a list of pre-defined variables that can be customized for the user's environment. This script provides the user a simple format of the user configuration data. This file is sourced by the toolkit main script hatomcat.sh. Main Script (hatomcat.sh) This script contains a list of internal-use variables and functions that support the start and stop functions of a Tomcat instance.
Table 31 User Configuration Variables (continued)
User Configuration Variables Description
periodically checking whether this port is listening. If multiple instances of Tomcat are configured, this port needs to be unique for each instance. The default value is 8081.
MONITOR_INTERVAL (for example, MONITOR_INTERVAL=5) Specify a time interval in seconds for monitoring the Tomcat instance. The monitor process checks the Tomcat daemons at this interval to validate that they are running.
and tries to start it on another node. To troubleshoot, check the error log files. The Tomcat log files are available in the $CATALINA_BASE/logs directory.
Tomcat Server Maintenance
For maintenance, the Tomcat Server must be brought down, for example, when you want to change the Tomcat configuration but do not want the package to migrate to another node.
Configuring Apache Web Server with Tomcat in a Single Package
NOTE: This section contains details on configuring the Apache web server with Tomcat in a single package only for the legacy method of packaging. For configuring Apache and Tomcat in a single package using the modular method of packaging, see the whitepaper Modular package support in Serviceguard for Linux and ECM Toolkits available at http://www.hp.com/go/hpux-serviceguard-docs —> HP Serviceguard Enterprise Cluster Master Toolkit.
Example 3
For example:
VG[0]="vg01"
LV[0]="/dev/vg01/lvol1"
FS[0]="/share/pkg_1"
FS_MOUNT_OPT[0]="-o rw"
FS_UMOUNT_OPT[0]=""
FS_FSCK_OPT[0]=""
FS_TYPE[0]="vxfs"
Configure two services, one each for the Tomcat and Apache instances:
SERVICE_NAME[0]="tomcat_pkg1.monitor"
SERVICE_CMD[0]="/etc/cmcluster/pkg/tomcat_pkg1/toolkit.sh monitor"
SERVICE_RESTART[0]=""
SERVICE_NAME[1]="http_pkg1.monitor"
SERVICE_CMD[1]="/etc/cmcluster/pkg/http_pkg1/toolkit.sh monitor"
8 Using SAMBA Toolkit in a Serviceguard Cluster This chapter describes the High Availability SAMBA Toolkit for use in the Serviceguard environment. You can install and configure the SAMBA toolkit in a Serviceguard cluster. You must be familiar with Serviceguard configuration and HP CIFS Server application concepts and installation or configuration procedures. NOTE: • This toolkit supports: HP Serviceguard versions: ◦ A.11.19 ◦ A.11.20 • The version of HP CIFS Server included with HP-UX.
Attribute Definition File (ADF) is installed in /etc/cmcluster/modules/ecmt/samba . Table 33 Attribute Definition File (ADF) File Name Description samba.1 For every parameter in the legacy toolkit user configuration file, there is an attribute in the ADF. It also has an additional attribute TKIT_DIR which is analogous to the package directory in the legacy method of packaging. The ADF is used to generate a modular package ASCII template file.
components on all nodes. If the file system allows only read operations, a local configuration is easy to maintain. But if the file system also allows write operations, the administrator must propagate updates to all the package nodes.
Shared Configuration
In a shared configuration, the HP CIFS Server file systems and configuration files are all on the shared storage.
Replace the "XXX.XXX.XXX.XXX/xxx.xxx.xxx.xxx" with one relocatable IP address and subnet mask for the Serviceguard package. Copy the workgroup line from the /etc/opt/samba/smb.conf file. Copy the NetBIOS name line from the same file, or enter the UNIX host name for the server on the NetBIOS name line. Add the rest of the desired configuration items.
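Putting the edits above together, a package-specific smb.conf might begin as follows. The interfaces, bind interfaces only, workgroup, and netbios name parameters are standard Samba parameters, but the IP address, netmask, and names shown are examples, not shipped defaults:

```
# Hypothetical [global] section of a per-package smb.conf
[global]
   workgroup = WORKGROUP
   netbios name = smb1
   interfaces = 192.168.1.20/255.255.255.0
   bind interfaces only = yes
```

Binding to the package's relocatable IP keeps each CIFS Server instance tied to its own Serviceguard package.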
on all cluster nodes. Mount the CFS filesystem on /shared/smb1 using the CFS packages. Use /shared/smb1 to hold the necessary files and configuration information. WARNING! CIFS supports CFS with a limitation. A running package fails if another cluster node tries to start samba using the configuration used by this package. Do not attempt to start another samba instance on a different cluster node using the shared configuration that is being used by any samba package.
RUN_SCRIPT /etc/cmcluster/smb1/smb_pkg.cntl
HALT_SCRIPT /etc/cmcluster/smb1/smb_pkg.cntl
SERVICE_NAME smb1_monitor
If you are using a CFS mounted file system, configure the dependency of this Samba package on an SG CFS package. If the Samba package is configured to depend on an SG CFS package, the Samba package runs only if the dependee package is running. If the dependee package fails, the dependent Samba package also fails.
test_return 51
}
5. Configure the user configuration file hasmb.conf as explained in the next section, “Setting up the Toolkit” (page 128), and customize it for your environment.
6. Copy this package configuration directory to all other package nodes.
You can use the same procedure to create multiple HP CIFS Server packages (multiple HP CIFS Server instances) that will be managed by Serviceguard.
4. Edit the package configuration file.
NOTE: Samba toolkit configuration parameters in the package configuration file are prefixed by ecmt/samba/samba when used in Serviceguard A.11.19.00 or later. For example: /etc/cmcluster/pkg/samba_pkg/pkg.conf
Edit the configuration file according to the instructions provided in that file. The package name must be unique within the cluster. For example:
PACKAGE_NAME samba
NODE_NAME node1
NODE_NAME node2
Set the TKIT_DIR variable to the path of the package directory.
Table 36 User Configuration Variables (continued) Configuration Variables Description LOG_DIRECTORY This variable holds the log directory path of the CIFS Server instance of the particular package. By default, the path is /var/opt/samba/${NETBIOS_NAME}/ logs. If the path is different, modify the variable. (for example, LOG_DIRECTORY=/var/opt/samba/ ${NETBIOS_NAME}/logs) This variable holds the PID file of smbd process of the particular package.
NOTE: Before you configure the toolkit, create the package directory (for example, /etc/cmcluster/smb1), and then copy the toolkit scripts to the package directory.
1. Edit the SAMBA Toolkit user configuration file. In the package directory, edit the user configuration file (hasmb.conf) as indicated by the comments in that file. For example:
NETBIOS_NAME=smb1
CONF_FILE=/etc/opt/samba/smb.conf.${NETBIOS_NAME}
LOG_DIRECTORY=/var/opt/samba/${NETBIOS_NAME}/logs
For CIFS Server version 02.
$ cd /etc/cmcluster/pkg/SMB_1/ $ $PWD/toolkit.sh start • Allow monitoring scripts to continue normally. $ rm -f /etc/cmcluster/pkg/SMB_1/samba.debug A message "Starting Samba toolkit monitoring again after maintenance" appears in the Serviceguard Package Control script log. • Enable the package failover. $ cmmodpkg -e SMB_1 NOTE: ◦ If the package fails during maintenance (for example, the node crashes), the package does not automatically fail over to an adoptive node.
/var/opt/samba/private, but a different path may be specified via the smb passwd file parameter. Another important security file is secrets.tdb. Machine account information is among the important contents of this file. This file is updated periodically (as defined in smb.conf by 'machine password timeout', 604800 seconds by default); therefore, HP recommends that you locate secrets.tdb on a shared storage. As with the smbpasswd file, the location of this file is defined by the smb.
Consider that the LMHOSTS file is in the /etc/cmcluster/smb1 directory; change the following command in hasmb.sh, in the start_samba_server function, as follows:
(old) /opt/samba/bin/nmbd -D -l ${LOG_DIRECTORY} -s ${CONF_FILE}
(new) /opt/samba/bin/nmbd -D -l ${LOG_DIRECTORY} -s ${CONF_FILE} \
-H /etc/cmcluster/smb1/lmhosts
For more information, see the lmhosts(1M) manpage.
9 Using HP Serviceguard Toolkit for EnterpriseDB PPAS in an HP Serviceguard Cluster Overview The HP Serviceguard Toolkit for EnterpriseDB PPAS integrates and runs EnterpriseDB Postgres Plus Advanced Server 9.0 in the Serviceguard environment. To work with this toolkit, you must be familiar with Serviceguard and the EnterpriseDB Postgres Plus Advanced Server 9.0, including their installation, configuration, and execution.
Storage considerations
Unless otherwise stated, this toolkit supports all the file systems, storage, and volume managers that Serviceguard supports, including CFS.
Supported configuration
Providing high availability to the EnterpriseDB PPAS instance
This configuration provides an automatic failover of the EnterpriseDB PPAS toolkit package to the adoptive node.
Figure 8 High availability to the EnterpriseDB PPAS instance
In Figure 8, the EDB is configured in a volume group shared between Node1 and Node2.
NOTE: HP supports the placement of the EDB binaries on the shared storage, but does not recommend this configuration if multiple instances are configured. This is because, if multiple EDB instances are configured to use shared EDB binaries and one of the instances fails and must fail over to the adoptive node, the file system that holds the EDB binaries on the primary node must be unmounted, and then remounted on the adoptive node. This impacts the other EDB instances hosted on the primary node.
Creating Serviceguard package using the modular method To create a Serviceguard package using the modular method: 1. Create a directory for the package on all the cluster nodes using the following command: # mkdir /etc/cmcluster/EDB 2. 3. cd to the package directory. Create a configuration file, edb.conf using the following command: # cmmakepkg -m ecmt/edb/edb edb.conf 4.
Attributes Description
fs_name Specify the logical volume on which the file system is created. For example: /dev/vgdata/vol1
fs_directory Specify the mount point location for the EDB data directory. For example: /EDB/data
fs_type The type of the file system. For example: vxfs
ip_address and ip_subnet Assign these attributes when the package is configured to fail over to another node. Also, when the package fails over from one node to another node, the client traffic IP must be moved to the new node.
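Combining the attributes from the table above, an edb.conf might contain entries such as the following sketch. The attribute names are those documented in the table; the values are examples, not shipped defaults:

```
# Hypothetical excerpt from edb.conf for an EDB instance whose data
# directory lives on shared storage (all values are examples)
fs_name       /dev/vgdata/vol1
fs_directory  /EDB/data
fs_type       vxfs
ip_subnet     192.168.0.0
ip_address    192.168.0.50
```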
NOTE:
• Repeat these steps to create multiple EDB instances running in the Serviceguard environment.
• EnterpriseDB supports multiple EnterpriseDB instances running on the same node. Each instance of EnterpriseDB must start on a unique port number. Use this toolkit to configure and run multiple EnterpriseDB packages on the same node.
This toolkit is a template for creating a standard modular package for EnterpriseDB PPAS.
Halting packages
To halt a package, run the following command:
# cmhaltpkg <package_name>
Deleting packages
To delete a package from the cluster, run the following command:
# cmdeleteconf -p <package_name>
This command prompts for a verification before deleting the files, unless you use the -f option.
Troubleshooting
This section explains some of the problem scenarios that you might encounter while working with the HP Serviceguard Toolkit for EnterpriseDB PPAS in a Serviceguard cluster.
10 Support and Other resources
Information to Collect Before Contacting HP
Be sure to have the following information available before you contact HP:
• Software product name
• Hardware product model number
• Operating system type and version
• Applicable error message
• Third-party hardware or software
• Technical support registration number (if applicable)
How to Contact HP
Use the following methods to contact HP technical support:
• S
Warranty information HP will replace defective delivery media for a period of 90 days from the date of purchase. This warranty applies to all Insight Management products. HP Authorized Resellers For the name of the nearest HP authorized reseller, see the following sources: • In the United States, see the HP U.S. service locator website: http://www.hp.com/service_locator • In other locations, see the Contact HP worldwide website: http://www.hp.
[] In command syntax statements, these characters enclose optional content. {} In command syntax statements, these characters enclose required content. | The character that separates items in a linear list of choices. ... Indicates that the preceding element can be repeated one or more times. WARNING An alert that calls attention to important information that, if not understood or followed, results in personal injury.
11 Acronyms and Abbreviations ASM Automatic Storage Management CDB Serviceguard Configuration Database CFS Cluster File System ECMT Enterprise Cluster Master Toolkit HASE High Availability Sybase Adaptive Server Enterprise MNP Multi-Node Package OC MNP Oracle Clusterware Multi-node Package RBA Role Based Access WSS HP Web Server Suite 145
Index I introduction, 7 Installing and Uninstalling Enterprise Cluster Master Toolkit, 8 S Support and other resources Registering for Software Technical Support and Update Service Warranty information, 143 support and other resources, 142 Documentation Feedback, 143 How to Contact HP, 142 HP Authorized Resellers, 143 Information to collect before contacting HP, 142 New and Changed Information in this Edition, 143 Registering for Software Technical Support and Update Service, 142 How to use your software t
using the DB2 database Toolkit in a Serviceguard Cluster in HP-UX0 setting up the Application multiple Instance Configurations, 78 Using the oracle toolkit in an HP Serviceguard Cluster Oracle ASM Support for EBS DB Tier, 61 Supporting Oracle ASM instance and Oracle database with ASM Configuring LVM Volume Groups for ASM Disk Groups, 31 using the oracle toolkit in an HP Serviceguard Cluster, 9 adding the package to the Cluster, 53 ASM Package Configuration Example, 43 configuring and packaging Oracle single