HP Serviceguard Enterprise Cluster Master Toolkit User Guide
HP Part Number: 5900-1606
Published: April 2011
© Copyright 2011 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents
1 Introduction
2 Using the Oracle Toolkit in an HP Serviceguard Cluster
3 Using the Sybase ASE Toolkit in a Serviceguard Cluster on HP-UX
4 Using the DB2 Database Toolkit in a Serviceguard Cluster on HP-UX
5 Using the MySQL Toolkit in an HP Serviceguard Cluster
6 Using the Apache Toolkit in an HP Serviceguard Cluster
7 Using the Tomcat Toolkit in an HP Serviceguard Cluster
8 Using the SAMBA Toolkit in a Serviceguard Cluster
9 Support and other resources
1 Introduction
The Enterprise Cluster Master Toolkit (ECMT) is a set of templates and scripts that allow the configuration of Serviceguard packages for internet servers and for third-party database management systems. This unified set of high availability tools is released on HP-UX 11i v2 and 11i v3. Each toolkit is a set of scripts written for a specific application that start, stop, and monitor that application.
Table 1 Toolkit Name/Application Extension and Application Name
Toolkit Name/Application Extension                        Application Name
Serviceguard Extension for RAC (SGeRAC)                   Oracle Real Application Clusters
Serviceguard Extension for SAP (SGeSAP)                   SAP
Serviceguard Extension for E-Business Suite (SGeEBS)      Oracle E-Business Suite
Serviceguard toolkit for Oracle Data Guard (ODG)          Oracle Data Guard
HA NFS Toolkit                                            Network File System (NFS)
HP VM Toolkit                                             HP VM Guest
To package applications that are not covered by the Serviceguard toolkits…
2 Using the Oracle Toolkit in an HP Serviceguard Cluster
This chapter describes the High Availability Oracle Toolkit designed for use in an HP Serviceguard environment. It covers the basic steps for configuring an Oracle instance in an HP-UX cluster, and is intended for users who want to integrate an Oracle Database Server with Serviceguard.
package). For more information, see the section “Configuring and packaging Oracle single-instance database to co-exist with SGeRAC packages” (page 52).
Support For Oracle Database without ASM
Setting Up The Application
To set up the application, it is assumed that Oracle is installed in /home/Oracle on all the cluster nodes by 'Oracle', the database administrator user, and that shared storage is configured.
If you need help creating, importing, or managing the volume group or disk group and filesystem, see Building an HA Cluster Configuration in the Serviceguard user manual available at http://www.hp.com/go/hpux-serviceguard-docs-> HP Serviceguard. • Configuring shared file system using CFS The shared file system can be a CFS mounted file system.
NOTE: If you opted to store the configuration information on a local disk and propagate the information to all nodes, ensure that the pfile/spfile, the password file, and all control files and data files are on shared storage. For this setup, you will need to set up symbolic links to the pfile and the password file from /home/Oracle/dbs as follows:
ln -s /ORACLE_TEST0/dbs/initORACLE_TEST0.ora \
${ORACLE_HOME}/dbs/initORACLE_TEST0.ora
ln -s /ORACLE_TEST0/dbs/orapwORACLE_TEST0 \
${ORACLE_HOME}/dbs/orapwORACLE_TEST0
NOTE: This setup is not supported if Oracle 10g Release 1 is configured with LVM or VxVM; in that case, local configuration is recommended. The above configuration is supported with Oracle 10g Release 2 and Oracle 11g, subject to the condition that Oracle's Automatic Storage Management (ASM) is not configured on that node.
NOTE: If you are using VxVM, create appropriate disk groups as required. If you are using CFS mounted file systems, you can have ${ORACLE_HOME}/dbs and database reside in the same CFS file system. You can also have multiple Oracle databases corresponding to multiple Oracle packages residing in the same CFS file system. However, it is recommended to have different CFS file systems for different Oracle packages.
Table 2 Legacy Package Scripts (continued)
Script Name Description
Listener Monitor Script (halistener.mon) This script will be called by haoracle.sh to monitor the configured listeners. The script makes use of a command supplied by Oracle to check the status of the listener.
Database Hang Monitor Scripts (hadbhang.mon, hagetdbstatus.sh, hatimeoutdbhang.sh) The hadbhang.mon script will be called by haoracle.sh to monitor the Oracle instance for a possible 'hung' state.
Table 3 Variable or Parameter Name in haoracle.conf file (continued)
MAINTENANCE_FLAG This variable will enable or disable maintenance mode for the Oracle database package. By default this is set to "yes". In order to disable this feature, MAINTENANCE_FLAG should be set to "no". When the Oracle database or ASM needs to be maintained, a file named Oracle.debug needs to be created (touched) in the package directory. During this maintenance period, process monitoring of the Oracle database or ASM instance is paused.
Table 3 Variable or Parameter Name in haoracle.conf file (continued)
OC_TKIT_DIR This parameter must be populated only when the Oracle database package is created using the ECMT Oracle toolkit, SGeRAC packages are also running in the same cluster, and the Oracle database package is dependent on the SGeRAC OC MNP package. This parameter must point to the working directory of the SGeRAC OC MNP.
NOTE: The following three scripts are used only during the modular method of packaging.
Table 4 Modular Package Scripts
Script Name Description
Attribute Definition File (oracle.1) For every parameter in the legacy toolkit user configuration file, there is an attribute in the ADF. It also has an additional attribute, TKIT_DIR, which is analogous to the package directory in the legacy method of packaging. The ADF is used to generate a package ASCII template file.
Module Script (tkit_module.sh)
to the database; if no response is received within this period, it is assumed that the database is hung. It can have any positive integer as value. The default value for TIMEOUT is 30 (seconds). This should not be confused with the TIME_OUT package attribute or the service_halt_timeout attribute.
2. ACTION: This attribute defines the action that must be taken if a database hang is detected. Currently, this attribute can take one of the following values:
• log - Log a message. A message is logged to the package log every time a hang is detected.
If you are using LVM or VxVM
Follow the instructions in the chapter Building an HA Cluster Configuration in the Managing Serviceguard manual available at http://www.hp.com/go/hpux-serviceguard-docs —> HP Serviceguard to create a logical volume infrastructure on a shared disk. The disk must be available to all clustered nodes that will be configured to run this database instance. Create a file system to hold the necessary configuration information and symbolic links to the Oracle executables.
If you are using VxVM, unmount and deport the disk group:
$ umount /ORACLE_TEST0
$ vxdg deport DG0_ORACLE_TEST0
Repeat this step on all other cluster nodes to be configured to run the package to ensure Oracle can be brought up and down successfully.
• Create the Serviceguard package using the legacy method. The following steps describe the method for creating the Serviceguard package using the legacy method.
NOTE: One of these file systems must be the shared file system/logical volume containing the Oracle Home configuration information ($ORACLE_HOME). The name of the instance is used to name the volume groups, logical volumes and file systems. LV[0]=/dev/vg00_${SID_NAME}/lvol1 FS[0]=/${SID_NAME} For example: LV[0]=/dev/vg00_ORACLE_TEST0/lvol1 FS[0]=/ORACLE_TEST0 If you are using VxVM • DISK GROUPS Define the disk groups that are used by the Oracle instance.
If the database must be monitored for a 'hang' condition, then another service has to be added as shown below:
SERVICE_NAME[2]=DB_HANG_0
SERVICE_CMD[2]="/etc/cmcluster/pkg/ORACLE_TEST0/toolkit.sh hang_monitor 30 failure"
SERVICE_RESTART[2]="-r 2"
The service restart counter can be reset to zero for this service by using the Serviceguard command cmmodpkg, as shown below. The service restart counter is incremented each time the service fails.
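For example, to reset the counter for the hang-monitor service configured above (a sketch using the service and package names from this example):
# cmmodpkg -R -s DB_HANG_0 ORACLE_TEST0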
The configuration file should be edited as indicated by the comments in that file. The package name must be unique within the cluster. For clarity, use the $SID_NAME to name the package.
PACKAGE_NAME ORACLE_TEST0
List the names of the clustered nodes to be configured to run the package, using the NODE_NAME parameter:
NODE_NAME node1
NODE_NAME node2
The service name must match the service name used in the package control script.
NOTE: If the Oracle database is running in a cluster where SGeRAC packages are also running, then the ECMT Oracle single-instance database package must be made dependent on the SGeRAC Oracle Clusterware multi-node package (OC MNP). The dependency type should be 'SAME_NODE=up'. When the Oracle Clusterware is halted, it halts the Oracle single-instance database; this dependency ensures that the database package is always halted first and then the SGeRAC OC MNP is halted.
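For illustration, a sketch of the dependency attributes in the modular database package configuration file, assuming the SGeRAC OC MNP package is named ocmnp (a hypothetical name):
dependency_name        ocmnp_dep
dependency_condition   ocmnp = up
dependency_location    same_node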
After setting up the Serviceguard environment, each clustered Oracle instance should have the following files in the related package directory. For example, the ORACLE_TEST0 package, located at /etc/cmcluster/pkg/ORACLE_TEST0, would contain the following files:
Table 5 Files in ORACLE_TEST0
File Name Description
${SID_NAME}.conf Serviceguard package configuration file
${SID_NAME}.cntl Serviceguard package control script
toolkit.sh Toolkit Interface Script
haoracle.conf Toolkit user configuration file
haoracle.sh Toolkit main script
haoracle.mon Toolkit database monitor script
halistener.mon Toolkit listener monitor script
For more information on modular packages, see the whitepaper Modular package support in Serviceguard for Linux and ECM Toolkits available at http://www.hp.com/go/hpux-serviceguard-docs —> HP Serviceguard Enterprise Cluster Master Toolkit. Also, refer to the whitepaper "Migrating Packages from Legacy to Modular Style, October 2007" for more information. You can find this whitepaper at http://www.hp.com/go/hpux-serviceguard-docs -> HP Enterprise Cluster Master Toolkit.
Following are the supported versions of HP Serviceguard: • A.11.19 • A.11.20 Oracle versions supported with ASM are 10.2.0.4, 11.1.0.6, and 11.1.0.7 with interim patches 7330611 and 7225720 installed. Before these patches were released by Oracle, ASM kept descriptors open on ASM disk group member volumes even after the ASM disk group had been dismounted. This prevented the deactivation of the LVM volume groups.
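As a quick sanity check (a sketch, assuming OPatch is installed under $ORACLE_HOME), you can verify that the interim patches are present in the Oracle inventory:
$ $ORACLE_HOME/OPatch/opatch lsinventory | grep -E '7330611|7225720'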
Figure 1 Oracle database storage hierarchy without and with ASM Why ASM over LVM? As mentioned above, we require ASM disk group members in Serviceguard configurations to be raw logical volumes managed by LVM. We leverage existing HP-UX capabilities to provide multipathing for LVM logical volumes, using either the PV Links feature, or separate products such as HP StorageWorks Secure Path that provide multipathing for specific types of disk arrays.
• should not span multiple PVs
• should not share a PV with other LVs
The idea is that ASM provides the mirroring, striping, slicing, and dicing functionality as needed, and LVM supplies the multipathing functionality not provided by ASM. Figure 2 indicates this one-to-one mapping between LVM PVs and LVs used as ASM disk group members. Further, the default retry behavior of LVM could result in an I/O operation on an LVM LV taking an indefinitely long period of time.
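One way to bound this retry time (an illustrative sketch; the device file name is a placeholder and the timeout value is site-specific) is to set an I/O timeout on each physical volume with pvchange:
# pvchange -t 60 /dev/dsk/c4t5d0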
1. Create the volume group with the two PVs, incorporating the two physical paths for each (choosing hh to be the next hexadecimal number that is available on the system, after the volume groups that are already configured). For example (a sketch; the device file names are placeholders, with c6/c7 representing the alternate physical paths):
# pvcreate -f /dev/rdsk/c4t5d0
# pvcreate -f /dev/rdsk/c5t5d0
# mkdir /dev/vgora_asm1
# mknod /dev/vgora_asm1/group c 64 0xhh0000
# vgcreate /dev/vgora_asm1 /dev/dsk/c4t5d0 /dev/dsk/c5t5d0
# vgextend /dev/vgora_asm1 /dev/dsk/c6t5d0 /dev/dsk/c7t5d0
2. Create one raw logical volume on each PV for use as an ASM disk group member, following the one-to-one LV-to-PV mapping described above.
Serviceguard support for ASM on HP-UX 11i v3 onwards
This document describes how to configure an Oracle ASM database with Serviceguard for high availability using the ECMT Oracle toolkit. See the ECMT support matrix available at http://www.hp.com/go/hpux-serviceguard-docs -> HP Enterprise Cluster Master Toolkit for the supported versions of ECMT, Oracle, and Serviceguard. Note that for a database failover, each database should store its data in its own disk group.
that invokes the function fails at this point and the Serviceguard package manager fails the corresponding ASM MNP instance. On ASM instance failure, all dependent database instances will be brought down and will be started on the adoptive node. How Toolkit Starts, Stops and Monitors the Database instance The Toolkit failover package for the database instance provides start and stop functions for the database instance and has a service for checking the status of the database instance.
Figure 4 ASM environment when DB1 fails on node 2
Serviceguard Toolkit Internal File Structure
HP provides a set of scripts for the framework proposed for ASM integration with Serviceguard. The ECMT Oracle scripts contain the instance-specific logic to start, stop, and monitor both the ASM and the database instance. These scripts support both the legacy and the modular methods of packaging. Although the legacy method of packaging is still supported, it is deprecated and will not be supported in the future.
Figure 5 Internal file structure for legacy packages Modular packages use the package configuration file for the ASM or database instance on the Serviceguard specific side. The package configuration parameters are stored in the Serviceguard configuration database at cmapplyconf time, and are used by the package manager in its actions on behalf of this package.
ASM File Descriptor Release
When an ASM disk group is dismounted on a node in the Serviceguard cluster, it is expected that the ASM instance closes the related descriptors of files opened on the raw volumes underlying the members of that ASM disk group. However, processes of the ASM instance, and client processes of the ASM instance, might not close these descriptors.
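To identify processes that still hold descriptors open on a raw logical volume underlying a dismounted disk group (a sketch with a hypothetical volume path), you can use fuser:
# fuser -u /dev/vgora_asm1/rlvol1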
Assume that the Serviceguard cluster, ASM instance and one or more database instances are already installed and configured. • Halt the ASM and database instances. • Configure the ASM MNP using the ECMT Oracle scripts provided by HP following the instructions in the README file in the scripts bundle. • Start the ASM instance on all the configured nodes by invoking cmrunpkg on the ASM MNP.
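For example, to start a hypothetical ASM MNP named asm_mnp on two nodes:
# cmrunpkg -n node1 -n node2 asm_mnp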
NOTE: If the Oracle database is running in a cluster where SGeRAC packages are also running, then the Oracle database must be disabled from being started automatically by the Oracle Clusterware.
Table 6 Variables or Parameters in haoracle.conf file (continued)
Parameter Name Description
ASM_DISKGROUP This parameter gives the list of all ASM disk groups used by the database instance. This parameter must be set only for the ASM database instance package.
NOTE: For the ASM instance package, no value must be set for this parameter.
ASM_VOLUME_GROUP This parameter gives the list of volume groups used in the disk groups for the ASM database instance.
Table 6 Variables or Parameters in haoracle.conf file (continued)
Parameter Name Description
MAINTENANCE_FLAG This variable will enable or disable toolkit maintenance mode for the Oracle database package and ASM MNP. By default, this is set to "yes". To disable this feature, MAINTENANCE_FLAG must be set to "no". When the Oracle database or ASM must be maintained, a file named Oracle.debug must be created in the package directory.
Table 6 Variables or Parameters in haoracle.conf file (continued)
Parameter Name Description
KILL_ASM_FOREGROUNDS When ASM is being used, this parameter specifies whether ASM foreground Oracle processes that have file descriptors open on the dismounted disk group volumes should be killed during database package halt.
Generate the ASM MNP package configuration file and control script, and edit the parameters in these files for the ASM MNP in the package directory.
# cmmakepkg -p asmpkg.conf
# cmmakepkg -s asmpkg.cntl
In the package configuration file asmpkg.conf, edit the following parameters:
PACKAGE_NAME - Set to any name desired.
PACKAGE_TYPE - Set to MULTI_NODE.
FAILOVER_POLICY, FAILBACK_POLICY - Should be commented out.
RUN_SCRIPT /etc/cmcluster/asm_package_mnp/asmpkg.cntl
HALT_SCRIPT /etc/cmcluster/asm_package_mnp/asmpkg.cntl
ORA_CRS_HOME=/app/Oracle/crs  # This attribute is needed only when this toolkit is used in an SGeRAC cluster.
INSTANCE_TYPE=database
ORACLE_HOME=/ORACLE_TEST0
ORACLE_ADMIN=Oracle
SID_NAME=ORACLE_TEST0
ASM=yes
ASM_DISKGROUP[0]=asm_dg1
ASM_DISKGROUP[1]=asm_dg2
ASM_VOLUME_GROUP[0]=vgora_asm1
ASM_VOLUME_GROUP[1]=vgora_asm2
ASM_HOME=/ASM_TEST0
ASM_USER=Oracle
ASM_SID=+ASM
LISTENER=yes
LISTENER_NAME=LSNR_TEST0
LISTENER_PASS=
PFILE=${ORACLE_HOME}/dbs/init${SID_NAME}.ora
Configure the service parameters:
SERVICE_NAME ORACLE_DB1_SRV
SERVICE_FAIL_FAST_ENABLED NO
SERVICE_HALT_TIMEOUT 300
If a listener is configured and needs to be monitored, configure another set of service parameters:
SERVICE_NAME ORACLE_LSNR_SRV
SERVICE_FAIL_FAST_ENABLED NO
SERVICE_HALT_TIMEOUT 300
Edit the package control script db1pkg.cntl as shown below. Since LVM logical volumes are used in the disk groups, set VGCHANGE to "vgchange -a e" and specify the names of the volume groups in VG[0], VG[1].
"TKIT_DIR" in the modular package configuration file. Serviceguard uses the toolkit scripts in the configuration directory by default. If the scripts are not found in the configuration directory, Serviceguard takes them from the installation directory. This feature is useful for customers wanting to use modified versions of the toolkit. 1. ASM Multi-Node Package Setup and Configuration NOTE: cluster.
service_restart none service_fail_fast_enabled no service_halt_timeout 300 service_name Oracle_listener_service service_cmd "$SGCONF/scripts/ecmt/Oracle/tkit_module.sh Oracle_monitor_listener" service_restart none service_fail_fast_enabled no service_halt_timeout 300 If the listener is not configured, comment the second set of service parameters which are used to monitor the listener.
NOTE: When configuring an Oracle package in an SGeRAC cluster, the command line interface should be used to create the package; the Serviceguard Manager interface should not be used.
a. Disable the Oracle database instance from being managed automatically by the Oracle Clusterware, as shown in the example below.
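For example (as the Oracle administrator, with <db_unique_name> as a placeholder):
$ $ORACLE_HOME/bin/srvctl modify database -d <db_unique_name> -y MANUAL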
1) Set attribute ASM to yes.
ASM = yes
2) Specify all the ASM Disk Groups used by the database using the ASM_DISKGROUP attribute.
ASM_DISKGROUP[0]=asm_dg1
ASM_DISKGROUP[1]=asm_dg2
3) Specify all the LVM Volume Groups used by the ASM Disk Groups using the ASM_VOLUME_GROUP attribute.
ASM_VOLUME_GROUP[0]=vgora_asm1
ASM_VOLUME_GROUP[1]=vgora_asm2
4) For the ASM_HOME attribute, specify the ASM Home directory. In the case of Oracle 11g R2, this is the same as the Oracle Clusterware home directory.
2. Edit the configuration file haoracle.conf in the package directory. Leave INSTANCE_TYPE at the default value "database". Configure values for all parameters that were present in the older toolkit configuration file, that is, ORACLE_HOME, ORACLE_ADMIN, SID_NAME, LISTENER, LISTENER_NAME, LISTENER_PASSWORD, PFILE, MONITORED_PROCESSES, MAINTENANCE_FLAG, MONITOR_INTERVAL, TIME_OUT. Leave the value of ASM at the default value "no". The new parameters can be left at their defaults.
3.
"modular1.ascii" now has values for certain attributes that were present in the older toolkit configuration file. Edit the value for TKIT_DIR (where the new toolkit configuration file should be generated or where the toolkit scripts are copied to in case of a configuration directory mode of operation). Leave the INSTANCE_TYPE to the default value database. Leave the value of ASM to the default value “no". The new toolkit parameters can be left as they are by default. 4.
$ cmmodpkg -e -n <node1> -n <node2> ORACLE_TEST0
$ cmmodpkg -e ORACLE_TEST0
If necessary, consult the Managing Serviceguard manual available at http://www.hp.com/go/hpux-serviceguard-docs —> HP Serviceguard for information on managing packages.
Node-specific Configuration
On many clusters, the standby nodes might be lower-end systems than the primary node. An SMP machine might be backed up by a uniprocessor, or a machine with a large main memory may be backed up by a node with less memory.
The 'ORACLE_TEST0' instance should now be reachable using the name 'ORACLE_TEST0' regardless of the node on which it is running.
Listener: Set up a listener process for each Oracle instance that executes on a Serviceguard cluster, making sure that the listener can move with the instance to another node. Effectively, ensure that the listener configured for a specific Oracle instance is assigned a unique port. In the current release you can configure ONE listener to be monitored by the toolkit.
monitoring and entering maintenance mode" appears in the Serviceguard Package Control script log.
• If required, stop the Oracle database instance as shown below:
$ cd /etc/cmcluster/pkg/ORACLE_TEST0/
$ $PWD/toolkit.sh stop
• Perform maintenance actions (for example, changing the configuration parameters in the parameter file of the Oracle instance; if this file is changed, remember to distribute the new file to all cluster nodes).
1. The Oracle Clusterware must be configured in an MNP package using the SGeRAC Toolkit. See the Using Serviceguard Extension for RAC User Guide (published March 2011 or later) available at http://www.hp.com/go/hpux-serviceguard-docs —> HP Serviceguard Extension for RAC, or the /opt/cmcluster/SGeRAC/toolkit/README file, for configuring Oracle Clusterware (SGeRAC OC MNP package) as a package.
2. The ASM Multi-node package (MNP) must not be created using the ECMT toolkit.
Configuring a modular failover package for an Oracle database using ASM in a Coexistence Environment
To configure an ECMT modular failover package for an Oracle database:
1. Log in as the Oracle administrator and run the following command to set the database management policy to manual:
$ORACLE_HOME/bin/srvctl modify database -d <db_unique_name> -y MANUAL
2. Create a database package directory under /etc/cmcluster. Log in as root:
# mkdir /etc/cmcluster/db1_package
3.
ASM_DISKGROUP[1] asm_dg2 • Specify all the LVM Volume Groups used by the ASM Disk Groups using the ASM_VOLUME_GROUP attribute: ASM_VOLUME_GROUP[0] vgora_asm1 ASM_VOLUME_GROUP[1] vgora_asm2 • For the ASM_HOME attribute, specify the ASM Home directory.
ASM_HOME=/ASM_TEST0
ASM_USER=oracle
ASM_SID=+ASM
LISTENER=yes
LISTENER_NAME=LSNR_TEST0
LISTENER_PASS=LSNR_TEST0_PASS
PFILE=${ORACLE_HOME}/dbs/init${SID_NAME}.ora
SERVICE_NAME[1]="ORACLE_LSNR_SRV"
SERVICE_CMD[1]="/etc/cmcluster/db1_package/toolkit.sh monitor_listener"
SERVICE_RESTART[1]="-r 2"
Configure the package IP and the SUBNET.
Add in the customer_defined_run_cmds function:
/etc/cmcluster/db1_package/toolkit.sh start
Add in the customer_defined_halt_cmds function:
if [ "$SG_HALT_REASON" = "user_halt" ]; then
    reason="user"
else
    reason="auto"
fi
/etc/cmcluster/db1_package/toolkit.sh stop $reason
“floating” IP address. This floating IP address is called the package IP in Serviceguard terms. The floating IP address is one that is defined for exclusive use by the database as its networking end point and is normally included in the Domain Name Service so that the database can be located by its applications. The symbolic name associated with the floating IP address is the “virtual hostname”, and it is this name that is used as the database’s hostname within the EBS topology configuration.
3 Using the Sybase ASE Toolkit in a Serviceguard Cluster on HP-UX This chapter describes the High Availability Sybase Adaptive Server Enterprise (ASE) Toolkit designed for use in a Serviceguard environment. This document covers the basic steps to configure a Sybase ASE instance in Serviceguard Cluster, and is intended for users who want to integrate a Sybase ASE Server with Serviceguard in HP-UX environments.
If the choice is to store the configuration files on a local disk, the configuration must be replicated to local disks on all nodes configured to run the package. Here, the Sybase ASE binaries reside on local storage. If any change is made to the configuration files, the files must be copied to all nodes. The user is responsible for ensuring that the systems remain synchronized.
Shared Configuration
This is the recommended configuration.
NOTE: The dataserver and the monitor server must both be installed on the same disk, as the monitor server depends on the presence of the dataserver for its working.
2. Make sure that the 'sybase' user has the same user ID and group ID on all nodes in the cluster. Create the group and user with the following commands (the home directory and password are placeholders), and verify the uid/gid by editing the /etc/passwd file:
# groupadd sybase
# useradd -g sybase -d <home_dir> -p <password> sybase
3.
/dev/vg02_SYBASE0/lvol1 # Logical volume for Sybase ASE data
/dev/vg02_SYBASE0/lvol2 # Logical volume for Sybase ASE data
See the Sybase ASE documentation to determine which format is more appropriate for your setup environment.
Table 7 Sybase ASE attributes (continued)
Sybase ASE Attributes Description
ASE_SERVER Name of the Sybase ASE instance set during installation or configuration of the ASE. This uniquely identifies an ASE instance.
ALERT_MAIL_ID Sends an e-mail message to the specified e-mail address when the package fails. This e-mail is generated only when the package fails, and not when the package is halted by the operator.
1. On package start-up, it starts the Sybase ASE instance and launches the monitor process.
2. On package halt, it stops the Sybase ASE instance and the monitor process.
This script also contains the functions for monitoring the Sybase ASE instance. By default, only the 'dataserver' process of ASE is monitored. This process is contained in the variable MONITOR_PROCESSES.
Sybase Package Configuration Example
• Package Setup and Configuration
1.
The following is an example of specifying Sybase ASE specific variables: ecmt/sybase/sybase/TKIT_DIR /tmp/SYBASE0 ecmt/sybase/sybase/SYBASE /home/sybase ecmt/sybase/sybase/SYBASE_ASE ASE-15_0 ecmt/sybase/sybase/SYBASE_OCS OCS-15_0 ecmt/sybase/sybase/SYBASE_ASE_ADMIN sybase ecmt/sybase/sybase/SALOGIN sa ecmt/sybase/sybase/SAPASSWD somepasswd NOTE: Keep this commented if the password for the administrator is not set. Along with other package attributes, this password is also stored in the Cluster Database.
to determine when a package has exceeded its restart limit as defined by the "service_restart" parameter in the package control script. To reset the restart counter, execute the following command:
cmmodpkg [-v] [-n node_name] -R -s service_name package_name
After setting up the Serviceguard environment, each clustered Sybase ASE instance should have the following files in the toolkit-specific directories:
/etc/cmcluster/scripts/ecmt/sybase/tkit_module.sh
Adding the Package to the Cluster
After the setup is complete, add the package to the Serviceguard cluster and start it up.
cmapplyconf -P SYBASE0
cmmodpkg -e -n <node1> -n <node2> SYBASE0
cmmodpkg -e SYBASE0
If necessary, consult the Managing Serviceguard manual available at http://www.hp.com/go/hpux-serviceguard-docs —> HP Serviceguard for information on managing packages.
Node-specific Configuration
Ensure that all files that are required by Sybase ASE to start up cleanly are updated and present on each node in the cluster or accessible through shared storage. It is the user's responsibility to replicate all files across all nodes when using local configuration.
Error-Handling
During start-up, the ASE server requires that the IP/hostname mentioned in the interfaces file be associated with the node on which it is configured to run.
do for setting up a single-instance of ASE for failover in a Serviceguard cluster. Consult Sybase ASE documentation for a detailed description on how to setup ASE in a cluster. • Sybase ASE interfaces file For setting up a single-instance of ASE in a Serviceguard cluster, the ASE instance should be available at the same IP address across all nodes in the cluster. To achieve this, the interfaces file of the ASE instance, available at the $SYBASE directory of that instance, should be edited.
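For illustration only, an interfaces entry bound to the package's relocatable IP might look like the following (the server name, address, and port are placeholders, and the exact entry format varies by platform and ASE version; consult the Sybase documentation):
SYBASE0
    master tcp ether 192.168.0.10 5000
    query tcp ether 192.168.0.10 5000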
following procedure should be used:
NOTE: The example assumes that the package name is SYBASE0, the package directory is /opt/cmcluster/pkg/SYBASE0, and the SYBASE_HOME is configured as /SYBASE0.
• Disable the failover of the package through the cmmodpkg command:
$ cmmodpkg -d SYBASE0
• Pause the monitor script by creating an empty file sybase.debug in the package directory, as shown below:
$ touch /opt/cmcluster/pkg/SYBASE0/sybase.debug
4 Using the DB2 Database Toolkit in a Serviceguard Cluster on HP-UX
DB2 is an RDBMS product from IBM. This chapter describes the High Availability toolkit for DB2 V9.1, V9.5, and V9.7 designed to be used in a Serviceguard environment. This chapter covers the basic steps to configure DB2 instances in a Serviceguard cluster. For more information on the support matrix, see the compatibility matrix available at http://www.hp.com/go/hpux-serviceguard-docs —> HP Serviceguard.
the Serviceguard user manual available at http://www.hp.com/go/hpux-serviceguard-docs -> HP Serviceguard.
3. Make sure that the minimum hardware and software prerequisites are met before the product installation is initiated. The latest installation requirements are listed on the IBM web site: http://www-306.ibm.com/software/data/db2/9/sysreqs.html.
4. Use the DB2 Setup Wizard or the db2_install script to install the database server, deferring the instance creation.
2 node2 0 node2
3 node2 1 node2
9. To enable communication between the partitions, edit the /etc/services, /etc/hosts, and .rhosts files with the required entries. Edit the /etc/hosts file by adding the IP address and hostname of the DB2 server. For example:
[payroll_inst@node1 ~]> vi /etc/hosts
10.0.0.1 DBNODE.domainname.com DBNODE
Edit the ~/.rhosts file for the instance owner. For example:
[payroll_inst@node1 ~]> vi /home/payroll_inst/.rhosts
node1 payroll_inst
node2 payroll_inst
NOTE: In case of multiple physical and logical partition configuration of DB2, the number of ports added in the services file has to be sufficient for the number of partitions created in the current node as well as the number of partitions created on the other nodes. This is to ensure that enough ports are available for all partitions to startup on a single node if all packages managing different partitions are started on that node.
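A sketch of such /etc/services entries for the hypothetical instance payroll_inst with a four-port range (the port numbers are placeholders):
DB2_payroll_inst      60000/tcp
DB2_payroll_inst_1    60001/tcp
DB2_payroll_inst_2    60002/tcp
DB2_payroll_inst_END  60003/tcp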
NOTE: In a Serviceguard cluster environment, the fault monitor facility provided by DB2 must be turned off. The fault monitor facility is a set of processes that work together to ensure that the DB2 instance is running. It is specifically designed for non-clustered environments and has the flexibility to be turned off when the DB2 instance is running in a cluster environment.
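For example, one way to turn the fault monitor off (a sketch; verify the options against your DB2 release) is with the db2fm utility, run as the instance owner:
$ db2fm -i payroll_inst -f off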
will be predefined; include the identity of the database instance as per the name set in the /etc/cmcluster/modules/ecmt/db2/ directory. For legacy packages, there is one user configuration script (hadb2.conf) and three functional scripts (toolkit.sh, hadb2.sh, and hadb2.mon) which work with each other to integrate DB2 with the Serviceguard package control scripts. For modular packages, there is an Attribute Definition File (ADF) and a Toolkit Module Script (tkit_module.sh).
Table 10 Variables in hadb2.conf File (continued)
Variable Name Description
MONITOR_PROCESSES This is the list of critical processes of a DB2 instance.
NOTE: These processes must not be running after DB2 is shut down. If the processes are still running, they are killed by sending a SIGKILL.
MAINTENANCE_FLAG This variable will enable or disable maintenance mode for the DB2 package. By default, this is set to "yes". In order to disable this feature, MAINTENANCE_FLAG should be set to "no".
nodes that will be configured to run this database instance. Since the volume group and file system have to be uniquely named within the cluster, use the name of the database instance in the name. Assuming the name of the database instance is 'payroll_inst', follow the instructions in the Building an HA Cluster Configuration chapter of the Managing Serviceguard manual available at http://www.hp.com/go/hpux-serviceguard-docs —> HP Serviceguard.
NOTE: The following sections describe the methods for creating a Serviceguard package using the modular and legacy methods. For more information on creating a Serviceguard package using the modular method, see the white paper Modular package support in Serviceguard for Linux and ECM Toolkits available at http://www.hp.com/go/hpux-serviceguard-docs —> HP Serviceguard Enterprise Cluster Master Toolkit.
Creating Serviceguard package using Modular method.
• Create the Serviceguard package using the modular method. Follow these steps:
1. Create a directory for the package:
#mkdir /etc/cmcluster/pkg/db2_pkg/
2. Copy the toolkit template and script files from the db2 directory:
#cd /etc/cmcluster/pkg/db2_pkg/
#cp /opt/cmcluster/toolkit/db2/* ./
3. Create a configuration file (pkg.conf) as follows:
#cmmakepkg -m ecmt/db2/db2 pkg.conf
4. Edit the package configuration file.
Set the MONITOR_INTERVAL in seconds to specify how often the partition or instance is monitored. For example, MONITOR_INTERVAL 30 Set the TIME_OUT variable to the time that the toolkit must wait for completion of a normal shut down, before initiating a forceful halt of the application. For example, TIME_OUT 30 Set the monitored_subnet variables to the subnet that is monitored for the package.
5. Use cmcheckconf command to check for the validity of the configuration specified. For example, #cmcheckconf -P pkg.conf 6. If the cmcheckconf command does not report any errors, use the cmapplyconf command to add the package into Serviceguard environment. For example, #cmapplyconf -P pkg.conf Creating Serviceguard package using legacy method.
• Create the Serviceguard package using the legacy method. Follow these steps:
mkdir /etc/cmcluster/pkg/db2_pkg/
Copy the toolkit files from the db2 directory:
cd /etc/cmcluster/pkg/db2_pkg/
cp /opt/cmcluster/toolkit/db2/* ./
Create a configuration file (pkg.conf) and a package control script (pkg.cntl) as follows:
cmmakepkg -p pkg.conf
cmmakepkg -s pkg.cntl
NOTE: There should be one set of configuration and control script files for each DB2 instance.
/etc/cmcluster/pkg/db2_pkg/toolkit.sh stop test_return 52 } The Serviceguard package configuration file (pkg.conf). The package configuration file is created with cmmakepkg -p, and should be put in the following location: /etc/cmcluster/pkg/db2_pkg/ For example: /etc/cmcluster/pkg/db2_pkg/pkg.conf The configuration file should be edited as indicated by the comments in that file. The package name needs to be unique within the cluster. For clarity, use the name of the database instance to name the package.
Table 12 DB2 Package Files
File Name Description
$PKG.cntl Serviceguard package control script for the legacy packaging style.
$PKG.conf Serviceguard package configuration file.
hadb2.sh Main shell script of the toolkit.
hadb2.mon Monitors the health of the application.
hadb2.conf Toolkit DB2 configuration file.
toolkit.sh Interface between pkg.cntl and hadb2.sh.
Adding the Package to the Cluster
After the setup is complete, add the package to the Serviceguard cluster and start it up.
cmapplyconf -P pkg.conf
$ cd /etc/cmcluster/pkg/db2_pkg/ $ $PWD/toolkit.sh start • Enable monitoring scripts to continue monitoring by removing db2.debug file: $ rm -f /etc/cmcluster/pkg/db2_pkg/db2.debug The message "Starting DB2 toolkit monitoring again after maintenance" appears in the Serviceguard Package Control script log. • Enable the package failover: $ cmmodpkg -e db2_payroll If the package fails during maintenance (for example, the node crashes) the package will not automatically fail over to an adoptive node.
5 Using the MySQL Toolkit in an HP Serviceguard Cluster
This chapter describes the MySQL Toolkit for use in the HP Serviceguard environment. It is intended for users who want to configure the MySQL Database Server application under an HP Serviceguard cluster environment using the MySQL Toolkit. This toolkit supports the Enterprise MySQL Database Server application 5.0.56 and later.
The following three files are also installed and they are used only for the modular method of packaging. The following Attribute Definition File (ADF) is installed in /etc/cmcluster/modules/ecmt/ mysql. Table 14 ADF File in Modular Package in MySQL File Name Description mysql.1 For every parameter in the legacy toolkit user configuration file, there is an attribute in the ADF. It also has an additional attribute TKIT_DIR which is analogous to the package directory in the legacy method of packaging.
This is the recommended configuration. Here, the configuration and database files are on shared disks, visible to all nodes. Since the storage is shared, there is no additional work to ensure all nodes have the same configuration at any point in time. To run MySQL in a HP Serviceguard environment: • Each node must have the same version of the MySQL Database Server software installed. • Each node that will be configured to run the package must have access to the configuration files.
an HA Cluster Configuration chapter of the Managing Serviceguard manual available at http://www.hp.com/go/hpux-serviceguard-docs —> HP Serviceguard.
2. Following the instructions in the documentation for MySQL, create a database on the lvol in /MySQL_1. This information may be viewed online at http://www.mysql.com/documentation/mysql/bychapter/manual_Tutorial.html#Creating_database.
3. Copy the configuration file /etc/my.cnf to /MySQL_1/my.cnf.
4. Modify /MySQL_1/my.cnf as described in the next section.
MySQL Configuration File (my.cnf)
The following parameters are contained in the configuration file /etc/my.cnf. This file must be copied to the file system on the shared storage (in our example, /etc/my.cnf would be copied to /MySQL_1/my.cnf). The parameters then need to be set manually with unique values for each DB instance configured.
Table 18 Parameters in MySQL Configuration File (my.cnf)
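For illustration, a minimal my.cnf sketch for the /MySQL_1 instance used in this chapter (the paths and port number are examples; adjust them so that no two instances share a data directory, socket, or port):
[mysqld]
datadir=/MySQL_1/mysql
socket=/MySQL_1/mysql/mysql.sock
port=3306
pid-file=/MySQL_1/mysql/mysqld.pid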
Table 19 User Variables in hamysql.conf file File name Description CONFIGURATION_FILE_PATH="/MySQL_1/mysql/my.cnf" Only one of the two variables (either CONFIGURATION_FILE_PATH or DATA_DIRECTORY="/MySQL_1/mysql" or DATA_DIRECTORY) should be defined. If both are defined, CONFIGURATION_FILE_PATH is used and DATA_DIRECTORY is ignored. NOTE: If DATA_DIRECTORY is used, my.cnf MUST reside in the DATA_DIRECTORY location. This directory is also used as the data directory for this instance of the database server.
Table 20 Package Configuration File Parameters (continued)
Parameter Name Description
HALT_SCRIPT Script to halt the service
SERVICE_NAME "mysql_monitor" Service name
Table 21 Package Control Script Parameters
Parameter Name Description
VG vgMySQL VG created for this package
LV /dev/vgMySQL/lvol1 Logical volume created in the VG
FS /MySQL_1 File system for the DB
FS_TYPE "ext2" File system type
Creating Serviceguard package using Modular method. Follow the steps below to create Serviceguard package using Modular method: 1. Create a directory for the package. #mkdir /etc/cmcluster/pkg/mysql_pkg/ 2. Copy the toolkit template and script files from mysql directory. #cd /etc/cmcluster/pkg/mysql_pkg/ #cp /opt/cmcluster/toolkit/mysql/* ./ 3. Create a configuration file (pkg.conf) as follows. #cmmakepkg -m ecmt/mysql/mysql pkg.conf 4. Edit the package configuration file.
cmmakepkg -s MySQL1.cntl (control template)
5. Using any editor, modify the package templates with your specific information, as described in the preceding section “Package Configuration File and Control Script” (page 92) of this chapter.
6. Change the owner and group of the package directory to the "mysql" user. For example:
chown mysql:mysql /etc/cmcluster/pkg/MySQL1
7. Ensure both root and mysql users have read, write, and execute permissions for the package directory.
A message "Starting MySQL toolkit monitoring again after maintenance" appears in the Serviceguard Package Control script log.
• Enable the package failover.
$ cmmodpkg -e mysql_1
NOTE: If the package fails during maintenance (for example, the node crashes), the package will not automatically fail over to an adoptive node. It is the responsibility of the user to start the package up on an adoptive node. Please refer to the Managing Serviceguard manual available at http://www.hp.com/go/hpux-serviceguard-docs —> HP Serviceguard for more details.
6 Using an Apache Toolkit in a HP Serviceguard Cluster This chapter describes the toolkit that integrates and runs HP Apache in the HP Serviceguard environment. This chapter is intended for users who want to install, configure, and execute the Apache web server application in a Serviceguard clustered environment. It is assumed that users of this document are familiar with Serviceguard and the Apache web server, including installation, configuration, and execution.
Table 23 Files in Apache Toolkit (continued)
File Name Description
toolkit.sh Interface between the package control script and the Apache Toolkit main shell script.
SGAlert.sh This script generates the alert mail on package failure.
The following three files, listed in Table 24 (page 98), are also installed; they are used only for the modular method of packaging. The following Attribute Definition File (ADF) is installed in /etc/cmcluster/modules/ecmt/apache.
NOTE: In an HP-UX 11.x environment, the Apache server is usually installed in the location /opt/hpws22/apache and the configuration file httpd.conf resides in the "conf" sub-directory under the "SERVER ROOT" directory. Apache Package Configuration Overview Apache starts up by reading the httpd.conf file from the "conf" sub-directory of the SERVER_ROOT directory which is configured in the toolkit user configuration file hahttp.conf.
run its own instance as well. Multiple Apache instance configuration can either be done as a local configuration or shared configuration or a combination of both. Configuring the Apache Web Server with Serviceguard To manage an Apache Web Server by Serviceguard, the default Apache configuration needs to be modified.
Shared Configuration To configure a shared file system which is managed by LVM, create volume group(s) and logical volume(s) on the shared disks and construct a new file system for each logical volume for the Apache Web Server document root (and server root). Static web data such as web pages with no data update features may reside on local disk. However, all web data that needs to be shared must reside on shared storage.
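For example, a sketch of httpd.conf directives for an instance whose web data lives on shared storage (the IP address and paths are placeholders):
Listen 192.168.0.10:80
DocumentRoot "/shared/apache/htdocs"
ErrorLog "/shared/apache/logs/error_log"
PidFile "/shared/apache/logs/httpd.pid"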
node startup and shutdown, create a one-node package for each node that runs an Apache instance.
Active - Passive
In an active-passive configuration, an instance of Apache Web Server can run on only one node at any time. A package of this configuration is a typical failover package. The active-passive support on CFS comes with a caution or limitation: when an Apache instance is up on one node, no attempts should be made to start the same instance of Apache on any other node.
NOTE: The following sections describe the method for creating the Serviceguard package using the legacy method. For information on creating the Serviceguard package using the modular method, see the white paper Modular package support in Serviceguard for Linux and ECM Toolkits available at http://www.hp.
NOTE: When configuring an Active-Active configuration, the package configuration file should hold the name of only that single node on which the instance will run. For example, on node1, the NODE_NAME parameter in the package configuration file would be edited as NODE_NAME node1, and on node2 as NODE_NAME node2.
RUN_SCRIPT /etc/cmcluster/pkg/http_pkg1/http_pkg.cntl
HALT_SCRIPT /etc/cmcluster/pkg/http_pkg1/http_pkg.cntl
/etc/cmcluster/pkg/http_pkg1/toolkit.sh start
test_return 51
}
For example:
function customer_defined_halt_cmds
{
# Stop the Apache Web Server.
/etc/cmcluster/pkg/http_pkg1/toolkit.sh stop
test_return 51
}
NOTE: If CFS mounted file systems are used, then volume groups, logical volumes, and file systems must not be configured in the package control script, but a dependency on the SG CFS packages must be configured.
3. Configure the Apache user configuration file hahttp.conf as explained in the next section.
Toolkit User Configuration
All the user configuration variables are kept in a single file, hahttp.conf, in shell script format. The variable names and their sample values are listed in Table 26 (page 106):
Table 26 Configuration Variables
Configuration Variables Description
HP_APACHE_HOME (for example, /opt/hpws22/apache) This is the base directory where the HP Apache web server is installed.
packages fail. This e-mail is generated only when packages fail, and not when a package is halted by the operator. To send this e-mail message to multiple recipients, a group e-mail ID must be created and specified for this parameter. When no e-mail ID is specified for this parameter, the script does not send this e-mail. The following information provides the steps for configuring the toolkit and running the package. This includes configuring the Apache toolkit user configuration file.
The configuration file should be edited as indicated by the comments in that file. The package name needs to be unique within the cluster. For example:
PACKAGE_NAME apache
NODE_NAME node1
NODE_NAME node2
Set the TKIT_DIR variable to the path of the package directory. For example, TKIT_DIR /etc/cmcluster/pkg/apache_pkg.
5. Use the cmcheckconf command to check the validity of the configuration specified. For example:
#cmcheckconf -P pkg.conf
6. If the cmcheckconf command does not report any errors, use the cmapplyconf command to add the package into the Serviceguard environment. For example:
#cmapplyconf -P pkg.conf
◦ Start the Apache instance again if stopped:
cd /etc/cmcluster/pkg/http_pkg1/
$PWD/toolkit.sh start
◦ Allow monitoring scripts to continue normally as shown below:
rm -f /etc/cmcluster/pkg/http_pkg1/apache.debug
A message "Starting Apache toolkit monitoring again after maintenance" appears in the Serviceguard Package Control script log.
7 Using Tomcat Toolkit in a HP Serviceguard Cluster This chapter describes the toolkit that integrates and runs HP Tomcat in the HP Serviceguard environment. It is intended for users who want to install, configure, and execute the Tomcat servlet engine application in a Serviceguard clustered environment. It is assumed that users of this document are familiar with Serviceguard and the Tomcat Servlet engine, including installation, configuration, and execution.
Table 28 ADF File for Modular Method of Packaging File Name Description tomcat.1 For every parameter in the legacy toolkit user configuration file, there is an attribute in the ADF. It also has an additional attribute TKIT_DIR which is analogous to the package directory in the legacy method of packaging. The ADF is used to generate a modular package ASCII template file. The following files are located in /etc/cmcluster/scripts/ecmt/tomcat after installation.
NOTE: In an HP-UX 11.x environment, the Tomcat server is usually installed in the location /opt/hpws22/tomcat and the default configuration file server.xml resides in the conf sub-directory under this directory. If HP-UX WSS 2.X is installed, then the Tomcat server will be installed in the location /opt/hpws/tomcat.
Tomcat Package Configuration Overview
Tomcat starts up by reading the server.xml file.
Multiple Tomcat Instances Configuration Tomcat servlet engine is a multi-instance application. More than one instance of the Tomcat can run on a single node simultaneously. For example, if two nodes are each running an instance of Tomcat and one node fails, the Tomcat instance on the failed node can be successfully failed over to the healthy node. In addition, the healthy node can continue to run its own instance as well.
The following is an example of configuring a Tomcat instance that uses shared storage for all Tomcat instance data.
NOTE: The procedures below assume that all Tomcat instance files are configured on a shared file system "/shared/tomcat_1" directory, which resides on a logical volume "lvol1" from a shared volume group "/dev/vg01":
a. Create a volume group "vg01" for the shared storage.
b. Create a logical volume "lvol1" on the volume group "vg01".
NOTE: As mentioned before, under shared configuration, you can choose to put the Tomcat binaries as well in a shared file system. This can be configured by 2 methods: To create a shared configuration for the Tomcat Server on the shared file system mounted at /mnt/tomcat: a. Method 1 1) Create the shared storage that will be used to store the Tomcat files for all nodes configured to run the Tomcat package. Once that storage has been configured, create the mount point for that shared storage on these nodes.
on a file system "/shared/tomcat_1" directory, that resides on a logical volume "lvol1" in a shared volume group "/dev/vg01". Here, it is assumed that the user has already determined the Serviceguard cluster configuration, including cluster name, node names, heartbeat IP addresses, and so on. See the Managing ServiceGuard manual available at http://www.hp.com/go/ hpux-serviceguard-docs —>HP Serviceguard for more detail.
Example 1
For example:
LVM:
VG[0]="vg01"
LV[0]="/dev/vg01/lvol1"
FS[0]="/shared/tomcat_1"
FS_TYPE[0]="vxfs"
FS_MOUNT_OPT[0]="-o rw"
VxVM:
VXVM_DG[0]="DG_00"
LV[0]="/dev/vx/dsk/DG_00/LV_00"
FS[0]="/shared/tomcat_1"
FS_TYPE[0]="vxfs"
FS_MOUNT_OPT[0]="-o rw"
IP[0]="192.168.0.1"
SUBNET[0]="192.168.0.0"
#The service name must be the same as defined in the package
#configuration file.
SERVICE_NAME[0]="tomcat1_monitor"
SERVICE_CMD[0]="/etc/cmcluster/pkg/tomcat_pkg1/toolkit.sh monitor"
#mkdir /etc/cmcluster/pkg/tomcat_pkg/ 2. Copy the toolkit template and script files from tomcat directory. #cd /etc/cmcluster/pkg/tomcat_pkg/ #cp /opt/cmcluster/toolkit/tomcat/* ./ 3. Create a configuration file (pkg.conf) as follows. #cmmakepkg -m ecmt/tomcat/tomcat pkg.conf 4. Edit the package configuration file. NOTE: Tomcat toolkit configuration parameters in the package configuration file have been prefixed by ecmt/tomcat/tomcat when used in Serviceguard A.11.19.00 or later.
Table 30 Legacy Package Scripts
Script Name Description
User Configuration file (hatomcat.conf) This script contains a list of pre-defined variables that may be customized for the user's environment. This script provides the user a simple format of the user configuration data. This file will be included (that is, sourced) by the toolkit main script hatomcat.sh.
Main Script (hatomcat.sh) This script contains the functions to start, stop, and monitor the Tomcat instance.
Table 31 User Configuration Variables (continued)
User Configuration Variables Description
MAINTENANCE_FLAG (for example, MAINTENANCE_FLAG="yes") This variable will enable or disable maintenance mode for the Tomcat package. By default, this is set to "yes". In order to disable this feature, MAINTENANCE_FLAG should be set to "no". When Tomcat needs to be maintained, a file tomcat.debug needs to be created (touched) in the package directory. During this maintenance period, the Tomcat process monitoring is paused.
MAINTENANCE_FLAG="yes" MONITOR_PORT=8081 2. Distribute all package files in the package directory to the other package nodes. All nodes should have an identical file path for these files. For this example, each package node must have the following files in the package directory: For Example: tomcat_pkg.conf tomcat_pkg.cntl hatomcat.conf hatomcat.mon hatomcat.sh toolkit.sh 3.
4. Perform maintenance actions (for example, changing the configuration of the Tomcat instance, or making changes to the toolkit configuration file, hatomcat.conf; if this file is changed, remember to distribute the new file to all cluster nodes).
5. Start the Tomcat instance again if it is stopped:
cd /etc/cmcluster/pkg/tomcat_pkg1/
$PWD/toolkit.sh start
6. Allow monitoring scripts to continue normally as shown below:
rm -f /etc/cmcluster/pkg/tomcat_pkg1/tomcat.debug
SERVICE_NAME http_pkg1.monitor
SERVICE_FAIL_FAST_ENABLED NO
SERVICE_HALT_TIMEOUT 300
7. Edit the package control script "pkg1.cntl" and configure the two toolkits as shown below.
Example 2
For example:
VG[0]="vg01"
LV[0]="/dev/vg01/lvol1"
FS[0]="/share/pkg_1"
FS_MOUNT_OPT[0]="-o rw"
FS_UMOUNT_OPT[0]=""
FS_FSCK_OPT[0]=""
FS_TYPE[0]="vxfs"
Configure the two services, one each for the Tomcat and Apache instances:
SERVICE_NAME[0]="tomcat_pkg1.monitor"
SERVICE_CMD[0]="/etc/cmcluster/pkg/tomcat_pkg1/toolkit.sh monitor"
NOTE: While bringing down the Apache or Tomcat application for maintenance, touch the files tomcat.debug and apache.debug in the Tomcat and Apache package directories /etc/cmcluster/pkg/tomcat_pkg1/ and /etc/cmcluster/pkg/http_pkg1/ respectively. This ensures that the monitoring services of both Apache and Tomcat are paused during the maintenance period. Both the Tomcat and Apache applications are configured in a single package.
8 Using SAMBA Toolkit in a Serviceguard Cluster This chapter describes the High Availability SAMBA Toolkit for use in the Serviceguard environment. The chapter is intended for users who want to install and configure the SAMBA toolkit in a Serviceguard cluster. Readers should be familiar with Serviceguard configuration as well as HP CIFS Server application concepts and installation/configuration procedures. NOTE: • This toolkit supports: HP Serviceguard versions: ◦ A.11.19 ◦ A.11.
The following three files are also installed and they are used only for the modular method of packaging. Attribute Definition File (ADF) is installed in /etc/cmcluster/modules/ecmt/samba . Table 33 Attribute Definition File (ADF) File Name Description samba.1 For every parameter in the legacy toolkit user configuration file, there is an attribute in the ADF. It also has an additional attribute TKIT_DIR which is analogous to the package directory in the legacy method of packaging.
In a typical local configuration, identical copies of the HP CIFS Server configuration files reside in exactly the same locations on the local file system on each node. All HP CIFS Server file systems are shared between the nodes. It is the responsibility of the toolkit administrator to maintain identical copies of the HP CIFS Server components on all nodes. If the shared file system allows only read operations, then local configuration is easy to maintain.
netbios name = smb1 interfaces = XXX.XXX.XXX.XXX/xxx.xxx.xxx.xxx bind interfaces only = yes log file = /var/opt/samba/smb1/logs/log.%m lock directory = /var/opt/samba/smb1/locks pid directory = /var/opt/samba/smb1/locks Replace the "XXX.XXX.XXX.XXX/xxx.xxx.xxx.xxx" with one (space separated) relocatable IP address and subnet mask for the Serviceguard package. Copy the workgroup line from the /etc/opt/samba/smb.conf file.
f. Copy all the necessary files depending on the configuration.
g. Unmount "/shared/smb1".
3. Using CFS
To configure a Samba package in a CFS environment, the SG CFS packages need to be running in order for the Samba package to access CFS mounted file systems. See your Serviceguard manual for information on how to configure SG CFS packages. Create a directory /shared/smb1 on all cluster nodes. Mount the CFS file system on /shared/smb1 using the CFS packages.
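In either case, after editing the per-instance smb.conf, the configuration can be sanity-checked with testparm (a sketch; the instance-specific file path is an example):
# testparm -s /etc/opt/samba/smb.conf.smb1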
$ cd /etc/cmcluster/smb1 $ cp /opt/cmcluster/toolkit/samba/* . #copy to $PWD To create both the package configuration (smb_pkg.conf) and package control (smb_pkg.cntl) files, cd to the package directory (for example, cd /etc/cmcluster/smb1) 1. Create a package configuration file with the command cmmakepkg -p. The package configuration file must be edited as indicated by the comments in that file. The package name must be unique within the cluster.
/etc/cmcluster/smb1/toolkit.sh start test_return 51 } 4. Edit the customer_defined_halt_cmds function in the package control script to execute the toolkit.sh script with the stop option. In the example below, the line /etc/cmcluster/smb1/ toolkit.sh stop was added, and the ":" null command line deleted. EXAMPLE: function customer_defined_halt_cmds { # Stop the HP CIFS Server. /etc/cmcluster/smb1/toolkit.sh stop test_return 51 } 5. 6. Configure the user configuration file hasmb.
Table 35 Legacy Package Scripts (continued)
Script Name Description
Monitor Script (hasmb.mon) This script contains a list of internal-use variables and functions for monitoring an HP CIFS Server instance. This script will be called by the toolkit main script (hasmb.sh) and will constantly monitor two HP CIFS Server daemons, smbd and nmbd.
Interface Script (toolkit.sh) This script is an interface between a package control script and the toolkit main script (hasmb.sh).
#cmcheckconf -P pkg.conf 6. If the cmcheckconf command does not report any errors, use the cmapplyconf command to add the package into Serviceguard environment. For Example: #cmapplyconf -P pkg.conf Toolkit User Configuration All the user configuration variables are kept in a single file in shell script format.
Table 36 User Configuration Variables (continued)
Configuration Variables Description
MAINTENANCE_FLAG (for example, MAINTENANCE_FLAG=yes) This variable will enable or disable maintenance mode for the Samba package. By default, this is set to "yes". In order to disable this feature, MAINTENANCE_FLAG should be set to "no". When Samba needs to be maintained, a file samba.debug needs to be created (touched) in the package directory. During this maintenance period, Samba process monitoring is paused.
smb_pkg.conf #Package configuration file smb_pkg.cntl #Package control file hasmb.conf #SAMBA toolkit user config file hasmb.mon #SAMBA toolkit monitor program hasmb.sh #SAMBA toolkit main script toolkit.sh #Interface file between the package #control file and the toolkit 3. Apply the package configuration using the command cmapplyconf -P smb_pkg.conf. Repeat the preceding procedures to create multiple HP CIFS Server packages.
NOTE: If the package fails during maintenance (for example, the node crashes), the package will not automatically fail over to an adoptive node. It is the responsibility of the user to start the package up on an adoptive node. See the manual Managing ServiceGuard manual available at http://www.hp.com/go/hpux-serviceguard-docs —>HP Serviceguard for more details. This feature is enabled only when the configuration variable, MAINTENANCE_FLAG, is set to "yes" in the Samba toolkit configuration file.
• Username Mapping File
If the HP CIFS Server configuration is set to use a username mapping file, it should be located on a shared file system. This way, if changes are made, all the nodes will always be up-to-date. The username mapping file location is defined in smb.conf by the parameter 'username map', for example, 'username map = /var/opt/samba/shared_vol_1/username.map'. There is no username map file by default.
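A brief illustration of username mapping entries (the names are hypothetical; the left side is the UNIX name, the right side one or more Windows names):
jdoe = "John Doe"
root = admin administrator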
9 Support and other resources Information to collect before contacting HP Be sure to have the following information available before you contact HP: • Software product name • Hardware product model number • Operating system type and version • Applicable error message • Third-party hardware or software • Technical support registration number (if applicable) How to contact HP Use the following methods to contact HP technical support: • In the United States, see the Customer Service / Contact HP U
HP authorized resellers For the name of the nearest HP authorized reseller, see the following sources: • In the United States, see the HP U.S. service locator web site: http://www.hp.com/service_locator • In other locations, see the Contact HP worldwide web site: http://welcome.hp.com/country/us/en/wwcontact.html Documentation feedback HP welcomes your feedback. To make comments and suggestions about product documentation, send a message to: docsfeedback@hp.
Replaceable The name of a placeholder that you replace with an actual value. [] In command syntax statements, these characters enclose optional content. {} In command syntax statements, these characters enclose required content. | The character that separates items in a linear list of choices. ... Indicates that the preceding element can be repeated one or more times. WARNING An alert that calls attention to important information that, if not understood or followed, results in personal injury.