Continentalclusters Version A.07.01 Release Notes
Legal Notices © Copyright 2010 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s standard commercial license. The information contained herein is subject to change without notice.
Table of Contents

1 Continentalclusters Version A.07.01 Release Notes
    Announcements
    What's In This Version
    Continentalclusters Product Features
List of Tables

1-1 Metrocluster Version Requirements for Continentalclusters Oracle RAC Support
1 Continentalclusters Version A.07.01 Release Notes

Announcements

Continentalclusters is a Hewlett-Packard high availability solution that provides disaster tolerant clustering with no distance limitation. Version A.07.01 of Continentalclusters, which contains enhancements and defect fixes, is released on HP-UX 11i v2 and HP-UX 11i v3 with the following product number:

• T2346BA - license, media, and documentation
For more information on supported versions, see the Disaster Tolerant Clusters Products Compatibility and Feature Matrix (Continentalclusters-CC), located at http://www.hp.com/go/hpux-ha-monitoring-docs -> HP Continentalcluster.

What's In This Version

Continentalclusters employs semi-automatic failover of Serviceguard packages from one cluster to another following a cluster event that indicates serious disruption of service on one of the clusters.
Metrocluster Continuous Access EVA, and Metrocluster with EMC SRDF.
• As an alternative to the pre-integrated data replication solutions, a customer-selected data replication solution can be used by following the integration guidelines.
• A recovery pair in a Continentalclusters configuration consists of one primary cluster and one recovery cluster.
• One or more recovery pairs can be configured in a continental cluster with a common recovery cluster.
helps detect configuration discrepancies at the recovery cluster and ensures that the recovery cluster is prepared to handle a disaster recovery scenario. For DR Rehearsal, Continentalclusters is enhanced to allow recovery groups to be configured with a special rehearsal package, which is specified as part of the recovery group definition.
With Continentalclusters version A.07.00, recovery groups using CVM disk groups can be recovered by running the cmrecovercl command on any node in the recovery cluster. Following are some of the other important changes in earlier versions of Continentalclusters:
• If /etc/cmconcl/ccrac/ccrac.config is configured, you must specify the CCRAC_CLUSTER parameter for every recovery pair in the ccrac.config file.
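As an illustration, the recovery described above is initiated with the cmrecovercl command. This is a minimal sketch, not a complete recovery procedure; the guard is added here so the sketch is a harmless no-op on nodes where Continentalclusters is not installed.

```shell
# Sketch: initiate recovery of the configured recovery groups.
# With version A.07.00 and later, this can be run on any node in the
# recovery cluster, including when recovery groups use CVM disk groups.
# The guard makes the sketch safe on nodes without Continentalclusters.
if command -v cmrecovercl >/dev/null 2>&1; then
    cmrecovercl
else
    echo "cmrecovercl is not installed on this node"
fi
```

In a real recovery scenario, run this only after confirming the primary cluster outage, as described in the product documentation.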
Hardware Requirements

Host System Hardware Requirements
• HP 9000 Servers
• HP Integrity Servers

If you are doing data replication with XP series disk arrays, there are both hardware and software requirements. For more information on the hardware and software requirements for Metrocluster Continuous Access XP, refer to the Metrocluster with Continuous Access XP Release Notes, which can be found at http://www.hp.com/go/hpux-ha-monitoring-docs -> HP Metroclusters.
NOTE: For detailed compatibility and feature information for Continentalclusters, refer to the Disaster Tolerant Clusters Products Compatibility and Feature Matrix (Continentalclusters-CC), located at http://www.hp.com/go/hpux-ha-monitoring-docs -> HP Metrocluster.

NOTE: This list is subject to change without notice. Contact your HP representative to ensure you have the most up-to-date and required versions of HP software and disk array management software.
3. cpio files with the XP system) or the EMC Symmetrix SymCLI software or HP StorageWorks Continuous Access EVA related software on all nodes.

If you are using the Oracle 8i Standby Database for logical data replication, purchase the Enterprise Cluster Master Toolkit product, and consult your HP representative for more information about installing the template files.

Upgrading from Earlier Versions

If you are upgrading Continentalclusters from an earlier version, the following steps are required:
What Manuals are Available for this Version

For information about configuring Continentalclusters, refer to the following manual, which is shipped with Continentalclusters version A.07.00.
• VERITAS Volume Manager Reference Guide
• VERITAS Volume Manager Migration Guide
• VERITAS Volume Manager for HP-UX Release Notes

Further Reading

Additional information about Continentalclusters and related high availability topics may be found on Hewlett-Packard's HA web page. Support information, including current information on patches and known problems, is available from the Hewlett-Packard IT Resource Center: http://itrc.hp.
Fixes

There are no known fixes at the time of this publication. However, this is subject to change without notice. For the most current information, contact your HP support representative.

Known Problems and Workarounds

The following describes known problems with Continentalclusters version A.07.00 and workarounds for them. This is subject to change without notice. For the most current information, contact your HP support representative.
or more nodes in the cluster that is being monitored. In these circumstances, the cmomd process continues running on the system until terminated by a user. This can become a significant problem if a monitoring node is powered off and on several times, leaving several cmomd processes on the monitored cluster consuming system process table space as well as other system resources.

• What is the workaround? After a failure of the monitoring node, kill the unused cmomd processes.
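The workaround above can be scripted. This is a minimal sketch, assuming root access on the monitored node; the bracketed awk pattern keeps the pipeline from matching its own process entry.

```shell
# Find leftover cmomd processes and terminate them.
# The [c]momd pattern prevents this pipeline from matching itself.
for pid in $(ps -ef | awk '/[c]momd/ {print $2}'); do
    echo "Killing leftover cmomd process $pid"
    kill "$pid"
done
```

If a process ignores the default signal, a subsequent `kill -9` may be needed; verify with `ps -ef | grep cmomd` that no stale processes remain.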
This command must be run from one host on all logical volumes in the package volume groups. A starting timeout value of 60 seconds is suggested; the timeout must not be set to less than 60 seconds. Once the lvchange -t command is run on a host, all other hosts automatically inherit the new timeout value for the logical volumes. View the current timeout value by running the lvdisplay command and checking the value listed for "IO Timeout (Seconds)".
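A minimal sketch of the timeout change follows, assuming a hypothetical volume group /dev/vg_pkg (substitute your actual package volume groups). The existence check makes the loop a harmless no-op where that volume group is absent.

```shell
# Set a 60-second I/O timeout on every logical volume in the
# hypothetical package volume group /dev/vg_pkg. Run once, from one
# host; the other hosts inherit the new timeout automatically.
for lv in /dev/vg_pkg/lvol*; do
    [ -e "$lv" ] || continue   # skip if the glob matched nothing
    lvchange -t 60 "$lv"
done

# Verify the setting on one volume, for example:
#   lvdisplay /dev/vg_pkg/lvol1 | grep "IO Timeout"
```

The 60-second floor comes from the text above; choose a larger value only if your replication environment requires it.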