Managing Serviceguard Extension for SAP Manufacturing Part Number: T2803-90011 December 2007
Legal Notices Copyright © 2000-2007 Hewlett-Packard Development Company, L.P. Serviceguard, Serviceguard Extension for SAP, Serviceguard Extension for RAC, Metrocluster and Serviceguard Manager are products of Hewlett-Packard Company, L.P., and all are protected by copyright. Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
Contents
1. Understanding Serviceguard Extension for SAP
Designing SGeSAP Cluster Scenarios
Mutual Failover Scenarios Using the Two Package Concept
Robust Failover Using the One Package Concept
Clustering Stand-Alone J2EE Components
Dedicated NFS Packages
Serviceguard Configuration
HA NFS Toolkit Configuration
SGeSAP Configuration
Specification of the Packaged SAP Components
Configuration of Application Server Handling
Master Data Management User Interface Components
MDM Server Components
SAP Netweaver XI components
Installation and Configuration Considerations
Prerequisites
Printing History
Table 1 Editions and Releases (Printing Date / Part Number / Edition / SGeSAP Release / Operating System Releases):
June 2000: B7885-90004, Edition 1, B.03.02, HP-UX 10.20, HP-UX 11.00
March 2001: B7885-90009, Edition 2, B.03.03, HP-UX 10.20, HP-UX 11.00, HP-UX 11i
June 2001: B7885-90011, Edition 3, B.03.04, HP-UX 10.20, HP-UX 11.00, HP-UX 11i
March 2002: B7885-90013, Edition 4, B.03.06, HP-UX 11.00, HP-UX 11i
June 2003: B7885-90018, Edition 5, B.03.
HP Printing Division: Business Critical Computing Business Unit Hewlett-Packard Co. 19111 Pruneridge Ave.
About this Manual... This document describes how to configure and install highly available SAP systems on HP-UX 11i v1, v2 and v3 using Serviceguard. It refers to the HP products B7885BA and T2803BA – Serviceguard Extension for SAP (SGeSAP).
Table 2 Abbreviations
• System ID of the SAP system, RDBMS or other components, in uppercase/lowercase
• Instance number of the SAP system
• Names mapped to local IP addresses of the client LAN
• Names mapped to relocatable IP addresses of Serviceguard packages in the client LAN
• Names mapped to local IP addresses of the server LAN
Related Documentation The following documents contain additional related information: • Serviceguard Extension for SAP Versions B.04.02 and B.04.51 Release Notes (T2803-900010) • Managing Serviceguard (B3936-90117) • Serviceguard Release Notes (B3935-90108) • Serviceguard NFS Toolkit A.11.11.08 and A.11.23.07 Release Notes (B5140-90032) • Serviceguard NFS Toolkits A.11.31.
1 Understanding Serviceguard Extension for SAP HP Serviceguard Extension for SAP (SGeSAP) extends HP Serviceguard’s failover cluster capabilities to SAP application environments. SGeSAP continuously monitors the health of each SAP cluster node and automatically responds to failures or threshold violations.
Understanding Serviceguard Extension for SAP This chapter introduces the basic concepts used by SGeSAP and explains several naming conventions.
Understanding Serviceguard Extension for SAP Designing SGeSAP Cluster Scenarios SAP applications can be divided into one or more distinct software components. Most of these components share a common technology layer, the SAP Application Server (SAPWAS). The SAP Application Server is the central building block of the SAP Netweaver technology. Each Application Server implementation comes with a characteristic set of software Single Points of Failure.
Understanding Serviceguard Extension for SAP Designing SGeSAP Cluster Scenarios
Table 1-1 Mapping the SGeSAP package types to commonly used SAP naming abbreviations:
ci: CI (old), DVEBMGS (old), ASCS
arep: AREP, ERS, ENR
jci: SCS
rep: REP, ENR, ERS
d: D, DVEBMGS (new)
jd: JDI, JD, JC, J
The following sections introduce the different SGeSAP package types and provide recommendations and examples for the cluster layouts that should be implemented for SAP Application Server based systems.
Understanding Serviceguard Extension for SAP Designing SGeSAP Cluster Scenarios Mutual Failover Scenarios Using the Two Package Concept SAP ABAP Engine based applications usually rely on two central software services that define the software SPOFs: the SAP Enqueue Service and the SAP Message Service. These services are traditionally combined and run as part of a unique SAP Instance that is called the Central Instance (ci). As any other SAP instance, the Central Instance has an Instance Name.
Understanding Serviceguard Extension for SAP Designing SGeSAP Cluster Scenarios Every ABAP Engine also requires a database service. Usually, this is provided by ORACLE or MAXDB RDBMS systems. For certain setups, IBM INFORMIX and IBM DB/2 are also supported. DB/2 can not be used for server consolidation scenarios. The package type db clusters any type of these databases. It unifies the configuration, so that database package administration for all vendors is treated identically.
Understanding Serviceguard Extension for SAP Designing SGeSAP Cluster Scenarios simple fashion. They omit other aspects and the level of detail that would be required for a reasonable and complete high availability configuration.
Understanding Serviceguard Extension for SAP Designing SGeSAP Cluster Scenarios Robust Failover Using the One Package Concept In a one-package configuration, both the database (db) and Central Instance (ci) run on the same node at all times and are configured in one SGeSAP package. Other nodes in the Serviceguard cluster function as failover nodes for the primary node on which the system runs during normal operation. NOTE The naming convention for these packages is dbci<SID>.
Understanding Serviceguard Extension for SAP Designing SGeSAP Cluster Scenarios A sample configuration in Figure 1-2 shows node1 with a failure, which causes the package containing the database and central instance to fail over to node2. A Quality Assurance System and additional Dialog Instances get shut down, before the database and Central Instance are restarted.
Understanding Serviceguard Extension for SAP Designing SGeSAP Cluster Scenarios Some Application Server installations automatically install more than one Central Instance at a time. E.g. a SAP Exchange Infrastructure 3.0 installation creates a DVEBMGS instance as well as a System Central Service SCS instance using different Instance Numbers.
Understanding Serviceguard Extension for SAP Clustering Stand-Alone J2EE Components Some SAP applications are stand-alone J2EE components and do not come with an ABAP context; SAP Enterprise Portal installations are one example. Beginning with SAP Application Server 6.40, these components can be treated in a fashion similar to ABAP installations. Comparing Figure 1-3 with Figure 1-2 shows the similarity on a conceptual level.
Understanding Serviceguard Extension for SAP Clustering Stand-Alone J2EE Components A one-package setup for pure JAVA environments implements a jdbjci<SID> package. The jdb package type is necessary in addition to the db package type because SAP JAVA environments commonly use different database tools than ABAP environments. SGeSAP follows the JAVA approach for jdb packages and the ABAP-world treatment for db packages.
Understanding Serviceguard Extension for SAP Clustering Stand-Alone J2EE Components Dedicated NFS Packages Small clusters with only a few SGeSAP packages usually provide HA NFS by combining the HA NFS toolkit package functionality with the SGeSAP packages that contain a database component. The HA NFS toolkit is a separate product with a set of configuration and control files that must be customized for the SGeSAP environment. It needs to be obtained separately.
Understanding Serviceguard Extension for SAP Dialog Instance Clusters as Simple Tool for Adaptive Enterprises Databases and Central Instances are Single Points of Failure. ABAP and JAVA Dialog Instances can be installed in a redundant fashion. In theory, this avoids additional SPOFs in Dialog Instances. It is still possible, however, to configure the systems in a way that introduces SPOFs on Dialog Instances.
Understanding Serviceguard Extension for SAP Dialog Instance Clusters as Simple Tool for Adaptive Enterprises Dialog Instance packages allow an uncomplicated approach to achieve abstraction from the hardware layer. It is possible to shift around Dialog Instance packages between servers at any given time. This might be desirable if the CPU resource consumption is eventually balanced poorly due to changed usage patterns. Dialog Instances can then be moved between the different hosts to address this.
Understanding Serviceguard Extension for SAP Dialog Instance Clusters as Simple Tool for Adaptive Enterprises Dialog Instance packages provide high availability and flexibility at the same time. The system becomes more robust using Dialog Instance packages. The virtualization allows to move the instances manually between the cluster hosts on demand.
Understanding Serviceguard Extension for SAP Dialog Instance Clusters as Simple Tool for Adaptive Enterprises The advantage of this setup is that, after repair of node1, the Dialog Instance package can simply be restarted on node1 instead of node2. This saves the downtime that a failback of the dbci<SID> package would otherwise cause. The two instances can be separated onto different machines without negatively impacting the production environment.
Understanding Serviceguard Extension for SAP Dialog Instance Clusters as Simple Tool for Adaptive Enterprises NOTE Declaring non-critical Dialog Instances in a package configuration doesn’t add them to the components that are secured by the package. The package won’t react to any error conditions of these additional instances. The concept is distinct from the Dialog Instance packages that got explained in the previous section.
Understanding Serviceguard Extension for SAP The Replicated Enqueue In case an environment has very high demands regarding guaranteed uptime, it makes sense to activate a Replicated Enqueue with SGeSAP. With this additional mechanism it is possible to failover ABAP or JAVA System Central Service Instances without impacting ongoing transactions on Dialog Instances.
Understanding Serviceguard Extension for SAP The Replicated Enqueue Since there are no classical SAP Central Instances in configurations that use Replicated Enqueue, the package naming convention shifts from ci to ascs and from jci to scs. The SGeSAP package types remain to be ci and jci. Package names for the Enqueue Replication usually are arep and rep or alternatively ers.
Understanding Serviceguard Extension for SAP The Replicated Enqueue The ASCS Instance will then use the stand-alone Enqueue Server and is able to replicate its status. On the alternate node a Replicated Enqueue needs to be started that receives the replication information. The SGeSAP packaging of the ASCS Instance provides startup and shutdown routines, failure detection, split-brain prevention and quorum services to the stand-alone Enqueue Server.
Understanding Serviceguard Extension for SAP The Replicated Enqueue SAP offers two possibilities to configure Enqueue Replication Servers:
1. SAP self-controlled, using High Availability polling
2. Completely controlled by the High Availability failover solution
SGeSAP provides an implementation that is completely controlled by the High Availability failover solution and avoids the costly polling data exchange between SAP and the High Availability cluster software.
Understanding Serviceguard Extension for SAP The Replicated Enqueue Dedicated Failover Host More complicated clusters that consolidate a couple of SAP applications often have a dedicated failover server. While each SAP application has its own set of primary nodes, there is no need to also provide a failover node for each of these servers. Instead there is one commonly shared secondary node that is in principle capable to replace any single failed primary node.
Understanding Serviceguard Extension for SAP The Replicated Enqueue These replication units can be halted at any time without disrupting ongoing transactions for the systems they belong to. They are ideally sacrificed in emergency conditions in which a failing database and/or Central Instance need the spare resources.
Understanding Serviceguard Extension for SAP SGeSAP Product Structure The central components of SGeSAP are the package templates. There are many different package types, and most of them cover different aspects of the SAP Application Server. Therefore, SGeSAP provides a single package template with which each of these package types, or a combination of several, can be realized.
Understanding Serviceguard Extension for SAP SGeSAP Product Structure On top of this structure, SGeSAP provides a generic function pool and a generic package runtime logic file for SAP applications. liveCache packages have a unique runtime logic file. All other package types are covered with the generic runtime logic. All SAP specific cluster configuration parameters are specified via one configuration file that resides in the package directory.
Understanding Serviceguard Extension for SAP SGeSAP Product Structure
Package configuration files are based on the current generic Serviceguard package template configuration file. These files are used to specify the Serviceguard package configuration. Each file references a control script.
/etc/cmcluster/C11/sap.config: SGeSAP configuration file that is used for SAP specific package configuration. This file is jointly used by all packages in /etc/cmcluster/C11.
The database package control script is based on the current HA NFS toolkit. The postfix indicates that this file is required by the database failover package dbC11.
/etc/cmcluster/C11/sapwas.cntl: Contains the SAP specific control logic by calling relevant routines of the generic function pool /etc/cmcluster/sap.functions. This part of the package should never be modified; this ensures supportability of the solution.
/etc/cmcluster/C11/customer.functions: Optional file for customer specific extensions to the package control logic of this SAP System.
There are also cluster wide configuration and execution files:
/opt/cmcluster/sap/bin/cmenqord: This binary realizes the EMS monitor for the Replicated Enqueue.
/etc/cmcluster/sap.functions: The central SGeSAP function pool for package startup and shutdown.
/etc/cmcluster/sapmdm.sh: Additional function pool for MDM component startup and shutdown.
/opt/cmcluster/sap/bin/sapmap: This binary supports the WLM SAP toolkit.
/etc/cmcluster/customer.functions: Cluster wide file for customer specific extensions.
Understanding Serviceguard Extension for SAP Combining SGeSAP with Complementary Solutions Combining SGeSAP with Complementary Solutions There is a vast range of additional software solutions that can improve the experience with SGeSAP clusters. Most of the products available from HP can be integrated and used in combination with SGeSAP. This is particularly true for the Serviceguard Extension for RAC (SGeRAC).
Understanding Serviceguard Extension for SAP Combining SGeSAP with Complementary Solutions While important application components are on the shared storage pool, the definition for the shared devices, as well as the configuration of the relocatable IP address have to be maintained manually on each node capable of running this application. Each installation of SAP application has many dependencies on the configuration of a hosting operating system.
Understanding Serviceguard Extension for SAP Combining SGeSAP with Complementary Solutions During the initial run, all the generic resources like the shared volume groups and the file systems of an application, which may influence the proper operation of that application in the cluster, are specified in the cluster resource DB.
Understanding Serviceguard Extension for SAP Combining SGeSAP with Complementary Solutions
• Results are provided in a report. The output format of this report may be HTML or ASCII.
• Output reports may contain error interpretation text to help with interpretation of the comparison.
• All resources configured in a package may easily be added to the resource DB by using the statement ANALYSE PACKAGE upon creation of the resource DB.
Understanding Serviceguard Extension for SAP Combining SGeSAP with Complementary Solutions Metropolitan Clusters Metrocluster is an HP Disaster Tolerant solution. Two types of Metropolitan Clusters are provided, depending on the type of data storage and replication you are using.
Understanding Serviceguard Extension for SAP Combining SGeSAP with Complementary Solutions All host systems in a Metrocluster environment must be members of a single Serviceguard cluster. Either a single Data Center architecture without an arbitrator or a Three Data Center architecture with one or two arbitrator systems can be implemented. The arbitrator(s) are not physically connected to the storage units.
Understanding Serviceguard Extension for SAP Combining SGeSAP with Complementary Solutions Workload Manager SAP Toolkit HP-UX provides resource management functions based on the Process Resource Manager (PRM). PRM provides a way to control the amount of CPU shares, memory and disk bandwidth that specific processes get access to during peak system load. It allows resources to be dedicated manually to specific SAP applications.
2 Planning the Storage Layout Volume managers are tools that let you create units of disk storage known as storage groups. Storage groups contain logical volumes for use on single systems and in high availability clusters. In Serviceguard clusters, package control scripts activate storage groups. Two volume managers can be used with Serviceguard: the standard Logical Volume Manager (LVM) of HP-UX and the Veritas Volume Manager (VxVM).
Planning the Storage Layout SAP Instance Storage Considerations In general, it is important to stay as close as possible to the original layout intended by SAP. But certain cluster specific considerations might suggest a slightly different approach in some cases. SGeSAP supports various combinations of providing shared access to file systems in the cluster.
Planning the Storage Layout SAP Instance Storage Considerations Table 2-1 Option descriptions (Continued) Option: Description 3 - SGeSAP CFS Cluster combines maximum flexibility with the convenience of a Cluster File System. It is the most advanced option. CFS should be used with SAP if available. The HP Serviceguard Cluster File System requires a set of multi-node packages. The number of packages varies with the number of disk groups and mountpoints for Cluster File Systems.
Planning the Storage Layout SAP Instance Storage Considerations Option 1: SGeSAP NFS Cluster With this storage setup SGeSAP makes extensive use of exclusive volume group activation. Concurrent shared access is provided via NFS services. Automounter and cross-mounting concepts are used in order to allow each node of the cluster to switch roles between serving and using NFS shares.
Planning the Storage Layout SAP Instance Storage Considerations In clustered SAP environments that use the startsap mechanism it is required to install local executables. Local executables help to prevent several causes for package startup or package shutdown hangs due to the unavailability of the centralized executable directory. Availability of executables delivered with packaged SAP components is mandatory for proper package operation.
Planning the Storage Layout SAP Instance Storage Considerations Directories that Reside on Shared Disks Volume groups on SAN shared storage are configured as part of the SGeSAP packages. They can be either:
• instance specific,
• system specific, or
• environment specific.
Instance specific volume groups are required by only one SAP instance or one database instance. They usually get included with exactly the package that is set up for this instance.
Planning the Storage Layout SAP Instance Storage Considerations A useful naming convention for most of these shared volume groups is to include the SAP System ID in the volume group name. Table 2-2 and Table 2-3 provide an overview of SAP shared storage and map it to the component and package type for which it occurs.
Planning the Storage Layout SAP Instance Storage Considerations
Table 2-3 System and Environment Specific Volume Groups (Mount Point / Access Point / Potential owning packages / VG Name / Device minor number):
/export/sapmnt/<SID>: shared disk and HA NFS; potential owning packages db, dbci, jdb, jdbjci, sapnfs
/export/usr/sap/trans: shared disk and HA NFS; potential owning packages db, dbci, sapnfs
/usr/sap/put: shared disk; no owning package
The tables can be used to document the device minor numbers in use.
Planning the Storage Layout SAP Instance Storage Considerations All filesystems mounted below /export are part of HA NFS cross-mounting via automounter. The automounter uses virtual IP addresses to access the HA NFS directories via the path that comes without the /export prefix. This ensures that the directories are quickly available after a switchover. The cross-mounting allows coexistence of NFS server and NFS client processes on nodes within the cluster.
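As an illustration of the automounter part, here is a minimal sketch of a direct map, assuming the hypothetical relocatable hostname nfsreloc and the example SID C11; /etc/auto_master would reference the map with an entry such as "/- /etc/auto.direct":
/sapmnt/C11      nfsreloc:/export/sapmnt/C11
/usr/sap/trans   nfsreloc:/export/usr/sap/trans
With these entries, accessing /sapmnt/C11 on any cluster node transparently mounts the HA NFS share from whichever node currently runs the serving package.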
Planning the Storage Layout SAP Instance Storage Considerations Option 2: SGeSAP NFS Idle Standby Cluster This option has a simple setup, but it is severely limited in flexibility. In most cases, option 1 should be preferred. A cluster can be configured using option 2 if it fulfills all of the following prerequisites: • Only one SGeSAP package is configured in the cluster. • Underlying database technology is a single-instance Oracle RDBMS.
Planning the Storage Layout SAP Instance Storage Considerations Several SAP ABAP components install a startup script called startsap_<hostname>_<INSTNR> into the home directory of <sid>adm. These scripts internally refer to a startup profile of the instance. With a non-shared home directory it is possible to change this reference to point to individual startup profiles for each host. This allows individual instance configurations for nodes that have a different failover hardware setup.
Planning the Storage Layout SAP Instance Storage Considerations Directories that Reside on Shared Disks Volume groups on SAN shared storage get configured as part of the SGeSAP package. Instance specific volume groups are required by only one SAP instance or one database instance. They usually get included with exactly the package that is set up for this instance. In this configuration option, the instance specific volume groups are included in the SGeSAP package.
Planning the Storage Layout SAP Instance Storage Considerations
Table 2-4 File systems for the SGeSAP package in NFS Idle Standby Clusters (Mount Point / Access Point / Remarks / VG Name / Device minor number):
/sapmnt/<SID>: shared disk and HA NFS (required)
/usr/sap/<SID>: shared disk
/usr/sap/trans: shared disk and HA NFS (optional)
The table can be used to document the device minor numbers in use.
Planning the Storage Layout SAP Instance Storage Considerations Option 3: SGeSAP CFS Cluster SGeSAP supports the use of HP Serviceguard Cluster File System for concurrent shared access. CFS is available with selected HP Serviceguard Storage Management Suite bundles. CFS replaces NFS technology for all SAP related file systems. All related instances need to run on cluster nodes to have access to the shared files. SAP related file systems that reside on CFS are accessible from all nodes in the cluster.
Planning the Storage Layout SAP Instance Storage Considerations Directories that Reside on CFS The following table shows a recommended example on how to design SAP file systems for CFS shared access.
Planning the Storage Layout Database Instance Storage Considerations SGeSAP internally supports clustering of database technologies of different vendors. The vendors have implemented individual database architectures.
Planning the Storage Layout Database Instance Storage Considerations Table 2-6 DB Technology Oracle Single-Instance Availability of SGeSAP Storage Layout Options for Different Database RDBMS Supported Platforms PA 9000 Itanium SGeSAP Storage Layout Options NFS Cluster Software Bundles 1. Serviceguard or any Serviceguard Storage Management bundle (for Oracle) 2. SGeSAP 3. Serviceguard HA NFS Toolkit idle standby 1. Serviceguard 2. SGeSAP 3. Serviceguard HA NFS Toolkit (opt.) CFS 1.
Planning the Storage Layout Database Instance Storage Considerations Oracle Single Instance RDBMS Single Instance Oracle databases can be used with all three SGeSAP storage layout options. The setups for NFS and NFS Idle Standby Clusters are identical. Oracle databases in SGeSAP NFS and NFS Idle Standby Clusters: Oracle server directories reside below /oracle/<SID>.
Planning the Storage Layout Database Instance Storage Considerations During SGeSAP installation it is necessary to create local copies of the client NLS files on each host to which a failover could take place. SAP Central Instances use the server path to NLS files, while Application Server Instances use the client path. Sometimes a single host may have an installation of both a Central Instance and an additional Application Server of the same SAP System.
Planning the Storage Layout Database Instance Storage Considerations
Table 2-8 File System Layout for NFS-based Oracle Clusters (Mount Point / Access Point / Potential Owning Packages / VG Type):
$ORACLE_HOME, /oracle/<SID>/saparch, /oracle/<SID>/sapreorg, /oracle/<SID>/sapdata1 … /oracle/<SID>/sapdatan, /oracle/<SID>/origlogA, /oracle/<SID>/origlogB, /oracle/<SID>/mirrlogA, /oracle/<SID>/mirrlogB: shared disk; potential owning packages db, dbci, jdb, jdbjci, dbcijci; db instance specific volume group(s) with documented volume group name and device minor number.
Planning the Storage Layout Database Instance Storage Considerations Oracle Real Application Clusters Oracle Real Application Clusters (RAC) is an option to the Single Instance Oracle Database Enterprise Edition. Oracle RAC is a cluster database with shared cache architecture. The SAP certified solution is based on HP Serviceguard Cluster File System for RAC. Handling of a RAC database is not included in SGeSAP itself. The SGeSAP db and jdb package types can not be used for that purpose.
Planning the Storage Layout Database Instance Storage Considerations
Table 2-9 File System Layout for Oracle RAC in SGeSAP CFS Cluster (Mount Point / Access Point / Potential Owning Packages):
$ORACLE_HOME, /oracle/client, /oracle/<SID>/oraarch, /oracle/<SID>/sapraw, /oracle/<SID>/saparch, /oracle/<SID>/sapbackup, /oracle/<SID>/sapcheck, /oracle/<SID>/sapreorg, /oracle/<SID>/saptrace, /oracle/<SID>/sapdata1 … /oracle/<SID>/sapdatan, /oracle/<SID>/origlogA, /oracle/<SID>/origlogB, /oracle/<SID>/mirrlogA, /oracle/<SID>/mirrlogB: shared disk and CFS
Planning the Storage Layout Database Instance Storage Considerations MAXDB Storage Considerations SGeSAP supports failover of MAXDB databases as part of SGeSAP NFS cluster option 1. Cluster File Systems cannot be used for the MAXDB part of SGeSAP clusters. The considerations given below for MAXDB also apply to liveCache and SAPDB clusters unless otherwise noted. MAXDB distinguishes an instance dependent path /sapdb/<DBSID> and two instance independent paths, called IndepData and IndepPrograms.
Planning the Storage Layout Database Instance Storage Considerations /sapdb/programs/runtime/7402=7.4.2.0, For MAXDB and liveCache Version 7.5 (or higher) the SAP_DBTech.ini file does not contain sections [Installations], [Databases] and [Runtime]. These sections are stored in separate files Installations.ini, Databases.ini and Runtimes.ini in the IndepData path /sapdb/data/config. A sample SAP_DBTech.ini, Installations.ini, Databases.ini and Runtimes.ini for a host with a liveCache 7.5 (LC2) and an APO 4.
Planning the Storage Layout Database Instance Storage Considerations NOTE The [Globals] section is commonly shared between LC1/LC2 and AP1/AP2. This prevents setups that keep the directories of LC1 and AP1 completely separated. The following directories are of special interest:
• /sapdb/programs: this can be seen as a central directory with all MAXDB executables. The directory is shared between all MAXDB instances that reside on the same host.
Planning the Storage Layout Database Instance Storage Considerations These core files have file sizes of several Gigabytes. Sufficient free space needs to be configured for the shared logical volume to allow core dumps. NOTE For MAXDB RDBMS starting with version 7.6 these limitations do not exist any more. The working directory is utilized by all instances (IndepData/wrk) and can be globally shared. • /var/spool/sql: This directory hosts local runtime data of all locally running MAXDB instances.
Planning the Storage Layout Database Instance Storage Considerations Table 2-10 shows the file system layout for SAPDB clusters. NOTE In HA scenarios, valid for SAPDB/MAXDB versions up to 7.6, the runtime directory /sapdb/data/wrk is configured to be located at /sapdb/<DBSID>/wrk to support consolidated failover environments with several MAXDB instances.
Planning the Storage Layout Database Instance Storage Considerations NOTE Using tar or cpio is not a safe method to copy or move directories to shared volumes. In certain circumstances file or ownership permissions may not be transported correctly, especially for files having the s-bit set: /sapdb/<DBSID>/db/pgm/lserver and /sapdb/<DBSID>/db/pgm/dbmsrv. These files are important for the vserver process ownership and they have an impact on starting the SAPDB via <sid>adm.
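A minimal sketch of how to verify this after copying, assuming the hypothetical MAXDB SID LC1; the ownership, mode and s-bit values have to be taken from the source directory, they are not given here:
# record ownership, permissions and s-bit on the source host
ls -l /sapdb/LC1/db/pgm/lserver /sapdb/LC1/db/pgm/dbmsrv
# after copying, restore the recorded values on the target volume
chown <recorded owner>:<recorded group> /sapdb/LC1/db/pgm/lserver /sapdb/LC1/db/pgm/dbmsrv
chmod <recorded mode> /sapdb/LC1/db/pgm/lserver /sapdb/LC1/db/pgm/dbmsrv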
Planning the Storage Layout Database Instance Storage Considerations Informix Storage Considerations Informix database setups with SGeSAP are supported in ABAP-only environments on PA 9000 servers using SGeSAP NFS cluster option 1. All Informix file systems are located below a central point of access. The complete structure gets exported via HA NFS cross-mounting. Table 2-11 Mount Point /export/informix...
Planning the Storage Layout Database Instance Storage Considerations DB2 Storage Considerations SGeSAP supports failover of DB/2 databases as part of SGeSAP NFS cluster option 1. Cluster File Systems can not be used for the DB2 part of SGeSAP clusters. Consolidation of SAP instances is possible in SGeSAP DB2 environments.
3 Step-by-Step Cluster Conversion This chapter describes in detail how to implement a SAP cluster using Serviceguard and Serviceguard Extension for SAP (SGeSAP). It is written in the format of a step-by-step guide. It gives examples for each task in great detail. Actual implementations might require a slightly different approach. Many steps synchronize cluster host configurations or virtualize SAP instances manually.
Step-by-Step Cluster Conversion liveCache packages require package type lc. Refer to chapter 4 for lc package types, and Chapter 5 for MDM package types.
Step-by-Step Cluster Conversion The tasks are presented as a sequence of steps.
Step-by-Step Cluster Conversion
Figure: Installation Flow. The flowchart covers the following decision points: whether SAP is pre-installed (Netweaver 2004s High Availability System installation, Netweaver 2004 Java-only HA option, or generic SAP installation); whether Enqueue Replication is used (Splitting a Central Instance, Creation of Replication Instance); whether CFS is used (CFS Configuration or Non-CFS Directory Structure Conversion); followed by Cluster Node Synchronization, Cluster Node Configuration, and, if external Application Servers exist, External Application Server Host Configuration.
Step-by-Step Cluster Conversion The SAP Application Server installation types are ABAP-only, Java-only and Add-in. The latter includes both the ABAP and the Java stack. In principle all SAP cluster installations look very similar. Older SAP systems get installed in the same way as they would without a cluster. Cluster conversion takes place afterwards and includes a set of manual steps.
Step-by-Step Cluster Conversion SAP Preparation This section covers the SAP specific preparation, installation and configuration before creating a highly available SAP System landscape. This includes the following logical tasks: • SAP Installation Considerations • Replicated Enqueue Conversion SAP Installation Considerations This section gives additional information that helps with the task of performing SAP installations in HP Serviceguard clusters.
Step-by-Step Cluster Conversion SAP Preparation NOTE For Java-only based installations the only possible installation option is a High Availability System installation. It is strongly recommended to use the “High Availability System” option for all new installations that are meant to be used with SGeSAP. A SAP Application Server 7.
Step-by-Step Cluster Conversion SAP Preparation NW04S400 Preparation Step: Create a Serviceguard package directory on each node of the cluster. This directory will be used by all packages that belong to the SAP System with a specific SAP System ID <SID>. mkdir -p /etc/cmcluster/<SID> NW04S420 Preparation Step: Create standard package configuration and control files for each package.
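A minimal sketch of this step, assuming the hypothetical package name ciC11 and the example SID C11; the Serviceguard templates are generated with cmmakepkg and placed in the package directory:
cmmakepkg -p /etc/cmcluster/C11/ciC11.config
cmmakepkg -s /etc/cmcluster/C11/ciC11.control.script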
Step-by-Step Cluster Conversion SAP Preparation
RUN_SCRIPT /etc/cmcluster/<SID>/<pkgname>.control.script
HALT_SCRIPT /etc/cmcluster/<SID>/<pkgname>.control.script
Specify subnets to be monitored in the SUBNET section. In the /etc/cmcluster/<SID>/<pkgname>.control.script file(s), there is a section that defines a virtual IP address array. All virtual IP addresses specified here will become associated with the SAP and database instances that are going to be installed.
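A minimal sketch of that IP address array section, assuming the hypothetical relocatable address 10.10.1.50 on subnet 10.10.1.0:
IP[0]="10.10.1.50"
SUBNET[0]="10.10.1.0"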
Step-by-Step Cluster Conversion SAP Preparation NW04S1300 Preparation Step: Before installing the SAP Application Server 7.0 some OS-specific parameters have to be adjusted. Verify or modify HP-UX kernel parameters as recommended by NW04s Master Guide Part 1. Be sure to propagate changes to all nodes in the cluster. The SAP Installer checks the OS parameter settings with a tool called “Prerequisite Checker” and stops the installer when the requirements are not met.
Step-by-Step Cluster Conversion SAP Preparation SAP Netweaver 2004s High Availability System Installer NW04S1330 Installation Step: The installation is done using the virtual IP provided by the Serviceguard package. SAPINST can be invoked with a special parameter called SAPINST_USE_HOSTNAME. This prevents the installer routines from comparing the physical hostname with the virtual address and drawing wrong conclusions. The installation of the entire SAP Application Server 7.
Step-by-Step Cluster Conversion SAP Preparation The SAPINST_USE_HOSTNAME option can be set as an environment variable, using export or setenv commands. It can also be passed to SAPINST as an argument:
cd <installation master DVD>/IM_<OS>/SAPINST/UNIX/<platform>
./sapinst SAPINST_USE_HOSTNAME=<virtual hostname>
The SAP installer should now be starting. Follow the instructions of the (A)SCS installation process and provide the required details when asked by SAPINST.
Step-by-Step Cluster Conversion SAP Preparation The virtualized installation for SAP Netweaver 2004s is completed, but the failover is not working yet. There might be SAP and Oracle daemon processes running (sapstartsrv and ocssd) for each SAP and database instance. The instances are now able to run on the installation host, once the corresponding Serviceguard packages got started. It is not yet possible to move instances to other nodes, monitor the instances or trigger automated failovers.
Step-by-Step Cluster Conversion SAP Preparation NW04J400 Preparation Step: Create a Serviceguard package directory on each node of the cluster. This directory will be used by all packages that belong to the SAP System with a specific SAP System ID <SID>. mkdir -p /etc/cmcluster/<SID> NW04J420 Preparation Step: Create standard package configuration and control files for each package. The SAP JAVA instances are mapped to SGeSAP package types as follows: The SCS Instance requires a jci package type.
Step-by-Step Cluster Conversion SAP Preparation Specify the control scripts that were created earlier as run and halt scripts:
RUN_SCRIPT /etc/cmcluster/<SID>/<pkgname>.control.script
HALT_SCRIPT /etc/cmcluster/<SID>/<pkgname>.control.script
Specify subnets to be monitored in the SUBNET section. In the /etc/cmcluster/<SID>/<pkgname>.control.script file(s), there is a section that defines a virtual IP address array.
Step-by-Step Cluster Conversion SAP Preparation NW04J445 Preparation Step: The package configuration needs to be applied and the package started. This step assumes that the cluster as such is already configured and started. Please refer to the Managing Serviceguard user’s guide if more details are required.
cmapplyconf -P /etc/cmcluster/C11/jdbcijciC11.config
cmrunpkg -n <nodename> jdbcijciC11
All virtual IP address(es) should now be configured.
Step-by-Step Cluster Conversion SAP Preparation NW04J1330 Installation Step: The installation is done using the virtual IP provided by the Serviceguard package. The SAPINST installer is invoked with the parameter SAPINST_USE_HOSTNAME to enable the installation on a virtual IP address of Serviceguard. This parameter is not mentioned in SAP installation documents, but it is officially supported. The installer will show a warning message that has to be confirmed.
Step-by-Step Cluster Conversion SAP Preparation It is recommended to pass the catalog as an argument to SAPINST. The XML file that is meant to be used with SGeSAP clusters is included on the installation DVD/CD’s distributed by SAP. SAPINST 6.40 installation options with HA catalog file The SAPINST_USE_HOSTNAME option can be set as an environment variable, using export or setenv commands. It can also be passed to SAPINST as an argument.
Step-by-Step Cluster Conversion SAP Preparation Follow the instructions of the SCS installation process and provide required details when asked by SAPINST. It is necessary to provide the virtual hostname to any SCS instance related query. NOTE A known issue may occur as the SCS tries to start the database during finalization of the installation. Given that the database is not yet installed this is not possible. For a solution refer to SAP Note 823742.
Step-by-Step Cluster Conversion SAP Preparation Afterwards, the virtualized installation for SAP J2EE Engine 6.40 is completed, but the cluster still needs to be configured. The instances are now able to run on the installation host, provided the corresponding Serviceguard packages got started up front. It is not yet possible to move instances to other nodes, monitor the instances or trigger automated failovers. Do not shut down the Serviceguard packages while the instances are running.
Step-by-Step Cluster Conversion SAP Preparation Replicated Enqueue Conversion This section describes how a SAP ABAP Central Instance DVEBMGS can be converted to use the Enqueue Replication feature for seamless failover of the Enqueue Service. The whole section can be skipped if Enqueue Replication is not going to be used. It can also be skipped in case Replicated Enqueue is already installed. The following manual conversion steps can be done for SAP applications that are based on ABAP kernel 4.
Step-by-Step Cluster Conversion SAP Preparation Splitting an ABAP Central Instance The SPOFs of the DVEBMGS instance will be isolated in a new instance called ABAP System Central Services Instance ASCS. This instance will replace DVEBMGS for the ci package type. The remaining parts of the Central Instance can then be configured as Dialog Instance D.
Step-by-Step Cluster Conversion SAP Preparation RE020 Replicated Enqueue Conversion: A volume group needs to be created for the ASCS instance. The physical device(s) should be created as LUN(s) on shared storage. Storage connectivity is required from all nodes of the cluster that should run the ASCS. For the volume group, one logical volume should get configured. For the required size, refer to the capacity consumption of /usr/sap/<SID>/DVEBMGS<INSTNR>.
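A minimal sketch of such a volume group creation, assuming the hypothetical disk device /dev/dsk/c4t0d1, the volume group name vgascsC11, an unused minor number 0x080000 and a 2 GB logical volume:
pvcreate -f /dev/rdsk/c4t0d1
mkdir /dev/vgascsC11
mknod /dev/vgascsC11/group c 64 0x080000
vgcreate /dev/vgascsC11 /dev/dsk/c4t0d1
lvcreate -L 2048 -n lvascs /dev/vgascsC11
newfs -F vxfs /dev/vgascsC11/rlvascs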
Step-by-Step Cluster Conversion SAP Preparation
#---------------------------------
# general settings
#---------------------------------
SAPSYSTEMNAME = <SID>
INSTANCE_NAME = ASCS<INSTNR>
SAPSYSTEM = <INSTNR>
SAPLOCALHOST = <ASCS virtual hostname>
SAPLOCALHOSTFULL = <ASCS virtual hostname>.<domain>
Step-by-Step Cluster Conversion SAP Preparation This template shows the minimum settings. Scan the old _DVEBMGS_ profile to see whether there are additional parameters that apply to either the Enqueue Service or the Message Service. Individual decisions need to be made whether they should be moved to the new profile.
Step-by-Step Cluster Conversion SAP Preparation
#-----------------------------------------------------------------------
_EN = en.sap<SID>_ASCS<INSTNR>
Execute_05 = local rm -f $(_EN)
Execute_06 = local ln -s -f $(DIR_EXECUTABLE)/enserver $(_EN)
Start_Program_03 = local $(_EN) pf=$(DIR_PROFILE)/<SID>_ASCS<INSTNR>_<ASCS virtual hostname>
#-----------------------------------------------------------------------
# start syslog send daemon
#-----------------------------------------------------------------------
_SE = se.sap<SID>_ASCS<INSTNR>
Step-by-Step Cluster Conversion SAP Preparation SAPLOCALHOSTFULL=.domain The exact changes depend on the individual appearance of the file for each installation. The startup profile is also individual, but usually can be created similar to the default startup profile of any Dialog Instance.
Step-by-Step Cluster Conversion SAP Preparation Creation of Replication Instance This section describes how to add Enqueue Replication Services (ERS) to a system that has ASCS and/or SCS instances. The ASCS (ABAP) or SCS (JAVA) instance will be accompanied with an ERS instance that permanently keeps a mirror of the [A]SCS internal state and memory.
Step-by-Step Cluster Conversion SAP Preparation RE095 Replicated Enqueue Conversion: A couple of required SAP executables should be copied from the central executable directory /sapmnt/<SID>/exe to the instance executable directory /usr/sap/<SID>/ERS<INSTNR>/exe. For SAP kernel 6.40 the list includes: enqt, enrepserver, ensmon, libicudata.so.30, libicui18n.so.30, libicuuc.so.30, libsapu16_mt.so, libsapu16.so, librfcum.so, sapcpe and sapstart. For some releases, the shared library file extension .
Step-by-Step Cluster Conversion SAP Preparation servicehttp/sapmc/frog.jar servicehttp/sapmc/soapclient.jar For SAP kernel 7.00 or higher, the ers.lst file needs to have additional lines: sapstartsrv sapcontrol servicehttp The following script example cperinit.sh performs this step for 7.00 kernels. for i in enqt enrepserver ensmon libicudata.so.30 libicui18n.so.30 libicuuc.so.30 libsapu16_mt.so libsapu16.so librfcum.
Step-by-Step Cluster Conversion SAP Preparation RE100 Replicated Enqueue Conversion: Create an instance profile and a startup profile for the ERS Instance. These profiles get created as <sid>adm in the instance profile directory /usr/sap/<SID>/ERS<INSTNR>/profile. Even though this is a different instance, the instance number is identical to the instance number of the [A]SCS. The virtual IP address used needs to be different in any case.
Step-by-Step Cluster Conversion SAP Preparation
SCSID = <[A]SCS instance number>
SCSHOST = <[J]CIRELOC>
enque/serverinst = $(SCSID)
enque/serverhost = $(SCSHOST)
Here is an example template for the startup profile START_ERS<INSTNR>_<[A]REPRELOC>:
#-----------------------------------------------------------------------
SAPSYSTEM = <INSTNR>
SAPSYSTEMNAME = <SID>
INSTANCE_NAME = ERS<INSTNR>
#-----------------------------------------------------------------------
# Special settings for this manually set up instance
#-----------------------------------------------------------------------
Step-by-Step Cluster Conversion SAP Preparation
_ER = er.sap$(SAPSYSTEMNAME)_$(INSTANCE_NAME)
Execute_01 = immediate rm -f $(_ER)
Execute_02 = local ln -s -f $(DIR_EXECUTABLE)/enrepserver $(_ER)
Start_Program_00 = local $(_ER) pf=$(_PF) NR=$(SCSID)
For kernel 7.
Step-by-Step Cluster Conversion HP-UX Configuration A correct HP-UX configuration ensures that all cluster nodes provide the environment and system configuration required to run SAP Application Server based business scenarios. Several of the following steps must be repeated on each node. Record the steps completed for each node, as you complete them. This helps identify errors in the event of a malfunction later in the integration process.
Step-by-Step Cluster Conversion HP-UX Configuration
• Repeat the cluster node configuration steps for each node of the cluster.
NOTE Cluster Node Configuration - This section consists of steps performed on all the cluster nodes, regardless of whether the node is a primary node or a backup node.
NOTE External Application Server Host Configuration - This section consists of steps performed on any host outside of the cluster that runs another ABAP instance of the SAP system.
Step-by-Step Cluster Conversion HP-UX Configuration Directory Structure Configuration This section adds implementation practices to the architectural decisions made in the chapter “Storage Layout Considerations”. If layout option 1 or option 2 is used, then the non-CFS directory structure conversion must be performed. An implementation based on HP LVM is described in this document. VxVM can similarly be used. Option 3 maps to the CFS configuration for SAP. In this case, usage of VxVM and CVM are mandatory.
Step-by-Step Cluster Conversion HP-UX Configuration IS030 Installation Step: Comment out of /etc/fstab the references to any file system that is classified as a shared directory in Chapter 2. Also make sure that there are no remaining entries for file systems converted in IS009. SD040 MAXDB Database Step: NOTE This step can be skipped for MAXDB instances starting with version 7.6. MAXDB is not supported on CFS, but can be combined with SAP instances that use CFS.
Step-by-Step Cluster Conversion HP-UX Configuration /sapdb//wrk OK --dbmcli on > IS049 Installation Step: If it does not yet exist, create a CVM/CFS system multi-node package and start it. Then integrate all SAP related CFS disk groups and file systems. The package dependencies to the system multi-node package get created automatically. The mount points need to be created manually on all alternate nodes first.
Step-by-Step Cluster Conversion HP-UX Configuration Non-CFS Directory Structure Conversion The main purpose of this section is to ensure the proper LVM layout and the right distribution of the different file systems that reside on shared disks. This section does not need to be consulted when using the HP Serviceguard Storage Management Suite with CFS and shared access Option 3. Logon as root to the system where the SAP Central Instance is installed (primary host).
Step-by-Step Cluster Conversion HP-UX Configuration IS020 Installation Step: Verify that the existing volume group layout is compliant with the needs of the Serviceguard package(s) as specified in the tables of Chapter 2. 1. Make sure that database specific file systems and Central Instance and/or System Central Services specific file systems are separated onto different volume groups.
Step-by-Step Cluster Conversion HP-UX Configuration
# remove everything that is different from DVEBMGS<INSTNR>
ls
# Example: rm -r SYS
#          rm -r D00
cd DVEBMGS<INSTNR>
find . -depth -print | cpio -pd /usr/sap/<SID>/DVEBMGS<INSTNR>
rm -r *    # be careful with this
cd ..
rmdir DVEBMGS<INSTNR>
2. Mark all shared volume groups as members of the cluster. This only works if the cluster services are already available.
Step-by-Step Cluster Conversion HP-UX Configuration SD040 MAXDB Database Step: NOTE This step can be skipped for MAXDB instances starting with version 7.6. Make sure you have mounted a sharable logical volume on /sapdb/<DBSID>/wrk as discussed in section MAXDB Storage Considerations in Chapter 2. Change the path of the runtime directory of the SAPDB and move the files to the new logical volume accordingly.
cd /sapdb/data/wrk/<DBSID>
find . -depth -print | cpio -pd /sapdb/<DBSID>/wrk
cd ..
Step-by-Step Cluster Conversion HP-UX Configuration IS050 Installation Step: The usage of local executables with SGeSAP is required. It is not sufficient to have a local executable directory as part of the kernel 7.x instance directory. Check if the Central Instance host and all application servers have a directory named /usr/sap/<SID>/SYS/exe/ctrun. If the directory exists, this step can be skipped. The system is already using local executables through sapcpe.
Step-by-Step Cluster Conversion HP-UX Configuration To create local executables, the SAP filesystem layout needs to be changed. The original link /usr/sap/<SID>/SYS/exe/run needs to be renamed to /usr/sap/<SID>/SYS/exe/ctrun. A new local directory /usr/sap/<SID>/SYS/exe/run will then be required to store the local copy. It needs to be initialized by copying the files sapstart and saposcol from the central executable directory /sapmnt/<SID>/exe. Make sure to match owner, group and permission settings.
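A minimal sketch of these steps, assuming the example SID C11 and the administrator user c11adm:
cd /usr/sap/C11/SYS/exe
mv run ctrun                 # rename the original link
mkdir run                    # new local directory
cp -p /sapmnt/C11/exe/sapstart /sapmnt/C11/exe/saposcol run/
chown -R c11adm:sapsys run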
Step-by-Step Cluster Conversion HP-UX Configuration IS065 Installation Step: If failover nodes have no internal application server with local executables installed, distribute the directory tree /usr/sap/<SID>/SYS from the primary node. Do not use rcp(1); it follows all links and copies many files from the shared disks that are not needed. For example, on the primary node:
cd /usr/sap/<SID>/SYS
find . -depth -print | cpio -o >/tmp/SYS.cpio
Use ftp(1) to copy the file over to the secondary node.
Step-by-Step Cluster Conversion HP-UX Configuration Cluster Node Synchronization Repeat the steps in this section for each node of the cluster that is different than the primary. Logon as root to the system where the SAP Central Instance is installed (primary host) and prepare a logon for each of its backup hosts – if not already available. IS070 Installation Step: Open the groupfile, /etc/group, on the primary side.
Step-by-Step Cluster Conversion HP-UX Configuration IS080 Installation Step: Open the password file, /etc/passwd, on the primary side. If any of the users listed in Table 3-3 Password File Users exist on the primary node, recreate them on the backup node. Assign the users on the backup nodes the same user and group ID as the primary nodes. NOTE INFORMIX users must have the same passwords, as well, on both the backup and primary nodes.
Step-by-Step Cluster Conversion HP-UX Configuration IS090 Installation Step: Look at the service file, /etc/services, on the primary side. Replicate all services listed in Table 3-4 Services on the Primary Node that exist on the primary node onto the backup node.
Step-by-Step Cluster Conversion HP-UX Configuration IS100 Installation Step: Change the HP-UX kernel on the backup node to meet the SAP requirements. Compare the Tunable Parameters section of /stand/system on all nodes. All values on the backup nodes must reach or exceed the values of the primary node. Install all HP-UX patches that are recommended for Serviceguard and patches recommended for SAP. Build a new kernel with mk_kernel(1m) on each backup node if /stand/system was changed.
Step-by-Step Cluster Conversion HP-UX Configuration
ls -a | awk '/<primary hostname>/ { system( sprintf( "mv %s %s\n", $0,\
gensub( "<primary hostname>", "<local hostname>", 1 ))) }'
exit
Never use the relocatable address in these filenames. If an application server was already installed, do not overwrite any files which will start the application server. If the rc-files have been modified, correct any hard coded references to the primary hostname. IS130 Installation Step: If the system has a SAP kernel release < 6.
Step-by-Step Cluster Conversion HP-UX Configuration OR150 Oracle Database Step: If the primary node has the ORACLE database installed: Create additional links in /oracle/<SID> on the primary node. For example:
su - ora<sid>
ln .dbenv_<primary hostname>.csh .dbenv_<secondary hostname>.csh
ln .dbenv_<primary hostname>.sh .dbenv_<secondary hostname>.sh
exit
NOTE If you are implementing an Application Server package, make sure that you install the Oracle Client libraries locally on all nodes you run the package on.
Step-by-Step Cluster Conversion HP-UX Configuration IS190 Installation Step: In full CFS environments, this step can be omitted. If CFS is only used for SAP and not for the database, then the volume group(s) that are required by the database need to be distributed to all cluster nodes. Import the shared volume groups using the minor numbers specified in Table 1 – Instance Specific Volume Groups contained in chapter “Planning the Storage Layout”.
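A sketch of such an import, assuming the hypothetical volume group vgdbC11, the backup node name node2 and the documented minor number 0x030000:
# on the primary node, create a map file and copy it to the backup node
vgexport -p -s -m /tmp/vgdbC11.map /dev/vgdbC11
rcp /tmp/vgdbC11.map node2:/tmp/vgdbC11.map
# on the backup node, recreate the group file with the documented minor number and import
mkdir /dev/vgdbC11
mknod /dev/vgdbC11/group c 64 0x030000
vgimport -s -m /tmp/vgdbC11.map /dev/vgdbC11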
Step-by-Step Cluster Conversion HP-UX Configuration SD230 MAXDB Database Step: SAPDB database specific:
su - sqd<sid>
mkdir -p /sapdb/programs
mkdir -p /sapdb/data
mkdir -p /usr/spool/sql
exit
Ownership and permissions of these directories should match those of the already existing directories on the primary host.
Step-by-Step Cluster Conversion HP-UX Configuration Cluster Node Configuration Repeat the steps in this section for each node of the cluster. Logon as root. SGeSAP needs remote login to be enabled on all cluster hosts. The traditional way to achieve this is via remote shell commands. If security concerns prohibit this, it is also possible to use secure shell access instead.
Step-by-Step Cluster Conversion HP-UX Configuration IS270 Installation Step: If you allow remote access using the secure shell mechanism: 1. Check with swlist to see if ssh (T1471AA) is already installed on the system: swlist | grep ssh If not, it can be obtained from http://www.software.hp.com/ISS_products_list.html. 2. Create a public and private key for the root user: ssh-keygen -t dsa Executing this command creates the private key file id_dsa and the public key file id_dsa.pub in the .ssh directory of the root user.
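A minimal sketch of the remaining key distribution, assuming the hypothetical cluster node name node2; the public key is appended to root's authorized_keys file on each other node and the connection is tested afterwards:
scp ~/.ssh/id_dsa.pub node2:/tmp/id_dsa.pub
ssh node2 "mkdir -p ~/.ssh; cat /tmp/id_dsa.pub >> ~/.ssh/authorized_keys; chmod 600 ~/.ssh/authorized_keys"
ssh node2 uname -n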
Step-by-Step Cluster Conversion HP-UX Configuration After successful installation, configuration and test of the secure shell communication ssh can be used by SGeSAP. This is done via setting the parameter REM_COMM to ssh in the SAP specific configuration file sap.config of section Configuration of the optional Application Server Handling .
Step-by-Step Cluster Conversion HP-UX Configuration Table 3-5 Relocatable IP Address Information name/aliases IS310 IP address Installation Step: If you use DNS: Configure /etc/nsswitch.conf to avoid problems. For example: hosts: files[NOTFOUND=continue UNAVAIL=continue \ TRYAGAIN=continue]dns IS320 Installation Step: If you establish front-end, server, and LANs to separate network traffic: Add routing entries to the internet routing configurations of /etc/rc.config.d/netconf.
Step-by-Step Cluster Conversion HP-UX Configuration ROUTE_MASK[n+1]="" ROUTE_GATEWAY[n+1]= ROUTE_COUNT[n+1]=1 ROUTE_ARGS[n+1]="" IS330 Installation Step: In some older SAP releases, during installation SAP appends some entries to the standard .profile files in the user home directories instead of using a new file defined by the SAP System. On HP-UX, by default, there is the following in the given profiles: set -u This confuses the .dbenv*.sh and .sapenv*.sh files of the SAP System.
Step-by-Step Cluster Conversion HP-UX Configuration External Application Server Host Configuration Repeat the steps in this section for each host that has an external application server installed. Logon as root. IS340 Installation Step: If you want to use a secure connection with SSL, refer to IS270 on how to set up secure shell for an application server hosts. Refer to IS260 if remsh is used instead. IS350 Installation Step: Search .
Step-by-Step Cluster Conversion Cluster Configuration This section describes the cluster software configuration with the following topics: • Serviceguard Configuration • HA NFS Toolkit Configuration • SGeSAP Configuration • Autofs Configuration Serviceguard Configuration Refer to the standard Serviceguard manual Managing Serviceguard to learn about creating and editing a cluster configuration file and how to apply it to initialize a cluster with cmquerycl(1m) and cmapplyconf(1m).
Step-by-Step Cluster Conversion Cluster Configuration /etc/cmcluster/<SID>/customer.functions Since SGeSAP B.03.12, there is only one file for the package control logic that can be used for many package combinations; SGeSAP automatically detects the package setup with the desired SAP component configured.
cp /opt/cmcluster/sap/SID/sapwas.cntl /etc/cmcluster/<SID>/sapwas.cntl
IS420 Installation Step: Enter the package directory /etc/cmcluster/<SID>.
Step-by-Step Cluster Conversion Cluster Configuration Specify subnets to be monitored in the SUBNET section. OS435 Installation Step: When using the HP Serviceguard Storage Management Suite with CFS and storage layout option 3 of chapter 2 the package(s) dependencies to the corresponding mount point packages holding the required file systems need to be defined. This ensures a successful package start only if the required CFS file system(s) are available.
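A minimal sketch of such a dependency definition in the package configuration file, assuming the hypothetical CFS mount point package name SG-CFS-MP-1:
DEPENDENCY_NAME       sapmnt_mp
DEPENDENCY_CONDITION  SG-CFS-MP-1=UP
DEPENDENCY_LOCATION   SAME_NODE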
Step-by-Step Cluster Conversion Cluster Configuration NOTE The manual startup using common startsap/stopsap scripts may not work correctly with Netweaver 2004s based Application Servers. In this case the instances need to be started directly using the sapstart binary, passing the start profile as a parameter.
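For example, a manual start of a central instance could then be performed as follows; the SID C11, the instance name DVEBMGS00 and the relocatable hostname relocci are examples only, and the actual start profile name depends on the installation:
su - c11adm
sapstart pf=/usr/sap/C11/SYS/profile/START_DVEBMGS00_relocci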
Step-by-Step Cluster Conversion Cluster Configuration
SERVICE_NAME[0]="ciC11ms"
SERVICE_CMD[0]="/etc/cmcluster/C11/sapms.mon"
SERVICE_NAME[1]="ciC11disp"
SERVICE_CMD[1]="/etc/cmcluster/C11/sapdisp.
Step-by-Step Cluster Conversion Cluster Configuration OS450 Optional Step: For non-CFS shares, it is recommended to set AUTO_VG_ACTIVATE=0 in /etc/lvmrc. Edit the custom_vg_activation() function if needed. Distribute the file to all cluster nodes. OS460 Optional Step: It is recommended to set AUTOSTART_CMCLD=1 in /etc/rc.config.d/cmcluster. Distribute the file to all cluster nodes.
Step-by-Step Cluster Conversion Cluster Configuration IS480 Installation Step: To enable the SAP specific scripts, change the customer_defined_commands sections of the package control script(s) .control.script:
function customer_defined_run_cmds
{
    . /etc/cmcluster//sapwas.cntl start
    test_return 51
}
function customer_defined_halt_cmds
{
    . /etc/cmcluster//sapwas.cntl stop
    test_return 52
}
The SAP specific control file sapwas.
Step-by-Step Cluster Conversion Cluster Configuration IS500 Installation Step: Distribute the package setup to all failover nodes. That is, create the package directories /etc/cmcluster on all backup nodes, copy all integration files below /etc/cmcluster/ from the primary host's package directory to the backup host's package directory using rcp(1) or cmcp(1m), and similarly copy /etc/cmcluster/sap.functions from the primary host to the same location on the backup hosts.
Step-by-Step Cluster Conversion Cluster Configuration HA NFS Toolkit Configuration The cross-mounted file systems need to be added to a package that provides HA NFS services. This is usually the (j)db(j)ci package, the (j)db package or the standalone sapnfs package.
Step-by-Step Cluster Conversion Cluster Configuration In the package control script .control.script, specify:
HA_NFS_SCRIPT_EXTENSION=
This ensures that only the package control script of the .control.script executes the HA NFS script. IS540 Installation Step: The following steps customize the hanfs. scripts. They customize all directories required for the usage of HA NFS.
Step-by-Step Cluster Conversion Cluster Configuration
XFS[0]="-o root=node1:node2:node3:trans:db:sap /export/usr/sap/trans"
XFS[1]="-o root=node1:node2:node3:trans:db:sap /export/sapmnt/"
XFS[2]="-o root=node1:node2:node3:trans:db:sap /export/informix/"
IS550 Installation Step: A Serviceguard service monitor for the HA NFS can be configured in section NFS MONITOR in hanfs.. The naming convention for the service should be NFS.
Step-by-Step Cluster Conversion SGeSAP Configuration SGeSAP Configuration This section deals with the configuration of the SAP specifics of the Serviceguard packages. In Chapter 1 various sample scenarios and design considerations that apply to common setups were introduced. The mapping of this design to SGeSAP is done in a SAP specific configuration file called sap.config.
Step-by-Step Cluster Conversion SGeSAP Configuration Specification of the Packaged SAP Components For each type of potentially mission-critical SAP software component there exists a set of configuration parameters in section 1 of the sap.config file. The information delivered here is specified exactly once and configures all packaged components of a given system within the cluster.
Step-by-Step Cluster Conversion SGeSAP Configuration NOTE It is not allowed to specify a db and a jdb component as part of the same package. It is not allowed to specify a [j]ci and an [a]rep component as part of the same package. Except for Dialog Instance components of type d, each component can be specified at most once. Apart from these exceptions, any subset of the SAP mission critical components can be maintained in sap.config to be part of one or more packages.
Step-by-Step Cluster Conversion SGeSAP Configuration Specify the relocatable IP address of the database instance. Be sure to use exactly the same syntax as configured in the IP[]-array in the package control file. Example: DBRELOC=0.0.0.0 In the subsection for the DB component there is an optional paragraph for Oracle and SAPDB/MAXDB database parameters. Depending on your need for special HA setups and configurations, have a look at those parameters and their descriptions.
Step-by-Step Cluster Conversion SGeSAP Configuration OS630 Subsection for the AREP component: SGeSAP supports SAP stand-alone Enqueue Service with Enqueue Replication. It’s important to distinguish between the two components: the stand-alone Enqueue Service and the replication. The stand-alone Enqueue Service is part of the ci or jci component. The arep component refers to the replication unit for protecting ABAP System Services.
Step-by-Step Cluster Conversion SGeSAP Configuration JDB=ORACLE Also, a relocatable IP address has to be configured for the database for J2EE applications in JDBRELOC. Additionally, for the J2EE RDBMS, the database instance ID in JDBSID and the database administrator in JDBADM have to be specified: JDBSID=EP6 JDBADM=oraep6 In the subsection for the JDB component there is a paragraph for optional parameters.
Step-by-Step Cluster Conversion SGeSAP Configuration OS666 Subsection for the JD component: A set of SAP Java Application Servers can be configured in each Serviceguard package. Set the relocatable IP address of the dialog instance in JDRELOC, the dialog instance name in JDNAME and the instance ID number in JDNR. The startup of the J2EE engines is triggered with the package, but the successful run of the Java applications is not yet verified. For example: JDRELOC[0]=0.0.0.
Step-by-Step Cluster Conversion SGeSAP Configuration Configuration of Application Server Handling In more complicated setups, there can be a sap.config file for each package. For example a Dialog Instance package can have its own sap.config configured to start additional non-critical Dialog Instances, whereas this setting should not be effective for a Central Instance package with the same SID.
Step-by-Step Cluster Conversion SGeSAP Configuration the failover node to allow the failover to succeed. Often, this includes stopping less important SAP Systems, namely consolidation or development environments. If any of these instances is a Central Instance, it might be that there are additional Application Servers belonging to it. Not all of them are necessarily running locally on the failover node. They can optionally be stopped before the Central Instance gets shut down.
Step-by-Step Cluster Conversion SGeSAP Configuration — ${STOP_WITH_PKG}: Add 2 to ASTREAT[*] if the Application Server should automatically be stopped during halt of the package. — ${RESTART_DURING_FAILOVER}: Add 4 to ASTREAT[*] if the Application Server should automatically be restarted if a failover of the package occurs. If the restart option is not used, the SAP ABAP Engine has to be configured to use DB-RECONNECT.
Step-by-Step Cluster Conversion SGeSAP Configuration ${START_WITH_PACKAGE}, ${STOP_WITH_PACKAGE} and ${RESTART_DURING_FAILOVER} only make sense if ASSID[]=${SAPSYSTEMNAME}, i.e. these instances need to belong to the clustered SAP component. ${STOP_IF_LOCAL_AFTER_FAILOVER} and ${STOP_DEPENDENT_INSTANCES} can also be configured for different SAP components. The following table gives an overview of ASTREAT[*] values and their combination.
Step-by-Step Cluster Conversion SGeSAP Configuration
Table 3-6 Overview of reasonable ASTREAT values
ASTREAT value   STOP_DEP   STOP_LOCAL   RESTART   STOP    START
9               0          1 (8)        0         0       1 (1)
10              0          1 (8)        0         1 (2)   0
11              0          1 (8)        0         1 (2)   1 (1)
12              0          1 (8)        1 (4)     0       0
13              0          1 (8)        1 (4)     0       1 (1)
14              0          1 (8)        1 (4)     1 (2)   0
15              0          1 (8)        1 (4)     1 (2)   1 (1)
24              1 (16)     1 (8)        0         0       0
Restrictions column (fragments): Should only be configured for AS that belong to the same SID; Should only be configured for Insta
Step-by-Step Cluster Conversion SGeSAP Configuration Example 2: The failover node is running a Central Consolidation System QAS. It shall be stopped in case of a failover to this node. ASSID[0]=QAS; ASHOST[0]=node2; ASNAME[0]=DVEBMGS; ASNR[0]=10; ASTREAT[0]=8; ASPLATFORM[0]="HP-UX" The failover node is running the Central Instance of Consolidation System QAS. There are two additional Application Servers configured for QAS, one inside of the cluster and one outside of the cluster.
Step-by-Step Cluster Conversion SGeSAP Configuration IS680 Installation Step: The REM_COMM value defines the method to be used to remotely execute commands for Application Server handling. It can be set to ssh to provide secure encrypted communications between untrusted hosts over an insecure network. For information on how to set up ssh for each node, refer to section Cluster Node Configuration. The default value is remsh.
Step-by-Step Cluster Conversion SGeSAP Configuration can significantly speed up the package start and stop, especially if Windows NT application servers are used. Use this value only if you have carefully tested and verified that timing issues will not occur. OS710 Optional Step: Specify SAPOSCOL_STOP=1 if saposcol should be stopped together with each instance that is stopped. The collector will only be stopped if there is no instance of an SAP system running on the host.
Step-by-Step Cluster Conversion SGeSAP Configuration The following list summarizes how the behavior of SGeSAP is affected by changing the CLEANUP_POLICY parameter: NOTE • lazy – no action, no cleanup of resources • normal – removes unused resources belonging to the own system • strict - uses HP-UX commands to free up all semaphores and shared memory segments that belong to any SAP Instance of any SAP system on the host if the Instance is to be started soon.
Step-by-Step Cluster Conversion SGeSAP Configuration Optional Parameters and Customizable Functions In /etc/cmcluster/ there is a file called customer.functions that provides a couple of predefined function hooks. They allow the specification of individual startup or runtime steps at certain phases of the package script execution. Therefore it is not necessary and also not allowed to change the sap.functions.
Step-by-Step Cluster Conversion SGeSAP Configuration Table 3-7 Optional Parameters and Customizable Functions List (Continued) Command: Description: stop_addons_predb relates to start_addons_postdb stop_addons_postdb relates to start_addons_predb The customer.functions template that is delivered with SGeSAP provides several examples within the hook functions. They can be activated via additional parameters as described in the following. Stubs for Java based components work in a similar fashion.
Step-by-Step Cluster Conversion SGeSAP Configuration OS750 Optional Step: Specify RFCADAPTER_START=1 to automatically start the RFC adapter component, e.g. for some early Exchange Infrastructure versions. Make sure that the JVM executables can be reached via the path of SIDADM. Example: RFCADAPTER_START=1 RFCADAPTER_CMD="run_adapter.sh" OS760 Optional Step: Specify SAPCCMSR_START=1 if there should be a CCMS agent started on the DB host automatically.
Step-by-Step Cluster Conversion SGeSAP Configuration Global Defaults The fourth section of sap.config is rarely needed. It mainly provides various variables that allow overriding commonly used default parameters. OS770 Optional Step: If there is a special demand to use values different from the default, it is possible to redefine some global parameters.
Step-by-Step Cluster Conversion SGeSAP Configuration If the use of the SAP control framework is not required, then remove the reference link from the sapinit script. Furthermore, any running sapstartsrv processes can be killed from the process list. For example:
# rm /sbin/rc3.d/S<###>sapinit
# ps -ef | grep sapstartsrv
# kill {PID of sapstartsrv}
To make use of the sapstartsrv service, configure the SAPSTARTSRV_START and SAPSTARTSRV_STOP values. The sapstartsrv daemon is started and stopped accordingly.
Step-by-Step Cluster Conversion SGeSAP Configuration /opt/wlm/toolkits/sap/bin/wlmsapmap -f /etc/cmcluster/P03/wlmprocmap.
Step-by-Step Cluster Conversion SGeSAP Configuration The following example WLM configuration file ties the CPU core shares guaranteed for dialog processing to the number of dialog processes running as of the current SAP operation mode. In times of high load, the overall dialog processing power of the instance is guaranteed to be at least 25% of a core multiplied by the number of dialog work processes running. # # Uses absolute CPU units so that 100 shares == 1 CPU.
Step-by-Step Cluster Conversion SGeSAP Configuration # # Report the number of active processes in the dialog workload group.
Step-by-Step Cluster Conversion SGeSAP Configuration Auto FS Configuration This section describes the configuration of the HP-UX automount feature that will automatically mount NFS file systems. NOTE This section only needs to be performed when using storage layout option 1. Repeat the steps in this section for each node of the cluster and for each external application server host. Logon as root. IS800 Installation Step: Check that the automounter is active. In /etc/rc.config.
Step-by-Step Cluster Conversion SGeSAP Configuration
NUM_NFSD=4
NUM_NFSIOD=4
IS820 Installation Step: Add the following line to your /etc/auto_master file:
/- /etc/auto.direct
IS830 Installation Step: Create a file called /etc/auto.direct. Identify the HA file systems as named in chapter Planning the Storage Layout and in the tables referring to the used database for each directory configured to be mounted below /export. For each HA file system, add a line to /etc/auto.direct as shown in the example below.
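A sketch of two such entries, assuming the HA NFS package exports its file systems under the relocatable name sapnfsreloc and the SAP System ID is C11 (both names are examples only):
/usr/sap/trans  sapnfsreloc:/export/usr/sap/trans
/sapmnt/C11     sapnfsreloc:/export/sapmnt/C11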
Step-by-Step Cluster Conversion SGeSAP Configuration WARNING Never kill the automount process. Always use nfs.client to stop or start it. WARNING Never stop the NFS client while the automounter directories are still in use by some processes. If nfs.client stop reports that some filesystems could not be unmounted, the automounter may refuse to handle them after nfs.client start.
Step-by-Step Cluster Conversion Database Configuration Database Configuration This section deals with additional database specific installation steps and contains the following: • Additional Steps for Oracle • Additional Steps for MAXDB • Additional Steps for Informix • Additional Steps for DB2 Additional Steps for Oracle The Oracle RDBMS includes a two-phase instance and crash recovery mechanism that enables a faster and predictable recovery time after a crash.
Step-by-Step Cluster Conversion Database Configuration • Fast-Start Parallel Rollback: configure the FAST_START_PARALLEL_ROLLBACK parameter to roll back sets of transactions in parallel. This parameter is similar to the RECOVERY_PARALLELISM parameter for the roll-forward phase. All these parameters can be used to tune the duration of Instance/Crash recovery.
Step-by-Step Cluster Conversion Database Configuration
# alter datafile 'end backup' when instance crashed during
# backup
echo "connect internal;" > $SRVMGRDBA_CMD_FILE
echo "startup mount;" >> $SRVMGRDBA_CMD_FILE
echo "spool endbackup.log" >> $SRVMGRDBA_CMD_FILE
echo "select 'alter database datafile '''||f.name||''' end backup;'" >> $SRVMGRDBA_CMD_FILE
echo "from v\$datafile f, v\$backup b" >> $SRVMGRDBA_CMD_FILE
echo "where b.file# = f.file# and b.
Step-by-Step Cluster Conversion Database Configuration OR870 Oracle Database Step: Copy $ORACLE_HOME/network/admin/tnsnames.ora to all additional application server hosts. Be careful if these files were customized after the SAP installation. OR880 Oracle Database Step: Be sure to configure and install the required Oracle NLS files and client libraries as mentioned in section Oracle Storage Considerations included in chapter Planning the Storage Layout. Also refer to SAP OSS Note 180430 for more details.
Step-by-Step Cluster Conversion Database Configuration The file consists of two identical parts.
Table 3-8 Working with the two parts of the file
Part of the file: first part. Instruction: Replace each occurrence of the word LISTENER by a new listener name. You can choose what suits your needs, but it is recommended to use the syntax LISTENER. ( host = ): Change nothing.
Part of the file: second part. Instruction: Replace each occurrence of the word LISTENER by a new listener name different from the one chosen above.
Step-by-Step Cluster Conversion Database Configuration Create an /etc/services entry for the new port you specified above. Use tlisrv as the service name; the name itself is not significant. This entry has to be made on all hosts that run an instance that belongs to the system. This includes all external application server hosts outside of the cluster. OR900 Optional Step: If you use multiple packages for the database and SAP components: Set the optional parameter SQLNET.EXPIRE_TIME in sqlnet.
Step-by-Step Cluster Conversion Database Configuration If you configure multiple Application Servers to be started with the parallel startup option in sap.config, make sure the tcp_conn_request_max ndd parameter on the DB nodes (primary and backup) is configured with an appropriate value: Example: tcp_conn_request_max = 1024 If this parameter is set too low, incoming TCP connections from starting SAP Application Servers that want to connect to the DB via the Oracle Listener may halt.
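A sketch of how this can be set on HP-UX (the value 1024 is taken from the example above):
ndd -set /dev/tcp tcp_conn_request_max 1024
To make the setting persistent across reboots, a corresponding entry can be added to /etc/rc.config.d/nddconf, for example:
TRANSPORT_NAME[0]=tcp
NDD_NAME[0]=tcp_conn_request_max
NDD_VALUE[0]=1024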
Step-by-Step Cluster Conversion Database Configuration OR940 Oracle Step: Additional steps for Oracle 10g RDBMS: Even though no cluster services are needed when installing an Oracle 10g Single Instance, the cluster services daemon ($ORACLE_HOME/bin/ocssd.bin) is installed. It remains running even when the database is shut down. This keeps the file system busy during package shutdown. Therefore it is mandatory that the cluster services daemon is disabled.
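One common way to disable it is to remove or comment out the entry that respawns the daemon from /etc/inittab and re-read the file. The exact entry name depends on the Oracle installation, so verify it before editing; this is a sketch only:
# locate the entry added by the Oracle installer
grep cssd /etc/inittab
# comment out the corresponding line, then re-read /etc/inittab
init q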
Step-by-Step Cluster Conversion Database Configuration Additional Steps for MAXDB Logon as root to the primary host of the database where the db or dbci package is running in debug mode. SD930 MAXDB Database Step: If environment files exist in the home directory of the MAXDB user on the primary node, create additional links for any secondary. For example: su - sqd ln -s .dbenv_.csh .dbenv_.csh ln -s .dbenv_.sh .dbenv_.
Step-by-Step Cluster Conversion Database Configuration For all previously listed user keys, run the following command as the adm administrator:
# dbmcli -U
quit exits the upcoming prompt:
# dbmcli on : > quit
The hostname shown should be relocatable. If it is not, the XUSER mapping has to be recreated.
Step-by-Step Cluster Conversion Database Configuration Additional Steps for Informix Logon as root to the primary host of the database where the package is running in debug mode. IR970 Informix Database Step: Perform the following steps as an INFORMIX user: su - informix Comment out the remsh sections in the files called .dbenv.csh and .dbenv.sh in the home directory. If they are missing, check for alternative files with hostnames in them: .dbenv_.csh and .dbenv_.sh.
Step-by-Step Cluster Conversion Database Configuration IR980 Informix Database Step: In Step IR490 you copied two files to the INFORMIX home directory of the primary node. At this time, still as an INFORMIX user, customize these files by replacing the string relocdb with your individual (or in case of database package). For example: .customer.sh: .customer.csh: ####################### .customer.
Step-by-Step Cluster Conversion Database Configuration IR990 Informix Database Step: Perform the following steps as adm user: su - adm Copy the files that were manipulated to the home directory: cp /home/informix/.dbenv* ~ cp /home/informix/.customer.* ~ Copy the files over to the home directories of adm on all cluster nodes and all external application server hosts. IR1000 Informix Database Step: Perform the following steps as an INFORMIX user.
Step-by-Step Cluster Conversion Database Configuration IR1020 Informix Database Step: Add a line with the rel-IP-name of the database package to the file /informix//etc/sqlhosts.soc. After the SAP installation this file will be similar to:
demo_on onipcshm on_hostname on_servername
demo_se seipcpip se_hostname sqlexec
shm onipcshm sapinf
tcp onsoctcp sapinf
Change the entries to the rel-IP-name of the database package.
Step-by-Step Cluster Conversion Database Configuration Additional Steps for DB2 DB1030 DB2 Database Step: DB2 installs the DB2 executables on local disks below /opt. It is necessary to install a copy of the DB2 executables on all nodes where the package containing the database may run. It is important to synchronize the patch level of the DB2 executables whenever installing fixes. The executables should be installed using the DB2 installer because DB2 checks the version when starting the DB server.
Step-by-Step Cluster Conversion Database Configuration Use the following commands to catalog the database for both local and remote access. The commands need to be executed as db2 and on all nodes that can run the DB2 database package.
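A sketch of such catalog commands, using the example entries shown below (aliases EC2 and EC2L, node NODEEC2, local directory /db2/EC2); the relocatable hostname dbreloc and the port 5912 are assumptions that have to be replaced by your actual values:
db2 catalog database EC2 as EC2L on /db2/EC2
db2 catalog tcpip node NODEEC2 remote dbreloc server 5912
db2 catalog database EC2L as EC2 at node NODEEC2 authentication SERVER_ENCRYPT
db2 list database directory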
Step-by-Step Cluster Conversion Database Configuration
Database 1 entry:
 Database alias                      = EC2
 Database name                       = EC2L
 Node name                           = NODEEC2
 Database release level              = a.00
 Comment                             =
 Directory entry type                = Remote
 Authentication                      = SERVER_ENCRYPT
 Catalog database partition number   = -1
 Alternate server hostname           =
 Alternate server port number        =
Database 2 entry:
 Database alias                      = EC2L
 Database name                       = EC2
 Local database directory            = /db2/EC2
 Database release level              = a.
Step-by-Step Cluster Conversion SAP Application Server Configuration SAP Application Server Configuration This section describes some required configuration steps that are necessary for SAP to become compatible with a cluster environment. The following configuration steps do not need to be performed if the SAP System was installed using virtual IPs. The steps are only required to make a non-clustered installation usable for clustering.
Step-by-Step Cluster Conversion SAP Application Server Configuration If you don’t have Replicated Enqueue: rdisp/enqname = __ The following parameters are only necessary if an application server is installed on the adoptive node.
Step-by-Step Cluster Conversion SAP Application Server Configuration The instance profile name is often extended by the hostname. You do not need to change this filename to include the relocatable hostname. SGeSAP also supports the full instance virtualization of SAP WAS 6.40 and beyond. The startsap mechanism can then be called by specifying the instance's virtual IP address. In this case, it is required to use virtual IP addressing in the filenames of the instance profile and instance start profile.
Step-by-Step Cluster Conversion SAP Application Server Configuration IS1150 Installation Step: Connect with a SAPGUI. Import the changed SAP profiles within SAP using transaction RZ10. After importing the profiles, check with report RSPARAM in transaction SE38 whether the parameters SAPLOCALHOST and SAPLOCALHOSTFULL are correct. If you do not import the profiles, the profiles within SAP can be edited by the SAP Administrator.
Step-by-Step Cluster Conversion SAP Application Server Configuration IS1180 Installation Step: Within the SAP Computing Center Management System (CCMS) you can define operation modes for SAP instances. An operation mode defines a resource configuration for the instances in your SAP system. It can be used to determine which instances are started and stopped, and how the individual services are allocated for each instance in the configuration.
Step-by-Step Cluster Conversion SAP Application Server Configuration to the SAP Message Server or the SAP Web Dispatcher Server. These servers then act as the physical point of access for HTTP requests. They classify requests and send HTTP redirects to the client in order to connect them to the required ICM Instance. This only works properly if the bound ICM instances propagate virtual IP addresses.
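One way to propagate a virtual address is via the instance profile of the bound ICM instance. A sketch, assuming the virtual hostname relocci and the domain company.com (both are examples only; check the SAP documentation for the parameters relevant to your release):
icm/host_name_full = relocci.company.com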
Step-by-Step Cluster Conversion SAP Application Server Configuration SAP J2EE Engine specific installation steps This section is applicable for SAP J2EE Engine 6.40 based installations that were performed prior to the introduction of SAPINST installations on virtual IPs. There is a special SAP OSS Note 757692 that explains how to treat a hostname change and thus the virtualization of a SAP J2EE Engine 6.40.
Step-by-Step Cluster Conversion SAP Application Server Configuration
Table 3-9 IS1130 Installation Step
Choose: cluster_data -> dispatcher -> cfg -> kernel -> Propertysheet LockingManager
Change the following values: enqu.host =
Choose: cluster_data -> dispatcher -> cfg -> kernel -> Propertysheet ClusterManager
Change the following values: ms.host =
Choose: cluster_data -> server -> cfg -> kernel -> Propertysheet LockingManager
Change the following values: enqu.
SAP Supply Chain Management 4 SAP Supply Chain Management Within SAP Supply Chain Management (SCM) scenarios, two main technical components have to be distinguished: the APO System and the liveCache. An APO System is based on SAP Application Server technology. Thus, ci, db, dbci, d and sapnfs packages may be implemented for APO. These APO packages are set up similarly to Netweaver packages, with only one difference: APO needs access to liveCache client libraries.
SAP Supply Chain Management This chapter describes how to configure and setup SAP Supply Chain Management using a liveCache cluster. The SGeSAP lc package type is explained in the context of failover and restart clusters as well as hot standby architectures.
SAP Supply Chain Management NOTE For installation steps in this chapter that require the adjustment of SAP-specific parameters in order to run the SAP application in a switchover environment, example values are usually given. These values are for reference ONLY and it is recommended to read and follow the appropriate SAP OSS notes for SAP's latest recommendations. Whenever possible, the SAP OSS note number is given.
SAP Supply Chain Management More About Hot Standby More About Hot Standby A fatal liveCache failure results in a restart attempt of the liveCache instance. This will take place either locally or, as part of a cluster failover, on a remote node. A key aspect here is that liveCache is an in-memory database technology. While the bare instance restart as such is quick, the reload of the liveCache in-memory content can take a significant amount of time, depending on the size of the liveCache data spaces.
SAP Supply Chain Management More About Hot Standby A hot standby liveCache is a second liveCache instance that runs with the same System ID as the original master liveCache. It will be waiting on the secondary node of the cluster during normal operation. A failover of the liveCache cluster package does not require any time consuming filesystem move operations or instance restarts. The hot standby simply gets notified to promote itself to become the new master.
SAP Supply Chain Management More About Hot Standby The liveCache logging gets continuously verified during operation. An invalid entry in the log files gets detected immediately. This avoids the hazardous situation of not becoming aware of corrupted log files until they fail to restore a production liveCache instance. A data storage corruption that could happen during operation of the master does not get replicated to the only logically coupled standby.
SAP Supply Chain Management Planning the Volume Manager Setup Planning the Volume Manager Setup The following describes the lc package of SGeSAP. The lc package was developed according to the SAP recommendations and fulfills all SAP requirements for liveCache failover solutions. liveCache distinguishes an instance-dependent path /sapdb/ and two instance-independent paths IndepData and IndepPrograms. By default all three point to a directory below /sapdb.
SAP Supply Chain Management Planning the Volume Manager Setup
Table 4-2 File System Layout for liveCache Package running separate from APO (Option 1)
Storage Type   Package   Mount Point
shared         lc        /sapdb/ data
shared         lc        /sapdb/ /datan
shared         lc        /sapdb/ /logn
shared         lc        /var/spool/sql
shared         lc        /sapdb/programs
(The Volume Group, Logical Volume and Device Number columns are site-specific.)
In the above layout all relevant files get shared via standard procedures.
SAP Supply Chain Management Planning the Volume Manager Setup Option 2: Non-SAPDB Environments Cluster Layout Constraints: • There is no SAPDB or additional liveCache running on the cluster nodes. In particular, the APO System RDBMS is based on Informix, Oracle or DB2, but not on SAPDB. • There is no hot standby liveCache system configured. Often APO does not rely on SAPDB as the underlying database technology.
SAP Supply Chain Management Planning the Volume Manager Setup This can be any standard, standalone NFS package. The SAP global transport directory should already be configured in a similar package. This explains why this package is often referred to as “the trans package” in related literature. A trans package can optionally be extended to also serve the global liveCache fileshares.
SAP Supply Chain Management Planning the Volume Manager Setup /sapdb/data/wrk: The working directory of the main liveCache/SAPDB processes is also a subdirectory of the IndepData path for non-HA setups. If a liveCache restarts after a crash, it copies important files from this directory to a backup location. This information is then used to determine the reason of the crash. In HA scenarios, for liveCache versions lower than 7.6, this directory should move with the package.
SAP Supply Chain Management Planning the Volume Manager Setup /var/spool/sql: This directory hosts local runtime data of all locally running liveCache/SAPDB Instances. Most of the data in this directory becomes meaningless in the context of a different host after failover. The only critical portion that still has to be accessible after failover is the initialization data in /var/spool/sql/ini. This directory is almost always very small (< 1MB).
SAP Supply Chain Management Planning the Volume Manager Setup Option 4: Hot Standby liveCache Two liveCache instances are running in a hot standby liveCache cluster during normal operation. No instance failover takes place. This makes it possible to keep instance-specific data local to each node. The cluster design follows the principle of sharing as little as possible.
SAP Supply Chain Management MAXDB Storage Considerations MAXDB Storage Considerations SGeSAP supports current MAXDB, liveCache and older SAPDB releases. The considerations made below apply similarly to MAXDB, liveCache and SAPDB clusters unless otherwise indicated. MAXDB distinguishes an instance-dependent path /sapdb/ and two instance-independent paths, called IndepData and IndepPrograms. By default all three point to a directory below /sapdb.
SAP Supply Chain Management MAXDB Storage Considerations /sapdb/programs/runtime/7402=7.4.2.0, For MAXDB and liveCache Version 7.5 (or higher) the SAP_DBTech.ini file does not contain sections [Installations], [Databases] and [Runtime]. These sections are stored in separate files Installations.ini, Databases.ini and Runtimes.ini in the IndepData path /sapdb/data/config. A sample SAP_DBTech.ini, Installations.ini, Databases.ini and Runtimes.ini for a host with a liveCache 7.5 (LC2) and an APO 4.
SAP Supply Chain Management MAXDB Storage Considerations LC010 liveCache Installation Step: If you decided to use option three, and the liveCache version is lower than 7.6: 1. Log on as adm on the machine on which liveCache was installed. Make sure, you have mounted a sharable logical volume on /sapdb//wrk as discussed above. 2. Change the path of the runtime directory of the liveCache and move the files to the new logical volume accordingly. cd /sapdb/data/wrk/ find .
SAP Supply Chain Management MAXDB Storage Considerations LC020 liveCache Installation Step: Mark all shared non-CFS liveCache volume groups as members of the cluster. This only works if the cluster services are already available. For example:
cd /
# umount all logical volumes of the volume group
vgchange -a n
vgchange -c y
vgchange -a e
# remount the logical volumes
The device minor numbers must be different from all device minor numbers gathered on the other hosts.
SAP Supply Chain Management HP-UX Setup for Options 1, 2 and 3 HP-UX Setup for Options 1, 2 and 3 This section describes how to synchronize and configure the HP-UX installations on all cluster nodes so that the same liveCache instance is able to run on any of these nodes. This section does not apply to hot standby systems, since they never perform any instance failover. For systems with hot standby this entire HP-UX setup section can be skipped. Clustered Node Synchronization 1.
SAP Supply Chain Management HP-UX Setup for Options 1, 2 and 3 su - adm mv .dbenv_.csh .dbenv_.csh mv .dbenv_.sh .dbenv_.sh For liveCache 7.6: su - adm mv .lcenv_.csh .lcenv_.csh mv .lcenv_.sh .lcenv_.sh NOTE Never use the relocatable address in these file names. LC061 liveCache Installation Step: Copy file /etc/opt/sdb to the second cluster node.
SAP Supply Chain Management HP-UX Setup for Options 1, 2 and 3 LC080 liveCache Installation Step: On the backup node, create a directory as future mountpoint for all relevant directories from the table of section that refers to the layout option you chose. Option 1: mkdir /sapdb Option 2: mkdir -p /sapdb/data mkdir /sapdb/ Option 3: mkdir -p /sapdb/ Cluster Node Configuration LC100 liveCache Installation Step: Repeat the steps in this section for each node of the cluster.
SAP Supply Chain Management HP-UX Setup for Options 1, 2 and 3 LC130 liveCache Installation Step: If you establish frontend and server LANs to separate network traffic: Add routing entries to the internet routing configurations of /etc/rc.config.d/netconf. This is the only phase of the whole installation in which you will need to specify addresses of the server LAN. Route all relocatable client LAN addresses to the local server LAN addresses.
SAP Supply Chain Management HP-UX Setup for Option 4 HP-UX Setup for Option 4 This section describes how to install a hot standby instance on the secondary node. LC143 hot standby Installation Step: The hot standby instance of storage option 4 can be installed with the installer routine that comes on the SAP liveCache media. There is no special option for a hot standby provided by SAP yet, but installation is very straight-forward. “Install” or “patch” the master instance on the standby host.
SAP Supply Chain Management HP-UX Setup for Option 4 LC149 hot standby Installation Step: The liveCache instances require full administration control to handle their storage configuration. There is a special callback binary on the primary and secondary node that acts as a gateway to these functions.
SAP Supply Chain Management HP-UX Setup for Option 4 LC157 hot standby Installation Step: The activation of the hot standby happens on the master instance. Logon to the instance via dbmcli and issue the following commands:
hss_enable node= lib=/opt/cmcluster/sap/lib/hpux64/librtehssgesap.so
hss_addstandby login=,
Alternatively, you can use the configuration dialog of the Hot Standby button in the SAP Database Manager configuration to set these values.
SAP Supply Chain Management SGeSAP Package Configuration SGeSAP Package Configuration This section describes how to configure the SGeSAP lc package. LC165 liveCache Installation Step: Install the product depot file for SGeSAP (B7885BA or T2803BA) using swinstall (1m) if this has not been done already. The installation staging directory is /opt/cmcluster/sap. All original product files are copied there for reference purposes.
SAP Supply Chain Management SGeSAP Package Configuration • For lc.config the following settings are strongly recommended:
PACKAGE_NAME   lc
PACKAGE_TYPE   FAILOVER
RUN_SCRIPT     /etc/cmcluster//lc.control.script
HALT_SCRIPT    /etc/cmcluster//lc.control.script
SERVICE_NAME   SGeSAPlc
• For lc.
SAP Supply Chain Management SGeSAP Package Configuration LC190 liveCache Installation Step: The /etc/cmcluster//sap.config configuration file of a liveCache package is similar to the configuration file of Netweaver Instance packages. The following standard parameters in sap.
SAP Supply Chain Management SGeSAP Package Configuration The LCMONITORINTERVAL variable specifies how often the monitoring polling occurs (in sec.) LC193 hot standby Installation Step: The following additional parameters in sap.config have to be set for a hot standby liveCache package: Hot standby systems have a preferred default machine for the instance with the master role. This should usually correspond to the hostname of the primary node of the package.
SAP Supply Chain Management SGeSAP Package Configuration LC198 Optional hot standby Installation Step: The hot standby liveCache instance does not detect a regular liveCache master shutdown. It will not reconnect to the restarted liveCache master. The sap.config parameter LC_STANDBY_RESTART=1 will (re)start a remote hot standby system after the master liveCache instance package is started.
SAP Supply Chain Management SGeSAP Package Configuration Service Monitoring SAP recommends the use of service monitoring in order to test the runtime availability of liveCache processes. The monitor, provided with SGeSAP, periodically checks the availability and responsiveness of the liveCache system. The sanity of the monitor will be ensured by standard Serviceguard functionality. The liveCache monitoring program is shipped with SGeSAP in the saplc.mon file.
SAP Supply Chain Management SGeSAP Package Configuration NOTE Activation of pause mode, state changes of liveCache and liveCache restart attempts get permanently logged into the standard package logfile /etc/cmcluster//lc.control.script.log. The monitor can also be paused by standard administrative tasks that use the administrative tools delivered by SAP. Stopping the liveCache using the SAP lcinit shell command or the APO LC10 transaction will send the monitoring into pause mode.
SAP Supply Chain Management APO Setup Changes APO Setup Changes Running liveCache within a Serviceguard cluster package means that the liveCache instance is now configured for the relocatable IP of the package. This configuration needs to be adopted in the APO system that connects to this liveCache. Figure 4-3 shows an example for configuring LCA.
SAP Supply Chain Management APO Setup Changes GS230 liveCache Installation Step: 1. Configure the XUSER file in the APO user home and liveCache user home directories. If an .XUSER file does not already exist, you must create it. 2. The XUSER file in the home directory of the APO administrator and the liveCache administrator keeps the connection information and grant information for a client connecting to liveCache. The XUSER content needs to be adopted to the relocatable IP the liveCache is running on.
SAP Supply Chain Management APO Setup Changes
ADMIN/ key:
# dbmcli -n  -d  -u control,control -uk c -us control,control
ONLINE/ key:
# dbmcli -n  -d  -u superdba,admin -uk w -us superdba,admin
LCA key:
# dbmcli -n  -d  -us control,control -uk 1LCA -us control,control
NOTE Refer to the SAP documentation to learn more about the dbmcli syntax.
SAP Supply Chain Management General Serviceguard Setup Changes General Serviceguard Setup Changes Depending on the storage option chosen, the globally available directories need to be added to different existing Serviceguard packages. The following installation steps require that the system has already been configured to use the automounter feature. If this is not the case, refer to installation steps IS730 to IS770 found in Chapter 3 of this manual.
SAP Supply Chain Management General Serviceguard Setup Changes For option 3: 1. Add two shared logical volume for /export/sapdb/programs and /export/sapdb/data to the global NFS package (sapnfs). 2. Copy the content of /sapdb/programs and the remaining content of /sapdb/data from the liveCache primary node to these logical volumes. 3. Make sure /sapdb/programs and /sapdb/data exist as empty mountpoints on all hosts of the liveCache package.
SAP Master Data Management (MDM) 5 SAP Master Data Management (MDM) Starting with SGeSAP version B.04.51, there is now a failover mechanism for the four server components of SAP Master Data Management (MDM) 5.5, i.e. the MDB, MDS, MDIS and MDSS servers. They can be handled as a single package or split up into four individual packages.
SAP Master Data Management (MDM) Master Data Management - Overview Master Data Management - Overview Figure 5-1 provides a general flow of how SGeSAP works with MDM.
Figure 5-1
• The Upper Layer - contains the User Interface components.
• The Middle Layer - contains the MDM Server components. The MDM components to be clustered with SGeSAP are from the middle layer. These are MDS, MDIS, MDSS and MDB.
SAP Master Data Management (MDM) Master Data Management - Overview Master Data Management User Interface Components Table 5-1 details the two MDM user interface categories as they relate to SGeSAP:
Table 5-1 MDM User Interface and Command Line Components
Category: MDM GUI (Graphical User Interface) clients
Description: MDM Console, MDM Data Manager Client, MDM Import Manager.... Allows you to use, administer and monitor the MDM components.
SAP Master Data Management (MDM) Master Data Management - Overview MDM Server Components The MDM Server Components provide consistent master data within a heterogeneous system landscape and allows organizations to consolidate and centrally manage their master data. The MDM Server components import and export this information to and from various clients across a network.
SAP Master Data Management (MDM) Master Data Management - Overview Table 5-2 MDM Server Components (Continued) Component Description MDSS (MDM Syndication Server) Allows you to export data automatically in conjunction with predefined outbound ports and syndicator maps to various remote systems (e.g. ERP master data clients, Web catalogs and files with flat or relational formats). MDB (MDM Database Server) Provides Database services for the MDM configuration.
SAP Master Data Management (MDM) Master Data Management - Overview SAP Netweaver XI components Inbound/Outbound XI components - Provide the Inbound and Outbound plug-ins to allow the connection of the MDM system to remote systems. NOTE Go to http://service.sap.com/installmdm to access the SAP Service Marketplace web site, where you can then access the required SAP documentation needed for this installation. You must be a SAP registered user to access these documents. MDM 5.5 SP05 - Master Guide MDM 5.
SAP Master Data Management (MDM) Installation and Configuration Considerations Installation and Configuration Considerations The following sections contain a step-by-step guide on the components required to install MDM in a SGeSAP (Serviceguard extension for SAP) environment. The MDM server components that are relevant for the installation and configuration of the SGeSAP scripts are: MDS, MDIS, MDSS and MDB.
SAP Master Data Management (MDM) Installation and Configuration Considerations The MDM SGeSAP File System Layout The following file system layout will be used for the MDM Server components. /oracle/MDM For performance reasons, the MDM database (MDB) file systems will be based on local and relocatable storage (the physical storage volume / file system can relocate between the cluster nodes, but only ONE node in the cluster will mount the file system).
SAP Master Data Management (MDM) Installation and Configuration Considerations NOTE The advantage of the NFS server/client approach from a system management viewpoint is only one copy of all the MDM server files have to be kept and maintained on the cluster instead of creating and distributing copies to each cluster node. Regardless of on which node any of the MDM server components are running in the cluster - the MDM server files are always available in the /home/mdmuser directory.
SAP Master Data Management (MDM) Installation and Configuration Considerations Single or Multiple MDM Serviceguard Package Configurations MDM with SGeSAP can be configured to run either as a “Single" Serviceguard package (ONE package) OR as "Multiple" Serviceguard packages (FOUR+ONE).
SAP Master Data Management (MDM) Installation and Configuration Considerations Multiple MDM Serviceguard packages (FOUR+ONE) • In the case of "Multiple" MDM Serviceguard packages, each of the four MDM components (mdb, mds, mdis and mdss) is configured and run in a separate (mdbMDM, mdsMDM, mdssMDM, mdisMDM) Serviceguard package.
SAP Master Data Management (MDM) Installation and Configuration Considerations MDM010 Setup Step: Add MDM virtual IP addresses and hostnames to /etc/hosts. The database will be accessible under the 172.16.11.97/mdbreloc virtual IP address/hostname, the MDS component under the 172.16.11.95/mdsreloc virtual IP address/hostname and the MDM NFS file systems under the 172.16.11.95/mdmnfsreloc virtual IP address/hostname. MDIS and MDSS do not require a separate virtual address.
SAP Master Data Management (MDM) Installation and Configuration Considerations MDM030 Setup Step: Create file systems and mount points on the first cluster node. The following commands create the LVM file systems and mount points for the MDM components. The database file system in this configuration has a size of 20 GB (/oracle/MDM), the file system for the MDM components (/export/home/mdmuser) has a size of 7GB.
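A sketch of how such a volume group and file system can be created on HP-UX; the disk device name is an example only and has to be adapted, while the group file minor number matches the vgimport step shown later:
pvcreate -f /dev/rdsk/c4t0d1
mkdir /dev/vgmdmoradb
mknod /dev/vgmdmoradb/group c 64 0x500000
vgcreate /dev/vgmdmoradb /dev/dsk/c4t0d1
lvcreate -L 20480 -n lvmdmoradb /dev/vgmdmoradb
newfs -F vxfs /dev/vgmdmoradb/rlvmdmoradb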
SAP Master Data Management (MDM) Installation and Configuration Considerations Create NFS export/import mount points for the /home/mdmuser home directory. The directory /export/home/mdmuser is the NFS server ( NFS export) mount point, /home/mdmuser is the NFS client (NFS import) mount point. mkdir -p /export/home/mdmuser mkdir –p /home/mdmuser Create the mount point for the MDM binaries directory. mkdir –p /opt/MDM MDM040 Setup Step: Create file systems and mount points on the other cluster nodes.
SAP Master Data Management (MDM) Installation and Configuration Considerations
mknod /dev/vgmdmoradb/group c 64 0x500000
mknod /dev/vgmdmuser/group c 64 0x530000
vgimport -s -m /tmp/vgmdmoradb.map /dev/vgmdmoradb
vgimport -s -m /tmp/vgmdmuser.
SAP Master Data Management (MDM) Installation and Configuration Considerations
shell: /bin/sh
/etc/group
----------
oper::202:oramdm
dba::201:oramdm
/oracle/MDM/.profile
--------------------
export ORACLE_HOME=/oracle/MDM/920_64
export ORA_NLS=/oracle/MDM/920_64/ocommon/NLS_723/admin/data
ORA_NLS32=/oracle/MDM/920_64/ocommon/NLS_733/admin/data
export ORA_NLS32
export ORA_NLS33=/oracle/MDM/920_64/ocommon/nls/admin/data
export ORACLE_SID=MDM
NLSPATH=/opt/ansic/lib/nls/msg/%L/%N.
SAP Master Data Management (MDM) Installation and Configuration Considerations
alias __C=`echo "\006"`   # right arrow = forward a character
alias __D=`echo "\002"`   # left arrow = back a character
alias __H=`echo "\001"`   # home = start of line
MDM060 Setup Step: Create the mdmuser account for the MDM user. The following parameters and settings were used:
/etc/passwd
-----------
user: mdmuser
uid: 205
home: /home/mdmuser
group: dba
shell: /bin/sh
/etc/group
----------
dba::201:oramdm,mdmuser
/home/mdmuser/.
SAP Master Data Management (MDM) Installation and Configuration Considerations PATH=${HOME}/oracle/MDM/920_64/bin:.
SAP Master Data Management (MDM) Installation and Configuration Considerations MDM080 Setup Step: These scripts will mount the appropriate storage volumes, enable virtual IP addresses and export the NFS file systems to the NFS clients. Set up the Serviceguard directory for the mdmNFS package. Create a directory /etc/cmcluster/MDMNFS, which will contain the Serviceguard templates and scripts for NFS, as sketched below.
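A minimal sketch, assuming the Serviceguard NFS toolkit template files were installed under /opt/cmcluster/nfs (verify the location of your toolkit installation before copying):
mkdir /etc/cmcluster/MDMNFS
cp /opt/cmcluster/nfs/* /etc/cmcluster/MDMNFS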
SAP Master Data Management (MDM) Installation and Configuration Considerations
vi /etc/cmcluster/MDMNFS/mdmNFS.config
PACKAGE_NAME          mdmNFS
NODE_NAME             *
RUN_SCRIPT            /etc/cmcluster/MDMNFS/mdmNFS.control.script
RUN_SCRIPT_TIMEOUT    NO_TIMEOUT
HALT_SCRIPT           /etc/cmcluster/MDMNFS/mdmNFS.control.script
HALT_SCRIPT_TIMEOUT   NO_TIMEOUT
The file /etc/cmcluster/MDMNFS/mdmNFS.control.script contains information on the volume groups (VG[0]), the LVM logical volumes (LV[0]), the file system mount point and options (FS..
SAP Master Data Management (MDM) Installation and Configuration Considerations XFS[0]="-o root=clunode1:clunode2 /export/home/mdmuser" All nodes of the cluster require a copy of the configuration changes. Copy the MDM NFS Serviceguard files to the second cluster node: scp -rp /etc/cmcluster/MDMNFS clunode2:/etc/cmcluster/MDMNFS MDM100 Setup Step: Register the Serviceguard mdmNFS package in the cluster and run it: clunode1: cmapplyconf -P /etc/cmcluster/MDMNFS/mdmNFS.
SAP Master Data Management (MDM) Installation and Configuration Considerations Creating an initial Serviceguard package for the MDB Component The following steps will depend if the MDM Database will be configured as “Single” or “Multiple” MDM Serviceguard package configurations as described earlier. An “initial” Serviceguard package means that initially only the storage volumes, the file systems and the virtual IP address will be configured as a Serviceguard package.
SAP Master Data Management (MDM) Installation and Configuration Considerations MDM200 Setup Step: (a) Single MDM Serviceguard package - create a mgroupMDM package. Create Serviceguard templates and edit the files for the mgroupMDM package. For information on the variables used in this step see the configuration step of the nfsMDM package above. clunode1: mkdir /etc/cmcluster/MDM cmmakepkg -s /etc/cmcluster/MDM/mgroupMDM.control.script cmmakepkg -p /etc/cmcluster/MDM/mgroupMDM.
SAP Master Data Management (MDM) Installation and Configuration Considerations MDM200 Setup Step: (a) Single MDM Serviceguard package - create a mgroupMDM package. Create Serviceguard templates and edit the files for the mgroupMDM package. For information on the variables used in this step see the configuration step of the nfsMDM package above.
clunode1:
mkdir /etc/cmcluster/MDM
cmmakepkg -s /etc/cmcluster/MDM/mgroupMDM.control.script
cmmakepkg -p /etc/cmcluster/MDM/mgroupMDM.
SAP Master Data Management (MDM) Installation and Configuration Considerations
FS[0]="/oracle/MDM"; \
FS_MOUNT_OPT[0]=""; \
FS_UMOUNT_OPT[0]=""; \
FS_FSCK_OPT[0]=""; \
FS_TYPE[0]="vxfs"
scp -pr /etc/cmcluster/MDM clunode2:/etc/cmcluster/MDM
cmapplyconf -P /etc/cmcluster/MDM/mgroupMDM.config
cmrunpkg mgroupMDM
MDM202 Setup Step: (b) Multiple MDM Serviceguard packages - create a mdbMDM package. Create Serviceguard templates and edit the files for the mdbMDM package.
SAP Master Data Management (MDM) Installation and Configuration Considerations
HALT_SCRIPT_TIMEOUT NO_TIMEOUT
vi /etc/cmcluster/MDM/mdbMDM.control.script
IP[0]="172.16.11.97"
SUBNET[0]="172.16.11.0"
VG[0]="vgmdmoradb"
LV[0]="/dev/vgmdmoradb/lvmdmoradb"; \
FS[0]="/oracle/MDM"; \
FS_MOUNT_OPT[0]=""; \
FS_UMOUNT_OPT[0]=""; \
FS_FSCK_OPT[0]=""; \
FS_TYPE[0]="vxfs"
scp -pr /etc/cmcluster/MDM clunode2:/etc/cmcluster/MDM
cmapplyconf -P /etc/cmcluster/MDM/mdbMDM.
SAP Master Data Management (MDM) Installation and Configuration Considerations MDM204 Setup Step: Create Serviceguard templates and edit the files for the mdsMDM package. For information on the variables used in this step see the configuration step of the nfsMDM package above. NOTE As the /home/mdmuser/mds file system is NFS based, storage volumes or file systems do not have to be specified in the package configuration files.
clunode1:
cmmakepkg -s /etc/cmcluster/MDM/mdsMDM.control.
SAP Master Data Management (MDM) Installation and Configuration Considerations
cmapplyconf -P /etc/cmcluster/MDM/mdsMDM.config
cmrunpkg mdsMDM
MDM206 Setup Step: (b) Multiple MDM Serviceguard packages - create a mdisMDM package. Create Serviceguard templates and edit the files for the mdisMDM package. For information on the variables used in this step see the configuration step of the nfsMDM package above.
SAP Master Data Management (MDM) Installation and Configuration Considerations
scp -pr /etc/cmcluster/MDM clunode2:/etc/cmcluster/MDM
cmapplyconf -P /etc/cmcluster/MDM/mdisMDM.config
cmrunpkg mdisMDM
MDM208 Setup Step: (b) Multiple MDM Serviceguard packages - create a mdssMDM package. Create Serviceguard templates and edit the files for the mdssMDM package. For information on the variables used in this step see the configuration step of the nfsMDM package above.
SAP Master Data Management (MDM) Installation and Configuration Considerations NOTE At this stage the file /etc/cmcluster/MDM/mdssMDM.control.script does not have to be edited, as neither an IP address nor a storage volume has to be configured.
scp -pr /etc/cmcluster/MDM clunode2:/etc/cmcluster/MDM
cmapplyconf -P /etc/cmcluster/MDM/mdssMDM.config
cmrunpkg mdssMDM
MDM210 Setup Step: (b) Multiple MDM Serviceguard packages - create a masterMDM package.
SAP Master Data Management (MDM) Installation and Configuration Considerations NOTE At this stage the file /etc/cmcluster/MDM/masterMDM.control.script does not have to be edited, as neither an IP address nor a storage volume has to be configured.
scp -pr /etc/cmcluster/MDM clunode2:/etc/cmcluster/MDM
cmapplyconf -P /etc/cmcluster/MDM/masterMDM.config
cmrunpkg masterMDM
MDM212 Setup Step: Start the "Oracle Server" 9.2 installation. The Oracle installer requires an X display for output.
SAP Master Data Management (MDM) Installation and Configuration Considerations Database Identification --------------------Global Database Name: MDM SID:MDM Database File Location ---------------------Directory for Database Files: /oracle/MDM/920_64/oradata Database Character Set ---------------------[ ]Use the default character set [X]Use the Unicode (AL32UTF8) as the character set Choose JDK Home Directory ------------------------Enter JDK Home: /opt/java1.
SAP Master Data Management (MDM) Installation and Configuration Considerations cat p4547809_92080_HP64aa.bin p4547809_92080_HP64ab.bin > p4547809_92080.zip /usr/local/bin/unzip p4547809_92080_HP64.zip This will unzip and copy the kit into directory /KITS/ora9208/Disk1. Start the Oracle upgrade executing the runInstaller command. The following contains a summary of the responses during the installation.
SAP Master Data Management (MDM) Installation and Configuration Considerations
cat p4547809_92080_HP64aa.bin p4547809_92080_HP64ab.bin > p4547809_92080_HP64.zip
/usr/local/bin/unzip p4547809_92080_HP64.zip
This will unzip and copy the kit into the directory /KITS/ora9208/Disk1. Start the Oracle upgrade by executing the runInstaller command. The following is a summary of the responses during the installation.
SAP Master Data Management (MDM) Installation and Configuration Considerations
[ ] Runtime
[ ] Custom
Installer Summary
-----------------
-> Install
SAP Master Data Management (MDM) Installation and Configuration Considerations NOTE SAP also provides a kit called “Oracle 9.2.0.8 client software”. Do not install “Oracle 9.2.0.8 client software”. The Oracle 9.2.0.8 client software is not an Oracle product. It is an SAP product and mainly provides libraries for an R/3 application server to connect to an Oracle database instance. This kit is called OCL92064.SAR and it installs into the /oracle/client/92x_64 directory.
SAP Master Data Management (MDM) Installation and Configuration Considerations --------------Source: Path products: /KITS/Disk1/products.jar Destination: Name: MDM (OUIHome) Path: /home/mdmuser/oracle/MDM/920_64 Available Products ------------------[ ]Oracle 9i Database [X]Oracle 9i client [ ]Oracle 9i management Installation Type ----------------[X]Administrator [ ]Runtime [ ]Custom Choose JDK Home Directory ------------------------Enter JDK Home: /opt/java1.
SAP Master Data Management (MDM) Installation and Configuration Considerations
----------------------------------
[ ] Yes, I will use a directory service
[X] No, I will create net service names myself
Net Service Name Configuration
-------------------------------
[X] Oracle8i or later database service
[ ] Oracle 8i release 8.0 database service
Net Service Name Configuration
-----------------------------
database used: MDM
protocol used: TCP
db host name: 172.16.11.
SAP Master Data Management (MDM) Installation and Configuration Considerations MDM220 Setup Step: Upgrade to Oracle 9.2.0.8 for user mdmuser. Upgrade the "Oracle Server - Client" bits to Oracle 9.2.0.8 with the Oracle patchset p4547809_92080_HP64.zip. The Oracle patchset has been unzipped and the kit copied into the directory /KITS/ora9208/Disk1.
/KITS/ora9208/Disk1/runInstaller
Specify File Locations
----------------------
Source:
Path: /KITS/ora9208/stage/products.
SAP Master Data Management (MDM) Installation and Configuration Considerations
cp listener.ora listener.ORG
/oracle/MDM/920_64/network/admin/tnsnames.ora
---------------------------------------------
# TNSNAMES.ORA Network Configuration File:
# /oracle/MDM/920_64/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.
MDM =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 172.16.11.
SAP Master Data Management (MDM) Installation and Configuration Considerations
(DESCRIPTION_LIST =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 172.16.11.97)(PORT = 1521))
    )
  )
)
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = MDM)
      (ORACLE_HOME = /oracle/MDM/920_64)
      (SID_NAME = MDM)
    )
  )
MDM224 Setup Step: Configure /etc/tnsnames.ora, /etc/listener.ora and /etc/sqlnet.
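As a minimal sketch of how the Oracle Net files could be placed under /etc and copied to the second cluster node; the sqlnet.ora file name and the node name node2 are assumptions, not values taken from this manual:
cd /oracle/MDM/920_64/network/admin
cp -p tnsnames.ora listener.ora sqlnet.ora /etc
scp -p /etc/tnsnames.ora /etc/listener.ora /etc/sqlnet.ora node2:/etc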
SAP Master Data Management (MDM) Installation and Configuration Considerations
Start the listener and connect to the MDM database via the newly configured virtual IP address 172.16.11.97 with the following commands:
NOTE Replace "passwd" with the password set during the installation.
su - oramdm
lsnrctl start
sqlplus system/passwd@MDM
SQL*Plus: Release 9.2.0.8.0 - Production on Fri Feb 16 04:30:22 2007
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
SAP Master Data Management (MDM) Installation and Configuration Considerations
SQL*Plus: Release 9.2.0.8.0 - Production on Fri Feb 16 04:30:22 2007
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production
SQL>
MDM226 Setup Step: Install the MDM IMPORT SERVER 5.
SAP Master Data Management (MDM) Installation and Configuration Considerations
cd /opt
zcat "/KITS/MDM_Install/MDM55SP04P03/MDM IMPORT SERVER\
5.5/HPUX_64/mdm-import-server-5.5.35.16-hpux-PARISC64-aCC.\
tar.Z" | tar xvf -
x MDM/bin/mdis-r, 41224248 bytes, 80517 tape blocks
x MDM/bin/mdis, 355 bytes, 1 tape blocks
x MDM/lib/libxerces-c.sl.26 symbolic link to libxerces-c.sl.26
. . .
su - mdmuser
mkdir /home/mdmuser/mdis
cd mdis
mdis
MDM228 Setup Step: Install the MDM SERVER 5.
SAP Master Data Management (MDM) Installation and Configuration Considerations
/usr/local/bin/unzip MDS55004P_3.ZIP
cd /opt
zcat "/KITS/MDM_Install/MDM55SP04P03/MDM SERVER\
5.5/HPUX_64/mdm-server-5.5.35.16-hpux-PARISC64-aCC.tar.Z" |\
tar xvf -
x MDM/bin/mds-r, 41224248 bytes, 80517 tape blocks
x MDM/bin/mds, 355 bytes, 1 tape blocks
. .
su - mdmuser
mkdir /home/mdmuser/mds
cd mds
mds
MDM230 Setup Step: Install the MDM SYNDICATION SERVER 5.
SAP Master Data Management (MDM) Installation and Configuration Considerations
/usr/local/bin/unzip MDMSS55004P_3.ZIP
cd /opt
zcat "/KITS/MDM_Install/MDM55SP04P03/MDM SYNDICATION SERVER\
5.5/HPUX_64/mdm-syndication-server-5.5.35.16-hpux-PARISC64\
-aCC.tar.Z" | tar xvf -
x MDM/bin/mdss-r, 37537864 bytes, 73317 tape blocks
x MDM/bin/mdss, 355 bytes, 1 tape blocks
. .
su - mdmuser
mkdir /home/mdmuser/mdss
cd mdss
mdss
MDM232 Setup Step: Edit the mds.ini, mdis.ini and mdss.ini files
Edit the mds.ini, mdis.
SAP Master Data Management (MDM) Installation and Configuration Considerations
/home/mdmuser/mdss/mdss.ini
[GLOBAL]
Server=mdsreloc
. .
MDM234 Setup Step: Stop all MDM processes
Before continuing with the configuration of the MDM Serviceguard packages, the MDM processes have to be stopped. Use a kill command to accomplish this.
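As a minimal sketch of one way to locate and stop the mds, mdis and mdss processes started above for user mdmuser; the process name patterns and the PID placeholders are assumptions:
ps -fu mdmuser | grep -E 'mds|mdis|mdss'   # list the running MDM server processes
kill <mds_pid> <mdis_pid> <mdss_pid>       # replace with the PIDs reported above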
SAP Master Data Management (MDM) Installation and Configuration Considerations
typeset MDM_SCR="/etc/cmcluster/MDM/sapmdm.sh"
typeset MDM_ACTION="start"
typeset MDM_COMPONENT="mgroup"
${MDM_SCR} "${MDM_ACTION}" "${MDM_COMPONENT}"
test_return 51
}
function customer_defined_halt_cmds
{
typeset MDM_SCR="/etc/cmcluster/MDM/sapmdm.
SAP Master Data Management (MDM) Installation and Configuration Considerations
MDM238 Setup Step: (b) Multiple Serviceguard package - continue with mdsMDM, mdisMDM, mdssMDM and masterMDM configuration
If the Multiple Serviceguard package option was chosen at the beginning of this chapter, continue with the configuration of the mdsMDM, mdisMDM, mdssMDM and masterMDM packages. Insert the following shell script fragments into the functions customer_defined_run_cmds and customer_defined_halt_cmds.
SAP Master Data Management (MDM) Installation and Configuration Considerations
${MDM_SCR} "${MDM_ACTION}" "${MDM_COMPONENT}"
test_return 51
}
function customer_defined_halt_cmds
{
typeset MDM_SCR="/etc/cmcluster/MDM/sapmdm.
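For orientation, a minimal sketch of what a complete pair of functions could look like in the mdsMDM control script; the component name mds, the action keywords start and stop, and the test_return exit codes are assumptions to be adapted to your installation, and the same pattern would be repeated with the matching component name in the other packages:
function customer_defined_run_cmds
{
    # start the mds component via the SGeSAP MDM script (sketch)
    typeset MDM_SCR="/etc/cmcluster/MDM/sapmdm.sh"
    typeset MDM_ACTION="start"
    typeset MDM_COMPONENT="mds"
    ${MDM_SCR} "${MDM_ACTION}" "${MDM_COMPONENT}"
    test_return 51
}
function customer_defined_halt_cmds
{
    # stop the same component when the package is halted (sketch)
    typeset MDM_SCR="/etc/cmcluster/MDM/sapmdm.sh"
    typeset MDM_ACTION="stop"
    typeset MDM_COMPONENT="mds"
    ${MDM_SCR} "${MDM_ACTION}" "${MDM_COMPONENT}"
    test_return 52
}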
SAP Master Data Management (MDM) Installation and Configuration Considerations
MDM240 Setup Step: Configure sap.config
File /etc/cmcluster/MDM/sap.config contains the SGeSAP specific parameters for MDM. The following table is an excerpt and explains the parameters that can be set for the MDM configuration.
Table 5-3 MDM parameter descriptions
MDM_DB=ORACLE
    The database type being used.
MDM_DB_SID=MDM
    The Oracle SID of the MDM database.
SAP Master Data Management (MDM) Installation and Configuration Considerations
Table 5-3 MDM parameter descriptions (Continued)
MDM_REPOSITORY_SPEC=" PRODUCT_HA:MDM:o:mdm:sap \
PRODUCT_HA_INI:MDM:o:mdm:sap \"
    The following is a brief description of the values:
    Repository = PRODUCT_HA
    DBMS instance = MDM
    Database type = o (Oracle)
    Username = mdm
    Password = sap
MDM_CRED="Admin: "
    CLIX repository credentials: the password used for repository related commands.
SAP Master Data Management (MDM) Installation and Configuration Considerations
The following table details the MDM_MGROUP and MDM_MASTER dependencies.
Table 5-4 MDM_MGROUP and MDM_MASTER dependencies
MDM_MGROUP_DEPEND="\
mdb mds mdis mdss"
    The elements (mdb mds...) of variable MDM_MGROUP_DEPEND are the names of the MDM components that are started from a single Serviceguard package. The order of the MDM components is important.
MDM242
SAP Master Data Management (MDM) Installation and Configuration Considerations
MDM_USER=mdmuser
MDM_MDS_RELOC=172.16.11.96
MDM_PASSWORD=""
MDM_REPOSITORY_SPEC=" PRODUCT_HA:MDM:o:mdm:sap \
PRODUCT_HA_INI:MDM:o:mdm:sap \
PRODUCT_HA_NOREAL:MDM:o:mdm:sap \
"
MDM_CRED="Admin: "
MDM_MONITOR_INTERVAL=60
MDM_MGROUP_DEPEND="mdb mds mdis mdss"
Copy file sap.config to the second node:
scp -pr /etc/cmcluster/MDM/sap.
SAP Master Data Management (MDM) Installation and Configuration Considerations
MDM_USER=mdmuser
MDM_MDS_RELOC=172.16.11.96
MDM_PASSWORD=""
MDM_REPOSITORY_SPEC=" PRODUCT_HA:MDM:o:mdm:sap \
PRODUCT_HA_INI:MDM:o:mdm:sap \
PRODUCT_HA_NOREAL:MDM:o:mdm:sap \
"
MDM_CRED="Admin: "
MDM_MONITOR_INTERVAL=60
MDM_MASTER_DEPEND="mdbMDM mdsMDM mdisMDM mdssMDM"
Copy file sap.config to the second node:
scp -pr /etc/cmcluster/MDM/sap.
SAP Master Data Management (MDM) Installation and Configuration Considerations
SERVICE_HALT_TIMEOUT 60
vi /etc/cmcluster/MDM/mgroupMDM.control.script
SERVICE_NAME[0]="mgroupMDMmon"
SERVICE_CMD[0]="/etc/cmcluster/MDM/sapmdm.sh check mgroup"
SERVICE_RESTART[0]=""
To activate the changes in the configuration files run the following commands:
cmapplyconf -P /etc/cmcluster/MDM/mgroupMDM.
SAP Master Data Management (MDM) Installation and Configuration Considerations
SERVICE_RESTART[0]=""
To activate the changes in the Serviceguard configuration files run the following command:
cmapplyconf -P /etc/cmcluster/MDM/mdsMDM.config
Repeat the above steps for the mdbMDM, mdisMDM and mdssMDM packages.
NOTE The masterMDM Serviceguard package does not require a check function.
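As a minimal sketch, and under the assumption that the package configuration files follow the naming used above, the remaining package configurations could be applied and verified in one pass:
# apply each package configuration, then display the resulting cluster view (sketch)
for PKG in mdbMDM mdsMDM mdisMDM mdssMDM masterMDM
do
    cmapplyconf -P /etc/cmcluster/MDM/${PKG}.config
done
cmviewcl -v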
SAP Master Data Management (MDM) Installation and Configuration Considerations
MDM254 Setup Step: (b) Multiple - Starting and Stopping the masterMDM package
The masterMDM package is responsible for starting the other Serviceguard packages (mdbMDM, mdsMDM, mdisMDM and mdssMDM) in the cluster in the required order as specified in file sap.config. With this information, the masterMDM package will run the appropriate cmrunpkg commands to start the packages.
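A minimal sketch of starting and halting the masterMDM package manually; the node name node1 is a placeholder:
cmrunpkg -n node1 masterMDM   # start masterMDM, which in turn starts the dependent packages
cmviewcl -v                   # verify the state of all packages in the cluster
cmhaltpkg masterMDM           # halt the package again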
SAP Master Data Management (MDM) Installation and Configuration Considerations
masterMDM log files:
/etc/cmcluster/MDM/dbMDM.control.script.log
/etc/cmcluster/MDM/masterMDM.control.script.log
/etc/cmcluster/MDM/mdisMDM.control.script.log
/etc/cmcluster/MDM/mdsMDM.control.script.log
/etc/cmcluster/MDM/mdssMDM.control.script.log
Serviceguard log files:
/var/adm/syslog/syslog.log
SGeSAP Cluster Administration
6 SGeSAP Cluster Administration
SGeSAP clusters follow characteristic hardware and software setups. An SAP application is no longer treated as though it runs on a dedicated host. It is wrapped up inside one or more Serviceguard packages, and these packages can be moved to any of the hosts inside the Serviceguard cluster. The Serviceguard packages provide a virtualization layer that keeps the application independent of specific server hardware.
SGeSAP Cluster Administration Change Management
Change Management
Serviceguard keeps information about the cluster configuration. In particular, it needs to know the relocatable IP addresses and their subnets, your Volume Groups, the Logical Volumes and their mountpoints. Check with your HP consultant for information about the way Serviceguard is configured to suit your SAP system. If you touch this configuration, you may have to reconfigure your cluster.
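As a minimal sketch of what such a reconfiguration typically involves; the cluster name and file names below are examples only, not values taken from this manual:
cmgetconf -c <clustername> /etc/cmcluster/cluster.ascii   # export the current cluster configuration
vi /etc/cmcluster/cluster.ascii                           # adjust volume groups, subnets, ...
cmcheckconf -C /etc/cmcluster/cluster.ascii               # verify the edited configuration
cmapplyconf -C /etc/cmcluster/cluster.ascii               # distribute the new binary configuration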
SGeSAP Cluster Administration Change Management
If remote shell access is used, never delete the mutual .rhosts entries of the root user and <sid>adm on any of the nodes. Never delete the secure shell setup in case it is specified for SGeSAP. Entries in /etc/hosts, /etc/services, /etc/passwd or /etc/group should be kept unified across all nodes. If you use an ORACLE database, be aware that the listener configuration file of SQL*Net V2 is also kept as a local copy in /etc/listener.ora by default.
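A minimal sketch of a quick consistency check between two nodes; the remote node name node2 is a placeholder, and remsh can be replaced by ssh where the secure shell setup is used:
for FILE in /etc/hosts /etc/services /etc/passwd /etc/group
do
    echo "== ${FILE}"
    remsh node2 cat ${FILE} | diff ${FILE} -   # report any differences against node2
done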
SGeSAP Cluster Administration Change Management
If the system is badly configured, it might hang when a logon is attempted as the <sid>adm user. The reason for this is that /usr/sap/<SID>/SYS/exe is part of the search path of <sid>adm. Without local binaries this directory links to /sapmnt/<SID>, which in fact is handled by the automounter. The automounter cannot contact the host belonging to the configured relocatable address because the package is down. The system hangs.
SGeSAP Cluster Administration Change Management SAP Software Changes During installation of the SGeSAP Integration for SAP releases with kernel <7.0, SAP profiles are changed to contain only relocatable IP-addresses for the database as well as the Central Instance. You can check this using transaction RZ10. In the DEFAULT.
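As a minimal sketch of what such profile entries typically look like; the relocatable names, SID and instance number below are placeholders, not values taken from this manual:
SAPDBHOST = <relocdb>                      # relocatable name of the database package
rdisp/mshost = <relocci>                   # relocatable name of the Central Instance package
rdisp/vbname = <relocci>_<SID>_<INSTNR>
rdisp/enqname = <relocci>_<SID>_<INSTNR>
rdisp/btcname = <relocci>_<SID>_<INSTNR>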
SGeSAP Cluster Administration Change Management
NOTE A print job in process at the time of the failure will be canceled and needs to be reissued manually after the failover. To make a spooler highly available on the Central Instance, set the destination of the printer to __ using the transaction SPAD. Print all time-critical documents via the highly available spool server of the Central Instance.
SGeSAP Cluster Administration Change Management Within the CCMS you can define operation modes for SAP instances. An operation mode defines a resource configuration. It can be used to determine which instances are started and stopped and how the individual services are allocated for each instance in the configuration. An instance definition for a particular operation mode consists of the number and types of Work processes as well as Start and Instance Profiles.
SGeSAP Cluster Administration SGeSAP Administration Aspects
SGeSAP Administration Aspects
SGeSAP needs some additional information about the configuration of your SAP environment. It gathers this information in the file /etc/cmcluster/<SID>/sap.config. In more complicated cases there might be additional files /etc/cmcluster/<SID>/sap.config. A sap.config file is sometimes used to create a package-specific section two of sap.config in order to achieve proper Application Server handling.
SGeSAP Cluster Administration SGeSAP Administration Aspects
After performing any of the above-mentioned activities, the ability to fail over correctly has to be tested again. Be aware that changing the configuration of your SAP cluster in any way can impact this ability. Any change activity should therefore be finalized by a failover test to confirm proper operation.
SGeSAP Cluster Administration SGeSAP Administration Aspects
Upgrading SAP Software
Upgrading the version of the clustered SAP application does not necessarily require changes to SGeSAP. SGeSAP usually detects the release of the packaged application automatically and handles it appropriately.
SGeSAP Cluster Administration Mixed Clusters
Mixed Clusters
Platform changes from HP 9000 hardware to Integrity systems are complex and require a significant investment. Changing the hardware of a multi-node cluster all at once is an expensive undertaking. With SGeSAP it is possible to perform this change step by step, replacing only one server at a time. During this transition phase, the cluster will technically have a mixed layout. However, from SAP's perspective it is still a homogeneous HP-UX system.
SGeSAP Cluster Administration Mixed Clusters NOTE Always check for the latest versions of the Serviceguard Extension for SAP manual and release notes at http://docs.hp.com/en/ha.html. Especially for mixed cluster setups and configurations refer to SAP note 612796 - 'Move to HP-UX Itanium systems'. Even though the SAP HP 9000 executables might run on Integrity systems under the ARIES binary emulator, SAP has no plans to support HP 9000 executables on Integrity systems.
SGeSAP Cluster Administration Mixed Clusters
System Configuration Changes
PI010 System Configuration Change: The shared disk containing SAP executables, profiles and globals (/sapmnt/<SID>) must have sufficient disk space for a full set of additional Integrity binaries.
PI020 System Configuration Change: The SAP executable directory is the key point where the redirection to HP 9000 or Integrity executables is performed, so the HP 9000 executables need to be moved to a distinct directory.
SGeSAP Cluster Administration Mixed Clusters
PI040 System Configuration Change: Similar to the local executable link on HP 9000 systems, perform the step for the Integrity node:
cd /sapmnt
ln -s /sapmnt/exe_ipf exe_local
PI050 System Configuration Change: Make sure to have a local copy of the Oracle client libraries on all nodes. You will need to download the HP 9000 version for the PA-RISC nodes and the IPF version for the Integrity nodes from http://service.sap.com.
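A minimal sketch of a per-node check that the local executable link and the Oracle client libraries match the node's architecture; the disp+work binary name and the client library path are assumptions based on the install location mentioned earlier:
ls -l /sapmnt/exe_local                    # confirm the link points to the expected directory
file /sapmnt/exe_local/disp+work           # reports PA-RISC or IA64 executable format
file /oracle/client/92x_64/lib/libclntsh*  # same check for the local Oracle client libraries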
SGeSAP Cluster Administration Switching SGeSAP Off and On Switching SGeSAP Off and On This section provides a brief description of how to switch off SGeSAP for ABAP systems with storage layout option 1 of chapter 2 (NFS-based). Your individual setup may require additional steps that are not included in this document. Completely switching off the SGeSAP Integration means that the SAP system will not run on the relocatable IP address any longer.
SGeSAP Cluster Administration Switching SGeSAP Off and On
SO040: Switch off step: Save the configuration files that are going to be changed so you can switch on SGeSAP again by just copying back the original files. The list of files you have to save includes:
/usr/sap/trans/bin/TPPARAM
/usr/sap/<SID>/SYS/profile/DEFAULT.PFL
/usr/sap/<SID>/SYS/profile/<SID>_DVEBMGS<INSTNR>
If an Oracle database is used, the following files also need to be saved:
listener.ora
tnsnames.
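A minimal sketch of one way to save these files; <SID>, <INSTNR> and the backup suffix are placeholders to be adapted to your installation:
cp -p /usr/sap/trans/bin/TPPARAM /usr/sap/trans/bin/TPPARAM.sgesap_orig
cd /usr/sap/<SID>/SYS/profile
cp -p DEFAULT.PFL DEFAULT.PFL.sgesap_orig
cp -p <SID>_DVEBMGS<INSTNR> <SID>_DVEBMGS<INSTNR>.sgesap_orig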
SGeSAP Cluster Administration Switching SGeSAP Off and On on.. in sqlhosts.soc with: on.. SO060: Switch off step (Informix database only): In the home directories of the users Informix and adm you can find two files called .customer.sh and .customer.csh. Rename them to prevent them from being executed the next time the database starts. For example: mv .customer.csh .customer.sh.
SGeSAP Cluster Administration Switching SGeSAP Off and On
SO080: Switch off step (Informix database only): After successfully finishing all tests, take another copy of the files you saved in SO040. You previously renamed the onconfig file if you have an Informix database. You might need this second backup for reference. If you plan to switch on the SGeSAP Integration again at a later point in time, make sure that no additional changes to the files occurred in the meantime.
SGeSAP Cluster Administration Switching SGeSAP Off and On Switching On SGeSAP Perform the following steps to switch SGeSAP on again. SO500: Switch on step: Stop the database and SAP instances manually on all hosts in and outside of the cluster with: stopsap all SO510: Switch on step: Restore all profiles with the versions that use relocatable addresses. They were backed up in step SO040. If appropriate, incorporate any changes that occurred to these files during the switch off process.
SGeSAP Cluster Administration Switching SGeSAP Off and On
• Relocate printers to the relocatable name. Transaction code: SPAD
• Check operation modes within SAP. You must set up operation modes for the relocatable name.
• Do all the testing as described in the document SAP High Availability.