Managing Serviceguard Extension for SAP Version B.05.
© Copyright 2000-2009 Hewlett-Packard Development Company, L.P.
Legal Notices
Serviceguard, Serviceguard Extension for SAP, Serviceguard Extension for RAC, Metrocluster and Serviceguard Manager are products of Hewlett-Packard Development Company, L.P., and all are protected by copyright. Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.
Printing History
Table 1 Editions and Releases (Printing Date / Part Number / Edition / SGeSAP Release / Operating System Releases):
June 2000 / B7885-90004 / Edition 1 / B.03.02 / HP-UX 10.20 and HP-UX 11.00
March 2001 / B7885-90009 / Edition 2 / B.03.03 / HP-UX 10.20, HP-UX 11.00 and HP-UX 11i
June 2001 / B7885-90011 / Edition 3 / B.03.04 / HP-UX 10.20, HP-UX 11.00 and HP-UX 11i
March 2002 / B7885-90013 / Edition 4 / B.03.06 / HP-UX 11.00 and HP-UX 11i
June 2003 / B7885-90018 / Edition 5 / B.03.
Table 2 Abbreviations (Abbreviation / Meaning):
<SID>, <sid> / System ID of the SAP system, RDBMS or other components in uppercase/lowercase
<INSTNAME> / SAP instance, e.g.
1 Designing SGeSAP Cluster Scenarios This chapter introduces the basic concepts used by the HP Serviceguard Extension for SAP (SGeSAP) and explains several naming conventions.
templates, SAP software service monitors as well as specialized additional features to integrate hot standby liveCache scenarios, HP Workload Management scenarios and HP Event Monitors. There are three major Serviceguard modules delivered with SGeSAP. For the standard SAP Netweaver web application server stack it provides a Serviceguard module called sgesap/sapinstance.
identified as SPOFs in the system. Other SAP services can potentially be installed redundantly within additional Application Server instances, sometimes called Dialog Instances. As its naming convention DVEBMGS suggests, a Central Instance provides more services than just those that cause the SPOFs. An undesirable result is that a Central Instance is a complex piece of software with a high resource demand.
dependencies. This information is used, for example, to delay SAP instance package startup while the database is starting in a separate package but is not yet ready to accept connections. A cluster can be configured in a way that two nodes back up each other. The principal layout is depicted in Figure 1-1. This picture, as well as the following drawings, is meant to illustrate basic principles in a clear and simple fashion.
If the primary node fails, the database and the Central Instance fail over and continue functioning on an adoptive node. After failover, the system runs without the need for manual intervention. All redundant Application Servers and Dialog Instances, even those that are not part of the cluster, can either stay up or be restarted as part of a failover. A sample configuration in Figure 1-2 shows node1 with a failure, which causes the package containing the database and Central Instance to fail over to node2.
Figure 1-3 Replicated Enqueue Clustering for ABAP and JAVA Instances Enqueue Services also come as an integral part of each ABAP DVEBMGS Central Instance. This integrated version of the Enqueue Service is not able to utilize replication features. The DVEBMGS Instance needs to be split up into a standard Dialog Instance and an ABAP System Central Services Instance (ASCS).
packages. Otherwise, situations can arise in which a failover of the combined ASCS/SCS package is not possible. Finally, ASCS can not be combined with its ERS instance (AREP) in the same package. For the same reason, SCS can not be combined with its ERS instance (REP). The sgesap/sapinstance module can be used to cluster Enqueue Replication Instances. Furthermore, SGeSAP offers the legacy package types rep and arep to implement enqueue replication packages for JAVA and ABAP.
Dialog Instance virtualization packages provide high availability and flexibility at the same time. The system becomes more robust using Dialog Instance packages. The virtualization allows moving the instances manually between the cluster hosts on demand. Figure 1-4 Failover Node with Application Server package Figure 1-4 illustrates a common configuration with the adoptive node running as a Dialog Server during normal operation.
For convenience, additional Dialog Instances can be started, stopped or restarted with any SGeSAP package that secures critical components. Some SAP applications require the whole set of Dialog Instances to be restarted during failover of the Central Service package. This can be triggered by SGeSAP means. It helps in understanding the concept to keep in mind that all of these operations on non-clustered instances are inherently non-critical.
Figure 1-5 Replicated Enqueue Clustering for ABAP and JAVA Instances Figure 1-5 shows an example configuration. The dedicated failover host can serve many purposes during normal operation. With the introduction of Replicated Enqueue Servers, it is a good practice to consolidate a number of Replicated Enqueues on the dedicated failover host. These replication units can be halted at any time without disrupting ongoing transactions for the systems they belong to.
2 Planning the Storage Layout Volume managers are tools that let you create units of disk storage known as storage groups. Storage groups contain logical volumes for use on single systems and in high availability clusters. In Serviceguard clusters, package control scripts activate storage groups. Two volume managers can be used with Serviceguard: the standard Logical Volume Manager (LVM) of HP-UX and the Veritas Volume Manager (VxVM). SGeSAP can be used with both volume managers.
• Whether it needs to be kept as a local copy on internal disks of each node of the cluster. • Whether it needs to be shared on a SAN storage device to allow failover and exclusive activation. • Whether it needs to provide shared access to more than one node of the cluster at the same time. NOTE: SGeSAP packages and service monitors require SAP tools. Patching the SAP kernel sometimes also patches SAP tools.
Directories that Reside on Shared Disks Volume groups on SAN shared storage are configured as part of the SGeSAP packages. They can be either: • instance specific or • system specific or • environment specific. Instance specific volume groups are required by only one SAP instance or one database instance. They usually get included with exactly the package that is set up for this instance. System specific volume groups get accessed from all instances that belong to a particular SAP System.
Table 2-3 System and Environment Specific Volume Groups (columns: Mount Point / Access Point / Potential owning packages / VG Name / Device minor number):
/export/sapmnt/<SID> / shared disk and HA NFS / db, dbci, jdb, jdbjci, sapnfs
/export/usr/sap/trans / shared disk and HA NFS / db, dbci, sapnfs
/usr/sap/put / shared disk / none
The tables can be used to document the VG names and device minor numbers in use. The device minor numbers of logical volumes need to be identical for each distributed volume group across all cluster nodes.
• /etc/cmcluster — the directory in which Serviceguard keeps its legacy configuration files and the node specific package runtime directories • Local database client software needs to be stored locally on each node. Details can be found in the database sections below. Part of the content of the local group of directories must be synchronized manually between all nodes of the cluster. SAP instance (startup) profile names contain either local hostnames or virtual hostnames.
requires a Serviceguard multi-node package. SGeSAP packages are Serviceguard single-node packages. Thus, a package can not combine SGeSAP and CFS related functionality. Common Directories that are Kept Local Most common file systems reside on CFS, but there are some directories and files that are kept local on each node of the cluster: • /etc/cmcluster — the directory in which Serviceguard keeps its configuration files and the node specific package runtime directories.
Table 2-6 Availability of SGeSAP Storage Layout Options for Different Database RDBMS DB Technology Supported Platforms Oracle Single-Instance PA 9000 Itanium SGeSAP Storage Layout Options Cluster Software Bundles 1. Serviceguard or any Serviceguard Storage Management bundle (for Oracle) 2. SGeSAP 3. Serviceguard HA NFS Toolkit idle standby 1. Serviceguard 2. SGeSAP 3. Serviceguard HA NFS Toolkit (opt.) CFS 1. Serviceguard Cluster File System (for Oracle) 2.
1. The Oracle RDBMS and database tools rely on an ORA_NLS[] setting that refers to NLS files that are compatible to the version of the RDBMS. Oracle 9.x needs NLS files as delivered with Oracle 9.x. 2. The SAP executables rely on an ORA_NLS[] setting that refers to NLS files of the same versions as those that were used during kernel link time by SAP development. This is not necessarily identical to the installed database release.
SAP and the note content may change at any time without further notice. Described options may have "Controlled Availability" status at SAP. Real Application Clusters requires concurrent shared access to Oracle files from all cluster nodes. This can be achieved by installing the Oracle software on Cluster File Systems provided by HP Serviceguard Cluster File System for RAC. There are node specific files and directories, such as the TNS configuration.
AP1=/sapdb/AP1/db
[Runtime]
/sapdb/programs/runtime/7240=7.2.4.0,
/sapdb/programs/runtime/7250=7.2.5.0,
/sapdb/programs/runtime/7300=7.3.0.0,
/sapdb/programs/runtime/7301=7.3.1.0,
/sapdb/programs/runtime/7401=7.4.1.0,
/sapdb/programs/runtime/7402=7.4.2.0,
For MAXDB and liveCache Version 7.5 (or higher) the SAP_DBTech.ini file does not contain the sections [Installations], [Databases] and [Runtime]. These sections are stored in the separate files Installations.ini, Databases.ini and Runtimes.ini.
this directory to a backup location. This information is then used to determine the reason for the crash. In HA scenarios, for SAPDB/MAXDB versions lower than 7.6, this directory should move with the package. Therefore, SAP provides a way to redefine this path for each SAPDB/MAXDB instance individually. SGeSAP expects the work directory to be part of the database package. The mount point moves from /sapdb/data/wrk to /sapdb/data//wrk for the clustered setup.
NOTE: Using tar or cpio is not a safe method to copy or move directories to shared volumes. In certain circumstances file or ownership permissions may not be transported correctly, especially for files that have the s-bit set: /sapdb//db/pgm/lserver and /sapdb//db/pgm/dbmsrv. These files are important for the vserver process ownership and they have an impact on starting the SAPDB via adm. These files should retain the same ownership and permission settings after being moved to a shared volume.
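As a minimal sketch of how this can be verified, the ownership and mode of the two binaries can be recorded before the move and compared or restored afterwards; the <SID> placeholder and the owner/group values are whatever your installation actually uses:

ls -l /sapdb/<SID>/db/pgm/lserver /sapdb/<SID>/db/pgm/dbmsrv     # note owner, group and the s-bit
# after the move, restore the recorded settings if they differ, for example:
chown <owner>:<group> /sapdb/<SID>/db/pgm/lserver /sapdb/<SID>/db/pgm/dbmsrv
chmod u+s /sapdb/<SID>/db/pgm/lserver /sapdb/<SID>/db/pgm/dbmsrv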
3 Step-by-Step Cluster Conversion This chapter describes in detail how to implement a SAP cluster using Serviceguard and Serviceguard Extension for SAP (SGeSAP). It is written in the format of a step-by-step guide. It gives examples for each task in great detail. Actual implementations might require a slightly different approach. Many steps synchronize cluster host configurations or virtualize SAP instances manually.
The legacy package installation steps cover HP-UX 11i v1, HP-UX 11i v2 and HP-UX 11i v3 using Serviceguard 11.16 or higher. Modular packages can be used with HP-UX 11i v2 and HP-UX 11i v3 using Serviceguard 11.18 or higher.
SAP Preparation This section covers the SAP specific preparation, installation and configuration before creating a highly available SAP System landscape. This includes the following logical tasks: • SAP Pre-Installation Considerations • Replicated Enqueue Conversion SAP Pre-Installation Considerations This section gives additional information that helps with the task of performing SAP installations in HP Serviceguard clusters. It is not intended to replace any SAP installation manual.
SGeSAP functionality. It will be more convenient to do this once the SAP installation has taken place. The following steps are performed as root user to prepare the cluster for the SAP installation. Preparation steps MS12xx should be followed to create module-based packages. Preparation steps LS12xx should be followed to create legacy packages. Preparation Step: MS1200 Create a tentative Serviceguard package configuration for one or more SAP instances.
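A minimal sketch of how such a tentative configuration file could be generated with the modular approach is shown here; the module selection and the output file name sap.config are assumptions that need to be adapted to the actual setup:

cmmakepkg -m sgesap/sapinstance -m sgesap/dbinstance ./sap.config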
This step assumes that the cluster as such is already configured and started. Please refer to the Managing Serviceguard user's guide if more details are required.
cmapplyconf -P ./sap.config
cmrunpkg -n <node>
All virtual IP addresses should now be configured. A ping command should reveal that they respond to communication requests. Preparation Step: IS1300 Before installing the SAP Application Server 7.0 some OS-specific parameters have to be adjusted.
between physical machines. HP does not support a conversion to a virtual IP after the initial installation on a physical hostname for SAP JAVA engines. The SAPINST_USE_HOSTNAME option can be set as an environment variable, using export or setenv commands. It can also be passed to SAPINST as an argument.
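For example, assuming the relocatable hostname cireloc, the option could be supplied in either of the following ways (the sapinst invocation shown is only a sketch; follow the SAP installation guide for the exact call):

export SAPINST_USE_HOSTNAME=cireloc
./sapinst

or

./sapinst SAPINST_USE_HOSTNAME=cireloc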
is not delivering installation routines that install Replicated Enqueue configurations for these releases, so the manual conversion steps become necessary. The 4.6D kernel does require some kernel executables of the 6.40 kernel to be added. If the SAP installation was done for Netweaver 2004 Java-only, Netweaver 2004s, or a newer release as documented in section 'SAP Installation Considerations', only the second part 'Creation of Replication Instance' is required.
They will be switched between the cluster nodes later.
su - <sid>adm
cd /usr/sap/<SID>/ASCS<INSTNR>
mkdir data log sec work
Replicated Enqueue Conversion: RE040 If the SAP kernel in use has a release older than 6.40... Download the executables for the Standalone Enqueue Server from the SAP Service Marketplace and copy them to /sapmnt/<SID>/exe. There should be at least three files that are added/replaced: enserver, enrepserver and ensmon.
INSTANCE_NAME = ASCS<INSTNR>
#-----------------------------------------------------------------------
# start SCSA handling
#-----------------------------------------------------------------------
Execute_00 = local $(DIR_EXECUTABLE)/sapmscsa -n pf=$(DIR_PROFILE)/<SID>_ASCS<INSTNR>_<virtual hostname>
#-----------------------------------------------------------------------
# start message server
#-----------------------------------------------------------------------
MS = ms.
Scan the old _DVEBMGS_ profile to see whether there are additional parameters that apply to either the Enqueue Service or the Message Service. Individual decisions need to be made whether they should be moved to the new profile.
They will be switched between the cluster nodes later.
su - <sid>adm
cd /usr/sap/<SID>/ERS<INSTNR>
mkdir data log exe work profile
Starting with SAP kernel 7.0, the following subdirectory structure also needs to be created:
mkdir -p exe/servicehttp/sapmc
Replicated Enqueue Conversion: RE095 A couple of required SAP executables should be copied from the central executable directory /sapmnt/<SID>/exe to the instance executable directory /usr/sap/<SID>/ERS<INSTNR>/exe. For SAP kernel 6.
for i in sapmc.jar sapmc.html frog.jar soapclient.jar
do
cp /sapmnt/$1/exe/servicehttp/sapmc/$i /usr/sap/$1/$2/exe/servicehttp/sapmc/$i
done
The script needs to be called with an SAP SID and the instance as arguments. Example:
./cperinit.sh C11 ERS00
Replicated Enqueue Conversion: RE100 Create an instance profile and a startup profile for the ERS Instance. These profiles get created as <sid>adm in the instance profile directory /usr/sap/<SID>/ERS<INSTNR>/profile.
#-----------------------------------------------------------------------
_CPARG0 = list:$(DIR_EXECUTABLE)/ers.lst
Execute_00 = immediate $(DIR_EXECUTABLE)/sapcpe $(_CPARG0) pf=$(_PF)
#-------------------------------------------------------------------
# start enqueue replication server
#-------------------------------------------------------------------
_ER = er.
Directory Structure Configuration This section adds implementation practices to the architectural decisions made in the chapter "Storage Layout Considerations". If layout option 1 or option 2 is used, then the non-CFS directory structure conversion must be performed. An implementation based on HP LVM is described in this document. VxVM can be used similarly. Option 3 maps to the CFS configuration for SAP. In this case, usage of VxVM and CVM is mandatory.
...> param_directget RUNDIRECTORY
OK
RUNDIRECTORY  /sapdb/data/wrk/<SID>
---
...> param_directput RUNDIRECTORY /sapdb/<SID>/wrk
OK
---
...>
Installation Step: IS049 If it does not yet exist, create a CVM/CFS system multi-node package and start it. Integrate all SAP related CFS disk groups and file systems. The package dependencies to the system multi-node package get created automatically. The mount points need to be created manually on all alternate nodes first.
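If the CFS disk groups and mount points are administered with the cluster file system commands, the integration might look roughly like the following sketch; the disk group name dgsapC11, the volume name lvsapmnt and the mount point are assumptions, and the exact syntax should be taken from the Serviceguard Cluster File System documentation:

cfsdgadm add dgsapC11 all=sw
cfsmntadm add dgsapC11 lvsapmnt /sapmnt/C11 all=rw
cfsmount /sapmnt/C11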
Non-CFS Directory Structure Conversion The main purpose of this section is to ensure the proper LVM layout and the right distribution of the different file systems that reside on shared disks. This section does not need to be consulted when using the HP Serviceguard Storage Management Suite with CFS and shared access Option 3. Logon as root to the system where the SAP Central Instance is installed (primary host).
ls
# Example: rm -r SYS
# rm -r D00
cd DVEBMGS<INSTNR>
find . -depth -print | cpio -pd /usr/sap/<SID>/DVEBMGS<INSTNR>
rm -r *    # be careful with this
cd ..
rmdir DVEBMGS<INSTNR>
2. Mark all shared volume groups as members of the cluster. This only works if the cluster services are already available. For example:
cd /
# umount all logical volumes of the volume group
vgchange -a n <vgname>
vgchange -c y <vgname>
vgchange -a e <vgname>
# remount the logical volumes
3.
Installation Step: IS050 This step describes how to create a local copy of the SAP binaries. The step is mandatory for legacy packages. It is also required to do this, if a module-based package is used for a system with SAP kernel <7.0. Check if the Central Instance host and all application servers have a directory named /usr/sap//SYS/exe/ctrun. If the directory exists, this step can be skipped. The system is already using local executables through sapcpe.
If the local executable directory only holds links, sapcpe is not configured correctly. It is not an option to manually copy the local executables in this case. The next instance restart would replace the local copies with symbolic links. For latest information on how to utilize sapcpe refer to the SAP online documentation. Installation Step: IS060 Clustered SAP Instances must have instance numbers that are unique for the whole cluster.
NOTE: Beware of copying over into /etc/passwd if your HP-UX is running in Trusted System mode.
Table 3-3 Password File Users (columns: username / UID / GID / home directory / shell), with rows for the users <sid>adm, ora<sid>, sqd<sid>, sqa<sid> and sapdb; the remaining columns can be filled in with the values found on the primary node.
Installation Step: IS090 Look at the services file, /etc/services, on the primary side. Replicate all services listed in Table 3-4 Services on the Primary Node that exist on the primary node onto the backup node.
mv .dbenv_<primary>.sh .dbenv_<secondary>.sh
mv .sapsrc_<primary>.csh .sapsrc_<secondary>.csh
mv .sapsrc_<primary>.sh .sapsrc_<secondary>.sh
mv .dbsrc_<primary>.csh .dbsrc_<secondary>.csh
mv .dbsrc_<primary>.sh .dbsrc_<secondary>.sh
The following statement should automate this activity for standard directory contents.
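A minimal sketch of such a statement is shown here, assuming a POSIX shell and using node1/node2 as example values for the local hostnames of the primary and secondary node; the home directory /home/c11adm is likewise only an example for the <sid>adm home:

cd /home/c11adm
for f in .*_node1.*
do
  [ -e "$f" ] || continue                                  # skip if nothing matches
  mv "$f" "$(echo "$f" | sed 's/_node1\./_node2./')"       # rename to the secondary hostname
done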
Installation Step: IS190 In full CFS environments, this step can be omitted. If CFS is only used for SAP and not for the database, then the volume group(s) that are required by the database need to be distributed to all cluster nodes. Import the shared volume groups using the minor numbers specified in Table 1 - Instance Specific Volume Groups contained in chapter "Planning the Storage Layout". The whole volume group distribution should be done using the command line interface. Do not use SAM.
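A typical command line sequence for distributing one volume group could look like the following sketch; the volume group name vgdbC11, the minor number 0x010000 and the node name node2 are examples that must match the values documented in the planning tables, and rcp can be replaced by scp:

# on the node where the volume group was created
vgchange -a n /dev/vgdbC11
vgexport -p -s -m /tmp/vgdbC11.map /dev/vgdbC11
rcp /tmp/vgdbC11.map node2:/tmp/vgdbC11.map
# on each alternate node
mkdir /dev/vgdbC11
mknod /dev/vgdbC11/group c 64 0x010000
vgimport -s -m /tmp/vgdbC11.map /dev/vgdbC11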
Installation Step: IS240 Make sure that the required software packages are installed on all cluster nodes: ▲ Serviceguard Extension for SAP, T2803BA The swlist command may be used to list the available software on a cluster node. If a software component is missing, install the required product depot files using the swinstall tool. Installation Step: IS260 You need to allow remote access between cluster hosts. This can be done by using remote shell remsh(1) or secure shell ssh(1) mechanisms.
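For ssh, a key-based root login between the cluster hosts could be prepared roughly as follows; OpenSSH defaults are assumed, the remote .ssh directory must already exist, and for remsh the equivalent would be entries in the .rhosts file of the root user:

ssh-keygen -t rsa -N "" -f $HOME/.ssh/id_rsa
cat $HOME/.ssh/id_rsa.pub | ssh node2 'cat >> .ssh/authorized_keys'
ssh node2 date     # verify that no password prompt appears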
If storage layout option 1 is used, create all SAP instance directories below /export as specified in Chapter 2. For example:
su - <sid>adm
mkdir -p /export/sapmnt/<SID>
mkdir -p /export/usr/sap/trans
exit
MAXDB Database Step: SD290 Create all MAXDB directories below /export as specified in Chapter 2.
Refer to IS260 if remsh is used instead. Installation Step: IS350 Search .profile in the home directory of adm and remove the set -u, if found. Installation Step: IS360 Add all relocatable IP address information to /etc/hosts on all nodes in the cluster. Modular Package Configuration This section describes the cluster software configuration.
In the subsection for the DB component there is an optional paragraph for Oracle and SAPDB/MAXDB database parameters. Depending on your need for special HA setups and configurations have a look at those parameters and their description. Preparation Step: MS410 The generic Serviceguard parameters of the configuration file need to be edited.
The following list summarizes how the behavior of SGeSAP is affected with different settings of the cleanup_policy parameter: ▲ lazy - no action, no cleanup of resources ▲ normal - removes orphaned resources as reported by SAP tools for the SAP system that is specified in sap_system. An obsolete ORACLE SGA is also removed if a database crash occurred.
db_system determines the name of the database (schema) for SAP. Usually it is a three letter name, similar to a sap_system value (SID). If the value is not specified, but db_vendor has been set, a database with default db_system=sap_system is assumed (SAP’s installation default), if sap_system is specified elsewhere in the package configuration. Optionally, maxdb_userkey sets the MAXDB userkey that is mapped for the operating system level administrator to the database control user (via XUSER settings).
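Taken together, the database-related attributes in a package configuration file could look similar to the following sketch; the values shown, including the db_vendor value and the userkey c, are illustrative assumptions only:

db_vendor        maxdb
db_system        C11
maxdb_userkey    c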
NOTE: The message server monitor works without command line parameters in the service command string service_cmd. It will configure itself automatically during startup and monitors all message servers that are clustered within the package. Example entries for the package configuration file:
service_name  scsC11ms
service_cmd   "/opt/cmcluster/sap/SID/sapms.
An attempt can be triggered to start, stop, restart or notify instances that belong to the same SAP system as the instances in the package. An attempt can be triggered to shutdown instances that belong to any other SAP system (details are given below). All the triggered attempts are considered to be non-critical. If any triggered attempt fails, it doesn't cause failure of the ongoing package operation. Don't use these parameters for non-redundant resources and single points of failure.
sap_ext_host      node2
sap_ext_treat     nnnyn
sap_ext_instance  D03
sap_ext_host      node2
sap_ext_treat     yyyyn
Subsection for additional SAP infrastructure software handling: OS472 Parameters to influence SAP software that runs outside of SAP Application Server Instances come with the module sgesap/sapinfra. sap_infra_sw_type defines a SAP infrastructure software component that is to be covered by the package.
If the use of the SAP control framework is not required, then remove the reference link from the sapinit script. Furthermore, any running sapstartsrv processes can be killed from the process list. For example:
rm /sbin/rc3.d/S<###>sapinit
ps -ef | grep sapstartsrv
kill <pid>
Module-based SGeSAP packages will handle the sapstart service agent automatically. For SAP releases based on kernel 7.
Legacy Package Configuration This section describes the cluster software configuration with the following topics: • Serviceguard Configuration • SGeSAP Configuration • Global Default Settings Serviceguard Configuration Refer to the standard Serviceguard manual Managing Serviceguard to learn about creating and editing a cluster configuration file and how to apply it to initialize a cluster with cmquerycl(1m) and cmapplyconf(1m).
Specify subnets to be monitored in the SUBNET section.
Installation Step: OS435 This ensures a successful package start only if the required CFS file system(s) are available.
DEPENDENCY_NAME       SG-CFS-MP-1
DEPENDENCY_CONDITION  SG-CFS-MP-1=UP
DEPENDENCY_LOCATION   SAME_NODE
DEPENDENCY_NAME       SG-CFS-MP-2
DEPENDENCY_CONDITION  SG-CFS-MP-2=UP
DEPENDENCY_LOCATION   SAME_NODE
Optional Step: OS440 If the SAP message server or SAP dispatcher should be monitored as a Serviceguard service, the .
.config. If no configuration parameters are provided with the service command, the monitors will default to the settings for an ABAP DVEBMGS Central Instance if possible. Example entries in .control.script:
SERVICE_NAME[0]="ciC11ms"
SERVICE_CMD[0]="/etc/cmcluster/C11/sapms.mon"
SERVICE_NAME[1]="ciC11disp"
SERVICE_CMD[1]="/etc/cmcluster/C11/sapdisp.
<INSTNR> needs to be replaced by the SAP instance number. <SID> needs to be replaced with the 3-letter SAP system identifier. <pkgname> can be any valid Serviceguard package name. Example:
ENQOR_SCS_PKGNAME_C11=foobar
ENQOR_REP_PKGNAME_C11=foorep
For SAP kernel 7.x; instances SCS00 and ERS01:
ENQOR_SCS_ERS01_PKGNAME_C11=foobar
ENQOR_ERS_ERS01_PKGNAME_C11=fooers
Optional Step: OS450 For non-CFS shares, it is recommended to set AUTO_VG_ACTIVATE=0 in /etc/lvmrc.
The SAP specific control file sapwas.cntl needs two arguments: the MODE (start/stop) and the SAP System ID (SID). Don't omit the leading period sign in each line that calls sapwas.cntl. Installation Step: LS500 Distribute the package setup to all failover nodes.
• jdb: a database exclusively utilized by J2EE engines that are not part of Application Server Instances. Do not combine a database with a liveCache in a single package. • jci: a SAP JAVA System Central Services Instance that provides Enqueue and Message Service to SAP J2EE engines that might or might not be part of SAP Application Servers (SCS). • jd: one or more virtualized SAP JAVA Application Instances. NOTE: It is not allowed to specify a db and a jdb component as part of the same package.
CINAME=DVEBMGS
CINR=00
CIRELOC=0.0.0.0
In the subsection for the CI component there is a paragraph for optional parameters. Depending on your need for special HA setups and configurations have a look at those parameters and their description. Subsection for the AREP component: OS630 SGeSAP supports SAP stand-alone Enqueue Service with Enqueue Replication. It's important to distinguish between the two components: the stand-alone Enqueue Service and the replication.
JCINAME=SCS
JCINR=01
JCIRELOC=0.0.0.0
Subsection for the REP component: OS665 SGeSAP supports SAP stand-alone Enqueue Service with Enqueue Replication. It's important to distinguish between the two components: the stand-alone Enqueue Service and the replication. The stand-alone Enqueue Service is part of the ci or jci component. The rep component refers to the replication unit for protecting JAVA System Central Services.
These Application Servers are not necessarily virtualized or secured by Serviceguard, but an attempt can be triggered to start, stop or restart them with the package. If any triggered attempt fails, it doesn't automatically cause failure of the ongoing package operation. The attempts are considered to be non-critical. In certain setups, it is necessary to free up resources on the failover node to allow the failover to succeed.
used, you also have to specify the Dialog Instance package name in ASPKGNAME[*]. The common naming conventions for Dialog Instance packages are: • d (new) • app (deprecated) ${START_WITH_PACKAGE}, ${STOP_WITH_PACKAGE} and ${RESTART_DURING_FAILOVER} only make sense if ASSID[]=${SAPSYSTEMNAME}, i.e. these instances need to belong to the clustered SAP component. ${STOP_IF_LOCAL_AFTER_FAILOVER} and ${STOP_DEPENDENT_INSTANCES} can also be configured for different SAP components.
ASSID[1]=SG1; ASHOST[1]=extern1; ASNAME[1]=D; ASNR[1]=02; ASTREAT[1]=7; ASPLATFORM[1]="HP-UX" Example 2: The failover node is running a Central Consolidation System QAS. It shall be stopped in case of a failover to this node. ASSID[0]=QAS; ASHOST[0]=node2; ASNAME[0]=DVEBMGS; ASNR[0]=10; ASTREAT[0]=8; ASPLATFORM[0]="HP-UX" The failover node is running the Central Instance of Consolidation System QAS.
WAIT_OWN_AS=2 if the package should also wait for all application servers to come up successfully. You have to use this value if you want to prevent the integration from temporarily opening a new process group for each application server during startup. WAIT_OWN_AS=0 can significantly speed up the package start and stop, especially if Windows NT application servers are used. Use this value only if you have carefully tested and verified that timing issues will not occur.
of the package script execution. Therefore it is not necessary and also not allowed to change the sap.functions.
Specify SAPCCMSR_START=1 if a CCMS agent should be started on the DB host automatically. You should also specify a path to a profile that resides on a shared logical volume. For example:
SAPCCMSR_START=1
SAPCCMSR_CMD=sapccmsr
SAPCCMSR_PFL="/usr/sap/ccms/${SAPSYSTEMNAME}_${CINR}/sapccmsr/${SAPSYSTEMNAME}.pfl"
Global Defaults The fourth section of sap.config is rarely needed. It mainly provides various variables that allow overriding commonly used default parameters.
prm {
    groups = OTHERS : 1,
             Batch : 2,
             Dialog : 3,
             SAP_other : 4;
    users = <sid>adm : SAP_other,
            ora<sid> : SAP_other;
    #
    # utilize the data provided by sapdisp.mon to identify batch and dialog
    #
    procmap = Batch : /opt/wlm/toolkits/sap/bin/wlmsapmap -f /etc/cmcluster/<SID>/wlmprocmap.<SID>_<INSTNAME><INSTNR>_<virtual hostname> -t BTC,
              Dialog : /opt/wlm/toolkits/sap/bin/wlmsapmap -f /etc/cmcluster/P03/wlmprocmap.
#
# Request 25 shares per dialog job.
#
slo s_dialog {
    pri = 1;
    entity = PRM group dialog;
    cpushares = 25 shares total per metric m_dialog;
}
#
# Report the number of active processes in the dialog workload group.
HA_NFS_SCRIPT_EXTENSION=<pkgname>
This will allow only the package control script <pkgname>.control.script to execute the HA NFS script. Installation Step: LS540 The following steps customize the hanfs.<pkgname> scripts. They customize all required directories for the usage of HA NFS. All directories that are handled by the automounter must be exported by the scripts if they are part of the packages.
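Assuming the hanfs script uses the XFS[] export array of the Serviceguard NFS toolkit, the exported directories could be listed similar to this sketch; the node names and the C11 system ID are examples:

XFS[0]="-o root=node1:node2 /export/sapmnt/C11"
XFS[1]="-o root=node1:node2 /export/usr/sap/trans"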
AUTO_MASTER="/etc/auto_master"
AUTOMOUNT_OPTIONS="-f $AUTO_MASTER"
AUTOMOUNTD_OPTIONS="-L"
AUTOFS=1
Older installations on HP-UX and installations without autofs require a slightly different syntax for the "old" automounter:
AUTOMOUNT=1
AUTO_MASTER="/etc/auto_master"
AUTO_OPTIONS="-f $AUTO_MASTER"
Installation Step: IS810 Make sure that at least one NFS client daemon and one NFS server daemon is configured to run. This is required for the automounter to work. Check the listed variables in /etc/rc.config.
Database Configuration This section deals with additional database specific installation steps and contains the following: • Additional Steps for Oracle • Additional Steps for MAXDB
Additional Steps for Oracle The Oracle RDBMS includes a two-phase instance and crash recovery mechanism that enables a faster and predictable recovery time after a crash. The instance and crash recovery is initiated automatically and consists of two phases: Roll-forward phase: Oracle applies all committed and uncommitted changes in the redo log files to the affected data blocks.
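The time spent in the roll-forward phase can be bounded with Oracle's fast-start checkpointing. As a hedged illustration only, an entry like the following could be added to the init<SID>.ora parameter file; the 300-second target is an example value, not a recommendation:

fast_start_mttr_target = 300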
Be careful if these files were customized after the SAP installation. Oracle Database Step: OR880 Be sure to configure and install the required Oracle NLS files and client libraries as mentioned in section Oracle Storage Considerations included in chapter Planning the Storage Layout. Also refer to SAP OSS Note 180430 for more details. Optional Step: OR890 If you use more than one SAP system inside of your cluster... It is possible that more than one database is running on the same node.
whether there is an invalid connection that should be terminated. It finds the dead connections and returns an error, causing the server process to exit. Oracle Step: OR940 Additional steps for Oracle 9i RDBMS: Some Oracle 9i installers create symbolic links in the client library directory /oracle/client/92x_64/lib that reference SID-specific libraries residing in the $ORACLE_HOME/lib of that database instance.
Make sure that /usr/spool exists as a symbolic link to /var/spool on all cluster nodes on which the database can run. MAXDB Database Step: SD950 Configure the XUSER file in the adm user home directory. The XUSER file in the home directory of the SAP Administrator keeps the connection and grant information for a client connecting to the SAPDB database. The XUSER content needs to be adapted to the relocatable IP the SAPDB RDBMS is running on.
SAP ABAP Engine specific configuration steps Logon as adm on the primary node on which a Central Instance or the System Central Services have been installed. The appropriate Serviceguard package should still run on this host in debug mode. Installation Step: IS1100 For SAP Central Instance virtualization, change into the profile directory by typing the alias: cdpro In the DEFAULT.PFL change the following entries and replace the hostname with the relocatable name if you cluster a ci component.
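For example, the affected DEFAULT.PFL entries typically include lines of the following form; dbreloc and cireloc are example relocatable names, and the exact set of entries depends on the installed release:

SAPDBHOST = dbreloc
rdisp/mshost = cireloc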
SAPLOCALHOST is set to the hostname by default at startup time and is used to build the SAP application server name: <SAPLOCALHOST>_<SID>_<INSTNR>. This parameter represents the communication path inside an SAP system and between different SAP systems. SAPLOCALHOSTFULL is used for RFC connections. Set it to the fully qualified hostname. The application server name appears in the server list held by the message server, which contains all instances, hosts and services of the instances.
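In a clustered instance profile these parameters would therefore typically be set to the relocatable name, as in this illustrative sketch where cireloc and the domain are examples:

SAPLOCALHOST = cireloc
SAPLOCALHOSTFULL = cireloc.mydomain.com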
An instance definition for a particular operation mode consists of the number and types of work processes as well as the start and instance profiles (starting with version 3.0 the CCMS allows profile maintenance from within the SAP system). When defining an instance for an operation mode, enter the hostname and the system number of the application server. By using the relocatable hostname to fill in the hostname field, the instance remains under control of the CCMS after a failover without any change.
admin/host/<SID>
For Oracle databases replace the hostname in the connection string:
jdbc/pool/<SID>/Url
jdbc:oracle:thin:@<dbreloc>:1527:<SID>
Installation Step: NW04J1220 These settings have to be adjusted for the switchover of the J2EE part of the SAP WEB AS; the following configuration has to be performed in the Offline Configuration Editor: ▲ Log on to the Offline Configuration Editor. Table 3-9 IS1130 Installation Step Choose...
4 SAP Supply Chain Management Within SAP Supply Chain Management (SCM) scenarios two main technical components have to be distinguished: the APO System and the liveCache. An APO System is based on SAP Application Server technology. Thus, sgesap/sapinstance and sgesap/dbinstance modules can be used or ci, db, dbci, d and sapnfs legacy packages may be implemented for APO. These APO packages are set up similar to Netweaver packages. There is only one difference. APO needs access to liveCache client libraries.
for reference ONLY and it is recommended to read and follow the appropriate SAP OSS notes for SAP's latest recommendations. Whenever possible the SAP OSS note number is given. More About Hot Standby A fatal liveCache failure results in a restart attempt of the liveCache instance. This will take place either locally or, as part of a cluster failover, on a remote node. It is a key aspect here that liveCache is an in-memory database technology.
The detection of volumes that need replication as part of the standby startup is dynamically identified within the startup procedure of the standby. It does not require manual maintenance steps to trigger volume pair synchronizations and subsequent split operations. Usually, synchronizations occur only in rare cases, for example for the first startup of a standby or if a standby got intentionally shut down for a longer period of time.
NOTE: The data and log disk spaces of MAXDB are called devspaces. For security and performance reasons SAP recommends placing the devspaces on raw devices. Option 2: Non-MAXDB Environments Cluster Layout Constraints: • There is no MAXDB or additional liveCache running on cluster nodes. In particular, the APO System RDBMS is based on either ORACLE or DB2, but not on MAXDB. • There is no hot standby liveCache system configured. Often APO does not rely on MAXDB as underlying database technology.
this directory to a backup location. This information is then used to determine the reason for the crash. In HA scenarios, for liveCache versions lower than 7.6, this directory should move with the package. SAP allows you to redefine this path for each liveCache/MAXDB instance individually. SGeSAP expects the work directory to be part of the lc package. The mount point moves from /sapdb/data/wrk to /sapdb/data//wrk.
MAXDB distinguishes an instance-dependent path /sapdb/ and two instance-independent paths, called IndepData and IndepPrograms. By default all three point to a directory below /sapdb. The paths can be configured in a configuration file called /var/spool/sql/ini/SAP_DBTech.ini. Depending on the version of the MAXDB database this file contains different sections and settings. A sample SAP_DBTech.ini for a host with a MAXDB 7.4 (LC1) and an APO 3.1 using a MAXDB 7.
1. Log on as <lcsid>adm on the machine on which liveCache was installed. Make sure you have mounted a sharable logical volume on /sapdb/<LCSID>/wrk as discussed above.
2. Change the path of the runtime directory of the liveCache and move the files to the new logical volume accordingly.
cd /sapdb/data/wrk/<LCSID>
find . -depth -print | cpio -pd /sapdb/<LCSID>/wrk
cd ..
liveCache Installation Step: LC060 Do the following to continue:
1. Copy the content of the <lcsid>adm home directory to the backup node. This is a local directory on each node.
2. Rename the environment scripts on the secondary nodes. Some of the environment scripts may not exist. For example:
su - <lcsid>adm
mv .dbenv_<primary>.csh .dbenv_<secondary>.csh
mv .dbenv_<primary>.sh .dbenv_<secondary>.sh
For liveCache 7.6:
su - <lcsid>adm
mv .lcenv_<primary>.csh .lcenv_<secondary>.csh
mv .lcenv_<primary>.sh .lcenv_<secondary>.sh
Add all relocatable IP address information to /etc/hosts. Remember to add the heartbeat IP addresses.
liveCache Installation Step: LC120 If you use DNS: Configure /etc/nsswitch.conf to avoid problems. For example:
hosts: files [NOTFOUND=continue UNAVAIL=continue TRYAGAIN=continue] dns
HP-UX Setup for Option 4 This section describes how to install a hot standby instance on the secondary node.
The Raidmanager binaries need to be accessible via the PATH of root. Two HORCM instances can be configured on each node. Hot standby Installation Step: LC157 The activation of the hot standby happens on the master instance. Log on to the instance via dbmcli and issue the following commands: hss_enable node=<standby node> lib=/opt/cmcluster/sap/lib/hpux64/librtehssgesap.so
package_type     failover
:
ip_address       ...
:
service_name     lcknl
service_cmd      "/sapdb/<LCSID>/db/sap/lccluster monitor"
service_restart  3
All other standard parameters should be chosen as appropriate for the individual setup. The sgesap/livecache module parameters of the package configuration file are as follows: lc_system determines the name of the clustered liveCache. The lc_virtual_hostname specified corresponds to the virtual hostname of the liveCache.
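An illustrative set of values for these attributes might look as follows; LC1 and the hostname lcreloc are examples, and lc_startmode is described further below:

lc_system            LC1
lc_virtual_hostname  lcreloc
lc_startmode         online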
Change the liveCache instance name to . Redo the above steps for LDA. SGeSAP Legacy Package Configuration This section describes how to configure the SGeSAP lc package. liveCache Installation Step: LC165 Install the product depot file for SGeSAP (T2803BA) using swinstall (1m) if this has not been done already. The installation staging directory is /opt/cmcluster/sap. All original product files are copied there for reference purposes.
### Add following line
. /etc/cmcluster/<LCSID>/sapwas.cntl start
test_return 51
}
function customer_defined_halt_cmds
{
### Add following line
. /etc/cmcluster/<LCSID>/sapwas.cntl stop
test_return 52
}
liveCache Installation Step: LC190 The /etc/cmcluster/<LCSID>/sap.config configuration file of a liveCache package is similar to the configuration file of Netweaver Instance packages. The following standard parameters in sap.
A hot standby initialization during startup might require hardware-based copy mechanisms of the underlying storage array. Currently HP StorageWorks XP arrays with BCV volumes are supported. The following parameter needs to be set: LC_COPY_MECHANISM=BUSINESSCOPY Optional hot standby Installation Step: LC197 A hot standby initialization procedure can perform a plausibility check of its storage configuration.
package with storage option 1, 2 or 3 is used, Serviceguard will switch the package and try to restart the same instance on different hardware. Monitoring begins with package startup. At this point, the monitor will make sure that liveCache is working only up to the point that is specified in lc_startmode (legacy: LCSTARTMODE). For example, if the mode is set to offline, only the vserver processes will be part of the monitoring. Still, the monitor detects any manual state change of liveCache.
Figure 4-3 Example HA SCM Layout liveCache Installation Step: GS220 Run SAP transaction LC10 and configure the logical liveCache names LCA and LCD to listen to the relocatable IP of the liveCache package. liveCache Installation Step: GS230 Do the following to continue: 1. 2. Configure the XUSER file in the APO user home and liveCache user home directories. If an .XUSER file does not already exist, you must create it.
# dbmcli -n <relocatable IP> -d <LCSID> -u SAPRIS,SAP -uk DEFAULT -us SAPRIS,SAP -up "SQLMODE=SAPR3; TIMEOUT=0; ISOLATION=0;"
ADMIN/ key
# dbmcli -n <relocatable IP> -d <LCSID> -u control,control -uk c -us control,control
ONLINE/ key
# dbmcli -n <relocatable IP> -d <LCSID> -u superdba,admin -uk w -us superdba,admin
LCA key
# dbmcli -n <relocatable IP> -d <LCSID> -us control,control -uk 1LCA -us control,control
NOTE: Refer to the SAP documentation to learn more about the dbmcli syntax.
3. Make sure /sapdb/programs and /sapdb/data exist as empty mountpoints on all hosts of the liveCache package. Also make sure /export/sapdb/programs and /export/sapdb/data exist as empty mountpoints on all hosts of the sapnfs package.
4. Add the following entries to /etc/auto.direct on all hosts of the liveCache package:
/sapdb/programs <nfsreloc>:/export/sapdb/programs
/sapdb/data <nfsreloc>:/export/sapdb/data
For option 4: No changes are required.
5 SAP Master Data Management (MDM) SGeSAP legacy packaging provides a cluster solution for server components of SAP Master Data Management (MDM) 5.5, i.e. the MDB, MDS, MDIS and MDSS servers. They can be handled as a single package or split up into four individual packages. Master Data Management - Overview Figure 5.1 provides a general flow of how SGeSAP works with MDM. • The Upper Layer - contains the User Interface components. • The Middle Layer - contains the MDM Server components.
Table 5-1 MDM User Interface and Command Line Components (Category / Description):
MDM GUI (Graphical User Interface) clients - MDM Console, MDM Data Manager Client, MDM Import Manager.... Allows you to use, administer and monitor the MDM components. For example, the MDM Console allows you to create and maintain the structure of MDM repositories as well as to control access to them. NOTE: The MDM GUI interface is not relevant for the implementation of the SGeSAP scripts.
MDM 5.5 SP05 - Solution Operation Guide As of the writing of this document, the following SAP notes contain the most recent information on the MDM installation as well as corrections to the installation: 1025897 - MDM 5.5 SP05 Release Note 822018 - MDM 5.5 Release Restriction Note Installation and Configuration Considerations The following sections contain a step-by-step guide on the components required to install MDM in a SGeSAP (Serviceguard extension for SAP) environment.
/opt/MDM The /opt/MDM mount point is used for storing the binaries of the MDM servers. These are static files installed once during the installation. In this specific cluster configuration each cluster node had its own copy of the /opt/MDM file system. /opt/MDM only contains a few files and, because of its small size, a separate storage volume / mount point was not created; the directory was instead kept in the existing /opt directory.
/etc/hosts
----------
172.16.11.95 mdsreloc      # MDM reloc address for MDS
172.16.11.96 mdmnfsreloc   # MDM reloc address for NFS
172.16.11.97 mdbreloc      # MDM reloc address for DB
172.16.11.98 clunode1      # cluster node 1
172.16.11.99 clunode2      # cluster node 2
Setup Step: MDM020 Run ioscan and insf to probe/install new disk devices. Scan for new disk devices on the first cluster node (clunode1).
mkdir -p /opt/MDM
Setup Step: MDM040 Create file systems and mount points on the other cluster nodes. All nodes of the cluster require a copy of the configuration changes. Copy LVM volume information to the second cluster node:
clunode1:
vgchange -a n /dev/vgmdmoradb
vgexport -p -s -m /tmp/vgmdmoradb.map /dev/vgmdmoradb
vgchange -a n /dev/vgmdmuser
vgexport -p -s -m /tmp/vgmdmuser.map /dev/vgmdmuser
scp /tmp/vgmdmoradb.map clunode2:/tmp/vgmdmoradb.map
scp /tmp/vgmdmuser.map clunode2:/tmp/vgmdmuser.map
export ORACLE_HOME=/oracle/MDM/920_64
export ORA_NLS=/oracle/MDM/920_64/ocommon/NLS_723/admin/data
ORA_NLS32=/oracle/MDM/920_64/ocommon/NLS_733/admin/data
export ORA_NLS32
export ORA_NLS33=/oracle/MDM/920_64/ocommon/nls/admin/data
export ORACLE_SID=MDM
NLSPATH=/opt/ansic/lib/nls/msg/%L/%N.cat:/opt/ansic/lib/nls/msg/C/%N.cat
export NLSPATH
PATH=/oracle/MDM/920_64/bin:.
alias __C=`echo "\006"`     # right arrow = forward a character
alias __D=`echo "\002"`     # left arrow = back a character
alias __H=`echo "\001"`     # home = start of line
Setup Step: MDM070 Copy files to second cluster node. All nodes of the cluster require a copy of the configuration changes. Copy modified files to the second cluster node.
vi /etc/cmcluster/MDMNFS/mdmNFS.control.script
VG[0]="vgmdmuser"
LV[0]="/dev/vgmdmuser/lvmdmuser"; \
FS[0]="/export/home/mdmuser"; \
FS_MOUNT_OPT[0]=""; \
FS_UMOUNT_OPT[0]=""; \
FS_FSCK_OPT[0]=""; \
FS_TYPE[0]="vxfs"
IP[0]="172.16.11.96"
SUBNET[0]="172.16.11.0"
The file hanfs.sh contains NFS directories that will be exported with export options. These variables are used by the command exportfs -i to export the file systems and the command exportfs -u to unexport the file systems.
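In this example setup the exported directory could be listed in hanfs.sh roughly as follows; the XFS[] variable name and the export options are assumptions based on the Serviceguard NFS toolkit conventions:

XFS[0]="-o root=clunode1:clunode2 /export/home/mdmuser"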
• (a) = Single MDM Serviceguard package - create a mgroupMDM package. • (b) = Multiple MDM Serviceguard packages - create a mdbMDM package. At this stage the steps in option (a) or (b) are the same. Only the package name is different - the storage volume used is the same in both cases. Setup Step: MDM200 (a) Single MDM Serviceguard package - create a mgroupMDM package. Create Serviceguard templates and edit the files for the mgroupMDM package.
RUN_SCRIPT_TIMEOUT NO_TIMEOUT
HALT_SCRIPT /etc/cmcluster/MDM/mdbMDM.control.script
HALT_SCRIPT_TIMEOUT NO_TIMEOUT
vi /etc/cmcluster/MDM/mdbMDM.control.script
IP[0]="172.16.11.97"
SUBNET[0]="172.16.11.0"
VG[0]="vgmdmoradb"
LV[0]="/dev/vgmdmoradb/lvmdmoradb"; \
FS[0]="/oracle/MDM"; \
FS_MOUNT_OPT[0]=""; \
FS_UMOUNT_OPT[0]=""; \
FS_FSCK_OPT[0]=""; \
FS_TYPE[0]="vxfs"
scp -pr /etc/cmcluster/MDM clunode2:/etc/cmcluster/MDM
cmapplyconf -P /etc/cmcluster/MDM/mdbMDM.config
NOTE: As the /home/mdmuser/mdis file system is NFS based, no storage volumes or file systems have to be specified in the package configuration files.
clunode1:
cmmakepkg -s /etc/cmcluster/MDM/mdisMDM.control.script
cmmakepkg -p /etc/cmcluster/MDM/mdisMDM.config
vi /etc/cmcluster/MDM/mdisMDM.config
PACKAGE_NAME mdisMDM
NODE_NAME *
RUN_SCRIPT /etc/cmcluster/MDM/mdisMDM.control.script
RUN_SCRIPT_TIMEOUT NO_TIMEOUT
HALT_SCRIPT /etc/cmcluster/MDM/mdisMDM.control.
clunode1:
cmmakepkg -s /etc/cmcluster/MDM/masterMDM.control.script
cmmakepkg -p /etc/cmcluster/MDM/masterMDM.config
vi /etc/cmcluster/MDM/masterMDM.config
PACKAGE_NAME masterMDM
NODE_NAME *
RUN_SCRIPT /etc/cmcluster/MDM/masterMDM.control.script
RUN_SCRIPT_TIMEOUT NO_TIMEOUT
HALT_SCRIPT /etc/cmcluster/MDM/masterMDM.control.script
HALT_SCRIPT_TIMEOUT NO_TIMEOUT
NOTE: At this stage file /etc/cmcluster/MDM/masterMDM.control.
[ ] Oracle 9i Management
Installation Type
————
[ ] Enterprise Edition
[X] Standard Edition
[ ] Custom
Database Configuration
————
[X] General Purpose
[ ] Transaction Processing
[ ] Data Warehouse
[ ] Customized
[ ] Software Only
Database Identification
————
Global Database Name: MDM
SID: MDM
Database File Location
————
Directory for Database Files: /oracle/MDM/920_64/oradata
Database Character Set
————
[ ] Use the default character set
[X] Use the Unicode (AL32UTF8) as the character set
Choose JDK Home Dire
Source:
Path: /KITS/oa9208/stage/products.xml
Destination:
Name: MDM
Path: /oracle/MDM/920_64
Summary
————
Success
Migrate the MDM database to 9208
————
See Oracle 9i Patch Set Notes Release 2 (9.2.0.8) Patch Set 7 for HP-UX PA-RISC (64-bit)
runInstaller
Start the "Oracle Server" - Client installation for user mdmuser The next steps are ore installing the "Oracle Server - Client" bits so that user mdmuser will be able to run sqlplus commands over a network connection. The command sqlplus system/passwd@MDM is used to connect to the database. The Oracle client software will be installed in directory /home/mdmuser/oracle/MDM/920_64. NOTE: Oracle Server - Client version 9.2.0.8 is required for MDM. Start the installation executing the runInstaller command.
[X] Oracle8i or later database service
[ ] Oracle 8i release 8.0 database service
Net Service Name Configuration
————
database used: MDM
protocol used: TCP
db host name: 172.16.11.97
Configure another service: no
Net Service Configuration Complete
Exit
End of Installation
Setup Step: MDM220 Upgrade to Oracle 9.2.0.8 for user mdmuser Upgrade the "Oracle Server - Client" bits to Oracle 9.2.0.8 with Oracle patchset p4547809_92080_HP64.zip.
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.16.11.97)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = MDM)
)
)
/oracle/MDM/920_64/network/admin/listener.ora
————
# LISTENER.ORA Network Configuration File:
# /oracle/MDM/920_64/network/admin/listener.ora
# Generated by Oracle configuration tools.
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.16.11.
sqlplus system/passwd@MDM
SQL*Plus: Release 9.2.0.8.0 - Production on Fri Feb 16 04:30:22 2007
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 Production
SQL>
Repeat the connect test for user mdmuser
su - mdmuser
sqlplus system/passwd@MDM
SQL*Plus: Release 9.2.0.8.
• extract the zip archive
• uncompress and install the kit into the /opt/MDM directory
• create a /home/mdmuser/mds directory
• start the mds process (this will create the mds environment in the current directory)
su
cd /KITS/MDM_Install/MDM55SP04P03/MDM SERVER 5.5/HPUX_64\
/usr/local/bin/unzip MDS55004P_3.ZIP
cd /opt
zcat "/KITS/MDM_Install/MDM55SP04P03/MDM SERVER\
5.5/HPUX_64/mdm-server-5.5.35.16-hpux-PARISC64-aCC.tar.
Edit the mds.ini, mdis.ini and mdss.ini files
Edit the mds.ini, mdis.ini and mdss.ini files and add the virtual IP hostname for the mds component to the [GLOBAL] section. The mdis and mdss components use this information to connect to the mds server.
/home/mdmuser/mds/mds.ini
[GLOBAL]
Server=mdsreloc
.
.
/home/mdmuser/mdis/mdis.ini
[GLOBAL]
Server=mdsreloc
.
.
/home/mdmuser/mdss/mdss.ini
[GLOBAL]
Server=mdsreloc
.
.
typeset MDM_SCR="/etc/cmcluster/MDM/sapmdm.sh"
typeset MDM_ACTION="stop"
typeset MDM_COMPONENT="mgroup"
${MDM_SCR} "${MDM_ACTION}" "${MDM_COMPONENT}"
test_return 52
}
Setup Step: MDM238 (b) Multiple Serviceguard package - continue with the mdsMDM, mdisMDM, mdssMDM and masterMDM configuration If the Multiple Serviceguard package option was chosen in the beginning of this chapter, continue with the configuration of the mdsMDM, mdisMDM, mdssMDM and masterMDM packages.
Table 5-3 MDM parameter descriptions (Parameter / Description):
MDM_DB=ORACLE - The database type being used.
MDM_DB_SID=MDM - The Oracle SID of the MDM database.
MDM_DB_ADMIN=oramdm - The Oracle admin user.
MDM_LISTENER_NAME="LISTENER" - The name of the Oracle listener.
MDM_USER=mdmuser - The MDM admin user.
MDM_MDS_RELOC=172.16.11.96 - The virtual/relocatable IP address of the MDS component. The IP address is required to run clix commands. Note: MDSS and MDIS do not require a virtual IP address.
MDM_MDS_RELOC=172.16.11.96
MDM_PASSWORD=""
MDM_REPOSITORY_SPEC=" PRODUCT_HA:MDM:o:mdm:sap \
PRODUCT_HA_INI:MDM:o:mdm:sap \
PRODUCT_HA_NOREAL:MDM:o:mdm:sap \
"
MDM_CRED="Admin: "
MDM_MONITOR_INTERVAL=60
MDM_MGROUP_DEPEND="mdb mds mdis mdss"
Copy file sap.config to the second node:
scp -pr /etc/cmcluster/MDM/sap.config \
clunode2:/etc/cmcluster/MDM
SERVICE_FAIL_FAST_ENABLED NO
SERVICE_HALT_TIMEOUT 60
vi /etc/cmcluster/MDM/mgroupMDM.control.script
SERVICE_NAME[0]="mgroupMDMmon"
SERVICE_CMD[0]="/etc/cmcluster/MDM/sapmdm.sh check mgroup"
SERVICE_RESTART[0]=""
To activate the changes in the configuration files run the following commands:
cmapplyconf -P /etc/cmcluster/MDM/mgroupMDM.config
The masterMDM package is responsible for starting other Serviceguard packages (mdmDB, mdsMDM, mdisMDM and mdssMDM) in a cluster in the required order as specified in file sap.config. With this information, the masterMDM will run the appropriate cmrunpkg commands to start the packages. Before executing cmrunpkg make sure that the MDM server packages (mdmDB, mdsMDM, mdisMDM and mdssMDM) are halted from the previous installation tasks.
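The resulting start sequence therefore corresponds to commands of roughly this form, issued by the master package in the order taken from sap.config; the node name and the explicit ordering below are only an illustration:

cmrunpkg -n clunode1 mdbMDM
cmrunpkg -n clunode1 mdsMDM
cmrunpkg -n clunode1 mdisMDM
cmrunpkg -n clunode1 mdssMDM
cmmodpkg -e mdbMDM     # re-enable package switching (repeat for the other packages)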
6 SGeSAP Cluster Administration SGeSAP clusters follow characteristic hardware and software setups. An SAP application is no longer treated as though it runs on a dedicated host. It is wrapped up inside one or more Serviceguard packages and packages can be moved to any of the hosts that are inside of the Serviceguard cluster. The Serviceguard packages provide a virtualization layer that keeps the application independent of specific server hardware.
needed. Servers outside of the cluster that have External Dialog Instances installed are set up in a similar way. Refer to /etc/auto.direct for a full list of automounter file systems of SGeSAP. It enhances the security of the installation if the directories below /export are exported without root permissions. The effect is that the root user cannot modify these directories or their contents. With standard permissions set, the root user cannot even see the files.
If the package still runs, all monitors will begin to work immediately and the package failover mechanism is restored. SAP Software Changes During installation of the SGeSAP Integration for SAP releases with kernel <7.0, SAP profiles are changed to contain only relocatable IP-addresses for the database as well as the Central Instance. You can check this using transaction RZ10. In the DEFAULT.
NOTE: If an instance is running on the standby node in normal operation and is stopped during the switchover Only configure the update service on a node for Application Services running on the same node. As a result, the remaining servers, running on different nodes, are not affected by the outage of the update server. However, if the update server is configured to be responsible for application servers running on different nodes, any failure of the update server leads to subsequent outages at these nodes.
• SGeSAP packages that do not contain database components can be configured to run on any node in the cluster. This includes ci, jci, [a]rep and d package types. • SGeSAP database instance packages must be configured to run only on either Integrity or HP 9000 nodes. This includes db, jdb and lc package types and all package types composed of these. NOTE: Always check for the latest versions of the Serviceguard Extension for SAP manual and release notes at http://docs.hp.com/en/ha.html.
Currently Oracle does not support the failover of databases based on HP 9000 binaries to nodes using executables compiled for the Integrity platform. Consequently, the use of an Oracle database package in a mixed environment is restricted to one platform. Therefore the database package must be kept on either purely HP 9000 or purely Integrity nodes. The installation and configuration of a database package should have been done already, as mentioned in the prerequisites.