Managing Serviceguard Extension for SAP Version B.05.
© Copyright 2000-2010 Hewlett-Packard Development Company, L.P.
Legal Notices
Serviceguard, Serviceguard Extension for SAP, Serviceguard Extension for RAC, Metrocluster and Serviceguard Manager are products of Hewlett-Packard Company, L.P., and all are protected by copyright. Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.
Printing History
Table 1 Editions and Releases
Printing Date | Part Number | Edition | SGeSAP Release | Operating System Releases
June 2000 | B7885-90004 | Edition 1 | B.03.02 | HP-UX 10.20 and HP-UX 11.00
March 2001 | B7885-90009 | Edition 2 | B.03.03 | HP-UX 10.20, HP-UX 11.00 and HP-UX 11i
June 2001 | B7885-90011 | Edition 3 | B.03.04 | HP-UX 10.20, HP-UX 11.00 and HP-UX 11i
March 2002 | B7885-90013 | Edition 4 | B.03.06 | HP-UX 11.00 and HP-UX 11i
June 2003 | B7885-90018 | Edition 5 | B.03.
• Chapter 4 "SAP Supply Chain Management" specifically deals with the SAP SCM and liveCache technology, gives a Storage Layout proposal and leads through the SGeSAP cluster conversion.
• Chapter 5 "SAP Master Data Management (MDM)" specifically deals with the SAP MDM technology and leads through the SGeSAP cluster conversion.
• Chapter 6 "SGeSAP Cluster Administration" covers SGeSAP Administration aspects, as well as the use of different HP-UX platforms in a mixed cluster environment.
1 Designing SGeSAP Cluster Scenarios This chapter introduces the basic concepts used by the HP Serviceguard Extension for SAP (SGeSAP) and explains several naming conventions.
and virtualization by reading the Serviceguard product manual, Managing Serviceguard, at www.hp.com/go/hpux-serviceguard-docs —> HP Serviceguard —> User guide. Serviceguard packages can be distinguished into legacy packages and module-based packages. SGeSAP provides solutions for both approaches.
Service. These services are traditionally combined and run as part of a unique SAP Instance that is referred to as JAVA System Central Service Instance (SCS) for SAP JAVA applications or ABAP System Central Service Instance (ASCS) for SAP ABAP applications. If an SAP application has both JAVA and ABAP components, it is possible to have both—an SCS and an ASCS instance—for one SAP application. In this case, both instances are SPOFs that require clustering.
NOTE:
• Module-based SGeSAP database packages cannot be combined with a legacy-based NFS toolkit to create a single package.
• The major advantage of this approach is that a failed SAP package never causes a costly failover of the underlying database, since the database is separated into a different package.
• It is not a requirement, but combining SCS and ASCS in a single package can help reduce the complexity of a cluster setup.
It is a best practice to base the package naming on the SAP instance naming conventions whenever possible. Each package name should also include the SAP System Identifier (SID) of the system to which the package belongs. If similar packages of the same type get added later, they have a distinct namespace because they have a different SID. Example: A simple mutual failover scenario for an ABAP application defines two packages, called dbSID and ascsSID (or ciSID for old SAP releases).
Figure 1-2 One-Package Failover Scenario Follow-and-Push Clusters with Replicated Enqueue In case an environment has very high demands regarding guaranteed uptime, it makes sense to activate a Replicated Enqueue with SGeSAP. With this additional mechanism, it is possible to failover ABAP and/or JAVA System Central Service Instances without impacting ongoing transactions on Dialog Instances.
Figure 1-3 Replicated Enqueue Clustering for ABAP and JAVA Instances Enqueue Services also come as an integral part of each ABAP DVEBMGS Central Instance. This integrated version of the Enqueue Service is not able to utilize replication features. The DVEBMGS Instance needs to be split up into a standard Dialog Instance and an ABAP System Central Service Instance (ASCS).
instances. It is also supported to combine the replication instances within one SGeSAP package. It is also supported to combine ASCS and SCS in one package, but only if the two ERS instances are likewise combined in another package. It is not supported to combine ASCS and SCS in one package and keep the two ERS instances in two separate packages. Otherwise, situations can arise in which a failover of the combined ASCS/SCS package is not possible.
Dialog Instance packages allow an uncomplicated approach to achieve abstraction from the hardware layer. It is possible to shift Dialog Instance packages between servers at any given time. This might be desirable if CPU resource consumption becomes poorly balanced over time due to changed usage patterns. Dialog Instances can then be moved between the different hosts to address this.
The described shutdown operation for Dialog Instance packages can be specified in any SGeSAP legacy package directly. In modularized SGeSAP, it is recommended to use generic Serviceguard package dependencies instead. Handling of Redundant Dialog Instances Non-critical SAP Application Servers can be run on HP-UX, Novell SLES or RedHat RHEL Linux application server hosts. These hosts do not need to be part of the Serviceguard cluster.
Configuring the Update Service as part of the packaged Central Instance is recommended. Consider using local update servers only if performance issues require it. In this case, configure Update Services for application services running on the same node. This ensures that the remaining SAP Instances on different nodes are not affected if an outage occurs on the Update Server. Otherwise, a failure of the Update Service will lead to subsequent outages at different Dialog Instance nodes.
2 Planning the Storage Layout Volume managers are tools that let you create units of disk storage known as storage groups. Storage groups contain logical volumes for use on single systems and in high availability clusters. In Serviceguard clusters, package control scripts activate storage groups. Two volume managers can be used with Serviceguard: the standard Logical Volume Manager (LVM) of HP-UX and the Veritas Volume Manager (VxVM). SGeSAP can be used with both volume managers.
• Whether the file system needs to be kept as a local copy on internal disks of each node of the cluster.
• Whether the file system needs to be shared on a SAN storage device to allow failover and exclusive activation.
• Whether the file system needs to provide shared access to more than one node of the cluster at the same time.
NOTE: SGeSAP packages and service monitors require SAP tools. Patching the SAP kernel sometimes also patches SAP tools.
To automatically synchronize local copies of the executables, SAP components deliver the sapcpe mechanism. With every startup of the instance, sapcpe matches new executables stored centrally with those stored locally. Directories that Reside on Shared Disks Volume groups on SAN shared storage are configured as part of the SGeSAP packages.
Table 2-3 System and Environment Specific Volume Groups Mount Point Access Point Potential owning packages /export/sapmnt/ shared disk and HA NFS VG Name Device minor number db dbci jdb jdbjci sapnfs /export/usr/sap/trans db dbci sapnfs /usr/sap/put shared disk none The tables can be used to document used device minor numbers. The device minor numbers of logical volumes need to be identical for each distributed volume group across all cluster nodes.
• /home/adm — the home directory of the SAP system administrator with node specific startup log files.
• /usr/sap//SYS/exe/run — the directory that holds a local copy of all SAP instance executables, libraries, and tools (optional for kernel 7.x and higher).
• /usr/sap/tmp — the directory where the SAP operating system collector keeps monitoring data of the local operating system.
• /usr/sap/hostctrl — the directory where SAP control services for the local host are kept (kernel 7.
The table can be used to document used device minor numbers. The device minor numbers of logical volumes need to be identical for each distributed volume group across all cluster nodes. If you have more than one system, place /usr/sap/put on separate volume groups created on shared drives. The directory should not be added to any package. This ensures that they are independent from any SAP WAS system and you can mount them on any host by hand if needed.
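A minimal sketch of how matching device minor numbers can be verified and set up across nodes is shown below; the volume group name vgsapC11 and the minor number 0x080000 are illustrative values only.
# on each node, list the minor numbers that are already in use
ll /dev/*/group
# on the node where the volume group was created, export a map file
vgexport -p -s -m /tmp/vgsapC11.map /dev/vgsapC11
# on every other cluster node, create the group file with the same,
# still unused minor number and import the volume group
mkdir /dev/vgsapC11
mknod /dev/vgsapC11/group c 64 0x080000
vgimport -s -m /tmp/vgsapC11.map /dev/vgsapC11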
The /usr/sap/tmp might or might not be part of the local root file system. This is the working directory of the operating system collector process saposcol. The size of this directory is usually not more than a few Megabytes. Database Instance Storage Considerations SGeSAP internally supports clustering of database technologies of different vendors. The vendors have implemented individual database architectures.
Table 2-7 NLS Files - Default Location Kernel Version Client NLS Location <=4.6 $ORACLE_HOME/ocommon/NLS[_]/admin/data 4.6 /oracle//ocommon/nls/admin/data 6.x, 7.x /oracle/client//ocommon/nls/admin/data A second type of NLS directory, called the "server" NLS directory, always exists. This directory is created during database or SAP Central System installations.
Table 2-8 File System Layout for NFS-based Oracle Clusters Mount Point Access Point Potential Owning Packages VG Type $ORACLE_HOME shared disk dbci db instance specific /oracle//saparch jdb”/sapreorg jdbjci /oracle//sapdata1 dbcijci Volume Group Name Device Minor Number ...
Table 2-9 File System Layout for Oracle RAC in SGeSAP CFS Cluster Mount Point Access Point $ORACLE_HOME/oracle/client shared disk and CFS Potential Owning Packages /oracle//oraarch /oracle//sapraw /oracle//saparch /oracle//sapbackup /oracle//sapcheck /oracle//sapreorg /oracle//saptrace /oracle//sapdata1... /oracle//sapdatan /oracle//origlogA /oracle//origlogB /oracle//mirrlogA /oracle//mirrlogB tnsnames.
/sapdb/programs/runtime/7401=7.4.1.0, /sapdb/programs/runtime/7402=7.4.2.0, • For MAXDB and liveCache Version 7.5 (or higher), the SAP_DBTech.ini file does not contain sections [Installations], [Databases], and [Runtime]. These sections are stored in separate files Installations.ini, Databases.ini, and Runtimes.ini in the IndepData path /sapdb/data/config. A sample SAP_DBTech.ini, Installations.ini, Databases.ini, and Runtimes.ini for a host with a liveCache 7.5 (LC2) and an APO 4.1 using a MAXDB 7.
7.6, this directory should move with the package. Therefore, SAP provided a way to redefine this path for each SAPDB/MAXDB individually. SGeSAP expects the work directory to be part of the database package. The mount point moves from /sapdb/data/wrk to /sapdb/data//wrk for the clustered setup. This directory should not be mixed up with the directory /sapdb/data//db/wrk that might also exist. Core files of the kernel processes are written into the working directory.
NOTE: In HA scenarios, valid for SAPDB/MAXDB versions up to 7.6, the runtime directory /sapdb/data/wrk is configured to be located at /sapdb//wrk to support consolidated failover environments with several MAXDB instances. The local directory /sapdb/data/wrk is referred to by the VSERVER processes (vserver, niserver), which means VSERVER core dump and log files will be located there.
Table 2-11 File System Layout for DB2 Clusters Mount Point Access Point Potential Owning Packages VG Type /db2/db2 shared disk db database specific /db2/db2/db2_software dbci /db2/ jdb /db2//log_dir jdbjci /db2//sapdata1 … /db2//sapdata /db2//saptemp1 /db2//db2dump /db2//db2 36 Planning the Storage Layout VG Name Device Minor Number
3 Step-by-Step Cluster Conversion This chapter describes in detail how to implement a SAP cluster using Serviceguard and Serviceguard Extension for SAP (SGeSAP). It gives examples for each task in great detail. Actual implementations might require a slightly different approach. Many steps synchronize cluster host configurations or virtualize SAP instances manually. If these tasks are already covered by different means, it might be sufficient to quickly check that the requested result is already achieved.
SGeSAP modules are implemented to work independently of the package naming used. For these packages, the above naming scheme is a recommendation. For a description of those packages and their combination restrictions, refer to chapter 1—“Designing SGeSAP Cluster Scenarios.” The legacy package installation steps cover HP-UX 11i v1, HP-UX 11i v2, and HP-UX 11i v3 using Serviceguard 11.16 or higher. Modular packages can be used with HP-UX 11i v2 and HP-UX 11i v3 using Serviceguard 11.18 or higher.
installed into a virtualized environment that obsoletes the SAP Application Server Configuration steps that usually concluded a manual cluster conversion. Therefore, it is important to first decide which kind of SAP installation is intended. The installation of a SAP High Availability System was introduced with Netweaver 2004s. For Netweaver 2004 JAVA-only installations, there is a similar High Availability Option for SAPINST. All older SAP kernels need to be clustered manually.
NOTE: For Java-only based installations, the only possible installation option is a High Availability System installation. It is strongly recommended to use the "High Availability System" option for all new installations that are meant to be used with SGeSAP. A SAP Application Server 7.
Refer to the Managing Serviceguard user guide for general information about the generic file content. A minimum configuration will do for the purpose of supporting the SAP installation. At least the following parameters should be edited: package_name, node_name, ip_address and monitored_subnet. The package_name can be chosen freely. It is often a good approach to stick to the naming convention that combines the name of the SAP Instance type and the SAP System ID. Examples: dbciC11, scsC11.
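A minimal excerpt of such a package configuration file might look as follows; the package name, node names, and addresses are example values only, and the exact parameter set depends on the Serviceguard release in use:
package_name      dbciC11
node_name         node1
node_name         node2
monitored_subnet  172.16.11.0
ip_subnet         172.16.11.0
ip_address        172.16.11.95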
Installation Step: IS1330 The installation is done using the virtual IP provided by the Serviceguard package. SAPINST can be invoked with a special parameter called SAPINST_USE_HOSTNAME. This prevents the installer routines from comparing the physical hostname with the virtual address and drawing wrong conclusions. The installation of the entire SAP Application Server 7.0 will happen in several steps, depending on the installation type. Each time a different virtual hostname can be provided.
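For illustration, assuming the package provides the virtual hostname ascsreloc (an example name), SAPINST can be started as follows:
./sapinst SAPINST_USE_HOSTNAME=ascsreloc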
When starting the SAPINST installer for kernel 6.40, the first screen shows installation options that are generated from an XML file called product.catalog located at /IM_OS/SAPINST/UNIX/. The standard catalog file product.catalog has to be either:
• Replaced by product_ha.catalog in the same directory on a local copy of the DVD, or
• The file product_ha.catalog can be passed as an argument to the SAPINST installer
It is recommended to pass the catalog as an argument to SAPINST.
each subnet. The LUNs need to have the size that is required for a SAP Instance directory of the targeted kernel release. Splitting an ABAP Central Instance The SPOFs of the DVEBMGS instance will be isolated in a new instance called ABAP System Central Services Instance ASCS. This instance will replace DVEBMGS for the ci package type. The remaining parts of the Central Instance can be configured as Dialog Instance D.
Create instance profile and startup profile for the ASCS Instance. These profiles get created as adm in the NFS-shared /sapmnt//profile directory. Here is an example template for the instance profile _ASCS_: #---------------------------------# general settings #---------------------------------SAPSYSTEMNAME= INSTANCE_NAME=ASCS SAPSYSTEM= SAPLOCALHOST= SAPLOCALHOSTFULL=.
Execute_06 = local ln -s -f $(DIR_EXECUTABLE)/enserver $(_EN) Start_Program_03 = local $(_EN) pf=$(DIR_PROFILE)/_ASCS_ #----------------------------------------------------------------------# start syslog send daemon #----------------------------------------------------------------------SE =se.
Creation of Replication Instance This section describes how to add Enqueue Replication Services (ERS) to a system that has SAP kernel 4.6, 6.x or 7.0 ASCS and/or SCS instances. The section can be skipped for SAP kernel version 7.10 and later. Use the installation routines for ERS instances as provided by the SAP installer instead. The ASCS (ABAP) or SCS (JAVA) instance will be accompanied with an ERS instance that permanently keeps a mirror of the [A]SCS internal state and memory.
libicuuc.so.30 libsapu16_mt.so libsapu16.so librfcum.so sapcpe sapstart ers.lst For SAP kernel 7.00 or higher, in addition, the following executables need to be copied: sapstartsrv sapcontrol servicehttp/sapmc/sapmc.jar servicehttp/sapmc/sapmc.html servicehttp/sapmc/frog.jar servicehttp/sapmc/soapclient.jar For SAP kernel 7.00 or higher, in addition, the ers.lst file needs the following lines: sapstartsrv sapcontrol servicehttp The following script example cperinit.sh performs this step for 7.00 kernels.
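A minimal sketch of such a script is shown below, assuming the SID and the target ERS instance directory are passed as arguments and that the enqueue server binaries (enserver, enrepserver) belong to the copied file set:
#!/bin/sh
# cperinit.sh - sketch: copy the listed executables from the central
# executable directory into the local ERS instance directory
SID=$1                       # e.g. C11
ERSDIR=$2                    # e.g. /usr/sap/C11/ERS10
SRC=/sapmnt/${SID}/exe
FILES="enserver enrepserver libicudata.so.30 libicuuc.so.30 libsapu16_mt.so \
libsapu16.so librfcum.so sapcpe sapstart ers.lst sapstartsrv sapcontrol"
mkdir -p ${ERSDIR}/exe/servicehttp/sapmc
for f in ${FILES}; do
    cp -p ${SRC}/${f} ${ERSDIR}/exe/${f}
done
for f in sapmc.jar sapmc.html frog.jar soapclient.jar; do
    cp -p ${SRC}/servicehttp/sapmc/${f} ${ERSDIR}/exe/servicehttp/sapmc/${f}
done
# 7.00 kernels: extend the copy list used by sapcpe
printf "sapstartsrv\nsapcontrol\nservicehttp\n" >> ${ERSDIR}/exe/ers.lst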
SCSHOST = <[J]CIRELOC> enque/serverinst = $(SCSID) enque/serverhost = $(SCSHOST) Here is an example template for the startup profile START_ERS_<[A]REPRELOC>: #-------------------------------------------------------------------SAPSYSTEM = SAPSYSTEMNAME = INSTANCE_NAME = ERS #-------------------------------------------------------------------# Special settings for this manually set up instance #-------------------------------------------------------------------SCSID = DIR
NOTE: Repeat the cluster node synchronization steps for each node of the cluster that is different than the primary. • Cluster Node Configuration—this section consists of steps performed on all the cluster nodes, regardless if the node is a primary node or a backup node. NOTE: • Repeat the cluster node configuration steps for each node of the cluster.
Comment out the references to any file system that is classified as a shared directory in chapter 2 from the /etc/fstab. Also, make sure that there are no remaining entries for file systems converted in IS009. MAXDB Database Step: SD040 This step can be skipped for MAXDB instances starting with versions 7.6. MAXDB is not supported on CFS, but can be combined with SAP instances that use CFS.
cfsdgadm add dgC11 all=sw
cfsmntadm add dgC11 cfs01 /usr/sap/ all=rw
After these conversions, it should be possible to start the SAP System manually. If the installation was done using virtual IPs, then the additional packages defined in that step need to be started after starting the CFS packages, but prior to the SAP manual start.
Non-CFS Directory Structure Conversion The main purpose of this section is to ensure the proper LVM layout and the right distribution of the different file systems that reside on shared disks. This section does not need to be consulted when using the HP Serviceguard Storage Management Suite with CFS and shared access Option 3. Logon as root to the system where the SAP Central Instance is installed (primary host).
rm -r * # be careful with this
cd ..
rmdir DVEBMGS
2. Mark all shared volume groups as members of the cluster. This only works if the cluster services are already available. Example:
cd / # umount all logical volumes of the volume group
vgchange -a n
vgchange -c y
vgchange -a e # remount the logical volumes
3. The device minor numbers must be different from all device minor numbers gathered on the other hosts.
Figure 3-1 sapcpe Mechanism for Executables To create local executables, the SAP filesystem layout needs to be changed. The original link /usr/sap//SYS/exe/run needs to be renamed to /usr/sap//SYS/exe/ctrun. A new local directory /usr/sap//SYS/exe/run will then be required to store the local copy. It needs to be initialized by copying the files sapstart and saposcol from the central executable directory /sapmnt//exe. Make sure to match owner, group, and permission settings.
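A possible command sequence for this conversion is sketched below, using C11 as an example SID and c11adm:sapsys as example ownership; adapt names and permissions to the local installation:
cd /usr/sap/C11/SYS/exe
mv run ctrun
mkdir run
cp -p /sapmnt/C11/exe/sapstart /sapmnt/C11/exe/saposcol run/
chown -R c11adm:sapsys run
chmod 755 run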
It should not return any match on any node. Otherwise, refer to the SAP documentation on how to change instance IDs for the relevant instance type. It might be simpler to reinstall one of the conflicting instances. Installation Step: IS065 If local executables are created (refer to step IS050) and failover nodes have no internal application server with local executables installed, distribute the directory tree /usr/sap//SYS from the primary node.
NOTE: Beware of copying over into /etc/passwd if your HP-UX is running in Trusted System mode.
Table 3-3 Password File Users username UID GID home directory shell adm ora sqd sqa sapdb db2
Installation Step: IS090 Look at the service file, /etc/services, on the primary side. Replicate all services listed in Table 3-4 “Services on the Primary Node” that exist on the primary node onto the backup node.
Copy the adm home directory to the backup node(s). This is a local directory on each node. Default home directory path is /home/adm. Installation Step: IS120 On the second node, in the adm home directory the start, stop and environment scripts need to be renamed. If some of the scripts listed in the following do not exist, these steps can be skipped. su mv mv mv mv mv mv mv mv mv mv - adm startsap__ stopsap__ .sapenv_.csh .sapenv_.sh .
If you are using ORACLE: Create a mount point for the Oracle files on the alternate nodes if it is not already there. Example: su - ora mkdir -p /oracle/ exit MAXDB Database Step: SD170 Create a mount point for the SAPDB files on the alternate nodes if it is not already there. Example: su - sqd mkdir -p /sapdb/ DB2 Installation Step: DB180 Create mount points for DB2 files on alternate node if they are not already there.
su - sqd mkdir -p /sapdb/programs mkdir -p /sapdb/data mkdir -p /usr/spool/sql exit Ownership and permissions of these directories should be chosen equally to the already existing directories on the primary host. MAXDB Database Step: SD235 For releases starting with MAXDB 7.5: Copy file /etc/opt/sdb to the alternate cluster nodes. This file contains global path names for the MAXDB instance. MAXDB Database Step: SD236 For releases starting with MAXDB 7.
id_dsa id_dsa.pub The file id_dsa.pub contains the security information (public key) for the user@host pair e.g. root@. This information needs to be added to the file $HOME/.ssh/authorized_keys2 of the root and adm user. Create these files if they are not already there. This will allow the root user on to remotely execute commands via ssh under his own identity and under the identity of adm on all other relevant nodes.
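For illustration, the key pair can be created and distributed as sketched below; the node name node2 and the user c11adm are example values, and whether DSA or RSA keys are used depends on the local ssh policy:
ssh-keygen -t dsa -N "" -f $HOME/.ssh/id_dsa
# copy the public key to each remote node, then append it there
scp $HOME/.ssh/id_dsa.pub node2:/tmp/id_dsa.pub
ssh node2 'cat /tmp/id_dsa.pub >> /root/.ssh/authorized_keys2'
ssh node2 'cat /tmp/id_dsa.pub >> /home/c11adm/.ssh/authorized_keys2'
# verify password-less remote execution
ssh node2 hostname
ssh c11adm@node2 hostname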
other database. Therefore, it is recommended to switch the autostart feature off when running more than one >= 7.8 database. However, keep in mind that SAP’s startdb currently relies on autostart being switched on and doesn’t explicitly start the DB specific vserver. Autostart can be switched off by editing the [Params-/sapdb/X72/db] section in /sapdb/data/config/Installations.ini: Set XserverAutostart to no (default=yes).
Modular Package Configuration Modular SGeSAP packages can be created using the graphical interface of the Serviceguard Manager plug-in to HP System Management Homepage or the standard Serviceguard command line interface. Graphical interface for modular package creation The new graphical Serviceguard Manager wizard for SGeSAP now greatly simplifies cluster package creation by providing auto-discovery of installed SAP components in the cluster.
Figure 3-2 Package creation dialog with selections for a single package SGeSAP solution 4. 5. In the Select package type window, enter a package name. The Failover package type is pre-selected and Multi-Node is disabled. SGeSAP doesn't support Multi-Node. Then click Next>>. The modules in the Required Modules window are selected by default and can not be changed.
resource becomes available again or fails after the maximum number of attempts specified in the field Retry Count. The default for the parameter is set to 5. It should be raised on demand, if the package logs indicate racing conditions with timing issues. 3. 4. The remote communication value defines the method to be used to remotely execute commands for SAP Application Server handling. Setting the parameter is optional.
To add an SAP instance to the Configured SAP Instances list, enter information into the SAP Instance, Virtual Hostname, and Replicated Instance input fields, and then click <
Optionally, listener_password can be set if a password for the Oracle listener process is configured. The root user and any user with a defined Serviceguard access role (full admin, package admin or monitor) will be able to read this value. The sgesap/dbinstance module can be used for SAP JAVA-only, ABAP-only, or dual stack database instances. The module will detect all available SAP tools to handle the database and use them if possible.
operating environment. If the value is not specified, the local hostname computed during a package operation gets substituted. To remove a configured external SAP instance from this list, click on the radio button adjacent to the external SAP instance that you want to remove, then click Remove. To edit a configured external SAP instance, click on the radio button adjacent to the SAP instance you want to edit, then click Edit>>.
• • sapdisp.mon to monitor a SAP dispatcher that comes as part of a Central Instance or an ABAP Application Server Instance. sapdatab.mon to monitor xservers and MAXDB database instances. These monitors are located in /opt/cmcluster/sap/SID. Each monitor will automatically perform regular checks of the availability and responsiveness of a specific software service within all SAP instances that provide this service in the package. If the WLM application toolkit for SAP is installed, sapdisp.
After you complete all of the Create a Modular Package configuration screens, the Verify and submit configuration change screen will be displayed. Use the Check Configuration and Apply Configuration buttons at the bottom of the screen to confirm and apply your changes. Command line interface for modular package creation This section describes the cluster software configuration.
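For example, a combined database and SAP instance package could be created and applied as follows; the package and file names are illustrative:
cmmakepkg -m sgesap/dbinstance -m sgesap/sapinstance dbciC11.config
# edit dbciC11.config, then:
cmcheckconf -P dbciC11.config
cmapplyconf -P dbciC11.config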
In the subsection for the DB component there is an optional paragraph for Oracle and SAPDB/MAXDB database parameters. Depending on your need for special HA setups and configurations have a look at those parameters and their description. Preparation Step: MS410 The generic Serviceguard parameters of the configuration file need to be edited.
The following list summarizes how the behavior of SGeSAP is affected with different settings of the cleanup_policy parameter: • • • lazy—no action, no cleanup of resources. normal—removes orphaned resources as reported by SAP tools for the SAP system that is specified in sap_system. An obsolete ORACLE SGA is also removed if a database crash occurred. strict—uses HP-UX commands to free up system resources that belong to any SAP Instance of any SAP system on the host if the Instance is to be started soon.
Parameters that can be set to handle a MAXDB database as part of a package with the sgesap/dbinstance or sgesap/maxdb module. The package parameter db_vendor defines the underlying RDBMS database technology that is to be used with the SAP application. It is preset to maxdb in sgesap/maxdb, but should otherwise be manually set to maxdb. It is still optional to specify this parameter, but either db_vendor or db_system needs to be set in order to include the database in the failover package.
The message server should generally be configured to restart locally by specifying restart in the SAP (start) profile. If Serviceguard recognizes the SAP Message Server to be unavailable for a longer period of time, it assumes that the restart doesn’t work or is accidentally not configured. Serviceguard will switch the package and try to restart on different hardware, usually the active enqueue replication server.
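As a hedged example of such a profile entry, the message server start line in the (start) profile of the [A]SCS instance would use Restart_Program instead of Start_Program; the program number and variable names depend on the installed profile:
Restart_Program_00 = local $(_MS) pf=$(_PF)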
In certain setups, it is necessary to free up resources on the failover node to allow the failover to succeed. Often, this includes the stop of less important SAP Systems, namely consolidation or development environments. The following parameters may be used to configure special treatment of SAP Application Server instances during package start, halt or failover. These Application Servers need not necessarily be virtualized. They must not be clustered themselves.
sap_ext_instance D01 sap_ext_host node1 sap_ext_treat yyynn sap_ext_instance D02 sap_ext_host hostname1 sap_ext_treat yyynn Example 2: The failover node is running a central, non-clustered testsystem QAS and a dialog instance D03 of the clustered SG1. All this should be stopped in case of a failover to the node, in order to free up resources.
application interface. This control framework is implemented using a daemon process that is started either by:
• The operating system during boot time, OR
• The SAP startup script when the instance is started.
During the installation of a SAP Netweaver 2004s Web AS, the SAPinst installer edits the file /usr/sap/sapservices and adds a line for each sapstartsrv instance. During boot time an init script is executed located in: /sbin/init.d/sapinit referenced by /sbin/rc3.
Legacy Package Configuration This section describes the cluster software configuration with the following topics: • • • Serviceguard Configuration SGeSAP Configuration Global Default Settings Serviceguard Configuration Refer to the standard Serviceguard manual Managing Serviceguard to learn about creating and editing a cluster configuration file and how to apply it to initialize a cluster with cmquerycl(1m) and cmapplyconf(1m).
RUN_SCRIPT /etc/cmcluster//.control.script HALT_SCRIPT /etc/cmcluster/ /.control.script Specify subnets to be monitored in the SUBNET section. Installation Step: OS435 This ensures a successful package start only if the required CFS file system(s) are available.
config files .config. If no configuration parameters are provided with the service command, the monitors will default to the settings for an ABAP DVEBMGS Central Instance if possible. Example entries in .control.script:
SERVICE_NAME[0]="ciC11ms"
SERVICE_CMD[0]="/etc/cmcluster/C11/sapms.mon "
SERVICE_NAME[1]="ciC11disp"
SERVICE_CMD[1]="/etc/cmcluster/C11/sapdisp.
Example: ENQOR_SCS_PKGNAME_C11=foobar ENQOR_REP_PKGNAME_C11=foorep For SAP kernel 7.x; instances SCS00 and ERS01: ENQOR_SCS_ERS01_PKGNAME_C11=foobar ENQOR_ERS_ERS01_PKGNAME_C11=fooers Optional Step: OS450 For non-CFS shares, it is recommended to set AUTO_VG_ACTIVATE=0 in /etc/lvmrc. Edit the custom_vg_activation() function if needed. Distribute the file to all cluster nodes. Optional Step: OS460 It is recommended to set AUTOSTART_CMCLD=1 in /etc/rc.config.d/cmcluster.
Use cmapplyconf(1m) to add the newly configured package(s) to the cluster. NOTE: If you plan to use a sapnfs package as central NFS service, specify this package in the last position of the cmapplyconf command. Later, if you force a shutdown of the entire cluster with cmhaltcl -f, this package is the last one stopped. This prevents global directories from disappearing before all SAP components in the cluster have completed their shutdown. Verify that the setup works correctly to this point.
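For illustration, assuming legacy package directories for SAP System C11 and a sapnfs package (all names are examples), the command could look like this, with sapnfs in the last position:
cmapplyconf -P /etc/cmcluster/C11/dbC11.config -P /etc/cmcluster/C11/ciC11.config -P /etc/cmcluster/SAPNFS/sapnfs.config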
• jci: a SAP JAVA System Central Services Instance that provides Enqueue and Message Service to SAP J2EE engines that might or might not be part of SAP Application Servers (SCS).
• jd: one or more virtualized SAP JAVA Application Instances.
NOTE: It is not allowed to specify a db and a jdb component as part of the same package. It is not allowed to specify a [j]ci and an [a]rep component as part of the same package.
The relocatable IP address of the instance should be specified in CIRELOC. Example: CINAME=DVEBMGS CINR=00 CIRELOC=0.0.0.0 In the subsection for the CI component there is a paragraph for optional parameters. Depending on your need for special HA setups and configurations, have a look at those parameters and their description. Subsection for the AREP component: OS630 SGeSAP supports SAP stand-alone Enqueue Service with Enqueue Replication.
SAP J2EE Engine Enqueue and Message Services are isolated in the System Central Service Instance. SDM is not part of this instance. Specify the instance name in JCINAME, the instance ID number in JCINR and relocatable IP address in JCIRELOC, as in the following example: JCINAME=SCS JCINR=01 JCIRELOC=0.0.0.0 Subsection for the REP component: OS665 SGeSAP supports SAP stand-alone Enqueue Service with Enqueue Replication.
Optional Step: OS670 The following arrays are used to configure special treatment of ABAP Application Servers during package start, halt or failover. These Application Servers are not necessarily virtualized or secured by Serviceguard, but an attempt can be triggered to start, stop or restart them with the package. If any triggered attempt fails, it doesn't automatically cause failure of the ongoing package operation. The attempts are considered to be non-critical.
— — "WINDOWS": Application Server handling is not standardized as there is no standard way to open a remote DOS/Windows shell that starts SAP Application Servers on the Windows platform. sap.functions provides demo functions using the ATAMAN (TM) TCP Remote Logon syntax. They should be replaced by implementations in customer.functions. "SG-PACKAGE": The Application Server runs as Serviceguard package within the same cluster. In this case ASHOST might have different values on the different package nodes.
START=${START_WITH_PKG} STOP=${STOP_WITH_PKG} RESTART=${RESTART_DURING_FAILOVER} STOP_LOCAL=${STOP_IF_LOCAL_AFTER_FAILOVER} STOP_DEP=${STOP_DEPENDENT_INSTANCES} Example 1: The primary node is also running a non-critical Dialog Instance with instance ID 01. It should be stopped and started with the package. In case of a failover, a restart attempt should be made. There is a second instance outside of the cluster that should be treated the same.
This can lead to unsafe shutdowns and Instance crashes. To be safe, specify one of the following:
WAIT_OWN_AS=1 The shutdown of all application servers takes place in parallel, but the scripts do not continue before all of these shutdown processes have come to an end.
WAIT_OWN_AS=2 In addition, the package also waits for all application servers to come up successfully.
NOTE: Do not use the strict policy unless it is critical that you do. Be aware that the strict option can crash running instances of different SAP systems on the backup host. Use this value only if you have a productive system that is much more important than any other SAP system you have. In this case, a switchover of the productive system is more robust, but additional SAP systems will crash. You can also use the strict policy if your SAP system is the only one running at the site and you are low on memory.
The array SAPROUTER_START[*] should be set to 1 for each saprouter that should be started with the CI package. Make sure that multiple saprouters do not share the same SAPROUTER_PORT. The SAPROUTER_HOST[*]-array specifies where the saprouter should be started. These parameters are mandatory. An optional SAPROUTER_STRING may be set that will be passed to the saprouter, for example to set the path to the saprouttab file to reside on a shared volume.
During the installation of a SAP Netweaver 2004s Web AS, the SAPinst installer edits the file /usr/sap/sapservices and adds a line for each sapstartsrv instance. During boot time an init script is executed located in /sbin/init.d/sapinit, referenced by /sbin/rc3.d/S<###>sapinit. The sapinit script reads the content of file /usr/sap/sapservices and starts a sapstartsrv for each instance during OS boot time.
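For illustration, a typical line in /usr/sap/sapservices looks similar to the following; SID, instance name, virtual hostname, and user are example values. Depending on the setup, instances that are controlled by SGeSAP packages may need to be excluded from this automatic startup, for example by commenting out their line:
#LD_LIBRARY_PATH=/usr/sap/C11/ASCS40/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/C11/ASCS40/exe/sapstartsrv pf=/usr/sap/C11/SYS/profile/START_ASCS40_ascsreloc -D -u c11adm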
wlm_interval = 5; absolute_cpu_units = 1; } # # Create a workload group for the dialog processes and use the wlmsapmap # utility to move all the dialog processes to this workload group. # prm { groups = OTHERS : 1, dialog : 2; procmap = dialog : /opt/wlm/toolkits/sap/bin/wlmsapmap -f /etc/cmcluster/C11/wlmprocmap.C11_D01_node1 -t DIA; } # # Request 25 shares per dialog job.
If legacy-style HA NFS functionality is intended to be integrated into a legacy-style SGeSAP package, copy the HA NFS toolkit scripts into the package directory. Since the SGeSAP package directory can have entries for both, a database and a Central Instance package, it is required to add a package type suffix to the NFS toolkit files during the copy operation. Otherwise, all packages of the package directory would act as NFS server. This is usually not intended.
A Serviceguard service monitor for the HA NFS nfs_upcc.mon is configured by default with modular-style packages. For legacy-style packages it can be configured in section NFS MONITOR in hanfs.. The naming convention for the service should be NFS. Example for SAP System C11: NFS_SERVICE_NAME[0]="dbciC11nfs" NFS_SERVICE_CMD[0]="/etc/cmcluster/C11/nfs.mon" NFS_SERVICE_RESTART[0]="-r 0" For legacy-style packaging, copy the monitor file nfs.
Example:
/usr/sap/trans  :/export/usr/sap/trans
/sapmnt/  :/export/sapmnt/
Add database specific filesystems to this file if they exist. This can be verified using the table in chapter 2. Usually the relocatable IP address of the database package or the SAPNFS package is used. When configuring AUTOFS the automounter map file /etc/auto.direct must NOT be executable. Make sure to set the appropriate permissions of /etc/auto.direct to 644.
Database Configuration
This section deals with additional database specific installation steps and contains the following:
• Additional Steps for Oracle
• Additional Steps for MAXDB
• Additional Steps for DB2
Additional Steps for Oracle The Oracle RDBMS includes a two-phase instance and crash recovery mechanism that enables a faster and predictable recovery time after a crash. The instance and crash recovery is initiated automatically and consists of two phases: Roll-forward phase: Oracle applies all committed and uncommitted changes in the redo log files to the affected data blocks.
/sapmnt//profile/oracle/listener.ora /sapmnt//profile/oracle/tnsnames.ora Oracle Database Step: OR870 Copy $ORACLE_HOME/network/admin/tnsnames.ora to all additional application server hosts. Be careful if these files were customized after the SAP installation. Oracle Database Step: OR880 Be sure to configure and install the required Oracle NLS files and client libraries as mentioned in section Oracle Storage Considerations included in chapter “Planning the Storage Layout.
If you use multiple packages for the database and SAP components... Set the optional parameter SQLNET.EXPIRE_TIME in sqlnet.ora to a reasonable value in order to take advantage of the Dead Connection Detection feature of Oracle. The parameter file sqlnet.ora resides either in /usr/sap/trans or in $ORACLE_HOME/network/admin. The value of SQLNET.EXPIRE_TIME determines how often (in minutes) SQL*Net sends a probe to verify that a client-server connection is still active.
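An illustrative sqlnet.ora entry; the value of 10 minutes is an example only and should be tuned to the environment:
SQLNET.EXPIRE_TIME = 10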
Additional Steps for MAXDB Logon as root to the primary host of the database where the db or dbci package is running in debug mode. MAXDB Database Step: SD930 If environment files exist in the home directory of the MAXDB user on the primary node, create additional links for any secondary. Example: su - sqd ln -s .dbenv_.csh ln -s .dbenv_.sh exit .dbenv_.csh .dbenv_.
When using raw devices, the database installation on the primary node might have modified the access rights and owner of the device file entries where the data and logs reside. These devices were changed to be owned by user sdb, group sdba. These settings also have to be applied on the secondary, otherwise the database will not change from ADMIN to ONLINE state.
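For illustration, ownership and permissions on the raw device files could be aligned on the secondary as follows; the volume group and logical volume names are example values:
chown sdb:sdba /dev/vglc/rlvDISKD0001
chmod 660 /dev/vglc/rlvDISKD0001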
SAP Application Server Configuration This section describes some required configuration steps that are necessary for SAP to become compatible to a cluster environment. The following configuration steps do not need to be performed if the SAP System was installed using virtual IPs. The steps are only required to make a non-clustered installation usable for clustering.
The parameter SAPLOCALHOSTFULL must be set even if you do not use DNS. In this case you should set it to the name without the domain name: SAPLOCALHOSTFULL= The instance profile name is often extended by the hostname. You do not need to change this filename to include the relocatable hostname. SGeSAP also supports the full instance virtualization as of SAPWAS 6.40 and beyond. The startsap mechanism can then be called by specifying the instance's virtual IP address.
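As a sketch, the relevant instance profile entries could look like this; the relocatable name ascsreloc and the domain are example values:
SAPLOCALHOST = ascsreloc
SAPLOCALHOSTFULL = ascsreloc.example.com
# without DNS: SAPLOCALHOSTFULL = ascsreloc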
Installation Step: IS1170 Batch jobs can be scheduled to run on a particular instance, selected by its hostname at the time of job scheduling. The application server name and the hostname retrieved from the Message Server are stored in the Batch control tables TBTCO, TBTCS... When the batch job is ready to run, the application server name is used to start it on the appropriate instance.
If you cluster a Central Instance or an ABAP System Central Service Instance, configure the frontend PCs to attach to . Most of the time, this can be achieved by distributing a new saplogon.ini file to the windows directory. SAP J2EE Engine specific installation steps This section is applicable for SAP J2EE Engine 6.40 based installations that were performed prior to the introduction of SAPINST installations on virtual IPs.
Table 3-9 IS1130 Installation Step (continued) Choose... Change the following values cluster_data -> Propertysheet instance.properties.IDxxxxxx instance.ms.host localhost to (in file) com.sapl.instanceId=__ /usr/sap///j2ee\ /additionalsystemproperties Check for the latest version of SAP OSS Note 757692. Installation is complete. The next step is to do some comprehensive switchover testing covering all possible failure scenarios.
4 SAP Supply Chain Management Within SAP Supply Chain Management (SCM) scenarios two main technical components have to be distinguished: the APO System and the liveCache. An APO System is based on SAP Application Server technology. Thus, sgesap/sapinstance and sgesap/dbinstance modules can be used or ci, db, dbci, d and sapnfs legacy packages may be implemented for APO. These APO packages are set up similar to Netweaver packages. There is only one difference. APO needs access to liveCache client libraries.
NOTE: For installation steps in this chapter that require the adjustment of SAP-specific parameters in order to run the SAP application in a switchover environment, example values are usually given. These values are for reference ONLY; it is recommended to read and follow the appropriate SAP OSS notes for SAP's latest recommendations. Whenever possible the SAP OSS note number is given.
More About Hot Standby
A fatal liveCache failure results in a restart attempt of the liveCache instance.
SGeSAP provides a runtime library to liveCache that allows to automatically create a valid local set of liveCache devspace data via Storageworks XP Business Copy volume pairs (pvol/svol BCVs) as part of the standby startup. If required, the master liveCache can remain running during this operation. The copy utilizes fast storage replication mechanisms within the storage array hardware to keep the effect on the running master liveCache minimal.
• • There is no intention to install additional APO Application Servers within the cluster. There is no hot standby liveCache system configured.
Cluster Layout Constraint: • There is no hot standby system configured. The following directories are affected: /sapdb/programs: This can be seen as a central directory with liveCache/MAXDB binaries. The directory is shared between all liveCache/MAXDB Instances that reside on the same host. It is also possible to share the directory across hosts. But it is not possible to use different executable directories for two liveCache/MAXDB Instances on the same host.
Table 4-4 General File System Layout for liveCache (Option 3) (continued) Storage Type Package Mount Point autofs shared sapnfs /var/spool/sql/ini * only valid for liveCache versions lower than 7.6. Option 4: Hot Standby liveCache Two liveCache instances are running in a hot standby liveCache cluster during normal operation. No instance failover takes place. This allows to keep instance-specific data local to each node. The cluster design follows the principle to share as little as possible.
/sapdb/programs/runtime/7401=7.4.1.0, /sapdb/programs/runtime/7402=7.4.2.0, For MAXDB and liveCache Version 7.5 (or higher) the SAP_DBTech.ini file does not contain sections [Installations], [Databases] and [Runtime]. These sections are stored in separate files Installations.ini, Databases.ini and Runtimes.ini in the IndepData path /sapdb/data/config. A sample SAP_DBTech.ini, Installations.ini, Databases.ini and Runtimes.ini for a host with a liveCache 7.5 (LC2) and an APO 4.1 using a MAXDB 7.
vgchange -a e # remount the logical volumes
The device minor numbers must be different from all device minor numbers gathered on the other hosts. Distribute the shared volume groups to all potential failover nodes.
HP-UX Setup for Options 1, 2 and 3
This section describes how to synchronize and configure the HP-UX installations on all cluster nodes so that the same liveCache instance is able to run on any of these nodes.
dbspeed -> /sapdb/data/dbspeed diag -> /sapdb/data/diag fifo -> /sapdb/data/fifo ini ipc -> /sapdb/data/ipc pid -> /sapdb/data/pid pipe -> /sapdb/data/pipe ppid -> /sapdb/data/ppid liveCache Installation Step: LC070 Make sure /var/spool/sql exists as a directory on the backup node. /usr/spool must be a symbolic link to /var/spool.
On the Storageworks XP array pvol/svol pairs should be defined for all LUNs containing DISKDnnnn information of the master. These LUNs should be used for devspace data exclusively. Make sure that all pairs are split into SMPL (simple) state. NOTE: A hot standby configuration requires the BCV copy mechanism to be preconfigured, but LUNs will not be in synchronization mode during normal operation.
Figure 4-2 Hot Standby System Configuration Wizard Screens SGeSAP Modular Package Configuration This section describes how to configure the modular SGeSAP lc liveCache package. The configuration can be done within the configuration screen of the Serviceguard Manager plug-in to System Management Homepage or via the Serviceguard Command Line tools. Figure 4-3 Configuration screen of a liveCache module in Serviceguard Manager liveCache Installation Step: LC171 Create a liveCache package configuration.
cmmakepkg –m sgesap/livecache lc.config Alternatively, from the Serviceguard Manager Main page, click on Configuration in the menu toolbar, then select Create a Modular Package from the drop down menu. Click on the yes radio button following Do you want to use a toolkit? Select the SGeSAP toolkit. In the Select the SAP Components in the Package table, select SAP Livecache Instances. Then click Next>>.
liveCache Installation Step: LC211 Log in on the primary node that has the shared logical volumes mounted. Create a symbolic link that acts as a hook that informs SAP software where to find the liveCache monitoring software to allow the prescribed interaction with it. Optionally, you can change the ownership of the link to sdb:sdba. ln -s /etc/cmcluster//saplc.mon /sapdb//db/sap/lccluster liveCache Installation Step: LC216 For the following steps the SAPGUI is required.
SERVICE_NAME[0]="SGeSAPlc" SERVICE_CMD[0]="/sapdb//db/sap/lccluster SERVICE_RESTART[0]="-r 3" • monitor" All other parameters should be chosen as appropriate for the ndividual setup. liveCache Installation Step: LC180 In /etc/cmcluster//lc.control.script add the following lines to the customer defined functions. These commands will actually start and stop the liveCache instance. function customer_defined_run_cmds { ### Add following line . /etc/cmcluster//sapwas.
Hot standby systems have a preferred default machine for the instance with the master role. This should usually correspond to the hostname of the primary node of the package. The hostname of the failover node defines the default secondary role, i.e. the default standby and alternative master machine. For a hot standby system to become activated, the LC_PRIMARY_ROLE and the LC_SECONDARY_ROLE need to be defined with physical hostnames. No virtual addresses can be used here.
Livecache Service Monitoring SAP recommends the use of service monitoring in order to test the runtime availability of liveCache processes. The monitor, provided with SGeSAP, periodically checks the availability and responsiveness of the liveCache system. The sanity of the monitor will be ensured by standard Serviceguard functionality. The liveCache monitoring program is shipped with SGeSAP in the saplc.mon file. The monitor runs as a service attached to the lc Serviceguard package.
APO Setup Changes Running liveCache within a Serviceguard cluster package means that the liveCache instance is now configured for the relocatable IP of the package. This configuration needs to be adapted in the APO system that connects to this liveCache. Figure 4-4 shows an example for configuring LCA. Figure 4-4 Example HA SCM Layout liveCache Installation Step: GS220 Run SAP transaction LC10 and configure the logical liveCache names LCA and LCD to listen to the relocatable IP of the liveCache package.
, e.g. 1LC1node1 exist. These will only work on one of the cluster hosts. • To find out if a given user key mapping works throughout the cluster, temporarily add relocatable address relocls_s to the cluster node with "cmmodnet -a -i relolc_s network" for testing the following commands.
For option 2: 1. 2. 3. 4. Add a shared logical volume for /export/sapdb/programs to the global NFS package (sapnfs). Copy the content of /sapdb/programs from the liveCache primary node to this logical volume. Make sure /sapdb/programs exists as empty mountpoint on all hosts of the liveCache package. Also make sure /export/sapdb/programs exists as empty mountpoint on all hosts of the sapnfs package. Add the following entry to /etc/auto.direct.
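Assuming the sapnfs package provides the relocatable address nfsreloc (an example name), the auto.direct entry would look similar to:
/sapdb/programs  nfsreloc:/export/sapdb/programs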
5 SAP Master Data Management (MDM) SGeSAP legacy packaging provides a cluster solution for server components of SAP Master Data Management (MDM), i.e. the MDB, MDS, MDIS, and MDSS servers. They can be handled as a single package or split up into four individual packages. Master Data Management - Overview Figure 5.1 provides a general flow of how SGeSAP works with MDM. • • • The Upper Layer—contains the User Interface components. The Middle Layer—contains the MDM Server components.
Table 5-1 MDM User Interface and Command Line Components Category Description MDM GUI (Graphical user Interface) clients MDM Console, MDM Data Manager Client, MDM Import Manager.... Allows you to use, administer and monitor the MDM components. For example, the MDM console allows you to create and maintain the structure of MDM repositories as well as to control access to them. NOTE: The MDM GUI interface is not relevant for the implementation of the SGeSAP scripts.
Installation and Configuration Considerations for MDM 5.5 in a Serviceguard legacy environment The following sections contain a step-by-step guide on the components required to install MDM in a SGeSAP (Serviceguard extension for SAP) environment. The MDM server components that are relevant for the installation and configuration of the SGeSAP scripts are: MDS, MDIS, MDSS and MDB. NOTE: Go to http://service.sap.
All nodes in the cluster mount the NFS exported file systems as NFS clients. Mount point for the NFS client file system is /home/mdmuser. Each of the MDM server (mds, mdss and mdis) components will use its own directory (e.g.: /home/mdmuser/mds, /home/mdmuser/mdss, /home/mdmuser/mdis) within /home/mdmuser. Should the NFS server node fail, the storage volume will be relocated to another cluster node, which then will take over the NFS server part.
• • • package could be configured to run on its own cluster node and thereby distribute the load over the four cluster nodes. The naming convention for the Serviceguard packages is: mdbMDM - the string mdb stands for mdm database components and is required; the string MDM is used as a placeholder for a System ID and can be set to any value.
NOTE: The file system/export/home/mdmuser will be used by all 3 MDM server components MDS, MDIS and MDSS.
vgchange -a n /dev/vgmdmoradb vgchange -a n /dev/vgmdmuser mkdir -p /oracle/MDM mkdir -p /export/home/mdmuser mkdir -p /home/mdmuser Setup Step: MDM050 Create oramdm account for the Oracle user The following parameters and settings are used: /etc/passwd — — — — user:oramdm uid:205 home:/oracle/MDM group:dba shell:/bin/sh /etc/group — — — — oper::202:oramdm dba::201:oramdm /oracle/MDM/.
dba::201:oramdm,mdmuser /home/mdmuser/.profile — — — — # Note: HOME=/home/mdmuser export MDM_HOME=/opt/MDM export PATH=${PATH}:${MDM_HOME}/bin SHLIB_PATH=${MDM_HOME}/lib:${HOME}/oracle/MDM/920_64/lib export SHLIB_PATH PATH=${HOME}/oracle/MDM/920_64/bin:.
The file /etc/cmcluster/MDMNFS/mdmNFS.config contains information on the Serviceguard package name ( mdmNFS), the cluster nodes ( *) the package can run on and the scripts to execute for running and halting (/etc/cmcluster/MDMNFS/mdmNFS.control.script) this package. clunode1: vi /etc/cmcluster/MDMNFS/mdmNFS.config PACKAGE_NAME mdmNFS NODE_NAME * RUN_SCRIPT /etc/cmcluster/MDMNFS/mdmNFS.control.script RUN_SCRIPT_TIMEOUT NO_TIMEOUT HALT_SCRIPT /etc/cmcluster/MDMNFS/mdmNFS.control.
Distribute /etc/auto.direct to all cluster members with scp. scp -p /etc/auto.direct clunode2:/etc/auto.direct Restart nfs subsystem on all cluster members. /sbin/init.d/nfs.client stop /sbin/init.d/nfs.client start Creating an initial Serviceguard package for the MDB Component The following steps will depend if the MDM Database will be configured as "Single" or "Multiple" MDM Serviceguard package configurations as described earlier.
(b) Multiple MDM Serviceguard packages - create a mdbMDM package Create Serviceguard templates and edit the files for the mdbMDM package. For information on the variables used in this step see the configuration step of the nfsMDM package above. clunode1: mkdir /etc/cmcluster/MDM cmmakepkg -s /etc/cmcluster/MDM/mdbMDM.control.script cmmakepkg -p /etc/cmcluster/MDM/mdbMDM.config vi /etc/cmcluster/MDM/mdbMDM.config PACKAGE_NAME mdbMDM NODE_NAME * RUN_SCRIPT /etc/cmcluster/MDM/mdbMDM.control.
IP[0]="172.16.11.95" SUBNET[0]="172.16.11.0" scp –pr /etc/cmcluster/MDM clunode2:/etc/cmcluster/MDM cmapplyconf -P /etc/cmcluster/MDM/mdsMDM.config cmrunpkg mdsMDM Setup Step: MDM206 (b) Multiple MDM Serviceguard packages - create a mdisMDM package. Create Serviceguard templates and edit the files for the mdisMDM package. For information on the variables used in this step see the configuration step of the nfsMDM package above.
NOTE: At this stage the file /etc/cmcluster/MDM/mdssMDM.control.script does not have to be edited, as neither an IP address nor a storage volume has to be configured.
scp -pr /etc/cmcluster/MDM clunode2:/etc/cmcluster/MDM
cmapplyconf -P /etc/cmcluster/MDM/mdssMDM.config
cmrunpkg mdssMDM
Setup Step: MDM210 (b) Multiple MDM Serviceguard packages - create a masterMDM package.
Create Serviceguard templates and edit the files for the masterMDM package.
Source: Path products: /KITS/920_64/Disk1/products.
This unzips and copies the kit into the directory /KITS/ora9208/Disk1. Start the Oracle upgrade by executing the runInstaller command. The following contains a summary of the responses during the installation.
/KITS/ora9208/Disk1/runInstaller
Specify File Locations
Source: Path: /KITS/ora9208/stage/products.xml
Destination: Name:MDM Path:/oracle/MDM/920_64
Summary
Success
Migrate the MDM database to 9208
See Oracle 9i Patch Set Notes Release 2 (9.2.0.
NOTE: SAP also provides a kit called "Oracle 9.2.0.8 client software". Do not install "Oracle 9.2.0.8 client software". The Oracle 9.2.0.8 client software is not an Oracle product. It is an SAP product and mainly provides libraries for an R/3 application server to connect to an Oracle database instance. This kit is called OCL92064.SAR and it installs into the /oracle/client/92x_64 directory.
[X] No, I will create net service names myself
Net Service Name Configuration
[X] Oracle8i or later database service
[ ] Oracle 8i release 8.0 database service
Net Service Name Configuration
database used: MDM
protocol used: TCP
db host name: 172.16.11.97
Configure another service: no
Net Service Configuration Complete
Exit
End of Installation
Setup Step: MDM220 Upgrade to Oracle 9.2.0.8 for user mdmuser
Upgrade to Oracle 9.2.
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = MDM)
)
)
/oracle/MDM/920_64/network/admin/listener.ora
# LISTENER.ORA Network Configuration File:
# /oracle/MDM/920_64/network/admin/listener.ora
# Generated by Oracle configuration tools.
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.16.11.
NOTE: The MDM servers (mds, mdis, mdss) install their files into the /opt/MDM directory.
Execute the following steps to install MDM Import Server:
• extract the zip archive
• uncompress and install the kit into the /opt/MDM directory
• create a /home/mdmuser/mdis directory
• cd /home/mdmuser/mdis and start the mdis process (this creates the mdis environment in the current directory)
su
cd /KITS/MDM_Install/MDM55SP04P03/MDM IMPORT SERVER 5.5/HPUX_64
/usr/local/bin/unzip MDMIMS55004P_3.
NOTE: The MDM servers (mds, mdis, mdss) install their files into the /opt/MDM directory.
Execute the following steps to install MDM Syndication Server:
• extract the zip archive
• uncompress and install the kit into the /opt/MDM directory
• create a /home/mdmuser/mdss directory
• cd /home/mdmuser/mdss and start the mdss process (this creates the mdss environment in the current directory)
su
cd /KITS/MDM_Install/MDM55SP04P03/MDM SYNDICATION SERVER\ 5.5/HPUX_64
/usr/local/bin/unzip MDMSS55004P_3.
• MDM_SCR is the script to execute
• MDM_ACTION is start for running a package and stop for halting a package
• MDM_COMPONENT specifies which MDM component should be started or stopped, in this case the mgroup component
vi /etc/cmcluster/MDM/mgroupMDM.control.script
function customer_defined_run_cmds
{
typeset MDM_SCR="/etc/cmcluster/MDM/sapmdm.
${MDM_SCR} "${MDM_ACTION}" "${MDM_COMPONENT}"
test_return 52
}
Setup Step: MDM240 Configure sap.config File
/etc/cmcluster/MDM/sap.config contains the SGeSAP specific parameters for MDM. The following table is an excerpt and explains the parameters that can be set for the MDM configuration.
Table 5-3 MDM parameter descriptions
Parameter               Description
MDM_DB=ORACLE           The database type being used.
MDM_DB_SID=MDM          The Oracle SID of the MDM database.
MDM_DB_ADMIN=oramdm     The Oracle admin user.
Table 5-4 MDM_MGROUP and MDM_MASTER dependencies
MDM_MGROUP_DEPEND="\
mdb mds mdis mdss"
The elements (mdb mds...) of variable MDM_MGROUP_DEPEND are the names of the MDM components that are started from a single Serviceguard package. The order of the MDM components is important.
MDM_MASTER_DEPEND="\
mdbMDM mdsMDM mdisMDM\
mdssMDM"
Each element (mdbMDM mdsMDM...) of variable MDM_MASTER_DEPEND is the name of a Serviceguard package to start.
PRODUCT_HA:MDM:o:mdm:sap \
PRODUCT_HA_INI:MDM:o:mdm:sap \
PRODUCT_HA_NOREAL:MDM:o:mdm:sap \
"
MDM_CRED="Admin: "
MDM_MONITOR_INTERVAL=60
MDM_MASTER_DEPEND="mdbMDM mdsMDM mdisMDM mdssMDM"
Copy file sap.config to the second node.
scp -pr /etc/cmcluster/MDM/sap.config \
clunode2:/etc/cmcluster/MDM
Setup Step: MDM246 (a) Enable SGeSAP monitoring for the "Single" mgroupMDM package
The SGeSAP scripts include check functions to monitor the health of MDM processes. The variable MDM_MONITOR_INTERVAL=60 in file sap.
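In a legacy package, such a monitor is normally hooked in as a Serviceguard service in the package control script. The following is only a sketch; the service name and the monitor program path are assumptions and must match the SGeSAP monitor delivered with your installation:
vi /etc/cmcluster/MDM/mgroupMDM.control.script
SERVICE_NAME[0]="mgroupMDMmon"                  # service name; assumed, must also be declared in mgroupMDM.config
SERVICE_CMD[0]="/etc/cmcluster/MDM/sapmdm.mon"  # path to the MDM monitor program; assumed location
SERVICE_RESTART[0]=""                           # no automatic restart of the monitor itself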
NOTE: The masterMDM Serviceguard package does not require a check function. Its only purpose is to coordinate a cluster-wide start and stop of the MDM server packages, so it does not need a monitor.
Serviceguard log files: /var/adm/syslog/syslog.log
Installation and Configuration Considerations for MDM 7.10 in a Serviceguard modular environment
As of SGeSAP 5.10, SAP MDM 7.1 is only supported in a Serviceguard modular environment. The following section provides information on how to install and configure MDM 7.1 in this environment.
NOTE: Go to www.service.sap.
• file system can only be mounted exclusively and accessed from the cluster node that is running the Serviceguard package. NFS client: All cluster nodes mount the file system as NFS clients at the same time.
For performance reasons, the MDM database (MDB) file systems will be based on relocatable storage (the physical storage volume/file system can relocate between the cluster nodes, but only ONE node in the cluster will mount the file system). The file system will be mounted by the cluster node on which the database instance is started/running. The directory mount point is /oracle/MO7.
Add MDM virtual IP addresses and hostnames to /etc/hosts. Virtual IP addresses will be enabled on the cluster node where the corresponding Serviceguard package is started and disabled when the package is stopped. The MO7 NFS file systems will be accessible under 172.16.11.95/nfsMO7reloc for both the (a) = ONE and the (b) = FOUR configuration as described above. For the (a) ONE package configuration the database will be accessible under the 172.16.11.
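A corresponding /etc/hosts excerpt could look like the following sketch. Only the nfsMO7reloc mapping is taken from the text above; the other hostnames and address assignments are illustrative and must match the virtual IP addresses used in the package configuration files:
172.16.11.95   nfsMO7reloc    # NFS package address (from the configuration above)
172.16.11.96   mdbreloc       # database package address (hostname assumed)
172.16.11.97   mdsreloc       # MDS package address (assignment assumed)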
mkdir /dev/vgmo7oracli
mknod /dev/vgmo7ora/group c 64 0x050000
mknod /dev/vgmo7sapmnt/group c 64 0x060000
mknod /dev/vgmo7MDIS/group c 64 0x080000
mknod /dev/vgmo7MDS/group c 64 0x070000
mknod /dev/vgmo7MDSS/group c 64 0x090000
mknod /dev/vgmo7oracli/group c 64 0x0a0000
Create relocatable storage
pvcreate /dev/rdsk/c0t2d0
pvcreate /dev/rdsk/c0t3d0
pvcreate /dev/rdsk/c0t4d0
pvcreate /dev/rdsk/c0t5d0
pvcreate /dev/rdsk/c0t6d0
pvcreate /dev/rdsk/c0t7d0
vgcreate vgcreate vgcreate vgcreate vgcreate vgcreate
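After the volume groups have been created, each needs a logical volume and a file system before it can be referenced in a package configuration. The following minimal sketch covers the Oracle volume group only; the logical volume name lvmo7ora matches the fs_name used in the mdbMO7 package configuration later in this section, while the size is an assumption:
lvcreate -L 20480 -n lvmo7ora /dev/vgmo7ora   # 20 GB logical volume; size is an example
newfs -F vxfs /dev/vgmo7ora/rlvmo7ora         # create a VxFS file system on the raw logical volume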
vgexport -p -s -m /tmp/vgmo7MDS.map /dev/vgmo7MDS
vgexport -p -s -m /tmp/vgmo7MDIS.map /dev/vgmo7MDIS
vgexport -p -s -m /tmp/vgmo7MDSS.map /dev/vgmo7MDSS
vgexport -p -s -m /tmp/vgmo7oracli.map /dev/vgmo7oracli
scp /tmp/*.
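On the second cluster node, the copied map files are typically imported so that the volume groups are known there as well. The following sketch shows two of the volume groups; the remaining groups follow the same pattern, and the minor numbers must match those used on clunode1:
clunode2:
mkdir /dev/vgmo7MDS
mknod /dev/vgmo7MDS/group c 64 0x070000
vgimport -s -m /tmp/vgmo7MDS.map /dev/vgmo7MDS
mkdir /dev/vgmo7oracli
mknod /dev/vgmo7oracli/group c 64 0x0a0000
vgimport -s -m /tmp/vgmo7oracli.map /dev/vgmo7oracli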
group:dba
shell:/bin/sh
/etc/group
oper::202:oramo7
dba::201:oramo7
/oracle/MO7/.profile
stty erase "^?"
set -o emacs            # use emacs commands
alias __A=`echo "\020"` # up arrow = back a command
alias __B=`echo "\016"` # down arrow = down a command
alias __C=`echo "\006"` # right arrow = forward a character
alias __D=`echo "\002"` # left arrow = back a character
alias __H=`echo "\001"` # home = start of line
. /etc/cmcluster.
# Volume Groups are configured for this package.
vg vgmo7sapmnt
vg vgmo7oracli
# Package file systems ...
Create Serviceguard configuration file and edit the file for the mcsMO7 package.
clunode1:
mkdir -p /etc/cmcluster/MO7
cmmakepkg > /etc/cmcluster/MO7/mcsMO7.conf
vi /etc/cmcluster/MO7/mcsMO7.conf
package_name mcsMO7
node_name *
log_level 5
ip_subnet 172.16.11.0
ip_address 172.16.11.
vi /etc/cmcluster/MO7/mdbMO7.conf
package_name mdbMO7
node_name *
ip_subnet 172.16.11.0
ip_address 172.16.11.96
vg /dev/vgmo7ora
fs_name /dev/vgmo7ora/lvmo7ora
fs_directory /oracle/MO7
fs_mount_opt "-o rw"
fs_umount_opt ""
fs_fsck_opt ""
fs_type ""
Apply the Serviceguard package configuration file.
cmapplyconf -P /etc/cmcluster/MO7/mdbMO7.conf
cmrunpkg mdbMO7
Setup Step: MDM204 (b) Multiple MDM Serviceguard packages - create an mdsMO7 package
cmmakepkg > /etc/cmcluster/MO7/mdsMO7.
cmapplyconf -P /etc/cmcluster/MO7/mdisMO7.conf
cmrunpkg mdisMO7
Setup Step: MDM208 (b) Multiple MDM Serviceguard packages - create an mdssMO7 package.
clunode1:
cmmakepkg > /etc/cmcluster/MO7/mdssMO7.conf
vi /etc/cmcluster/MO7/mdssMO7.conf
package_name mdssMO7
node_name *
ip_subnet 172.16.11.0
ip_address 172.16.11.
Select Configuration Option
[x] Create a database
[ ] Configure Automatic Storage Management
[ ] Install database Software only
Select Database Configuration
[x] General Purpose
[ ] Transaction Processing
[ ] Data Warehouse
[ ] Advanced
Specify Database Configuration Options
Database Naming
Global Database Name [MO7]
SID [MO7]
Database Character set [Unicode standard UTF-8 AL32UTF8]
Select Database Management Option
[ ] Use Grid Control for Database Management
[x] Use Databas
Select Inventory directory and credentials:
Enter the full path of the inventory directory: [/oracle/oraInventory]
Specify the Operating System group name: [dba]
Select Installation Type:
[ ] InstantClient
[ ] Administrator
[ ] Runtime
[x] Custom
Specify Home Details:
Name: [MO7]
Path: [/oracle/client]
Available Product Components:
[ ] [x] Oracle net 10.2.0.1.
vi /oracle/MO7/102_64/network/admin/listener.ora
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.16.11.97)(PORT = 1521))
)
)
STARTUP_WAIT_TIME_LISTENER = 0
CONNECT_TIMEOUT_LISTENER = 10
TRACE_LEVEL_LISTENER = OFF
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = MO7)
(ORACLE_HOME = /oracle/MO7/102_64)
)
)
Start the Oracle listener. Test the listener configuration and connect to the MDM database via the newly configured virtual IP address 172.16.11.
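A typical command sequence for this test, run as the Oracle user, is sketched below. It assumes a matching tnsnames.ora net service name MO7 that points to the virtual IP address; the exact service name is an assumption:
lsnrctl start          # start the listener defined in listener.ora
tnsping MO7            # verify that the net service name resolves to the virtual address
sqlplus system@MO7     # connect through the listener; should report the database banner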
Connected to:
Oracle Database 10g Release 10.2.0.1.0 - 64bit Production
Setup Step: MDM226 Install the MDM MDS server.
Select whether the MDM installation is a Central System installation or a Distributed System installation. The following contains a summary of responses for a distributed MDM system. Refer to the MDM installation guide for detailed information.
cd /Kits/Server_Installation/Installation_Master/MDM_IM_HPUX_IA64
./sapinst -nogui SAPINST_USE_HOSTNAME=172.16.11.
[/usr/sap/MO7/SYS/profile]
MDIS Instance
Instance Number [02]
MDS for MDIS
MDS Host [mdsreloc]
Unpack Archives [mdis.sar] [shared.sar] [SAPINSTANCE.SAR]
Execution of SAP MDM Installation -> Distributed System -> MDIS has been completed successfully.
Setup Step: MDM226 Install the MDM MDSS server.
Select whether the MDM installation is a Central System installation or a Distributed System installation. The following contains a summary of responses for a distributed MDM system.
cmhaltpkg mcsMO7 nfsMO7
Halt Serviceguard packages for configuration (b)
cmhaltpkg mdssMO7 mdisMO7 mdsMO7 mdbMO7 nfsMO7
Setup Step: MDM236 Copy files to second cluster node.
All nodes of the cluster require a copy of the configuration changes. Copy the modified files to the second cluster node. First save a backup on clunode2.
clunode2:
cp /etc/passwd /etc/passwd.ORG
cp /etc/hosts /etc/hosts.ORG
cp /etc/group /etc/group.ORG
cp /etc/services /etc/services.ORG
clunode1:
cd /usr/sap; tar cvf /tmp/usrsap.tar .
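The corresponding restore on clunode2 would typically transfer the archive and unpack it in place, for example (a sketch; paths are assumed to match the archive created above):
clunode1:
scp -p /tmp/usrsap.tar clunode2:/tmp/usrsap.tar
clunode2:
cd /usr/sap; tar xvf /tmp/usrsap.tar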
sgesap/stack/sap_instance MDSS03
Example for defining an MDM MDSS instance with instance number 03 in the same package
sgesap/stack/sap_virtual_hostname mdsreloc
Defines the virtual IP hostname that is enabled with the start of this package
sgesap/mdm_spec/mdm_mdshostspec_host mdsreloc
The MDS server will be accessible under this mdsreloc virtual IP address/hostname
sgesap/mdm_spec/mdm_credentialspec_user Admin
User credential for executing MDM CLIX commands
sgesap/mdm_spec/mdm_credentialspec_p
sgesap/mdm_spec/mdm_repositoryspec_repname PRODUCT_HA_REP
sgesap/mdm_spec/mdm_repositoryspec_dbsid MO7
sgesap/mdm_spec/mdm_repositoryspec_dbtype o
sgesap/mdm_spec/mdm_repositoryspec_dbuser mo7adm
sgesap/mdm_spec/mdm_repositoryspec_dbpasswd abcxyz
service_name mcsMO7mon
service_cmd /opt/cmcluster/sap/SID/sapmdm.
sgesap/mdm_spec/mdm_repositoryspec_repname PRODUCT_HA_REP
sgesap/mdm_spec/mdm_repositoryspec_dbsid MO7
sgesap/mdm_spec/mdm_repositoryspec_dbtype o
sgesap/mdm_spec/mdm_repositoryspec_dbuser mo7adm
sgesap/mdm_spec/mdm_repositoryspec_dbpasswd abcxyz
# Dependency mdbMO7
dependency_name mdbMO7
dependency_condition mdbMO7=UP
dependency_location ANY_NODE
service_name mdsMO7mon
service_cmd /opt/cmcluster/sap/SID/sapmdm.
service_restart
service_fail_fast_enabled
service_halt_timeout
sgesap/mdm_spec/mdm_mdshostspec_host mdsreloc
sgesap/mdm_spec/mdm_credentialspec_user Admin
sgesap/mdm_spec/mdm_credentialspec_password ""
service_name mdssMO7mon
service_cmd /opt/cmcluster/sap/SID/sapmdm.mon
service_restart none
service_fail_fast_enabled no
service_halt_timeout 0
To activate the changes in the Serviceguard configuration files, run the following commands:
cmcheckconf -P /etc/cmcluster/MO7/mdssMO7.conf
cmapplyconf -P /etc/cmcluster/MO7/mdssMO7.
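After the configuration has been applied, the package can typically be started and its services checked, for example (a sketch; cmviewcl output formats vary by Serviceguard release):
cmrunpkg mdssMO7          # start the MDSS package
cmviewcl -v -p mdssMO7    # verify package state, IP address and the mdssMO7mon service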
6 SGeSAP Cluster Administration
SGeSAP clusters follow characteristic hardware and software setups. An SAP application is no longer treated as though it runs on a dedicated host. It is wrapped up inside one or more Serviceguard packages, and these packages can be moved to any of the hosts within the Serviceguard cluster. The Serviceguard packages provide a virtualization layer that keeps the application independent of specific server hardware.
• Halt Package, or Enable Package Maintenance to bring up the screen(s) that allow you to perform each of these operations. You can also perform administrative tasks by clicking the Packages tab on the Serviceguard Manager Main page to bring up the Packages screen. Select the package you want to perform administrative tasks on, then click Administration in the menu bar to display a drop-down menu of administrative tasks.
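The same basic operations are also available from the command line on any cluster node. The following is a sketch of typical equivalents; the package name sapnfs1 and the node name clunode2 are only placeholders:
cmhaltpkg sapnfs1             # halt the package
cmrunpkg -n clunode2 sapnfs1  # start the package on a specific node
cmmodpkg -e sapnfs1           # re-enable automatic package switching after manual actions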
To update (edit) an SGeSAP toolkit package configuration:
• From the View Window on the right-hand side of the Serviceguard Manager Main page, right-click a package icon to bring up the Operations Menu, then click Edit a Package to bring up the first in a series of screens where you can edit package properties.
All directories below /export have an equivalent directory whose fully qualified path comes without this prefix. These directories are managed by the automounter. NFS file systems get mounted automatically as needed. Servers outside of the cluster that have External Dialog Instances installed are set up in a similar way. Refer to /etc/auto.direct for a full list of automounter file systems of SGeSAP.
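For illustration, automounter entries in /etc/auto.direct that implement this convention could look like the following sketch; the SID C11 and the relocatable NFS hostname nfsreloc are placeholders, not values from this installation:
/sapmnt/C11        nfsreloc:/export/sapmnt/C11
/usr/sap/trans     nfsreloc:/export/usr/sap/trans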
rdisp/btcname = __
rslg/collect_daemon/host =
There are also two additional profile parameters, SAPLOCALHOST and SAPLOCALHOSTFULL, included as part of the Instance Profile of the Central Instance. Anywhere SAP uses the local hostname internally, this name is replaced by the relocatable value or the .domain.organization form of these parameters. Make sure that they are always defined and set to the correct value.
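A hedged example of how these parameters might appear in the Central Instance profile is shown below; the relocatable hostname cireloc and the domain are placeholders for the values used in your cluster:
SAPLOCALHOST = cireloc
SAPLOCALHOSTFULL = cireloc.domain.organization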
servers running on different nodes, any failure of the update server leads to subsequent outages at these nodes. Configure the update server on a clustered instance. Using local update servers should only be considered if performance issues require it.
Upgrading SAP Software
Upgrading the version of the clustered SAP application does not necessarily require changes to SGeSAP. Usually SGeSAP automatically detects the release of the packaged application and treats it appropriately.
if not already delivered in a translated form. At this first execution, the translated code is also stored in the database for subsequent use. This table is usually sized to hold the code for one platform. If you deploy application servers of different platforms within a single SAP system, this table needs to be sized appropriately to avoid unnecessary translations.
Take note that the referenced local executable link does not reside on the shared disk; it is on the local file system of each cluster node, pointing to the appropriate executables for an HP 9000 or Integrity platform.