Managing Serviceguard Extension for SAP Version B.05.
© Copyright 2012 Hewlett-Packard Development Company, L.P Legal Notices Serviceguard, Serviceguard Extension for SAP, Serviceguard Extension for RAC, Metrocluster and Serviceguard Manager are products of Hewlett-Packard Company, L. P., and all are protected by copyright. Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.
Contents 1 Overview..................................................................................................6 About this Manual....................................................................................................................6 Related Documentation..............................................................................................................6 2 Designing SGeSAP Cluster Scenarios............................................................
Creation of Replication Instance......................................................................................54 HP-UX Configuration...............................................................................................................56 Directory Structure Configuration..........................................................................................57 Cluster Filesystem Configuration......................................................................................
The MDM SGeSAP File System Layout.................................................................................139 Single or Multiple MDM Serviceguard Package Configurations..............................................140 Single MDM Serviceguard Package (ONE).....................................................................140 Multiple MDM Serviceguard packages (FOUR+ONE)......................................................140 Creating an initial Serviceguard package for the MDB Component...........
1 Overview About this Manual This document describes how to configure and install highly available SAP systems on HP-UX 11i v3 using Serviceguard. To understand this document, a knowledge of the basic Serviceguard concepts and commands is required, in addition to experience in the Basis Components of SAP. This manual consists of six chapters: • Chapter 1—Overview • Chapter 2—Designing SGeSAP Cluster Scenario How to design a High Availability SAP Environment and cluster SAP components.
• Serviceguard Release Notes • HP P9000 RAID Manager Installation and Configuration User Guide Related Documentation 7
2 Designing SGeSAP Cluster Scenarios This chapter introduces the basic concepts used by the HP Serviceguard Extension for SAP (SGeSAP) and explains several naming conventions. The following sections provide recommendations and examples for typical cluster layouts that can be implemented for SAP environments. General Concepts of SGeSAP SGeSAP extends HP Serviceguard's failover cluster capabilities to SAP application environments.
the failover entity for a combination of ABAP-stack, JAVA-stack, double-stack instances; and optionally Central Service Instances or Enqueue Replication Service Instances of an SAP System. For MaxDB or Oracle-based SAP database services, the module sgesap/dbinstance can be used. The module to cluster SAP liveCache instances is called sgesap/livecache.
Configuration Restrictions • It is not allowed to specify a single SGeSAP package with two database instances in it. An environment with db and jdb requires at least two packages to be defined. • Module-based SGeSAP database packages cannot be combined with a legacy based NFS toolkit to create a single package. • It is not a requirement to do so, but it can help to reduce the complexity of a cluster setup, if SCS and ASCS are combined in a single package.
(and similarly the legacy package type db) clusters any of these databases. The module unifies the configuration, so that database package administration for all database vendors is treated identically. sgesap/dbinstance can be used with any type of SAP application, independent of whether it is ABAP-based or JAVA-based or both. In case they are available, the module will take advantage of database tools that are shipped with certain SAP applications.
Example: A simple mutual failover scenario for an ABAP application defines two packages, called dbSID and ascsSID (or ciSID for old SAP releases). Robust Failover Using the One Package Concept In a one-package configuration, the database, NFS and SAP SPOFs run on the same node at all times and are configured in one SGeSAP package. Other nodes in the Serviceguard cluster function as failover nodes for the primary node on which the system runs during normal operation.
Follow-and-Push Clusters with Replicated Enqueue In case an environment has very high demands regarding guaranteed uptime, it makes sense to activate a Replicated Enqueue with SGeSAP. With this additional mechanism, it is possible to failover ABAP and/or JAVA System Central Service Instances without impacting ongoing transactions on Dialog Instances.
The SGeSAP packaging of the ERS Instance provides startup and shutdown routines, failure detection, split-brain prevention and quorum services to the mechanism. SGeSAP also delivers an EMS (HP-UX Event Monitoring Service) that implements a cluster resource called /applications/sap/enqor/ers for each Replicated Enqueue in the cluster. Monitoring requests can be created to regularly poll the status of each Replicated Enqueue.
A dedicated SAPNFS package is specialized to provide access to shared filesystems that are needed by more than one mySAP component. Typical filesystems served by SAPNFS would be the common SAP transport directory or the global MaxDB executable directory of MaxDB 7.7. The MaxDB client libraries are part of the global MaxDB executable directory and access to these files is needed by APO and liveCache at the same time. Beginning with MaxDB 7.
Figure 4 Failover Node with Application Server package Figure 4 illustrates a common configuration with the adoptive node running as a Dialog Server during normal operation. Node1 and node2 have equal computing power and the load is evenly distributed between the combination of database and Central Service Instance on node1 and the additional Dialog Instance on node2. If node1 fails, the Dialog Instance package will be shut down during failover of the dbciSID package.
if Dialog Instances are external to the cluster, they may be affected by package startup and shutdown. For convenience, Additional Dialog Instances can be started, stopped or restarted with any SGeSAP package that secures critical components. Some SAP applications require the whole set of Dialog Instances to be restarted during failover of the Central Service package. This can be triggered with SGeSAP means.
Figure 5 Replicated Enqueue Clustering for ABAP and JAVA Instances Figure 1-5 shows an example configuration. The dedicated failover host can serve many purposes during normal operation. With the introduction of Replicated Enqueue Servers, it is a good practice to consolidate a number of Replicated Enqueues on the dedicated failover host. These replication units can be halted at any time without disrupting ongoing transactions for the systems they belong to.
3 SGeSAP Cluster Administration SGeSAP clusters follow characteristic hardware and software setups. An SAP application is no longer treated as though it runs on a dedicated host. It is wrapped up inside one or more Serviceguard packages and packages can be moved to any of the hosts that are inside of the Serviceguard cluster. The Serviceguard packages provide a virtualization layer that keeps the application independent of specific server hardware.
To run, halt, move or enable maintenance on a SGeSAP toolkit package: • From the View window on the right side of the Serviceguard Manager Main page, right click on a package icon to bring up the Operations menu, then click Run Package, Halt Package, or Enable Package Maintenance to bring up the screen(s) that will allow you to perform each of these operations. • You can also perform administrative tasks by clicking the Packages tab on the Serviceguard Manager Main page to bring up the Packages screen.
In addition to standard Serviceguard package dependencies, SGeSAP packages keep track of SAP enqueue replication relationships. • Click on the Dependencies tab on the Serviceguard Manager Main page to bring up the Dependencies screen. The enqueue replication relationships are displayed using dashed arrows, while standard package dependencies are given in solid lines.
failover. The SGeSAP monitor postpones monitoring activities while the instance is manually stopped. The SGeSAP monitor postpones monitoring activities while the instance is manually stopped. The Serviceguard package continues to run and does not failover. NOTE: profile.
NOTE: Packages that have several Netweaver instances configured, continue to monitor all the instances that are not manually halted. If any actively monitored instance fails, it results in a failover and restart of the whole package. One of the methods to restart a manually halted instance is to issue the following command: sapcontrol -nr -function Start Any other startup method provided by SAP's sapcontrol command works in the similar way.
database runs. The changed volume group configuration has to be redistributed to all cluster nodes afterwards via vgexport(1m) and vgimport(1m). It is a good practice to keep a list of all directories that were identified in chapter “Designing SGeSAP Cluster Scenarios” (page 8) to be common directories that are kept local on each node. As a rule of thumb, files that get changed in these directories need to be manually copied to all other cluster nodes afterwards. There might be exceptions.
touch /etc/cmcluster/ /debug — debug mode for the SGeSAP legacy SGeSAP packages of SAP system. The SGeSAP node will now be in debug mode. If the file is created in the package directory /etc/cmcluster/, it will only turn on the debug mode for packages in that directory. The debug mode allows package start up to the level of SAP specific steps. All instance startup attempts will be skipped. Service monitors will be started, but they don't report failures as long as the debug mode is turned on.
Batch jobs can be scheduled to run on a particular instance. Generally speaking, it is better not to specify a destination host at all. Sticking to this rule, the batch scheduler chooses a batch server that is available at the start time of the batch job. However, if you want to specify a destination host, specify the batch server running on the highly available Central Instance.
SGeSAP specific check routines which are not related with SAP requirements towards local operating environment configurations are not deactivated and are executed as part of both cmcheckconf(1m) and cmapplyconf(1m) commands. Upgrading SAP Software Upgrading the version of the clustered SAP application does not necessarily require changes to SGeSAP. Usually SGeSAP detects the release of the application that is packaged automatically and treats it as appropriate.
Most of SAPs business transactions are still written in ABAP, SAPs own programming language. All programs are stored as source code in the database and translated at the first time of execution if not already delivered in a translated form. At this first time of execution, the translated code is also stored in the database for next time use. This table is usually sized to hold the code for one platform.
The executable directory is substituted by a link referencing another local symbolic link that distinguishes between a HP 9000 or Integrity based system. cd /sapmnt/ ln -s /sapmnt/exe_local exe Take note that the referenced local executable link does not reside on the shared disc—it is on the local file system on each cluster node pointing to the appropriate executables for a HP 9000 or Integrity platform.
4 Planning the Storage Layout Volume managers are tools that let you create units of disk storage known as storage groups. Storage groups contain logical volumes for use on single systems and in high availability clusters. In Serviceguard clusters, package control scripts activate storage groups. Two volume managers can be used with Serviceguard: the standard Logical Volume Manager (LVM) of HP-UX and the Veritas Volume Manager (VxVM). SGeSAP can be used with both volume managers.
Each file system that gets added to a system by SAP installation routines must be classified and a decision has to be made: • Whether the file system needs to be kept as a local copy on internal disks of each node of the cluster. • Whether the file system needs to be shared on a SAN storage device to allow failover and exclusive activation. • Whether the file system needs to provide shared access to more than one node of the cluster at the same time.
shown that it is a good practice to create local copies for all files in the central executable directory. This includes shared libraries delivered by SAP. To automatically synchronize local copies of the executables, SAP components deliver the sapcpe mechanism. With every startup of the instance, sapcpe matches new executables stored centrally with those stored locally. Directories that Reside on Shared Disks Volume groups on SAN shared storage are configured as part of the SGeSAP packages.
Table 4 Instance Specific Volume Groups for exclusive activation with a package (continued) Mount Point Access Point Recommended packages VG Name Device minor number (SAP kernel 7.
The use of a HA NFS service can be configured to export file systems to external Application Servers that manually mount them. A dedicated NFS package is not possible. Dedicated NFS requires option 1. Common Directories that are Kept Local The following common directories and their files are kept local on each node of the cluster: • /home/adm — the home directory of the SAP system administrator with node specific startup log files.
Table 6 File systems for the SGeSAP package in NFS Idle Standby Clusters Mount Point Access Point Remarks /sapmnt/ shared disk and HA NFS required /usr/sap/ shared disk /usr/sap/trans shared disk and HA NFS VG Name Device minor number optional The Table 6 (page 35) can be used to document used device minor numbers. The device minor numbers of logical volumes need to be identical for each distributed volume group across all cluster nodes.
Directories that Reside on CFS The following table shows a recommended example on how to design SAP file systems for CFS shared access.
Table 8 Availability of SGeSAP Storage Layout Options for Different Database RDBMS (continued) DB Technology Supported Platforms SGeSAP Storage Cluster Software Bundles Layout Options IBM DB2 Itanium NFS 1. Serviceguard or any Serviceguard Storage Management bundle 2. SGeSAP 3. Serviceguard HA NFS Toolkit Sybase ASE Itanium NFS 1. Serviceguard or any Serviceguard Storage Management bundle 2. SGeSAP 3.
1. 2. The Oracle RDBMS and database tools rely on an ORA_NLS[] setting that refers to NLS files that are compatible to the version of the RDBMS. Oracle 9.x needs NLS files as delivered with Oracle 9.x. The SAP executables rely on an ORA_NLS[] setting that refers to NLS files of the same versions as those that were used during kernel link time by SAP development. This is not necessarily identical to the installed database release.
NOTE: The configurations are designed to be compliant to SAP OSS note 830982. The note describes SAP recommendations for Oracle RAC configurations. A support statement from SAP regarding RAC clusters on HP-UX can be found as part of SAP OSS note 527843. A current support statement in note 527843 is required, before any of the described RAC options can be implemented. The note maintenance is done by SAP and the note content may change at any time without further notice.
in separate files Installations.ini, Databases.ini, and Runtimes.ini in the IndepData path /sapdb/data/config. NOTE: The[Globals] section is commonly shared between LC1/LC2 and AP1/AP2. This prevents setups that keep the directories of LC1 and AP1 completely separated. A sample SAP_DBTech.ini, Installations.ini, Databases.ini, and Runtimes.ini for a host with a liveCache 7.5 (LC2) and an APO 4.1 using a MaxDB 7.5 (AP2) from /var/spool/sql/ini/SAP_DBTech.
kernel processes are written into the working directory. These core files have file sizes of several Gigabytes. Sufficient free space needs to be configured for the shared logical volume to allow core dumps. For MaxDB version 7.8 or later, this directory is replaced by /sapdb//data. NOTE: For MaxDB RDBMS starting with version 7.6, these limitations do not exist. The working directory is utilized by all instances (IndepData/wrk) and can be globally shared.
Table 12 File System Layout for SAPDB Clusters (continued) Mount Point Access Point Potential Owning Packages /etc/opt/sdb local none /var/lib/sdb local none VG Type VG Name Device Minor Number environment specific *Only valid for versions lower than 7.6. **Only valid for versions 7.8 or higher. NOTE: Using tar or cpio is not a safe method to copy or move directories to shared volumes.
Table 14 File System Layout for ASE Clusters Mount Point Access Point Potential Owning Packages VG Type /sybase/ shared disk db database sybase//sapdata dbci specific sybase//saplog jdb sybase//sapdiag jdbjci VG Name Device Minor Number sybase//sybsystem sybase//sybtmp Database Instance Storage Considerations 43
5 Step-by-Step Cluster Conversion This chapter describes in detail how to implement a SAP cluster using Serviceguard and Serviceguard Extension for SAP (SGeSAP). It gives examples for each task in great detail. Actual implementations might require a slightly different approach. Many steps synchronize cluster host configurations or virtualize SAP instances manually. If these tasks are already covered by different means, it might be sufficient to quickly check that the requested result is already achieved.
There is one exception in the naming convention, which concerns the dedicated NFS package sapnfs that might serve more than one SAP SID. SGeSAP modules are implemented to work independent of the used package naming. For these packages, the above naming scheme is a recommendation.
The SAP Application Server installation types are ABAP-only, Java-only, and double-stack. The latter includes both the ABAP and the Java stack. In principle, all SAP cluster installations look very similar. Older SAP systems get installed in the same way as they would without a cluster. Cluster conversion takes place afterwards and includes a set of manual steps. Some of these steps can be omitted because the introduction of high-availability installation options to the SAP installer SAPINST.
The Central System and Distributed System installations build a traditional SAP landscape. They will install a database and a monolithic Central Instance. Exceptions are Java-only based installations. NOTE: For Java-only based installations, the only possible installation option is a High Availability System installation. It is strongly recommended to use the "High Availability System" option for all new installations that are meant to be used with SGeSAP. A SAP Application Server 7.
Immediately issues an error message, because System Central Services and Enqueue Replication cannot share the same package. Preparation Step: MS1210 The created configuration file needs to be edited. Refer to the Managing Serviceguard user guide for general information about the generic file content. A minimum configuration will do for the purpose of supporting the SAP installation. At least the following parameters should be edited: package_name, node_name, ip_address and monitored_subnet.
Installation Step: IS1330 The installation is done using the virtual IP provided by the Serviceguard package. SAPINST can be invoked with a special parameter called SAPINST_USE_HOSTNAME. This prevents the installer routines from comparing the physical hostname with the virtual address and drawing wrong conclusions. The installation of the entire SAP Application Server 7.0 will happen in several steps, depending on the installation type. Each time a different virtual hostname can be provided.
When starting the SAPINST installer for kernel 6.40, the first screen shows installation options that are generated from an XML file called product.catalog located at /IM_OS/SAPINST/UNIX/. The standard catalog file product.catalog has to be either: • Replaced by product_ha.catalog in the same directory on a local copy of the DVD -Or- • The file product_ha.
part 'Creation of Replication Instance' is required. The split of the Central Instance is then already effective and an [A]SCS instance was created during installation.
su - adm cd/usr/sap//ASCS mkdir data log sec work Replicated Enqueue Conversion: RE040 If the used SAP kernel has a release older than 6.40... Download the executables for the Standalone Enqueue server from the SAP Service Marketplace and copy them to /sapmnt//exe. There should be at least three files that are added/replaced: enserver, enrepserver, and ensmon.
# start SCSA handling #----------------------------------------------------------------------Execute_00 =local $(DIR_EXECUTABLE)/sapmscsa –n pf=$(DIR_PROFILE)/_ASCS_ #----------------------------------------------------------------------# start message server #----------------------------------------------------------------------MS =ms.
#----------------------------------------------------------------------# start SCSA #----------------------------------------------------------------------Execute_00 = local $(DIR_EXECUTABLE)/sapmscsa -n pf=$(DIR_PROFILE)/_DVEBMGS_ #----------------------------------------------------------------------# start application server #----------------------------------------------------------------------_DW = dw.
A couple of required SAP executables should be copied from the central executable directory /sapmnt//exe to the instance executable directory/usr/sap//ERS/exe. For SAP kernel 6.40, the list includes: enqt, enrepserver, ensmon, libicudata.so.30, libicui18n.so.30, libicuuc.so.30, libsapu16_mt.so, libsapu16.so, librfcum.so, sapcpe, and sapstart. For some releases, the shared library file extension .so is replaced by .sl. Apart from this, the procedure remains the same.
SAPSYSTEM = SAPSYSTEMNAME = INSTANCE_NAME = ERS #-------------------------------------------------------------------# Special settings for this manually set up instance #-------------------------------------------------------------------DIR_EXECUTABLE = $(DIR_INSTANCE)/exe DIR_PROFILE = $(DIR_INSTANCE)/profile DIR_CT_RUN = /usr/sap//SYS/exe/runU #-------------------------------------------------------------------# Settings for enqueue monitoring tools (enqt, ensmon) #------------
Several of the following steps must be repeated on each node. Record the steps completed for each node as you complete them. This helps identify errors in the event of a malfunction later in the integration process. The HP-UX configuration task is split into the following sections: • Directory Structure Configuration—this section adds implementation practices to the architectural decisions made in the chapter "Storage Layout Considerations.
All SAP Instances (CI, SCS, ASCS, D,...) must have instance numbers that are unique for the whole cluster. Exceptions are (A)REP instances that follow an old naming convention of sharing the instance ID with their corresponding (A)SCS instance. This naming convention should no longer be used, since it cannot be combined with the SAP startup framework. For a clustered instance with instance number , execute the command: ls -d /usr/sap/???/* It should not reply any match on any node.
Integrate all SAP related CFS disk groups and file systems. The package dependencies to the system multi-node package get created automatically. The mount points need to be created manually on all alternate nodes first.
mkdir /usr/sap/.new cd /usr/sap/ bdf . # Remember the filesystem column. # It will be referred to as later. find . -depth -print|cpio -pd/usr/sap/.new cd / umount /usr/sap/ rmdir /usr/sap/ mv /usr/sap/.new/usr/sap/ chmod 751 /usr/sap/ chown adm:sapsys/usr/sap/ cd/usr/sap//DVEBMGS rm -r * # be careful with this cd ..
--...> param_directput RUNDIRECTORY /sapdb//wrk OK --...> Installation Step: IS050 This step describes how to create a local copy of the SAP binaries. The step is mandatory for legacy packages. It is also required to do this, if a module-based package is used for a system with SAP kernel <7.0. Check if the Central Instance host and all application servers have a directory named /usr/sap//SYS/exe/ctrun. If the directory exists, this step can be skipped.
directory. For each additional file in the central executable directory it needs to contain a line with the following format: local_copy check_exist SAP also delivers template file lists within the central executable directory. These files have a filename of the format *.lst. The following files override sapcpeft: instance.lst, instancedb.lst, tools.lst, and inhouse.lst. Either the *.lst files should get removed, or they should be used instead of sapcpeft.
Table 16 Groupfile File Groups Groups GID Group members sapsys sapinst sdba oper dbadm dbmnt dbctl dbmon Installation Step: IS080 Open the password file, /etc/passwd, on the primary side. If any of the users listed in Table 17 (page 63) exist on the primary node, recreate them on the backup node. Assign the users on the backup nodes the same user and group ID as the primary nodes. NOTE: mode.
Table 18 Services on the Primary Node (continued) Service name Service port sapdb saphostctrl saphostctrls sapdb2 DB2_db2 DB2_db2_1 DB2_db2_2 DB2_db2_END Installation Step: IS100 Change the HP-UX kernel on the backup node to meet the SAP requirements. Compare the Tunable Parameters section of /stand/system on all nodes. All values on the backup nodes must reach or exceed the values of the primary node.
On each host the files /home/adm/startsap _ and /home/adm/stopsap__ exist and contain a line that specifies the start profile. After a standard installation of a Central Instance this line is similar to: START_PROFILE="START__" As adm, change the line individually on each node.
Import the shared volume groups using the minor numbers specified in Table 4 (page 32) in chapter “Planning the Storage Layout” (page 30) . The whole volume group distribution should be done using the command line interface. Do not use SAM. SAM will not create minor numbers in the intended fashion. Specify the device minor numbers explicitly by creating the group file manually.
Make sure that the required software packages are installed on all cluster nodes: • Serviceguard Extension for SAP, T2803BA The swlist command may be utilized to list available software on a cluster node. If a software component is missing, install the required product depot files using the swinstall tool. Installation Step: IS260 You need to allow remote access between cluster hosts. This can be done by using remote shell remsh(1) or secure shell ssh(1) mechanisms.
configuration file sap.config under section Configuration of the optional Application Server Handling. # REM_COMM=ssh # REM_COMM=remsh Installation Step: IS280 If storage layout option 1 is used, create all SAP instance directories below /export as specified in chapter 2. Example: su - adm mkdir -p /export/sapmnt/ mkdir -p /export/usr/sap/trans exit MaxDB Database Step: SD290 Create all MaxDB directories below /export as specified in chapter 2.
Example: hosts: files[NOTFOUND=continue UNAVAIL=continue TRYAGAIN=continue]dns Installation Step: IS330 In some older SAP releases, during installation, SAP appends some entries to the standard .profile files in the user home directories instead of using a new file defined by the SAP System. On HP-UX, by default, there is the following in the given profiles: set -u This confuses the .dbenv*.sh and .sapenv*.sh files of the SAP System. They fail during execution if the environment is not setup properly.
Figure 10 Deploy SGeSAP Packages 2. Select the package option you want to deploy in the package. NOTE: Multiple packages is the default and recommended option, which allows you to distribute the non-redundant SAP components across multiple, separate packages to avoid dependencies between SAP components. This option must be selected if each non-redundant SAP component has exclusive resources available (e.g: volume groups, virtual hostnames, and IP addresses).
Figure 11 Generating SGeSAP package configuration file(s) The package configuration file banner includes a configuration file name. To view or modify the configuration file information, click the arrow button ↑ on the right side of the configuration file banner. On successful generation of the SGeSAP package configuration file operation, the generated configuration file is displayed in a horizontal banner below the operation log window. The configuration file is not available for editing.
6. 7. To cancel the SAP toolkit package SGeSAP package deployment operation, click Cancel . You will return to the HP Serviceguard Manager homepage. Click Apply to complete the SGeSAP package deployment operation. If the package is up-to-date, then theApply button will not be enabled. Apply option may halt some packages when applied. A warning message is displayed if one or more packages need to be halted. Packages might be stopped when pressing Apply.
To start the Serviceguard Manager GUI, go to the hostname of a cluster node: http://hostname:2301. From the Serviceguard Manager Main page, click on Configuration in the menu toolbar. From the drop down menu, select Create a Modular Package. 1. A Create a Modular Package screen for selecting toolkits appears. Next to Do you want to use a toolkit?, select yes, 2. Select the SGeSAP toolkit. The HA NFS toolkit can be selected in addition, if the HA NFS toolkit is installed.
selections. Each Serviceguard or Serviceguard toolkit module adds one or more screens to the dialog. Additional information on the SGeSAP specific screens are described below. GI020 Select the SAP system that is to be clustered (sgesap/sap_global module) Most SGeSAP packages are associated with a single SAP system. There is a dialog screen that allows to select that system as well as a couple of package configuration settings that have impact on all SGeSAP modules in the package. 1.
and configured to autostart the SAP instance, SGeSAP will monitor the operation to make sure that the instance startup succeeds. 1. In most situations, Serviceguard Manager Auto-Discovery can discover the SAP instances in the cluster. If Serviceguard Manager can discover the SAP instances, you can either select an SAP instance from the drop-down list, or enter one manually. If Serviceguard Manager cannot auto-discover SAP instances in the cluster, the Select or Enter SAP Instance feature will not appear.
The Virtual Hostname parameter corresponds to the virtual hostname that got specified during SAP installation. The SAP virtual hostname is a string value. It is not possible to specify the corresponding IPv4 or IPv6 address. If the string is empty, the DNS resolution of the first specified package ip_address parameter will be substituted. In this case, the script only works properly if reliable address resolution is available. Domain name extensions are not part of the virtual hostname.
The external SAP instance parameters may be used to configure special treatment of SAP Application Server instances during package start, halt, or failover. These Application Servers need not necessarily be virtualized. They must not be clustered themselves. Clustered SAP Instances can be influenced by specifying Serviceguard package dependencies. An attempt can be triggered to start, stop, restart, or notify instances that belong to the same SAP system as the instances in the package.
To remove an SAP infrastructure software component from this list, click on the radio button adjacent to the SAP infrastructure software component you want to remove, then click Remove. To edit a configured SAP infrastructure software component, click on the radio button adjacent to the SAP infrastructure software component you want to edit, then click Edit>>. The SAP infrastructure software component information will move to the Type, Start/Stop, and Parameters input fields—where you can make changes.
toolkit documentation describes where the input files need to be referenced in the (g)WLM setup in order to activate this functionality. The sapdisp.mon for modular packages creates files called /var/adm/cmcluster/wlmprocmap.__. The parameter might be a virtual hostname, if the SAP instance runs on a virtual hostname.
Create the SGeSAP package configuration file. The file might already exist, if the SAP System got installed onto virtual IP addresses. The SGeSAP configuration file is usually created with one of the following commands: cmmakepkg -m sgesap/all > ./sap.config — for a configuration file that covers all available SGeSAP module functionality and combines all SGeSAP modules in a single package. cmmakepkg -m sgesap/sapinstance > ./sap.
Alternatively, enable Serviceguard package_admin access role for your adm in the package configuration. Preparation Step: MS410 The generic Serviceguard parameters of the configuration file need to be edited. Refer to the Managing Serviceguard User's Guide for general information about the generic file content: At least you’ll want to specify package_name, node_name, ip_address, and monitored_subnet in the file that you created with step MS400.
Prior to any instance startup attempts the SGeSAP tries to free up unused or unimportant resources to make the startup more likely to succeed. A database package only frees up database related resources, a SAP Instance package only removes IPCs belonging to SAP administrators. The following list summarizes how the behavior of SGeSAP is affected with different settings of the cleanup_policy parameter: • lazy—no action, no cleanup of resources.
The package parameter db_vendor defines the underlying RDBMS database technology that is to be used with the SAP application. It is preset to oracle in sgesap/oracledb, but should otherwise be manually set to oracle. It is still optional to specify this parameter, but either db_vendor or db_system needs to be set in order to include the database in the failover package. Optionally, listener_name can be set if the Oracle listener is defined on a name different from the default value LISTENER.
The sap_virtual_hostname parameter corresponds to the virtual hostname that got specified during SAP installation. The SAP virtual hostname is a string value. It is not possible to specify the corresponding IPv4 or IPv6 address. If the string is empty, the DNS resolution of the first specified package ip_address parameter will be substituted. In this case, the script only works properly if reliable address resolution is available. Domain name extensions are not part of the virtual hostname.
The SGeSAP ENQueue Operation Resource (ENQOR) needs to be specified for each SCS, ASCS or ERS instance. The monitoring resource is provided by SGeSAP within the HP-UX Event Monitoring System (EMS). It does not need to be created. It is available, if the system has an SAP system installed, that uses [A]SCS/ERS instance pairs.
sap_ext_treat defines the way the instance is treated if the status of the package changes. sap_ext_treat values are of a sequence of five 'y' and 'n' characters, each position representing the activation or deactiviation of a feature. Set the 1. position to 'y' to automatically make an attempt to start the instance with each package startup operation. Set the 2. position to 'y' to automatically make an attempt to stop the instance with each package shutdown operation. Set the 3.
sapccmsr = a SAP additional monitor collector rfcadapter = a SAP XI/PI/EAI remote function call adapter sapwebdisp = a SAP webdispatcher (installed to /usr/sap//sapwebdisp) The following values can be specified more than once: saprouter = a SAP software network routing tool biamaster = a SAP BIA master nameserver (not for production use) sap_infra_treat defines whether the tool will only be started/notified with the package startup, or whether it will also be stopped as part of a package shutdown (defa
Module-based SGeSAP packages will handle the sapstart service agent automatically. For SAP releases based on kernel 7.10 or higher, SGeSAP can also register an instance with the startup framework as part of the package startup operation and de-register it as part of the package halt. If the service agent is started via SGeSAP and configured to autostart the SAP instance, SGeSAP will monitor the operation to make sure that the instance startup succeeds.
recommended to be named .control.script for a control file and .config for a configuration file. For any kind of package (substitute with your package type): cmmakepkg -s /etc/cmcluster//.control.script cmmakepkg -p /etc/cmcluster//.config Installation Step: LS430 The created configuration file(s) need to be edited. Refer to the Managing Serviceguard User's Guide for general information about the file content.
continues to run. Make sure that the packaged SAP instance is already running when removing the debug file or an immediate failover will occur. As long as the debug file exists locally, a package shutdown would not try to stop the SAP instance or its database. A local restart of the package would only take care of the HA NFS portion. The instance would not be started with the package and the monitor would be started in paused mode. Example entries in .
ENQOR_ASCS_PKGNAME_= ENQOR_SCS_PKGNAME_= ENQOR_AREP_PKGNAME_= ENQOR_REP_PKGNAME_= Starting with SAP kernel 7.x and resources. /applications/sap/enqor/ers use: ENQ_SCS_ERS_PKGNAME_= ENQ_ERS_ERS_PKGNAME_= needs to be replaced by the SAP instance number. needs to be replaced with the 3-letter SAP system identifier.
test_return 52 } The SAP specific control file sapwas.cntl needs two arguments—the MODE (start/stop) and the SAP System ID (SID). Don't omit the leading period sign in each line that calls sapwas.cntl. Installation Step: LS500 Distribute the package setup to all failover nodes.
be specified exactly once and it will configure all packaged components of a within the cluster. This is a central place of configuration, even if the intention is to divide the components into several different packages. The mapping of components to packages will be done automatically. There is one subsection per package type: • db: a database utilized SAP ABAP or SAP add-in installations. Do not combine a database with a liveCache in a single package.
Specify the relocatible IP address of the database instance. Be sure to use exactly the same syntax as configured in the IP[]-array in the package control file. Example: DBRELOC=0.0.0.0 In the subsection for the DB component, there is an optional paragraph for Oracle and SAPDB/MaxDB database parameters. Depending on your need for special HA setups and configurations, have a look at those parameters and their description.
Default SAP installations load the J2EE database scheme into the same database that the ABAP Engine is using. In this case, no JDB component needs to be specified and this configuration section does not need to be maintained. For the parameter JDB, the underlying RDBMS is to be specified. Supported databases are ORACLE or SAPDB, for example: JDB=ORACLE Also, a relocatable IP address has to be configured for the database for J2EE applications in JDBRELOC.
During startup of a package , SGeSAP checks the existence of SAP specific configuration files in the following order: • If a sap.conf file exists, it is effective for compatibility reasons. • If a sap.conf file exists, it will overrule previous files and is effective. • If a sap.config file exists, it will overrule previous files and is effective. • If a sap.config file exists, it will overrule previous files and is effective.
START_WITH_PKG, STOP_WITH_PKG, RESTART_DURING_FAILOVER, STOP_IF_LOCAL_AFTER_FAILOVER, STOP_DEPENDENT_INSTANCES. • ◦ ASTREAT[*]=0 means that the Application Server is not affected by any changes that happens to the package status. This value makes sense to (temporarily) unconfigure the instance. ◦ ${START_WITH_PKG}: Add 1 to ASTREAT[*] if the Application Server should automatically be started during startup of the package.
Table 20 Overview of reasonable ASTREAT values ASTREAT value STOP_DEP STOP_LOCAL RESTART STOP START Restrictions 0 0 0 0 0 0 1 0 0 0 0 1 (1) Should only be configured for AS that belong to the same SID 2 0 0 0 1 (2) 0 3 0 0 0 1 (2) 1 4 0 0 1 (4) 0 0 5 0 0 1 (4) 0 1 (1) 6 0 0 1 (4) 1 (2) 0 7 0 0 1 (4) 1 (2) 1 (1) 8 0 1 (8) 0 0 0 Can be configured for AS belonging to same SID or AS part of other SID 9 0 1 (8) 0 0 1 (1) 10 0 1 (8) 0 1 (1)
The Central Instance is treated the same way as any of the additional packaged or unpackaged instances. Use the ASTREAT[*]-array to configure the treatment of a Central Instance. For Example, if a Central Instance should be restarted as soon as a DB-package fails over specify the following in sapdb.
Optional Step: OS720 If several packages start on a single node after a failover, it is likely that some packages start up faster than others on which they might depend. To allow synchronization in this case, there is a loop implemented that polls the missing resources regularly. After the first poll, the script will wait 5 seconds before initiating the next polling attempt. The polling interval is doubled after each attempt with an upper limit of 30 seconds.
Table 21 Optional Parameters and Customizable Functions List Command: Description: start_addons_predb Additional startup steps on database host after all low-level resources are made available after the system has been cleaned up for db start before start of database instance. start_addons_postdb Additional startup steps on database host after start of the database instance. start_addons_preci Additional startup steps on Central Instance host after start of the database instance.
Specify SAPCCMSR_START=1 if there should be a CCMS agent started on the DB host automatically. You should also specify a path to the profile that resides on a shared lvol. Example: SAPCCMSR_START=1 SAPCCMSR_CMD=sapccmsr SAPCCMSR_PFL="/usr/sap/ccms/${SAPSYSTEMNAME}_${CINR}/sapccmsr/${SAPSYSTEMNAME}.pfl” Global Defaults The fourth section of sap.config is rarely needed. It mainly provides various variables that allow overriding commonly used default parameters.
The following example WLM configuration file illustrates how to favor dialog to batch processing. prm { groups = OTHERS : 1, Batch : 2, Dialog : 3, SAP_other: 4; users = adm: SAP_other, ora: SAP_other; # # utilize the data provided by sapdisp.mon to identify batch and dialog # procmap = Batch : /opt/wlm/toolkits/sap/bin/wlmsapmap -f /etc/cmcluster//wlmprocmap.__ -t BTC, Dialog : /opt/wlm/toolkits/sap/bin/wlmsapmap -f /etc/cmcluster/P03/wlmprocmap.
Non-default SAP parameters that can be defined include EXEDIR for the SAP access path to the SAP kernel executables (default: /usr/sap//SYS/exe/run) and JMSSERV_BASE (default:3900) for the offset from which the message server port of JAVA SCS instances will be calculated as JMSSERV_BASE+.
/sapmnt/ /usr/sap/trans Exported directories can usually be found beneath the special export directory /export. The directories to be exported are specified including their export options, using the XFS[*] array of the hanfs. script out of legacy-style HA-NFS toolkit package. This script is called within the runtime by the standard Serviceguard control script. In modular-style packages, specify the parameter nfs/hanfs_export/XFS instead.
AUTOMOUNT=1 AUTO_MASTER="/etc/auto_master" AUTO_OPTIONS="-f $AUTO_MASTER" Installation Step: IS810 Make sure that at least one NFS client daemon and one NFS server daemon is configured to run. This is required for the automounter to work. Check the listed variables in /etc/rc.config.d/nfsconf. They should be specified as greater or equal to one. Example: NFS_CLIENT=1 NFS_SERVER=1 NUM_NFSD=4 NUM_NFSIOD=4 Installation Step: IS820 Add the following line to your /etc/auto_master file: /- /etc/auto.
The instance and crash recovery is initiated automatically and consists of two phases: Roll-forward phase: Oracle applies all committed and uncommitted changes in the redo log files to the affected data blocks. Following parameters can be used to tune the roll forward phase: • The parameter RECOVERY_PARALLELISM controls the number of concurrent recovery processes. • The parameter FAST_START_IO_TARGET controls the time a crash / instance recovery may take.
Copy $ORACLE_HOME/network/admin/tnsnames.ora to all additional application server hosts. Be careful if these files were customized after the SAP installation. Oracle Database Step: OR880 Be sure to configure and install the required Oracle NLS files and client libraries as mentioned in section Oracle Storage Considerations included in chapter “Planning the Storage Layout.” Also refer to SAP OSS Note 180430 for more details. Optional Step: OR890 If you use more than one SAP system inside of your cluster...
Set the optional parameter SQLNET.EXPIRE_TIME in sqlnet.ora to a reasonable value in order to take advantage of the Dead Connection Detection feature of Oracle. The parameter file sqlnet.ora resides either in /usr/sap/trans or in $ORACLE_HOME/network/admin. The value of SQLNET.EXPIRE_TIME determines how often (in seconds) SQL*Net sends a probe to verify that a client-server connection is still active.
Additional Steps for MaxDB Logon as root to the primary host of the database where the db or dbci package is running in debug mode. MaxDB Database Step: SD930 If environment files exist in the home directory of the MaxDB user on the primary node, create additional links for any secondary. Example: su - sqd ln -s .dbenv_.csh .dbenv_.csh ln -s .dbenv_.sh .dbenv_.
When using raw devices, the database installation on the primary node might have modified the access rights and owner of the device file entries where the data and logs reside. These devices were changed to be owned by user sdb, group sdba. These settings also have to be applied on the secondary, otherwise the database will not change from ADMIN to ONLINE state.
If environment files exist in the home directory of the Sybase database user for the primary node, create additional links on the secondary nodes. Example: su - syb ln -s .dbenv_.csh .dbenv_.csh ln -s .dbenv_.sh .dbenv_.
SAPDBHOST = rdisp/mshost = rdisp/mshost = rdisp/sna_gateway = rdisp/vbname = __ rdisp/btcname = __ rslg/collect_daemon/host = If you don't have Replicated Enqueue: rdisp/enqname = __ The following parameters are only necessary if an application server is installed on the adoptive node.
filenames of the instance profile and instance start profile. All references to the instance profiles that occur in the start profile need to be changed to include the virtual IP address instance profile filename. SAPLOCALHOST is set to the hostname per default at startup time and is used to build the SAP application server name: __ This parameter represents the communication path inside an SAP system and between different SAP systems.
do not need to change batch jobs, which are tied to the locality, after a switchover, even if the hostname which is also stored in the above tables differs. Installation Step: IS1180 Within the SAP Computing Center Management System (CCMS) you can define operation modes for SAP instances. An operation mode defines a resource configuration for the instances in your SAP system.
SAP J2EE Engine specific installation steps This section is applicable for SAP J2EE Engine 6.40 based installations that were performed prior to the introduction of SAPINST installations on virtual IPs. There is a special SAP OSS Note 757692 that explains how to treat a hostname change and thus the virtualization of a SAP J2EE Engine 6.40.
Table 23 IS1130 Installation Step (continued) Choose... Change the following values Check for the latest version of SAP OSS Note 757692. Installation is complete. The next step is to do some comprehensive switchover testing covering all possible failure scenarios. It is important that all relevant SAP application functions are tested on the switchover nodes. There exist several documents provided by HP or SAP that can guide you through this process.
6 SAP Supply Chain Management Within SAP Supply Chain Management (SCM) scenarios two main technical components have to be distinguished: the APO System and the liveCache. An APO System is based on SAP Application Server technology. Thus, sgesap/sapinstance and sgesap/dbinstance modules can be used or ci, db, dbci, d and sapnfs legacy packages may be implemented for APO. These APO packages are set up similar to Netweaver packages. There is only one difference. APO needs access to liveCache client libraries.
Serviceguard are already installed properly on all cluster hosts. Sometimes a condition is specified with the installation step. Follow the information presented only if the condition is true for your situation. NOTE: For installation steps in this chapter that require the adjustment of SAP specific parameter in order to run the SAP application in a switchover environment usually example values are given.
The hot standby mechanism also includes data replication. The standby maintains its own set of liveCache data on storage at all times. SGeSAP provides a runtime library to liveCache that allows to automatically create a valid local set of liveCache devspace data via Storageworks XP Business Copy volume pairs (pvol/svol BCVs) as part of the standby startup. If required, the master liveCache can remain running during this operation.
Option 1: Simple Clusters with Separated Packages Cluster Layout Constraints: • The liveCache package does not share a failover node with the APO Central Instance package. • There is no MaxDB or additional liveCache running on cluster nodes. • There is no intention to install additional APO Application Servers within the cluster. • There is no hot standby liveCache system configured.
Option 3: Full Flexibility If multiple MaxDB based components are either planned or already installed, the setup looks different. All directories that are shared between MaxDB instances must not be part of the liveCache package. Otherwise a halted liveCache package would prevent that other MaxDB instances can be started. Cluster Layout Constraint: • There is no hot standby system configured.
Table 27 General File System Layout for liveCache (Option 3) (continued) Storage Type Package Mount Point shared sapnfs /export/sapdb/programs autofs shared sapnfs /export/sapdb/data autofs shared sapnfs /var/spool/sql/ini * only valid for liveCache versions lower than 7.6. Option 4: Hot Standby liveCache Two liveCache instances are running in a hot standby liveCache cluster during normal operation. No instance failover takes place. This allows to keep instance-specific data local to each node.
/sapdb/programs/runtime/7250=7.2.5.0, /sapdb/programs/runtime/7300=7.3.0.0, /sapdb/programs/runtime/7301=7.3.1.0, /sapdb/programs/runtime/7401=7.4.1.0, /sapdb/programs/runtime/7402=7.4.2.0, For MaxDB and liveCache Version 7.5 (or higher) the SAP_DBTech.ini file does not contain sections [Installations], [Databases] and [Runtime]. These sections are stored in separate files Installations.ini, Databases.ini and Runtimes.ini in the IndepData path /sapdb/data/config. A sample SAP_DBTech.ini, Installations.
cd / # umount all logical volumes of the volume group vgchange -a n vgchange -c y vgchange -a e # remount the logical volumes The device minor numbers must be different from all device minor numbers gathered on the other hosts. Distribute the shared volume groups to all potential failover nodes.
Verify that the symbolic links listed below in directory /var/spool/sql exist on both cluster nodes. dbspeed -> /sapdb/data/dbspeed diag -> /sapdb/data/diag fifo -> /sapdb/data/fifo ini ipc -> /sapdb/data/ipc pid -> /sapdb/data/pid pipe -> /sapdb/data/pipe ppid -> /sapdb/data/ppid liveCache Installation Step: LC070 Make sure /var/spool/sql exists as a directory on the backup node. /usr/spool must be a symbolic link to /var/spool.
In this step, the correct storage layout to support the hardware based copy mechanisms for the devspaces of the data files gets created. On the Storageworks XP array pvol/svol pairs should be defined for all LUNs containing DISKDnnnn information of the master. These LUNs should be used for devspace data exclusively. Make sure that all pairs are split into SMPL (simple) state.
Figure 16 Hot Standby System Configuration Wizard Screens SGeSAP Modular Package Configuration This section describes how to configure the modular SGeSAP lc liveCache package. The configuration can be done within the configuration screen of the Serviceguard Manager plug-in to System Management Homepage or via the Serviceguard Command Line tools. Figure 17 Configuration screen of a liveCache module in Serviceguard Manager liveCache Installation Step: LC171 Create a liveCache package configuration.
cmmakepkg –m sgesap/livecache lc.config Alternatively, from the Serviceguard Manager Main page, click on Configuration in the menu toolbar, then select Create a Modular Package from the drop down menu. Click on the yes radio button following Do you want to use a toolkit? Select the SGeSAP toolkit. In the Select the SAP Components in the Package table, select SAP Livecache Instances. Then click Next>>.
Log in on the primary node that has the shared logical volumes mounted. Create a symbolic link that acts as a hook that informs SAP software where to find the liveCache monitoring software to allow the prescribed interaction with it. Optionally, you can change the ownership of the link to sdb:sdba. ln -s /etc/cmcluster//saplc.mon /sapdb//db/sap/lccluster liveCache Installation Step: LC216 For the following steps the SAPGUI is required. Logon to the APO central instance as user SAP*.
SERVICE_NAME[0]="SGeSAPlc" SERVICE_CMD[0]="/sapdb//db/sap/lccluster SERVICE_RESTART[0]="-r 3" • monitor" All other parameters should be chosen as appropriate for the ndividual setup. liveCache Installation Step: LC180 In /etc/cmcluster//lc.control.script add the following lines to the customer defined functions. These commands will actually start and stop the liveCache instance. function customer_defined_run_cmds { ### Add following line . /etc/cmcluster//sapwas.
Hot standby systems have a preferred default machine for the instance with the master role. This should usually correspond to the hostname of the primary node of the package. The hostname of the failover node defines the default secondary role, i.e. the default standby and alternative master machine. For a hot standby system to become activated, the LC_PRIMARY_ROLE and the LC_SECONDARY_ROLE need to be defined with physical hostnames. No virtual addresses can be used here.
Livecache Service Monitoring SAP recommends the use of service monitoring in order to test the runtime availability of liveCache processes. The monitor, provided with SGeSAP, periodically checks the availability and responsiveness of the liveCache system. The sanity of the monitor will be ensured by standard Serviceguard functionality. The liveCache monitoring program is shipped with SGeSAP in the saplc.mon file. The monitor runs as a service attached to the lc Serviceguard package.
APO Setup Changes Running liveCache within a Serviceguard cluster package means that the liveCache instance is now configured for the relocatable IP of the package. This configuration needs to be adopted in the APO system that connects to this liveCache. Figure 4-3 shows an example for configuring LCA. Figure 18 Example HA SCM Layout liveCache Installation Step: GS220 Run SAP transaction LC10 and configure the logical liveCache names LCA and LCD to listen to the relocatable IP of the liveCache package.
, e.g. 1LC1node1 exist. These will only work on one of the cluster hosts. • To find out if a given user key mapping works throughout the cluster, temporarily add relocatable address relocls_s to the cluster node with "cmmodnet -a -i relolc_s network" for testing the following commands.
1. 2. 3. 4. Add a shared logical volume for /export/sapdb/programs to the global NFS package (sapnfs). Copy the content of /sapdb/programs from the liveCache primary node to this logical volume. Make sure /sapdb/programs exists as empty mountpoint on all hosts of the liveCache package. Also make sure /export/sapdb/programs exists as empty mountpoint on all hosts of the sapnfs package. Add the following entry to /etc/auto.direct.
7 SAP Master Data Management (MDM)
SGeSAP legacy packaging provides a cluster solution for the server components of SAP Master Data Management (MDM), i.e. the MDB, MDS, MDIS, and MDSS servers. They can be handled as a single package or split up into four individual packages.

Master Data Management - Overview
Figure 5.1 provides a general flow of how SGeSAP works with MDM.
• The Upper Layer—contains the User Interface components.
• The Middle Layer—contains the MDM Server components.
Table 29 MDM User Interface and Command Line Components (continued)

Category                                    Description
                                            NOTE: The MDM GUI interface is not relevant for the
                                            implementation of the SGeSAP scripts.
MDM CLI (Command Line Interface) clients    MDM CLIX... Allows you to manage the MDM software and
                                            MDM repositories using a command line interface. The
                                            MDM CLIX interface is used by the SGeSAP scripts to
                                            manage the repositories.
The MDM server components that are relevant for the installation and configuration of the SGeSAP scripts are: MDS, MDIS, MDSS, and MDB.

NOTE: Go to http://service.sap.com/installmdm to access the SAP Service Marketplace web site, where you can access the SAP documentation required for this installation. You must be a registered SAP user to access these documents.
MDM 5.5 SP05 - Master Guide
MDM 5.5 SP05 - Console Reference Guide
MDM 5.5 SP05 - Release Notes
MDM 5.
NOTE: From a system management viewpoint, the advantage of the NFS server/client approach is that only one copy of all the MDM server files has to be kept and maintained in the cluster, instead of creating and distributing copies to each cluster node. Regardless of which node any of the MDM server components are running on, the MDM server files are always available in the /home/mdmuser directory. The disadvantage: I/O performance might become a bottleneck.
• A "fifth" Serviceguard package called masterMDM must also be configured for this configuration type. It ensures that, cluster-wide, the four MDM Serviceguard packages are started in the correct order: mdbMDM -> mdsMDM -> mdisMDM -> mdssMDM, regardless of which cluster node they are started on. The same is true for stopping the packages - the reverse order is used.
lvcreate -L 7000 -n lvmdmuser /dev/vgmdmuser

newfs -F vxfs -o largefiles /dev/vgmdmoradb/rlvmdmoradb
newfs -F vxfs -o largefiles /dev/vgmdmuser/rlvmdmuser

Create the mount point for Oracle.

mkdir -p /oracle/MDM

Create NFS export/import mount points for the /home/mdmuser home directory. The directory /export/home/mdmuser is the NFS server (NFS export) mount point, /home/mdmuser is the NFS client (NFS import) mount point.
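A minimal sketch of creating these mount points on a cluster node, using the paths named above:

mkdir -p /export/home/mdmuser
mkdir -p /home/mdmuser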
/etc/passwd
— — — —
user:oramdm uid:205 home:/oracle/MDM group:dba shell:/bin/sh

/etc/group
— — — —
oper::202:oramdm
dba::201:oramdm

/oracle/MDM/.profile
— — — —
export ORACLE_HOME=/oracle/MDM/920_64
export ORA_NLS=/oracle/MDM/920_64/ocommon/NLS_723/admin/data
ORA_NLS32=/oracle/MDM/920_64/ocommon/NLS_733/admin/data
export ORA_NLS32
export ORA_NLS33=/oracle/MDM/920_64/ocommon/nls/admin/data
export ORACLE_SID=MDM
NLSPATH=/opt/ansic/lib/nls/msg/%L/%N.cat:/opt/ansic/lib/\
nls/msg/C/%N.
stty erase "^?"
set -o emacs              # use emacs commands
alias __A=`echo "\020"`   # up arrow = back a command
alias __B=`echo "\016"`   # down arrow = down a command
alias __C=`echo "\006"`   # right arrow = forward a character
alias __D=`echo "\002"`   # left arrow = back a character
alias __H=`echo "\001"`   # home = start of line

Setup Step: MDM070 Copy files to second cluster node.
All nodes of the cluster require a copy of the configuration changes. Copy the modified files to the second cluster node.
vi /etc/cmcluster/MDMNFS/mdmNFS.control.script

VG[0]="vgmdmuser"
LV[0]="/dev/vgmdmuser/lvmdmuser"; FS[0]="/export/home/mdmuser"; \
FS_MOUNT_OPT[0]=""; \
FS_UMOUNT_OPT[0]=""; \
FS_FSCK_OPT[0]=""; \
FS_TYPE[0]="vxfs"

IP[0]="172.16.11.96"
SUBNET[0]="172.16.11.0"

The file hanfs.sh contains the NFS directories that will be exported, together with their export options. These variables are used by the command exportfs -i to export the file systems and by the command exportfs -u to unexport them.
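A rough sketch of such an export entry in hanfs.sh; both the XFS[] variable name and the export options shown are assumptions taken from typical Serviceguard NFS toolkit templates, so check the template shipped with your toolkit version:

XFS[0]="-o root=clunode1:clunode2 /export/home/mdmuser"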
of the database software. In a later step, the SGeSAP specific scripts will be added to the package. Depending on the goal, choose either option:
• (a) = Single MDM Serviceguard package - create a mgroupMDM package.
• (b) = Multiple MDM Serviceguard packages - create a mdbMDM package.
At this stage the steps for option (a) and (b) are the same. Only the package name is different - the storage volume used is the same in both cases.
NODE_NAME               *
RUN_SCRIPT              /etc/cmcluster/MDM/mdbMDM.control.script
RUN_SCRIPT_TIMEOUT      NO_TIMEOUT
HALT_SCRIPT             /etc/cmcluster/MDM/mdbMDM.control.script
HALT_SCRIPT_TIMEOUT     NO_TIMEOUT

vi /etc/cmcluster/MDM/mdbMDM.control.script

IP[0]="172.16.11.97"
SUBNET[0]="172.16.11.
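If you want to verify the package skeleton at this point, it can be applied and started with the standard Serviceguard commands shown below; this is only a sketch, since in the documented flow the SGeSAP specific scripts are still added to the control script later:

cmapplyconf -P /etc/cmcluster/MDM/mdbMDM.config
cmrunpkg mdbMDM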
clunode1:
cmmakepkg -s /etc/cmcluster/MDM/mdisMDM.control.script
cmmakepkg -p /etc/cmcluster/MDM/mdisMDM.config

vi /etc/cmcluster/MDM/mdisMDM.config

PACKAGE_NAME            mdisMDM
NODE_NAME               *
RUN_SCRIPT              /etc/cmcluster/MDM/mdisMDM.control.script
RUN_SCRIPT_TIMEOUT      NO_TIMEOUT
HALT_SCRIPT             /etc/cmcluster/MDM/mdisMDM.control.script
HALT_SCRIPT_TIMEOUT     NO_TIMEOUT

NOTE: At this stage, the file /etc/cmcluster/MDM/mdisMDM.control.script does not have to be edited, as neither an IP address nor a storage volume has to be configured.
NODE_NAME               *
RUN_SCRIPT              /etc/cmcluster/MDM/masterMDM.control.script
RUN_SCRIPT_TIMEOUT      NO_TIMEOUT
HALT_SCRIPT             /etc/cmcluster/MDM/masterMDM.control.script
HALT_SCRIPT_TIMEOUT     NO_TIMEOUT

NOTE: At this stage, the file /etc/cmcluster/MDM/masterMDM.control.script does not have to be edited, as neither an IP address nor a storage volume has to be configured.

scp -pr /etc/cmcluster/MDM clunode2:/etc/cmcluster/MDM
cmapplyconf -P /etc/cmcluster/MDM/masterMDM.
[ ] Data Warehouse
[ ] Customized
[ ] Software Only

Database Identification
— — — —
Global Database Name: MDM  SID: MDM

Database File Location
— — — —
Directory for Database Files: /oracle/MDM/920_64/oradata

Database Character Set
— — — —
[ ] Use the default character set
[X] Use the Unicode (AL32UTF8) as the character set

Choose JDK Home Directory
— — — —
Enter JDK Home: /opt/java1.3

Summary
— — — —
-> Install

Setup Step: MDM214 Upgrade to Oracle 9.2.0.8
MDM requires Oracle version 9.2.0.8 or later.
Start the "Oracle Server - Client" installation for user oramdm
Before running the upgrade, halt the Oracle database and the Oracle listener. The next steps install the "Oracle Server - Client" bits so that the sqlplus command can be run over a network connection. The "Oracle Server - Client" is installed for user oramdm. Start the installation by executing the runInstaller command. The following is a summary of the responses during the installation.

NOTE: Oracle Server - Client version 9.2.0.
/KITS/920_64/Disk1/runInstaller

File locations
— — — —
Source:
Path: /KITS/Disk1/products.jar
Destination:
Name: MDM (OUIHome)
Path: /home/mdmuser/oracle/MDM/920_64

Available Products
— — — —
[ ] Oracle 9i Database
[X] Oracle 9i client
[ ] Oracle 9i management

Installation Type
— — — —
[X] Administrator
[ ] Runtime
[ ] Custom

Choose JDK Home Directory
— — — —
Enter JDK Home: /opt/java1.
— — — —
Source:
Path: /KITS/oa9208/stage/products.xml
Destination:
Name: MDM
Path: /home/mdmuser/oracle/MDM/920_64

Summary
— — — —
Success

Setup Step: MDM222 Configure tnsnames.ora and listener.ora
Change into directory /oracle/MDM/920_64/network/admin/ and edit tnsnames.ora and listener.ora. Replace HOST=clunode1 with the virtual address or virtual hostname of the database, 172.16.11.97 or mdbreloc.

clunode1:
su - oramdm
cd /oracle/MDM/920_64/network/admin/
cp tnsnames.ora tnsnames.ora.ORG
cp listener.ora listener.ORG
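After the edit, the MDM entry in tnsnames.ora might look roughly like the sketch below. The CONNECT_DATA form and the default listener port 1521 are assumptions; the virtual address is the one named in this chapter:

MDM =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 172.16.11.97)(PORT = 1521))
    )
    (CONNECT_DATA = (SID = MDM))
  )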
)
)

Setup Step: MDM224 Configure /etc/tnsnames.ora, /etc/listener.ora and /etc/sqlnet.ora
Copy the above three files into the /etc/ directory so that the configuration information is also globally available. Copy the modified files to the second cluster node.

cd /oracle/MDM/920_64/network/admin/
cp tnsnames.ora /etc/tnsnames.ora
cp listener.ora /etc/listener.ora
cp sqlnet.ora /etc/sqlnet.ora

scp -p /etc/listener.ora clunode2:/etc/listener.ora
scp -p /etc/sqlnet.ora clunode2:/etc/sqlnet.
su - mdmuser
mkdir /home/mdmuser/mdis
cd mdis
mdis

Setup Step: MDM228 Install the MDM SERVER 5.5
NOTE: The MDM servers (mds, mdis, mdss) install their files into the /opt/MDM directory.
Execute the following steps to install the MDM Server:
• extract the zip archive
• uncompress and install the kit into the /opt/MDM directory
• create a /home/mdmuser/mds directory
• start the mds process (this will create the mds environment in the current directory)

su -
cd /KITS/MDM_Install/MDM55SP04P03/MDM SERVER 5.
Edit the mds.ini, mdis.ini and mdss.ini files and add the virtual IP hostname for the mds component to the [GLOBAL] section. The mdis and mdss components use this information to connect to the mds server.

/home/mdmuser/mds/mds.ini
[GLOBAL]
Server=mdsreloc
.
.

/home/mdmuser/mdis/mdis.ini
[GLOBAL]
Server=mdsreloc
.
.

/home/mdmuser/mdss/mdss.ini
[GLOBAL]
Server=mdsreloc
.
.
(b) Multiple Serviceguard packages - continue with the mdsMDM, mdisMDM, mdssMDM and masterMDM configuration
If the Multiple Serviceguard packages option was chosen at the beginning of this chapter, continue with the configuration of the mdsMDM, mdisMDM, mdssMDM and masterMDM packages. Insert the following shell script calls into the functions customer_defined_run_cmds and customer_defined_halt_cmds. These scripts are responsible for executing the SGeSAP specific scripts e.g.
Table 31 MDM parameter descriptions (continued)

Parameter                                       Description
                                                commands. Note: MDSS and MDIS do not require a
                                                virtual IP address.
MDM_PASSWORD=""                                 The password used to access MDM.
MDM_REPOSITORY_SPEC="PRODUCT_HA:MDM:o:mdm:sap   The following is a brief description of the values:
PRODUCT_HA_INI:MDM:o:mdm:sap"                   Repository = PRODUCT_HA, DBMS instance = MDM,
                                                Database type = o (Oracle), Username = mdm,
                                                Password = sap.
MDM_CRED="Admin: "                              Clix Repository Credentials: the password used for
                                                repository related commands.
PRODUCT_HA_INI:MDM:o:mdm:sap \
PRODUCT_HA_NOREAL:MDM:o:mdm:sap \
"
MDM_CRED="Admin: "
MDM_MONITOR_INTERVAL=60
MDM_MGROUP_DEPEND="mdb mds mdis mdss"

Copy file sap.config to the second node:

scp -pr /etc/cmcluster/MDM/sap.config \
clunode2:/etc/cmcluster/MDM

Setup Step: MDM244 (b) Multiple Serviceguard packages - configure sap.config
Edit file sap.config and add the following parameters for a Multiple Serviceguard packages configuration.
SERVICE_NAME[0]="mgroupMDMmon"
SERVICE_CMD[0]="/etc/cmcluster/MDM/sapmdm.sh check mgroup"
SERVICE_RESTART[0]=""

To activate the changes in the configuration files, run the following commands:

cmapplyconf -P /etc/cmcluster/MDM/mgroupMDM.config
cmrunpkg mgroupMDM

Setup Step: MDM248 (b) Enable SGeSAP monitoring for "Multiple" MDM packages
The SGeSAP scripts include check functions to monitor the health of the MDM processes. The variable MDM_MONITOR_INTERVAL=60 in file sap.
mdisMDM and mdssMDM) are halted from the previous installation tasks. The command cmhaltpkg masterMDM will stop all MDM Serviceguard packages in the reverse order of how they were started.

NOTE: Since the masterMDM package uses standard cmrunpkg commands to start the other packages, the node on which the underlying Serviceguard packages are run is obtained from the variable NODE_NAME as specified in the files mdbMDM.config, mdsMDM.config, mdisMDM.config and mdssMDM.config.
Prerequisites
You must have the following installed and already configured:
• HP-UX and Serviceguard
• A Serviceguard cluster with at least two nodes attached to the network (node names: clunode1 and clunode2)
• Any shared storage supported by Serviceguard. The shared storage used for this configuration is based on EVA (Enterprise Virtual Array)—a fibre channel based storage solution in an HPVM configuration.
Contains SAP directories and files for the different SAP instances.

/usr/sap/MO7/SYS
Contains the SAP directories exe, gen, global, profile and src for SAP system MO7.

/usr/sap/MO7/MDS01
/usr/sap/MO7/MDIS02
/usr/sap/MO7/MDSS03
These directories are on local and relocatable storage and contain the log and working directories for the MDM server components.

/sapmnt/MO7
/export/sapmnt/MO7
The /sapmnt/MO7 directory is required on every cluster node on which an MDS, MDIS or MDSS component can run.
• The mcsMO7 package contains code for dependency checks: it ensures that the MDM components are started in the correct order: mdb -> mds -> mdis -> mdss—and stopped in the reverse order.
• The advantage of the Single MDM Serviceguard package is a simpler setup—fewer steps are involved in setting up and running the configuration—but with the restriction that all four MDM server components always have to run together on the same cluster node.
/export/sapmnt/MO7, /export/oracle/client and /oracle/MO7 as described earlier. In the following example, the storage used for this configuration is EVA based fibre channel storage and was already initialized on the EVA itself.

clunode1:
ioscan -fnC disk
insf -e

disk  5   0/0/1/0.2.0
disk  4   0/0/1/0.3.0
disk 11   0/0/1/0.4.0
disk 10   0/0/1/0.5.0
disk  9   0/0/1/0.6.0
disk  8   0/0/1/0.7.
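The volume groups and logical volumes on these disks are created with standard HP-UX LVM commands. A hypothetical sketch for one of the volume groups is shown below; the device file, group file minor number, and logical volume size are assumptions, and the commands are repeated for each volume group used in this chapter:

pvcreate -f /dev/rdisk/disk11
mkdir /dev/vgmo7MDS
mknod /dev/vgmo7MDS/group c 64 0x030000
vgcreate /dev/vgmo7MDS /dev/disk/disk11
lvcreate -L 2048 -n lvmo7MDS /dev/vgmo7MDS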
newfs -F vxfs -o largefiles /dev/vgoramo7/rlvmo7ora
newfs -F vxfs -o largefiles /dev/vgmo7sapmnt/rlvmo7sapmnt
newfs -F vxfs -o largefiles /dev/vgmo7MDS/rlvmo7MDS
newfs -F vxfs -o largefiles /dev/vgmo7MDIS/rlvmo7MDIS
newfs -F vxfs -o largefiles /dev/vgmo7MDSS/rlvmo7MDSS
newfs -F vxfs -o largefiles /dev/vgmo7oracli/rlvmo7oracli

Create the mount points for all relocatable and all local file systems.

mkdir -p /oracle/MO7
mkdir -p /export/sapmnt/MO7
mkdir -p /export/oracle/client
mkdir -p /sapmnt/MO7
mkdir -p /usr/sap
mkdir -p /usr/sap/MO7/SYS
mkdir -p /usr/sap/MO7/MDS01
mkdir -p /usr/sap/MO7/MDIS02
mkdir -p /usr/sap/MO7/MDSS03
mkdir -p /oracle/client
mkdir -p /export/sapmnt/MO7
mkdir -p /export/oracle/client
mkdir -p /sapmnt/MO7
mkdir -p /usr/sap
mkdir -p /usr/sap/MO7/SYS
mkdir -p /usr/sap/MO7/MDS01
mkdir -p /usr/sap/MO7/MDIS02
mkdir -p /usr/sap/MO7/MDSS03
mkdir -p /oracle/client

vgimport -s -m /tmp/vgmo7ora.map /dev/vgoramo7
vgimport -s -m /tmp/vgmo7sapmnt.map /dev/vgmo7sapmnt
vgimport -s -m /tmp/vgmo7MDS.map /dev/vgmo7MDS
vgimport -s -m /tmp/vgmo7MDSS.map /dev/vgmo7MDSS
vgimport -s -m /tmp/vgmo7MDIS.map /dev/vgmo7MDIS
vgimport -s -m /tmp/vgmo7oracli.map /dev/vgmo7oracli

vgchange -a n /dev/vgoramo7
vgchange -a n /dev/vgmo7sapmnt
vgchange -a n /dev/vgmo7MDS
vgchange -a n /dev/vgmo7MDSS
vgchange -a n /dev/vgmo7MDIS
vgchange -a n /dev/vgmo7oracli
The following summarizes parameters and settings used in this configuration:

/etc/passwd
— — — —
user:mo7adm uid:110 home:/home/mo7adm group:sapsys shell:/bin/sh

/etc/group
— — — —
dba::201:mo7adm
oper::202:oramo7,mo7adm
oinstall::203:mo7adm
sapinst::105:mo7adm
sapsys::106:

Setup Step: MDM080 Set up Serviceguard for a modular nfsMO7 package.
Create a directory /etc/cmcluster/nfsMO7, which will contain the Serviceguard configuration file for the NFS package (see the sketch below).
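A minimal sketch of creating the directory and generating the modular NFS package template; the toolkit module name tkit/nfs/nfs is an assumption based on the Serviceguard NFS toolkit and may differ for your toolkit version:

mkdir /etc/cmcluster/nfsMO7
cmmakepkg -m tkit/nfs/nfs /etc/cmcluster/nfsMO7/nfsMO7.conf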
vi /etc/auto.direct

/sapmnt/MO7 -proto=udp nfsMO7reloc:/export/sapmnt/MO7
/oracle/client -proto=udp nfsMO7reloc:/export/oracle/client

Distribute /etc/auto.direct to all cluster members with scp.

scp -p /etc/auto.direct clunode2:/etc/auto.direct

Restart the NFS subsystem on all cluster members.

/sbin/init.d/autofs stop
/sbin/init.d/autofs start
/sbin/init.d/nfs.client stop
/sbin/init.d/nfs.
fs_directory     /usr/sap/MO7/MDS01
fs_type          ""
fs_mount_opt     "-o rw"
fs_umount_opt    ""
fs_fsck_opt      ""

# Filesystem MDIS
fs_name          /dev/vgmo7MDIS/lvmo7MDIS
fs_directory     /usr/sap/MO7/MDIS02
fs_type          ""
fs_mount_opt     "-o rw"
fs_umount_opt    ""
fs_fsck_opt      ""

# Filesystem MDSS
fs_name          /dev/vgmo7MDSS/lvmo7MDSS
fs_directory     /usr/sap/MO7/MDSS03
fs_type          ""
fs_mount_opt     "-o rw"
fs_umount_opt    ""
fs_fsck_opt      ""

Apply the Serviceguard package configuration file.

cmapplyconf -P /etc/cmcluster/MO7/mcsMO7.
fs_name          /dev/vgmo7MDS/lvmo7MDS
fs_directory     /usr/sap/MO7/MDS01
fs_type          ""
fs_mount_opt     "-o rw"
fs_umount_opt    ""
fs_fsck_opt      ""

Apply the Serviceguard package configuration file.

cmapplyconf -P /etc/cmcluster/MO7/mdsMO7.conf
cmrunpkg mdsMO7

Setup Step: MDM206 (b) Multiple MDM Serviceguard packages - create a mdisMO7 package.

clunode1:
cmmakepkg > /etc/cmcluster/MO7/mdisMO7.conf

vi /etc/cmcluster/MO7/mdisMO7.conf

package_name     mdisMO7
node_name        *
ip_subnet        172.16.11.0
ip_address       172.16.11.
NOTE: The Oracle installation is based on version 10.2.0.0 of Oracle. For this installation, the Database SID was set to "MO7". Replace the "MO7" string with one that is applicable to your environment. Start the installation by executing the runInstaller command. The following contains a summary of the responses during the installation. See the Oracle installation guide for detailed information.
Specify Backup and Recovery Options
— — — —
[x] Do not enable Automated Backups
[ ] Enable Automated Backups

Specify Database Schema Passwords
— — — —
[x] Use the same password for all accounts

Summary
— — — —
—> Install

su -
/tmp/orainstRoot.sh
Creating Oracle Inventory pointer file (/var/opt/oracle/oraInst.loc).
INVPTR=/var/opt/oracle/oraInst.
[x] Use standard port number 1521

End of Installation

Setup Step: MDM222 Configure tnsnames.ora and listener.ora.
Change into directory /oracle/MO7/102_64/network/admin/ and edit tnsnames.ora and listener.ora. Replace the HOST=clunode1 with the virtual address or virtual hostname for the database, 172.16.11.97 or mdbreloc.

clunode1:
su - oramo7
cd /oracle/MO7/102_64/network/admin/
cp tnsnames.ora tnsnames.ora.ORG
cp listener.ora listener.ORG

vi /oracle/MO7/102_64/network/admin/tnsnames.
Connected to:
Oracle Database 10g Release 10.2.0.1.0 - 64bit Production

Setup Step: MDM224 Configure the Oracle client interface by editing /oracle/client/network/admin/tnsnames.ora.

cd /oracle/client/network/admin
cp tnsnames.ora tnsnames.ora.ORG
vi tnsnames.ora

MO7 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 172.16.11.
Database Settings for Oracle Database
-------------------------------------
Oracle Home Directory [/oracle/client]
Oracle Client Libraries [/oracle/client/lib]
Oracle SQL Utility [/oracle/client/bin/sqlplus]

Parameter Summary
-----------------
Execution of SAP MDM Installation -> Distributed System -> MDS has been completed successfully.

Setup Step: MDM226 Install the MDM MDIS server.
Select whether the MDM installation is a Central System installation or a Distributed System installation.
[mdsreloc]

Unpack Archives
[mdss.sar]
[shared.sar]
[SAPINSTANCE.SAR]

Execution of SAP MDM Installation -> Distributed System -> MDSS has been completed successfully.

Setup Step: MDM234
The MDM installation and the database installation are now complete. Before continuing with the configuration of the MDM Serviceguard packages, all MDM servers and the database have to be stopped and all file systems have to be unmounted.
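A hypothetical sketch of releasing the shared storage on clunode1 after the MDM servers and the Oracle database have been stopped; the exact mount points follow the file system layout described earlier, and the commands for stopping the servers themselves are not shown:

umount /usr/sap/MO7/MDS01
umount /usr/sap/MO7/MDIS02
umount /usr/sap/MO7/MDSS03
umount /oracle/MO7
vgchange -a n /dev/vgmo7MDS
vgchange -a n /dev/vgmo7MDIS
vgchange -a n /dev/vgmo7MDSS
vgchange -a n /dev/vgoramo7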
cd /usr/sap; tar xpvf /tmp/usrsap.tar
cd /home; tar xvf /tmp/home.tar

The following contains some selected SGeSAP parameters relevant to an MDM database configuration. Refer to the package configuration file itself for detailed information.
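As a rough sketch of what such database-related settings can look like in the package configuration, two typical attributes are shown below; the attribute names and values are assumptions, so take the authoritative names from the configuration file generated with the sgesap/oracledb module:

db_vendor    oracle
db_system    MO7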
Setup Step: MDM240
Start the NFS package with the command:

cmrunpkg nfsMO7

Setup Step: MDM241 (a) Single MDM Serviceguard package – update the mcsMO7 package with SGeSAP MDM configuration data.

cmgetconf -p mcsMO7 > /etc/cmcluster/MO7/tmp_mcsMO7.conf
cmmakepkg -i /etc/cmcluster/MO7/tmp_mcsMO7.conf \
-m sgesap/oracledb \
-m sgesap/sapinstance \
-m sg/service > /etc/cmcluster/MO7/mcsMO7.
cmcheckconf -P /etc/cmcluster/MO7/mdbMO7.conf
cmapplyconf -P /etc/cmcluster/MO7/mdbMO7.conf
cmrunpkg mdbMO7

Setup Step: MDM244 (b) Multiple MDM Serviceguard packages - update the mdsMO7 package with SGeSAP MDM configuration data.
The MDM MDS server requires that the database is running before any of the MDM repositories can be loaded successfully. To ensure this for an MDM distributed configuration, a package dependency has to be created (see the sketch below).

cmgetconf -p mdsMO7 > /etc/cmcluster/MO7/tmp_mdsMO7.
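A minimal sketch of such a dependency in the mdsMO7 package configuration, using the generic Serviceguard dependency attributes; the dependency name is arbitrary, and whether same_node or any_node applies depends on your Serviceguard version and cluster layout:

dependency_name        mdbMO7_up
dependency_condition   mdbMO7 = up
dependency_location    any_node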
service_restart              none
service_fail_fast_enabled    no
service_halt_timeout         0

To activate the changes in the Serviceguard configuration files, run the following commands:

cmcheckconf -P /etc/cmcluster/MO7/mdisMO7.conf
cmapplyconf -P /etc/cmcluster/MO7/mdisMO7.conf
cmrunpkg mdisMO7

Setup Step: MDM246 (b) Multiple MDM Serviceguard packages - update the mdssMO7 package with SGeSAP MDM configuration data.

cmgetconf -p mdssMO7 > /etc/cmcluster/MO7/tmp_mdssMO7.conf
cmmakepkg -i /etc/cmcluster/MO7/tmp_mdssMO7.