Managing HP Serviceguard Extension for SAP for Linux, Version A.06.00.10
Abstract
This guide describes how to plan, configure, and administer highly available SAP Netweaver systems on Red Hat Enterprise Linux and SUSE Linux Enterprise Server systems using HP Serviceguard.
© Copyright 2013 Hewlett-Packard Development Company, L.P. Serviceguard, Serviceguard Extension for SAP, Metrocluster and Serviceguard Manager are products of Hewlett-Packard Company, L. P., and all are protected by copyright. Valid license from HP is required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S.
Contents
1 Overview..................................................................................................5
  About this manual....................................................................................5
  Related documentation..............................................................................5
2 SAP cluster concepts...................................................................................
Infrastructure setup, pre-installation preparation (Phase 1).............................47
  Prerequisites......................................................................................47
  Node preparation and synchronization.................................................47
  Intermediate synchronization and verification of virtual hosts....................................
1 Overview
About this manual
This document describes how to plan, configure, and administer highly available SAP Netweaver systems on Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) systems using HP Serviceguard high availability cluster technology in combination with HP Serviceguard Extension for SAP (SGeSAP). To use SGeSAP, you must be familiar with Serviceguard concepts and commands, Linux operating system administration, and SAP basics.
• HP Serviceguard A.11.20.20 for Linux Release Notes • HP Serviceguard Toolkit for NFS version A.03.03.10 on Linux User Guide The documents are available at http://www.hp.com/go/linux-serviceguard-docs.
2 SAP cluster concepts This chapter introduces the basic concepts used by SGeSAP for Linux. It also includes recommendations and typical cluster layouts that can be implemented for SAP environments. SAP-specific cluster modules HP SGeSAP extends Serviceguard failover cluster capabilities to SAP application environments. It is intended to be used in conjunction with the HP Serviceguard Linux product and the HP Serviceguard toolkit for NFS on Linux.
Server Instances, SAP Central Instances, SAP Enqueue Replication Server Instances, SAP Gateway Instances and SAP Webdispatcher Instances. The module sgesap/mdminstance extends the coverage to the SAP Master Data Management Instance types. The module to cluster SAP liveCache instances is called sgesap/livecache. SGeSAP also covers single-instance database instance failover with built-in routines.
Configuration restrictions
The following are the configuration restrictions:
• The sgesap/sapinstance module must not be used for Diagnostic Agents.
• A single SGeSAP package must not contain two database instances.
• A single SGeSAP package must not contain both a Central Service Instance ([A]SCS) and its corresponding Replication Instance (ERS).
Figure 1 One-package failover cluster concept (SGeSAP Cluster One-Package Concept: the dbciC11 package is moved from Node 1 to Node 2 over shared disks and the required resources are freed up in the event of a failure; QAS systems, Dialog Instances, and Application Servers remain connected via the LAN)
Maintaining an expensive idle standby is not required, because SGeSAP allows the secondary node(s) to be utilized with different instances during normal operation.
Figure 2 Visualization of a one-package cluster concept in Serviceguard Manager
If the primary node fails, the database and the Central Instance fail over and continue functioning on an adoptive node. After failover, the system runs without any manual intervention. All redundant Application Servers and Dialog Instances, even those that are not part of the cluster, can stay up or can be restarted (triggered by a failover).
Figure 3 Two-package failover with mutual backup scenario (SGeSAP Cluster Mutual Failover: the dbC11 and ciC11 packages on Node 1 and Node 2 can fail over and recover independently over shared disks; Application Servers connect via the LAN)
It is a best practice to base the package naming on the SAP instance naming conventions whenever possible. Each package name must also include the SAP System Identifier (SID) of the system to which the package belongs.
content got lost. As a reaction, they cancel ongoing transactions that still hold granted locks; these transactions need to be restarted. Enqueue Replication provides a concept that prevents a failure of the Enqueue Service from impacting the Dialog Instances, so transactions no longer need to be restarted.
Setting up the enqor MNP enables a protected follow-and-push behavior for the two packages that contain the enqueue and its replication. As a result, a built-in mechanism ensures that the Enqueue and its Enqueue Replication Server are never started on the same node initially. The Enqueue will not invalidate the replication accidentally by starting on a non-replication node while replication is active elsewhere.
1. SAP self-controlled, using High Availability polling with Replication Instances on each cluster node (active/passive).
2. A complete High Availability failover solution with one virtualized Replication Instance per Enqueue.
SGeSAP implements the second concept and avoids costly polling and complex data exchange between SAP and the High Availability cluster software. There are several SAP profile parameters that are related to the self-controlled approach.
Figure 7 Dedicated failover server (SGeSAP Cluster Dedicated Failover Server with Replicated Enqueue: packages jdbscsC11 and dbascsC12 on Nodes 1 and 2, replication packages ers00C11, ers10C12, ..., ersnnCnn, and failover paths from all primary partitions to the dedicated backup server Node 3)
Figure 7 (page 16) shows an example configuration. The dedicated failover host can serve many purposes during normal operation.
A dedicated SAPNFS package allows access to shared file systems that are needed by more than one SAP component. Typical file systems served by SAPNFS are common SAP directories such as /usr/sap/trans or /sapmnt/<SID>, or, for example, the global MaxDB executable directory of MaxDB 7.7. The MaxDB client libraries are part of the global MaxDB executable directory, and access to these files is needed by APO and liveCache at the same time. Beginning with MaxDB 7.
the operation succeeds. If such operations need to succeed, package dependencies in combination with SGeSAP Dialog Instance packages need to be used. Dialog Instances can be marked as being of minor importance. They are then shut down if a critical component fails over to the host they run on, to free up resources for the non-redundant packaged components. The described functionality can be achieved by adding the module sgesap/sapextinstance to the package.
3 SAP cluster administration In SGeSAP environments, SAP application instances are no longer considered to run on dedicated (physical) servers. They are wrapped up inside one or more Serviceguard packages and packages can be moved to any of the hosts that are inside of the Serviceguard cluster. The Serviceguard packages provide a server virtualization layer. The virtualization is transparent in most aspects, but in some areas special considerations apply. This affects the way a system gets administered.
NOTE: Enabling package maintenance allows you to temporarily disable the cluster functionality for the SAP instances of this SGeSAP package. While maintenance mode is activated, the configured SGeSAP monitoring services recognize whether an instance is manually stopped, and failover does not occur. SAP support personnel might request or perform maintenance mode activation as part of reactive support actions.
Figure 10 sgesap/sapinstance module configuration overview for a replication instance To monitor a SGeSAP toolkit package: • Check the badges next to the SGeSAP package icons in the main view. Badges are tiny icons that are displayed to the right of the package icon. Any Serviceguard Failover Package can have Status, Alert, and HA Alert badges associated with it. In addition to the standard Serviceguard alerts, SGeSAP packages report SAP application-specific information via this mechanism.
commands can be sapcontrol operations triggered by SAP system administrators, that is, <sid>adm users who are logged in to the Linux operating system, or remote SAP basis administration commands via the SAP Management Console (SAP MC) or via SAP's plugin for the Microsoft Management Console (SAP MMC). The SAP Netweaver 7.
During startup of the instance startup framework, an SAP instance with the SGeSAP HA library configured prints the following messages in the sapstartsrv.log file located in the instance work directory:
SAP HA Trace: HP SGeSAP (SG) cluster-awareness
SAP HA Trace: Cluster is up and stable
SAP HA Trace: Node is up and running
SAP HA Trace: SAP_HA_Init returns: SAP_HA_OK ...
The HP Serviceguard Manager displays a package alert (see Figure 11 (page 21)) that lists the manually halted instances of a package. The SGeSAP software service monitoring for a halted instance is automatically suspended until the instance is restarted. An SGeSAP package configuration parameter allows blocking of administrator-driven instance stop attempts for the SAP startup framework. If a stop operation is attempted, the sapstartsrv.
Change management
Serviceguard manages the cluster configuration. Among the vital configuration data are the relocatable IP addresses and their subnets, the volume groups, the logical volumes, and their mountpoints. If you change this configuration for the SAP system, you have to change and reapply the cluster configuration accordingly.
System level changes
Do not delete the secure shell setup and mutual .rhosts entries of <sid>adm on any node.
NOTE: The debug/partial package setup behavior is different from the Serviceguard package maintenance mode. In package maintenance mode, the debug file does not disable package failover or allow partial startup of the package; it only keeps the package in the running state. A startup in debug mode starts all the SGeSAP service monitors, but not the monitored application software. The monitors suspend execution until the debug file is removed.
Plan to use saplogon with application server groups instead of saptemu/sapgui connections to individual application servers. When logging on to an application server group with two or more application servers, the SAP user does not need a different login procedure if one of the application servers of the group fails. Login groups also provide workload balancing between the application servers. Within the CCMS you can define operation modes for SAP instances. An operation mode defines a resource configuration.
Upgrading SAP software
SAP rolling kernel switches can be performed in a running SAP cluster exactly as described in the SAP Netweaver 7.x documentation and support notes. Upgrading the clustered SAP application to another supported version rarely requires changes to the cluster configuration. Usually, SGeSAP automatically detects the release of the packaged application and treats it appropriately.
The cmmigratepkg(1) command can be applied to SGeSAP legacy packages. The output file will lack the SAP-specific package configurations of the sap*.config file, but the resulting configuration file can be used to simplify the creation of a modular SGeSAP package:
cmmakepkg -i cmmigratepkg_output_file -m sgesap/all modular_sap_pkg.config
The configuration of the SGeSAP-specific parameters in modular_sap_pkg.config can be done manually.
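The migration steps above can be sketched as a short command sequence. This is a hedged sketch: the package name legacy_sap_pkg and all file names are placeholders, and the cmmigratepkg option syntax may vary by Serviceguard release.

```shell
# Hedged sketch: migrate a legacy SGeSAP package to a modular one.
# "legacy_sap_pkg" and the file names are placeholders.
cmmigratepkg -p legacy_sap_pkg -o cmmigratepkg_output_file
cmmakepkg -i cmmigratepkg_output_file -m sgesap/all modular_sap_pkg.config
# Re-add the SAP-specific parameters to modular_sap_pkg.config manually, then:
cmcheckconf -P modular_sap_pkg.config
cmapplyconf -P modular_sap_pkg.config
```

These commands must run on a cluster node; they are shown for orientation only.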
4 SAP cluster storage layout planning Volume managers are tools that let you create units of disk storage known as storage groups. Storage groups contain logical volumes for use on single systems and in high availability clusters. In Serviceguard clusters, package control scripts activate the storage groups. This chapter discusses disk layout for clustered SAP components and database components of several vendors on a conception level.
are used in order to allow each node of the cluster to switch roles between serving and using NFS shares. It is also possible to access the NFS file systems from servers outside of the cluster, which is an intrinsic part of many SAP configurations.
System-specific volume groups get accessed from all instances that belong to a particular SAP System. Environment-specific volume groups get accessed from all instances that belong to all SAP Systems installed in the whole SAP environment. System and environment-specific volume groups are set up using NFS to provide access for all instances. They must not be part of a package that is only dedicated to a single SAP instance if there are several of them.
Table 5 Instance specific volume groups for exclusive activation with a package
Mount point (access point: shared disk):
• /usr/sap/<SID>/SCS<INR> (for example, /usr/sap/C11/SCS10)
• /usr/sap/<SID>/ASCS<INR> (for example, /usr/sap/C11/ASCS11)
• /usr/sap/<SID>/DVEBMGS<INR> (for example, /usr/sap/C11/DVEBMGS12)
• /usr/sap/<SID>/D<INR>
• /usr/sap/<SID>/J<INR>
Recommended package setups: SAP instance specific; combined SAP instances; database plus SAP instances.
For example, /u
If you have more than one system, place /usr/sap/put on separate volume groups created on shared drives. The directory must not be added to any package. This ensures that they are independent from any SAP Netweaver system and you can mount them on any host by hand if needed. All file systems mounted below /export are part of the NFS cross-mounts used by the automount program. The automount program uses virtual IP addresses to access the NFS directories via the path that comes without the /export prefix.
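As an illustration, a direct automount map entry of this kind could look as follows. This is a hypothetical fragment: the map file name, the SID C11, and the virtual hostname nfsvhost are assumptions, not values from this guide.

```
# /etc/auto.direct (hypothetical entries): the path without the /export
# prefix is served via the NFS package's virtual IP "nfsvhost"
/sapmnt/C11     -fstype=nfs,rw  nfsvhost:/export/sapmnt/C11
/usr/sap/trans  -fstype=nfs,rw  nfsvhost:/export/usr/sap/trans
```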
In clustered SAP environments prior to 7.x releases, executables must be installed locally. Local executables help to prevent several causes of package startup or shutdown hangs due to the unavailability of the centralized executable directory. The availability of the executables delivered with packaged SAP components is mandatory for proper package operation. Experience has shown that it is a good practice to create local copies of all files in the central executable directory.
Table 7 Availability of SGeSAP storage layout options for different database RDBMS
DB technology: Oracle Single-Instance, SAP MaxDB, SAP Sybase ASE, IBM DB2
• SGeSAP NFS clusters; cluster software bundles: 1. Serviceguard, 2. SGeSAP, 3. Serviceguard NFS toolkit
• Idle standby; cluster software bundles: 1. Serviceguard, 2. SGeSAP, 3. Serviceguard NFS toolkit (optional)
Oracle Single Instance
Single-instance Oracle RDBMS databases can be used with both SGeSAP storage layout options.
The Oracle database server and the SAP server might need different types of NLS files. The server NLS files are part of the database Serviceguard package. The client NLS files are installed locally on all hosts. Do not mix the access paths for Oracle server and client processes. The discussion of NLS files has no impact on the treatment of other parts of the Oracle client files. The following directories need to exist locally on all hosts where an Application Server might run.
• The sections [Installations], [Databases], and [Runtime] are stored in separate files Installations.ini, Databases.ini, and Runtimes.ini in the IndepData path /sapdb/data/config. • MaxDB 7.8 does not create SAP_DBTech.ini anymore. The [Globals] section is defined in /etc/opt/sdb.
dbspeed -> /sapdb/data/dbspeed
diag -> /sapdb/data/diag
fifo -> /sapdb/data/fifo
ipc -> /sapdb/data/ipc
pid -> /sapdb/data/pid
pipe -> /sapdb/data/pipe
ppid -> /sapdb/data/ppid
The links need to exist on every possible failover node for the MaxDB or liveCache instance to run.
• /sapdb/clients (MaxDB 7.8): Contains the client files in subdirectories for each database installation.
• /var/lib/sql: Certain patch levels of MaxDB 7.6 and 7.
local copies is possible, though not recommended, because there are no administration tools that keep track of the consistency of the local copies of these files across all systems. Using NFS toolkit file systems underneath /export as shown in Table 10 (page 39) is required when multiple MaxDB-based components (including liveCache) are either planned or already installed. These directories are shared between the instances and must not be part of a package dedicated to a single instance.
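The runtime links listed above can be recreated on a secondary node with a small loop. This is a hedged sketch: the directory holding the links (/var/spool/sql below) is an assumption; check where the links actually reside on the primary node before copying the layout.

```shell
# Recreate the MaxDB/liveCache runtime links on a failover node.
# LINKDIR is an assumption; verify the location on the primary node first.
LINKDIR=${LINKDIR:-/var/spool/sql}
mkdir -p "$LINKDIR"
for d in dbspeed diag fifo ipc pid pipe ppid; do
    ln -sfn "/sapdb/data/$d" "$LINKDIR/$d"   # force-refresh each symlink
done
```

The -n option makes ln replace an existing link instead of descending into it, so the loop is safe to re-run.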
Option 1: Simple cluster with separated packages Cluster layout constraints: • The liveCache package does not share a failover node with the SCM central instance package. • There is no MaxDB or additional liveCache running on cluster nodes. • There is no intention to install additional SCM Application Servers within the cluster.
5 Clustering SAP using SGeSAP packages
Overview
This chapter describes in detail how to implement an SAP cluster using Serviceguard and Serviceguard Extension for SAP (SGeSAP). Each task is described with examples. A prerequisite for clustering SAP using SGeSAP is that the Serviceguard cluster software installation is complete and the cluster is set up and running.
For more information on file system configurations, see chapter 4 “SAP cluster storage layout planning” (page 30). There can also be a requirement to convert an existing SAP instance or database for usage in a Serviceguard cluster environment. For more information on how to convert an existing SAP instance or database see, “Converting an existing SAP instance” (page 79).
Table 16 SGeSAP monitors (continued)
sapdisp.mon: Monitors an SAP dispatcher that comes as part of a Central Instance or an ABAP Application Server Instance.
sapwebdisp.mon: Monitors an SAP Web Dispatcher that is included either as part of a (W-type) instance installation in a dedicated SID or by unpacking and bootstrapping into an existing SAP Netweaver SID.
sapgw.mon: Monitors an SAP Gateway (G-type instance).
sapdatab.
2. An easy and fully automatic deployment of SGeSAP packages belonging to the same SAP SID.
3. A guided installation using the Serviceguard Manager GUI: a web-based graphical interface with plugins for automatic pre-filling of SGeSAP package attributes based on the currently installed SAP and DB instances.
SGeSAP easy deployment
This section describes the installation and configuration of packages using easy deployment (via the deploysappkgs command, which is part of the SGeSAP product). This script allows easy deployment of the packages that are necessary to protect the critical SAP components.
Infrastructure setup, pre-installation preparation (Phase 1)
This section describes the infrastructure that is provided with the setup of an NFS toolkit package and a base package for the upcoming SAP Netweaver installation. It also describes the prerequisites and some selected verification steps. There is a one-to-one or one-to-many relationship between a Serviceguard package and SAP instances, and a one-to-one relationship between a Serviceguard package and an SAP database.
Intermediate synchronization and verification of virtual hosts
To synchronize virtual hosts:
1. Ensure that all the virtual hosts that are used later in the SAP installation and the NFS toolkit package setup are added to /etc/hosts. If a name resolver is used instead of /etc/hosts, then ensure that all the virtual hosts resolve correctly.
2. Verify the order and entries for the host name lookups in /etc/nsswitch.conf.
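A minimal resolution check of this kind can be scripted. The function below is a sketch; in a real check you would call it once per virtual hostname on every cluster node (localhost is only used here as a stand-in).

```shell
# Warn if a (virtual) hostname does not resolve on this node.
check_vhost() {
    getent hosts "$1" >/dev/null || echo "WARNING: $1 does not resolve"
}
# Example invocation; loop over all virtual hostnames in a real check:
check_vhost localhost
```

Because getent consults the sources listed in /etc/nsswitch.conf, this also exercises the lookup order verified in step 2.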
3. Select NFS toolkit and click Next >>. The Package Selection screen appears. Figure 12 Toolkit selection page 4. In the Package Name box, enter a package name that is unique for the cluster. NOTE: The name can contain a maximum of 39 alphanumeric characters, dots, dashes, or underscores. The Failover package type is pre-selected and Multi-Node is disabled. NFS does not support Multi-Node. 5. Click Next >>. The Modules Selection screen appears.
Figure 13 Configuration summary page - sapnfs package
Figure 14 Configuration summary page - sapnfs package (continued)
Creating NFS toolkit package using Serviceguard CLI
NOTE: To create a package, you can use either the Serviceguard Manager GUI or the CLI. This section describes the CLI steps; the GUI steps are described in the "Creating NFS Toolkit package using Serviceguard Manager" (page 48) section.
1. Run the cmmakepkg -n sapnfs -m tkit/nfs/nfs sapnfs.config command to create the NFS server package configuration file using the CLI.
2. Edit the sapnfs.config configuration file.
NOTE: If a common sapnfs package already exists it can be extended by the new volume groups, file systems, and exports instead. Solutionmanager diagnostic agent file system preparations related to NFS toolkit If a dialog instance with a virtual hostname is installed initially and clustering the instance is done later, then some steps related to the file system layout must be performed before the SAP installation starts.
1. Check if the package starts up on each cluster node where it is configured.
2. Run showmount -e and verify that name resolution works.
3. Run showmount -e on an external system (or a cluster node currently not running the sapnfs package) and check that the exported file systems are shown.
On each NFS client in the cluster, check the following:
• Run the cd /usr/sap/trans command to check read access to the NFS server directories.
1. From the Serviceguard Manager Main page, click Configuration in the menu toolbar, and then select Create a Modular Package from the drop down menu. If Metrocluster is installed, a Create a Modular Package screen for selecting Metrocluster appears. If you do not want to create a Metrocluster package, click no (default is yes). Click Next >> and another Create a Modular Package screen appears. 2. 3. 4. 5. If toolkits are installed, a Create a Modular Package screen for selecting toolkits appears.
To help in decision making, you can move the cursor over the configurable parameters and view the tool tips that provide information about the parameter.
10. Click Next >> and another Create a Modular Package screen appears. Step 2 of X: Configure SGeSAP parameters global to all clustered SAP software (sgesap/sap_global). Fill in the required fields and accept or edit the default settings. Click Next >>.
SAP base package with Serviceguard modules only It is possible to create a package configuration only specifying the Serviceguard modules. SGeSAP modules can be installed later. Such a package configuration requires at least the following Serviceguard modules: • volume_group • filesystem • package_ip NOTE: • Include the service module at this stage to use SGeSAP service monitors (for both SAP instance and DB) on a later stage.
Figure 17 Module selection page Click Reset at the bottom of the screen to return to the default selection. 6. After you are done with all the Create a Modular Package configuration screens, the Verify and submit configuration change screen appears. Use the Check Configuration and Apply Configuration buttons to confirm and apply your changes. Creating the package configuration file with the CLI NOTE: To create a package you can use either Serviceguard Manager GUI or the CLI.
For examples of these attributes, see the "Creating NFS toolkit package using Serviceguard CLI" (page 51) section.
2. Verify the package configuration by using the cmcheckconf -P <pkgname>.config command and, if there are no errors, run the cmapplyconf -P <pkgname>.config command to apply the configuration.
Verification steps
A simple verification of the newly created base package is to test whether the package startup succeeds on each cluster node where it is configured.
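The verification can be written out as a short command sequence on a cluster node. The package name dbciC11 and the node name are examples, not values from this installation.

```shell
cmcheckconf -P dbciC11.config    # syntax and consistency check
cmapplyconf -P dbciC11.config    # apply to the cluster configuration
cmrunpkg -n node1 dbciC11        # start the package on the first node
cmviewcl -v -p dbciC11           # inspect the package status
cmhaltpkg dbciC11                # halt, then repeat cmrunpkg on the other nodes
```

Repeating the cmrunpkg/cmhaltpkg cycle once per configured node confirms that the storage and IP resources can be activated everywhere.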
Post SAP installation tasks and final node synchronization (Phase 3a) After the SAP installation has completed in Phase 2, some SAP configuration values may have to be changed for running the instance in the cluster. Additionally, each cluster node (except the primary where the SAP installation runs) must be updated to reflect the configuration changes from the primary.
Avoid database startup as part of Dialog instance startup
A dialog instance installation contains a Start_Program_00 = immediate $(_DB) entry in its profile. This entry is generated by the SAP installation to start the DB before the dialog instance is started. It is recommended to disable this entry to avoid possible conflicts with the DB startup managed by the SGeSAP database package.
MaxDB/liveCache: Disable Autostart of instance specific xservers
With an "isolated installation" each MaxDB/liveCache 7.
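The profile change described above can be sketched as follows. The demonstration works on a temporary copy; on a real system the profile resides under /sapmnt/<SID>/profile, and the entry text is taken from the SAP-generated profile.

```shell
# Demonstrate disabling the DB autostart entry by commenting it out.
PROFILE=$(mktemp)                 # stands in for the real instance profile
printf 'Start_Program_00 = immediate $(_DB)\n' > "$PROFILE"   # sample entry
sed -i 's/^Start_Program_00 = immediate/# &/' "$PROFILE"
cat "$PROFILE"
```

Back up the real profile before editing it, and keep the commented line so that the change remains visible.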
Table 19 DB Configuration Files (continued)
Oracle: listener.ora ($ORACLE_HOME/network/admin): (HOST = hostname); profiles under /sapmnt/<SID>/profile/oracle
MaxDB: .XUSER.62 (/home/<sid>adm): Nodename in xuser list output. If necessary, recreate the user keys with xuser ... -n <vhost>
Sybase: interfaces ($SYBASE): fourth column of the master and query entry for each server; dbenv.* (/home/<sid>adm): variable dbs_syb_server
DB2: db2nodes.cfg (/db2/db2<sid>/sqllib): second column
NOTE: The db2nodes.
When using ssh for the virtual host configuration, the public key of each physical host in the cluster must be put into the system's known_hosts file. Each of these keys must be prefixed with the virtual hostname and IP. Otherwise, DB2 triggers an error when the virtual hostname fails over to another cluster node and reports a different key for the virtual hostname. In addition, passwordless ssh login must be enabled for the db2<sid> user between cluster nodes.
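The prefixing of a node key can be illustrated with a simple transformation. The hostnames, the IP address, and the key material below are placeholders; on a real system, ssh-keyscan node1 would supply the key line.

```shell
# Turn a physical-host key line into one for the virtual hostname and IP.
KEYLINE='node1 ssh-rsa AAAAB3NzaC1yc2EsampleKey'    # e.g. from: ssh-keyscan node1
VHOSTLINE=$(echo "$KEYLINE" | sed 's/^node1/dbvhost,10.0.0.10/')
echo "$VHOSTLINE"
# Append the result to the system-wide or user known_hosts file:
# echo "$VHOSTLINE" >> /etc/ssh/ssh_known_hosts
```

Repeat this for every physical node so the virtual hostname is accepted wherever the package runs.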
Table 21 Groupfile file groups (continued)
oper: Oracle database operators (limited privileges)
dba: Oracle database administrators
db2adm, db2mnt, db2ctl, db2mon: IBM DB2 authorization groups
NOTE: For more information on the terms local, shared exclusive, and shared NFS file systems used in this section, see chapter 4 "SAP cluster storage layout planning" (page 30).
Table 22 Services on the primary node
sapdp<NN>: dispatcher ports
sapdp<NN>s: dispatcher ports (secure)
sapgw<NN>: gateway ports
sapgw<NN>s: gateway ports (secure)
sapms<SID>: port for the (ABAP) message server for installation
saphostctrl: SAP hostctrl
saphostctrls: SAP hostctrl (secure)
tlistsrv: Oracle listener port
sql6, sapdbni72: MaxDB
sapdb2<SID>, DB2_db2<sid>, DB2_db2<sid>_1, DB2_db2<sid>_2, DB2_db2<sid>_END: SAP DB2 communication ports
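A quick way to find service entries missing on a secondary node is to compare sorted copies of /etc/services. The sketch below uses sample files; in the cluster, the second file would come from ssh node2 cat /etc/services.

```shell
# Show entries present on the primary but missing on the secondary node.
node1_svc=$(mktemp); node2_svc=$(mktemp)
printf 'sapdp00 3200/tcp\nsapms00 3600/tcp\n' > "$node1_svc"   # sample primary
printf 'sapdp00 3200/tcp\n' > "$node2_svc"                     # sample secondary
sort "$node1_svc" -o "$node1_svc"
sort "$node2_svc" -o "$node2_svc"
comm -23 "$node1_svc" "$node2_svc"
```

comm -23 suppresses lines common to both files and lines unique to the second file, leaving only the entries that still need to be copied.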
Other local file systems and synchronization
There are other directories and files created during the SAP installation that reside on local file systems on the primary node. These must be copied to the secondary node(s).
SAP: Recreate the directory structure /usr/sap/<SID>/SYS on all secondary nodes.
MaxDB files: Copy the local /etc/opt/sdb to all the secondary nodes. This is required only after the first MaxDB or liveCache installation.
deploysappkgs combi C11 or deploysappkgs multi C11 This command attempts to create either a minimal (combi = combine instances) or a maximum (multi = multiple instances) number of packages. If suitable base packages are already created in Phase 1, it extends those packages with the necessary attributes found for the installed C11 instance. If necessary, the configuration file for the enqor multi-node package is also created. You must review the resulting configuration files before applying them.
Module sgesap/sap_global – SAP common instance settings
This module contains the common SAP instance settings that are included by the following SGeSAP modules:
• sapinstance
• mdminstance
• sapextinstance
• sapinfra
The following list describes the SGeSAP parameters and their respective values:
sgesap/sap_global/sap_system (possible value: C11): Defines the unique SAP System Identifier (SAP SID).
sgesap/sap_global/rem_comm (possible value: ssh): Defines the command used for remote executions.
(continued) legacy monitoring tools are used in addition to the agent framework monitors.
• When the value is exclusive, only sapcontrol is used to start, stop, and monitor SAP instances.
• When the value is disabled, the sapcontrol method is not used to start, stop, and monitor SAP instances.
The default is preferred.
Module sgesap/sapinstance – SAP instances
This module contains the common attributes for any SAP Netweaver instance.
Module sgesap/dbinstance – SAP databases
This module defines the common attributes of the underlying database.
sgesap/db_global/db_vendor (example value: oracle): Defines the underlying RDBMS: Oracle, MaxDB, DB2, or Sybase.
sgesap/db_global/db_system (example value: C11): Determines the name of the database (schema) for SAP.
For db_vendor = oracle:
sgesap/oracledb_spec/listener_name (example value: LISTENER): Oracle listener name.
package. This is called an "MDM Central" or "MDM Central System" installation. Each instance can also be configured into separate packages, called a "distributed MDM" installation. All MDM repositories defined in the package configuration are automatically mounted and loaded from the database after the MDM server processes have started successfully. The following table contains some selected SGeSAP parameters relevant to an MDM Central System instance. For more information, see the package configuration file.
Table 23 Module sg/services – SGeSAP monitor parameters
service_name (value: CM2CIdisp): Unique name. A combination of package name and monitor type is recommended.
service_cmd (value: $SGCONF/monitors/sgesap/sapdisp.mon): Path to the monitor script.
service_restart (value: 0): Usually, no restarts must be configured for an SGeSAP monitor, so that failover is immediate if the instance fails.
Module sg/dependency – SGeSAP enqor MNP dependency
SCS and ERS packages taking part in the SGeSAP follow-and-push mechanism must have a same-node/up dependency on the enqor MNP. The attributes have to be set as follows:
dependency_name enqor_dep
dependency_location same_node
dependency_condition enqor = UP
Serviceguard Manager guided configuration offers the correct values preselected in the dependency screen only if an SGeSAP enqor MNP is already set up.
This creates files for the private (id_rsa) and public key (id_rsa.pub) in the user's .ssh directory. The public key then needs to be distributed to the other hosts. This can be accomplished by running the command ssh-copy-id -i id_rsa.pub user@host, which adds the user's public key to the authorized_keys file (not authorized_keys2) on the target host. This has to be executed as the root user on each cluster node, with host being each of the other cluster nodes in turn.
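The key generation step can be sketched as below. A temporary file name is used here so the sketch does not touch real keys; on a cluster node you would keep the default ~/.ssh/id_rsa location and then distribute the public key with ssh-copy-id.

```shell
# Generate an RSA key pair without a passphrase (demonstration paths).
KEYFILE=$(mktemp -u)                  # unique path; the file does not exist yet
ssh-keygen -q -t rsa -N '' -f "$KEYFILE"
ls -l "$KEYFILE" "$KEYFILE.pub"
# Distribution step, shown for reference (requires a reachable host):
# ssh-copy-id -i "$KEYFILE.pub" root@othernode
```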
Attributes to define a single external instance are:
sap_ext_instance (GUI label: External SAP Instance): Instance type and number (like D01). Only D, J, and SMDA types are allowed.
sap_ext_system (GUI label: SAP System ID): SID of the external instance. If unspecified, sap_system (the SID of the package) is assumed.
sap_ext_host (GUI label: Hostname): Host where the external instance resides. A virtual hostname is allowed.
This sgesap/sapextinstance module can also be used to configure Diagnostic Instances that fail over with clustered dialog instances (they start and stop together with the dialog instance). Although technically they belong to a different SID, they can be started and stopped with the package. The hostname to be configured is the same as the virtual hostname of the instance configured in the package (which usually is also part of the Diagnostic Instance profile name).
Table 25 Legal values for sgesap/sap_infra_sw_type (continued)
sapwebdisp: SAP Web Dispatcher (not installed as an SAP instance, but unpacked and bootstrapped to /usr/sap/<SID>/sapwebdisp)
saprouter: SAP software network routing tool
You can specify the saprouter and biamaster values more than once. The attribute sap_infra_treat specifies whether the component is only started/notified with the package startup, or whether it is also stopped as part of a package shutdown (default).
To add an SAP infrastructure software component to the Configured SAP Infrastructure Software Components list:
1. Enter information in the Type, Start/Stop, and Parameters boxes.
2. Click
If the liveCache version is 7.8 or later, the xserver structure is the same as that of the MaxDB of the corresponding version. If you plan to run more than one liveCache or MaxDB on the system, it is advisable to decouple the xserver startup (sdbgloballistener and DB specific). For more information, see the MaxDB section describing the decoupling of the startup.
NOTE:
• An SGeSAP liveCache package should not be configured with other SGeSAP modules, even though it is technically possible.
• The SGeSAP easy deployment (deploysappkgs) script does not support liveCache.
• The SAPGUI also contains connect strings referencing oldhost. These must be converted to the newvirthost names.
• Change the properties of the Java instance that reference the old host by using the configtool.
Converting an existing database
• Adapt the entries in the database configuration files listed in the Table 19 (page 60) table.
• Adapt all SAP profile entries referring to the DB (not all of the entries below need to exist).