User's Guide
MDM 5.5 SP05 - Solution Operation Guide
As of the writing of this document, the following SAP notes contain the most recent information on the MDM
installation as well as corrections to the installation:
1025897 - MDM 5.5 SP05 Release Note
822018 - MDM 5.5 Release Restriction Note
Installation and Configuration Considerations
The following sections provide step-by-step guidance on the components required to install MDM in an SGeSAP
(Serviceguard extension for SAP) environment.
The MDM server components that are relevant for the installation and configuration of the SGeSAP scripts
are: MDS, MDIS, MDSS and MDB.
Prerequisites
You must have the following installed and already configured:
• HP-UX and Serviceguard
• A Serviceguard cluster with at least two nodes attached to the network. (Node names: clunode1 and
clunode2)
• Any shared storage supported by Serviceguard. The shared storage used for this configuration is based
on EVA (Enterprise Virtual Array), a Fibre Channel-based storage solution.
NOTE: For Serviceguard installation instructions, refer to the latest Managing Serviceguard manual, available
from docs.hp.com | High Availability | Serviceguard.
The MDM SGeSAP File System Layout
The following file system layout will be used for the MDM Server components.
/oracle/MDM
For performance reasons, the MDM database (MDB) file system will be based on local, relocatable
storage: the physical storage volume/file system can relocate between the cluster nodes, but only ONE
node in the cluster will mount it at any time. The file system is mounted by the cluster node on which
the database instance is started/running. The directory mount point is /oracle/MDM. All I/O is local to
the cluster node.
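The steps the database package performs on the active node can be sketched as follows. This is only an illustration, assuming standard HP-UX LVM; the volume group and logical volume names (vgmdmdb, lvmdb) are assumptions, not values from this guide:

```shell
# Activate the shared volume group in exclusive mode, so that only
# this cluster node can access the relocatable storage volume:
vgchange -a e vgmdmdb

# Mount the database file system on its local mount point:
mount /dev/vgmdmdb/lvmdb /oracle/MDM

# On failover, the package control script unmounts and deactivates
# on the failed node, then repeats the two steps above on the
# adoptive node:
#   umount /oracle/MDM
#   vgchange -a n vgmdmdb
```

In a real cluster these commands are run by the Serviceguard package control script, not by hand.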
/home/mdmuser
/export/home/mdmuser
These are the file systems for the MDM server components (MDS, MDSS, and MDIS) that are dynamic
in nature (e.g., configuration files, log files, import files, and export files). These file systems will be
based on NFS. One cluster node mounts the physical storage volume and runs as an NFS server, exporting
the file systems to NFS clients. The mount point for the NFS server file system is /export/home/mdmuser.
All nodes in the cluster mount the NFS-exported file systems as NFS clients. The mount point for the NFS client
file system is /home/mdmuser. Each of the MDM server components (mds, mdss, and mdis) will use its
own directory (e.g.: /home/mdmuser/mds, /home/mdmuser/mdss, /home/mdmuser/mdis) within
/home/mdmuser.
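The server/client layout above can be sketched with HP-UX commands. The volume names, export options, and the relocatable hostname relocmdm are illustrative assumptions; in practice the NFS toolkit package scripts perform these steps:

```shell
# On the node currently acting as NFS server: mount the shared
# volume and export it to the cluster nodes.
mount /dev/vgmdmnfs/lvmdm /export/home/mdmuser
echo "/export/home/mdmuser -access=clunode1:clunode2" >> /etc/exports
exportfs -a

# On every cluster node: mount the exported file system as an NFS
# client, addressing the NFS server via the package's relocatable
# address so the mount survives a failover.
mount -F nfs relocmdm:/export/home/mdmuser /home/mdmuser
```

Mounting via the relocatable address (rather than a physical node name) is what allows the NFS clients to keep the same mount after the server role moves to another node.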
Should the NFS server node fail, the storage volume will be relocated to another cluster node, which
will then take over the NFS server role.
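Failover is handled automatically by Serviceguard, but the package can also be inspected and moved manually with the standard Serviceguard commands. The package name nfsmdm is an assumption for this sketch:

```shell
# Check which node currently runs the NFS package:
cmviewcl -v -p nfsmdm

# Move the package (and with it the storage volume and NFS server
# role) to the other node, e.g. for planned maintenance:
cmhaltpkg nfsmdm
cmrunpkg -n clunode2 nfsmdm

# Re-enable automatic package switching afterwards:
cmmodpkg -e nfsmdm
```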
NOTE: The advantage of the NFS server/client approach, from a system management viewpoint, is that only
one copy of all the MDM server files has to be kept and maintained on the cluster, instead of creating and
distributing copies to each cluster node. Regardless of which node in the cluster any of the MDM server
components are running on, the MDM server files are always available in the /home/mdmuser directory.
The disadvantage: I/O performance might become a bottleneck. Should I/O performance become an issue,
it would become necessary to split /home/mdmuser and create local/relocatable file systems for
/home/mdmuser/mds, /home/mdmuser/mdss, and /home/mdmuser/mdis.