Integrating DCE Services with MC/ServiceGuard
Normal DCE programming practice assumes that all IP addresses on the host should be used as
endpoints for exported services.
The DCE runtime determines the available IP addresses on the node during the execution of any of the
rpc_server_use_* routines. These routines are used in every DCE server to select the protocols over which
the server will provide services. A side effect of these calls is that the list of IP addresses supported by the
node is established for later use in building the binding vector. When this vector is obtained by a server
main routine and registered in the endpoint map, the endpoint map will contain entries for every IP address
identified earlier during the rpc_server_use_* call. In addition, should this binding vector be exported to
the name space, the name space entry will also identify every IP address on the node as providing the service
associated with that entry.
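The sequence below is a minimal sketch of this initialization pattern. The interface handle
myapp_v1_0_s_ifspec, its header myapp.h, and the name space entry /.:/myapp_entry are hypothetical
names used for illustration only, and status checking is omitted for brevity:

    #include <dce/rpc.h>
    #include "myapp.h"   /* hypothetical IDL-generated header declaring myapp_v1_0_s_ifspec */

    int main(void)
    {
        rpc_binding_vector_t *bvec;
        unsigned32            st;

        /* Register the manager routines for the (hypothetical) interface. */
        rpc_server_register_if(myapp_v1_0_s_ifspec, NULL, NULL, &st);

        /* The runtime enumerates every IP address on the node here. */
        rpc_server_use_all_protseqs(rpc_c_protseq_max_reqs_default, &st);

        /* The vector now holds one binding per protocol sequence
           per IP address found above. */
        rpc_server_inq_bindings(&bvec, &st);

        /* Every address in the vector lands in the endpoint map... */
        rpc_ep_register(myapp_v1_0_s_ifspec, bvec, NULL,
                        (unsigned_char_t *)"myapp server", &st);

        /* ...and, if the vector is exported, in the name space entry too. */
        rpc_ns_binding_export(rpc_c_ns_syntax_default,
                              (unsigned_char_t *)"/.:/myapp_entry",
                              myapp_v1_0_s_ifspec, bvec, NULL, &st);

        rpc_server_listen(rpc_c_listen_max_calls_default, &st);
        return 0;
    }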
While it is possible to edit the contents of the binding vector before using it to register endpoints or to add
entries to the name space, few, if any, DCE server programs actually do so. In addition, the DCE runtime
does not re-determine the list of available IP addresses during the course of server execution, and DCE
servers do not, as a general rule, go through their initialization sequence a second time. As a result, for all
the DCE core servers and most known application DCE servers, the IP addresses used by the server are set
once during initialization and include every IP address available on the node at that time. The addresses
do not change once set.
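For a server that does need to restrict its addresses, editing amounts to walking the vector and freeing the
unwanted entries. The sketch below assumes the documented DCE convention that rpc_binding_free sets
the freed vector entry to NULL and that vector-based routines such as rpc_ep_register skip NULL entries;
prune_bindings and keep_addr are hypothetical names:

    #include <dce/rpc.h>
    #include <string.h>

    void prune_bindings(rpc_binding_vector_t *bvec, const char *keep_addr)
    {
        unsigned32       i, st;
        unsigned_char_t *strb, *addr;

        for (i = 0; i < bvec->count; i++) {
            /* Convert each binding to string form to inspect its address. */
            rpc_binding_to_string_binding(bvec->binding_h[i], &strb, &st);
            rpc_string_binding_parse(strb, NULL, NULL, &addr, NULL, NULL, &st);

            if (strcmp((char *)addr, keep_addr) != 0) {
                /* Free the unwanted binding; the entry becomes NULL
                   and later vector-based calls ignore it. */
                rpc_binding_free(&bvec->binding_h[i], &st);
            }
            rpc_string_free(&addr, &st);
            rpc_string_free(&strb, &st);
        }
    }

A server would call such a routine between rpc_server_inq_bindings and rpc_ep_register; as noted above,
however, few servers do this in practice.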
In an MC/ServiceGuard environment, these characteristics can be problematic. Suppose a node had several
packages running on it, each based on a DCE service and each with its own IP address. The DCE servers in
each package would not only register endpoints using their own IP address, but would also include the IP
addresses of all the other packages configured on the node at the time the server started. Since all the DCE
core services cache IP addresses and store them in their internal databases, the result is a potentially large
number of invalid entries, which degrade performance, generate a large number of misleading log
messages, and can even cause the failure of the DCE infrastructure. These considerations and their effects
do not preclude the use of MC/ServiceGuard with DCE by any means; they do, however, require that system
administrators be particularly careful when planning, configuring, and operating a
DCE-MC/ServiceGuard installation.
Through an environment variable, the DCE runtime provides the means to restrict the IP addresses
identified by the rpc_server_use_* routines. Used correctly, this variable can alleviate the adverse effects of
the characteristics noted above.
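A minimal sketch follows; it assumes the variable name is RPC_SUPPORTED_NETADDRS, as documented
for HP-UX DCE, and uses the placeholder address 192.0.2.10 to stand in for a package's relocatable IP
address:

    #include <dce/rpc.h>
    #include <stdlib.h>

    void use_package_address_only(void)
    {
        /* Assumed variable name; the value lists the only address(es)
           the runtime should use. */
        static char netaddrs[] = "RPC_SUPPORTED_NETADDRS=192.0.2.10";
        unsigned32  st;

        /* Must be set before any rpc_server_use_* call, since that is
           when the runtime builds its list of local addresses. */
        putenv(netaddrs);

        rpc_server_use_all_protseqs(rpc_c_protseq_max_reqs_default, &st);
    }

In an MC/ServiceGuard package, the variable is more typically exported in the package control script
before the server starts, so that its value can track the package's relocatable IP address.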
Planning for a DCE-MC/ServiceGuard Installation
Planning for a package that includes one or more DCE servers is primarily a process of identifying the disk
and network resources necessary for the operation of those servers. The planning process should follow the steps
outlined in Managing MC/ServiceGuard (B3936-90003).
Hardware Requirements for a DCE-MC/ServiceGuard Configuration
By their very nature, DCE and DCE applications are distributed, and therefore depend heavily on network
resources. Each node in the cluster should have multiple redundant LAN cards connected to multiple LANs.
In addition, all the normal hardware configuration guidelines outlined in Managing MC/ServiceGuard
(B3936-90003) should be followed.
Implementation Alternatives for a DCE-MC/ServiceGuard Installation
The basic configuration, which is supported by the templates, is DCE host failover. In this configuration,
MC/ServiceGuard moves an entire DCE host from one physical node to another.