Using Per-Node Service Configuration
The HP XC system configuration process uses per-node configuration scripts to achieve personalized role
configurations as necessary on each node. The per-node configuration process occurs initially during HP
XC system configuration, at the time each client node is auto-installed. The HP XC configuration and
management database (cmdb) contains the per-node role and service configuration, and is queried on each
node's initial boot to identify which roles and services to configure. See "Adding a Service" (page 47) for
more information.
A per-node configuration script is associated with each configurable service and is executed on a client
node if the cmdb identifies that client as hosting the service.
The per-node configuration process occurs in two phases:
Global Service Configuration
The global configuration phase sets up the per-node configuration phase by configuring the service
globally for use within the HP XC system. The global configuration of a service occurs when its global
configuration script (a gconfig script) is executed during the running of the cluster_config utility.
It is at this point that you interact with the cluster_config utility, as necessary, to configure the
service. To configure a new service into the HP XC system using gconfig, you must run the
cluster_config utility again.
The global configuration script can store information in the following locations:
cmdb
    Use the database as the target for node-specific configuration data; that is, for services that are
    not ubiquitous in the HP XC system but run on only one or a handful of nodes.
Golden client file system
    By writing to the golden client file system, the script can add or modify files that are propagated
    to all nodes of the HP XC system by using the golden image. This is useful for configuring services
    that are ubiquitous in the HP XC system.
Cluster common storage (/hptc_cluster)
    By writing to cluster common storage, the script can add or modify files that are visible to all
    nodes of the cluster. Use this method for either per-node or clusterwide services.
As mentioned previously, the database should be the destination for node-specific data. However, some
existing services may already place the configuration data they use in files at known locations. The
golden client file system and the cluster common storage are available to support such applications.
Global service configuration scripts are located in the /opt/hptc/etc/gconfig.d directory.
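The following is a hypothetical sketch, not an actual HP XC script, of the kinds of actions a gconfig
script can perform for an example service named myservice. The gconfigure action name, the myservice
name, and the plain shell-script form are assumptions made for illustration only.

#!/bin/sh
# Hypothetical gconfig-style script for an example service named myservice.
# Assumption: it is invoked with a gconfigure action during cluster_config.
case "$1" in
gconfigure)
    # Clusterwide setting written to the golden client file system; the file
    # is propagated to every node through the golden image.
    echo "MYSERVICE_ENABLED=yes" > /etc/sysconfig/myservice

    # Shared template written to cluster common storage; it is visible to
    # every node without reimaging.
    mkdir -p /hptc_cluster/myservice
    echo "node_name=@NODE@" > /hptc_cluster/myservice/myservice.conf.template
    ;;
esac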
Node-Specific Configuration
The node-specific service configuration step uses the results of the global service configuration step
described previously to apply to a specific node its “personality” with respect to the service. User
interaction is not permitted because this step runs on a per-node basis.
The configuration of the service is accomplished in a script called by the node-specific service
configuration controller (nconfig) script. The nconfig controller script runs at system startup and
determines whether it is executing as part of the initial boot of a client node after an installation operation. If so,
the nconfig controller script queries the cmdb to determine which node-specific service configuration
scripts to execute, thus providing the per-node personality to each node.
Per-node configuration scripts are located in the /opt/hptc/etc/nconfig.d directory.
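As a companion to the previous sketch, the following hypothetical per-node script shows how a service
might receive its node personality. The nconfigure and nrestart action names mirror the controller
actions shown later in this section and, like the myservice name, are assumptions made for illustration
only.

#!/bin/sh
# Hypothetical per-node script for the example myservice service; the nconfig
# controller runs it only on nodes that the cmdb identifies as hosting the service.
case "$1" in
nconfigure)
    # Give this node its personality by filling the local host name into the
    # template created by the gconfig-style script.
    sed "s/@NODE@/$(hostname)/" /hptc_cluster/myservice/myservice.conf.template \
        > /etc/myservice.conf
    ;;
nrestart)
    # Restart the service so that it picks up the per-node configuration.
    /sbin/service myservice restart
    ;;
esac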
If you used the si_updateclient utility, you must run the per-node configuration scripts on each node
after each client node has been successfully synchronized with the golden image. The following commands
restore the per-node personality following the golden image update:
# cexec -a -x `nodename` -f128 "/sbin/service nconfig nconfigure"
# cexec -a -x `nodename` -f128 "/sbin/service nconfig nrestart"
The cexec command's -f option increases the number of simultaneous remote commands.
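If only a single client node was synchronized, you can run the same service actions on that node alone;
for example, assuming a client node named n15:
# ssh n15 "/sbin/service nconfig nconfigure"
# ssh n15 "/sbin/service nconfig nrestart"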
You can monitor the execution of the per-node configuration scripts from a central location by watching
the /hptc_cluster/adm/logs/imaging.log file on the imaging server node (currently, the head node).
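For example, the standard tail command follows the log in real time:
# tail -f /hptc_cluster/adm/logs/imaging.log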