HP XC System Software Installation Guide Version 2.1

Before deciding whether or not you want to accept the default configuration, consider
the following:
• A compute role is assigned to the head node by default. Thus, LSF users will
submit jobs that run on the head node and will obtain less than optimal performance
if interactive users are also on the head node. You may want to consider removing
the compute role from the head node to prevent it from being configured as a
SLURM compute node.
• You must assign a login role to each node on which you expect users to be able
to log in and use the system.
• By default, only one node, the head node, is configured with the
resource_management role. The resource_management role consists
of one or more SLURM controller daemons and the LSF XC execution host. If
there is only one node with the resource_management role, both SLURM and
LSF controller daemons run on that node, and no failover of these components
is possible. HP recommends that you configure at least two nodes with the
resource_management role to distribute the work of these components and
provide for failover configuration.
• If LSF HPC is expected to be accessible from outside the XC system, all nodes with
the resource_management role must also be configured with the external role
and have the appropriate hardware and wiring to directly access the external network.
• You must assign the I/O role to any node that is exporting SAN storage.
Appendix I describes node roles in detail. The HP XC System Software Administration
Guide describes the services provided by each node role. Refer to those sources if you
need more information about services and node roles.
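As a rough sketch of how you might verify the effect of removing the compute role from the head node, the following shell fragment inspects a SLURM configuration file for NodeName entries. The file path, its contents, and the node names are hypothetical examples, not the actual files the XC configuration tools write:

```shell
# Hypothetical example: a SLURM config in which the head node (n16)
# carries no compute role, so it appears in no NodeName line.
cat > /tmp/slurm.conf.example <<'EOF'
# Example NodeName entries a SLURM configuration might contain
NodeName=n[1-15] Procs=2 State=UNKNOWN
PartitionName=lsf Nodes=n[1-15] Default=YES
EOF

# If the head node's compute role was removed, it should not be
# listed as a SLURM compute node:
if grep -q 'NodeName=.*n16' /tmp/slurm.conf.example; then
    echo "head node is still a SLURM compute node"
else
    echo "head node is not a SLURM compute node"
fi
```

On a running system, the `sinfo` command provides an equivalent live view of which nodes SLURM considers compute nodes.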
5. When you are satisfied with the current system configuration, enter the letter p to apply
the system configuration:
[L]ist Nodes, [M]odify Nodes, [H]elp, [P]roceed, [Q]uit: p
Do you want to apply your changes to the cluster
configuration? [y/n] y
Output will be similar to the following; one by one, services will start on the head node:
Running C05pdsh
Running C08hptc_cluster_fs
Running C10ntp
Configuring the following nodes as ntp servers for the cluster:
n16
6. When prompted, set the network time protocol (NTP) server. The head node is
automatically configured as the system's NTP server if another server is not specified, but
you have the option to provide up to three external NTP servers instead.
If your XC system will be integrated with HP StorageWorks Scalable File Share (HP SFS),
the XC and HP SFS systems must be synchronized to a common time server. Therefore,
do not take the default response; instead, enter the same external time server that will
be used for the HP SFS system.
You must now specify the clock source for the server nodes.
If the nodes have external connections, you may specify up to 3
external NTP servers. Otherwise, you must use the node’s system clock.
Enter the IP address or host name of the first external NTP server
or leave blank to use the system clock on the NTP server node:
Renaming previous /etc/ntp.conf to /etc/ntp.conf.bak
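For orientation only, an /etc/ntp.conf configured with external time servers typically contains one server line per source, along the lines of the following fragment (the host names and drift-file path are placeholders, not values supplied by the XC tools):

```
server ntp1.example.com
server ntp2.example.com
server ntp3.example.com
driftfile /var/lib/ntp/drift
```

Entering the same external server here and on the HP SFS system is what keeps the two systems synchronized to a common clock.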
7. If your system has a QsNet II interconnect, you will be prompted to supply the network type:
Enter the network type of your system.
Valid choices are QMS32 or QMS64: [QMS64]:
4-16 Configuring and Imaging the System