HP XC System Software Installation Guide Version 3.2.1

13. If you assigned the nis_server role to one or more nodes to configure them as NIS slave
servers, supply the name or IP address of the external NIS master server and the NIS
domain name. If you did not assign the nis_server role to any node, this prompt is not
displayed.
Network Information Service (NIS) Configuration
This step sets up one or more NIS servers within the XC system
that are "slaves" to an external NIS "master". The master NIS
server provides the slaves with copies of its NIS maps.
In order to successfully complete this configuration step, the NIS
master must have been previously set to allow slaves to communicate
with it. On Linux systems, this is typically accomplished by adding
the NIS slave hostname(s) to the /var/yp/ypservers file on the NIS
master, and then running 'make'.
In addition, to complete this configuration, you will need to provide
1) the name or IP address of the NIS master, and
2) the NIS domain name hosted by the NIS master
Enter the name or IP address of the external NIS master: []
Enter the NIS domain hosted by the NIS master: [] your_NIS_domain
Executing C66ibmon gconfigure
Executing C80sfs gconfigure
Executing C90munge gconfigure
Executing C90slurm gconfigure
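As the configuration text notes, the NIS master must already be set up to serve the new slaves before this step can succeed. On a typical Linux NIS master, that preparation looks roughly like the following sketch; the slave hostname n15 is illustrative, so substitute the name of your XC node with the nis_server role:

```shell
# Run on the external NIS master, not on the XC system.
# Register the XC slave server (example hostname) with the master:
echo "n15" >> /var/yp/ypservers
# Rebuild the NIS maps so the new ypservers entry takes effect:
cd /var/yp
make
```

If the master does not list the slave in ypservers, the slave's map transfers will be refused and this configuration step fails.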
14. Decide whether you want to configure SLURM. SLURM is required if you installed SVA
or if you plan to install LSF-HPC with SLURM.
Do you want to configure SLURM? (y/n) [y]:
Do one of the following:
• If you intend to install LSF-HPC with SLURM or the Maui Scheduler, or if you have
already installed SVA, enter y and proceed to step 15.
• If you intend to install standard LSF, enter n so that SLURM is not configured, and
proceed to step 16.
NOTE: After cluster_config processing is complete, you have the option to modify
default SLURM compute node and partition information, as described in “Perform SLURM
Postconfiguration Tasks” (page 108).
15. Define a SLURM user name and accept all default responses. The output differs if you
assigned the resource_management role to one or more additional nodes, because you
are then prompted to assign the master and backup controller nodes.
This SLURM configuration needs a special SLURM user. The SLURM
controller daemons will be run by this user, and certain SLURM
runtime files will be owned by this user.
Enter the SLURM username [slurm]: Enter
User 'slurm' does not exist.
If this user account is created here, it will not have login
access. Do you want to create this user? (y/n) [y]: Enter
n16 is the only node with the Resource Management
role. Therefore the SLURM Master Controller daemon will be set up
on this node, and there will be no SLURM Backup Controller.
The current Compute Node configuration is:
NodeName=n[11-16] Procs=2
NOTE: The only Partition created by default is the lsf partition.
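The compute node line in the output above corresponds to entries in the SLURM configuration file. The following is a sketch of what the generated defaults might look like; the PartitionName line is an assumption inferred from the default lsf partition described in the note, not verbatim output from cluster_config:

```
# Illustrative slurm.conf fragment matching the defaults shown above
NodeName=n[11-16] Procs=2
PartitionName=lsf Nodes=n[11-16] Default=YES
```

You can adjust these node and partition definitions later, as described in "Perform SLURM Postconfiguration Tasks" (page 108).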
96 Configuring and Imaging the System