Executing C10hptc_cluster_fs gconfigure
Executing C20gmmon gconfigure
Executing C30swmlogger gconfigure
Executing C30syslogng_forward gconfigure
Executing C35dhcp gconfigure
Executing C50cmf gconfigure
Executing C50nagios gconfigure
Would you like to enable web based monitoring? ([y]/n) y
Enter the password for the 'nagiosadmin' web user:
New password:
Re-type new password:
Adding password for user nagiosadmin
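The password prompts above are typically driven by Apache's htpasswd utility. If you need to change the nagiosadmin web password after this step, a sketch follows; the location of the htpasswd file is an assumption and may differ on your system:

    # Hypothetical: reset the nagiosadmin web password after installation.
    # The path to the htpasswd file is an assumption; check the Nagios
    # Apache configuration for the actual AuthUserFile location.
    htpasswd /etc/nagios/htpasswd.users nagiosadmin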
Executing C50nat gconfigure
Executing C50supermond gconfigure
Executing C51nagios_monitor gconfigure
Executing C60nis gconfigure
Network Information Service (NIS) Configuration
This step sets up one or more NIS servers within the XC system
that are "slaves" to an external NIS "master". The master NIS
server provides the slaves with copies of its NIS maps.
In order to successfully complete this configuration step, the NIS
master must have been previously set up to allow slaves to communicate
with it. On Linux systems, this is typically accomplished by adding
the NIS slave hostname(s) to the /var/yp/ypservers file on the NIS
master, and then running 'make'.
In addition, to complete this configuration, you will need to provide
1) the name or IP address of the NIS master, and
2) the NIS domain name hosted by the NIS master
Enter the name or IP address of the external NIS master: [] NIS_IP_address
Enter the NIS domain hosted by the NIS master: [] your_NIS_domain
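As a concrete illustration of the master-side prerequisite described above, the following sketch adds a slave and rebuilds the NIS maps. The slave host name is hypothetical, and your NIS master's layout may differ:

    # Run as root on the external NIS master. 'xcslave1' is a
    # hypothetical NIS slave host name; substitute your own.
    echo 'xcslave1' >> /var/yp/ypservers
    cd /var/yp && make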
Executing C90munge gconfigure
Executing C90slurm gconfigure
Do you want to configure SLURM now? (y/n) [y]:y
An existing SLURM configuration file has been detected.
Do you want to delete this file and generate a new one?
Answering 'no' means the existing file will be edited. (y/n) [n]: y
This SLURM configuration needs a special SLURM user. The SLURM
controller daemons will be run by this user, and certain SLURM
runtime files will be owned by this user.
Enter the SLURM username [slurm]: Enter
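The configuration step expects the SLURM user named above to exist; on HP XC this is typically handled for you. If you ever need to create such a dedicated system account by hand, a hedged sketch follows (the group, home directory, and shell shown are assumptions):

    # Hypothetical: create a dedicated system account for SLURM.
    # The XC configuration may create this account for you.
    groupadd -r slurm
    useradd -r -g slurm -d /var/lib/slurm -s /sbin/nologin slurm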
n16 is the only node with the Resource Management
role. Therefore the SLURM Master Controller daemon will be set up
on this node, and there will be no SLURM Backup Controller.
The current Compute Node configuration is:
NodeName=n[11-16] Procs=2
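Based on the values reported above (n16 as the sole Resource Management node and compute nodes n[11-16] with two processors each), the generated slurm.conf would contain lines along the following lines. This is a sketch inferred from the transcript, not the file the script actually writes, and the partition name is an assumption:

    # Hypothetical excerpt of the generated slurm.conf, inferred
    # from the output above; the script produces the real contents.
    ControlMachine=n16          # SLURM Master Controller (no backup)
    NodeName=n[11-16] Procs=2   # compute nodes, two processors each
    PartitionName=lsf Nodes=n[11-16] Default=YES  # partition name is an assumption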