See Appendix J for information about how to determine the QsNet II network type.
8. Supply the name of the LVS alias if you assigned a login role to one or more nodes;
this example uses the alias penguin. This is the name by which users log in to the
system (see the example following the output below). If you did not assign a login
role to any node, you will not be asked to supply an LVS alias.
Running C20gmmon
Myrinet interconnect not being used on this system.
Running C30syslogng_forward
Running C40hpasm
Running C50cmf
Running C50lvs
Enter the name of the cluster alias: penguin
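Users connect to the system through this alias rather than to an individual node; LVS
distributes each login across the nodes that have the login role. A minimal sketch,
assuming a hypothetical user account jsmith and that the alias penguin is resolvable
from the user's workstation:
$ ssh jsmith@penguin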
9. When prompted, enable Web access to the Nagios monitoring application and create a
password for the nagiosadmin user. This password does not have to match any other
password on your system. You can create additional Web accounts later, as shown in
the example after the output below.
Running C50nagios
Would you like to enable web based monitoring? ([y]/n) y
Enter the password for the 'nagiosadmin' web user:
New password: your_nagios_password
Re-type new password: your_nagios_password
Adding password for user nagiosadmin
Web services will be configured for:
https://n16/nagios
You can create additional web accounts using the command:
# /usr/bin/htpasswd /opt/hptc/nagios/etc/htpasswd.users {username}
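For example, to create an additional Web account for a hypothetical user named
operator, run the command shown above and enter a password when prompted:
# /usr/bin/htpasswd /opt/hptc/nagios/etc/htpasswd.users operator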
Running C50nat
NAT servers:
n16
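The NAT server listed here provides external network access for nodes that have no
external connection of their own. As an informal check (an assumption about the
internal routing, not a step in this procedure), you could display the routing table
on a compute node and confirm that the Gateway column of the default (0.0.0.0) route
shows the NAT server's internal address:
# route -n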
Running C50slurm_controller
10. When prompted, add a SLURM user; accept all default responses. Your output will
differ if you assigned the resource_management role to one or more additional nodes,
because you will also be prompted to assign the master and backup controllers. An
example of correcting the processor count noted in the output follows below.
This SLURM configuration needs a special SLURM user.
The SLURM controller daemons will be run by this user, and certain
SLURM runtime files will be owned by this user.
Enter the SLURM username [slurm]:
Enter
User 'slurm' does not exist. If we create this user here, we will
create a "dummy" account with no login access.
Create?(y/n) [y]:
Enter
Configure the node assignments for the SLURM controllers.
n16 is the only node with the Resource Management
role. The SLURM Master Controller will be setup on this node,
and there will be no SLURM Backup Controller.
Press 'Enter' to continue:
Enter
NOTE: The default number of Processors for all nodes has
been set to 2. This should be checked and corrected in
/hptc_cluster/slurm/etc/slurm.conf
to ensure SLURM is properly configured for the cluster.
Here is the current Compute Node configuration:
NodeName=n[14-16] Procs=2
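For example, if the nodes listed actually have four processors each, you would edit
this entry in /hptc_cluster/slurm/etc/slurm.conf accordingly (the value 4 is only
illustrative; use the actual processor count of your nodes):
NodeName=n[14-16] Procs=4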
NOTE: The only Partition created by default is the lsf