HP XC System Software Installation Guide Version 3.0

info: Executing C20gmmon nrestart
info: Executing C30swmlogger nrestart
info: Executing C30syslogng_forward nrestart
info: Executing C40hpasm nrestart
info: Executing C50cmf nrestart
info: Executing C50collectl nrestart
info: Executing C50gather_data nrestart
info: Executing C50hptc-lm nrestart
info: Executing C50nagios nrestart
info: Executing C50nat nrestart
info: Executing C50supermond nrestart
info: Executing C51nagios_monitor nrestart
info: Executing C51nrpe nrestart
info: Executing C90munge nrestart
info: Executing C90slurm nrestart
info: Executing C95lsf nrestart
info: Executing C30syslogng_forward crestart
info: Executing C35dhcp crestart
info: Executing C50supermond crestart
info: Executing C90munge crestart
info: Executing C90slurm crestart
info: Executing C95lsf crestart
info: nconfig shut down
5. Examine the backup copy of the slurm.conf file, which is located at
/hptc_cluster/slurm/etc/slurm.conf.bak. If you previously customized this file,
you must merge those customizations into the new version of the
/hptc_cluster/slurm/etc/slurm.conf file; otherwise, skip this step.
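One way to see which customizations must be carried forward is to compare the backup against the new default with the standard diff utility. The sketch below creates stand-in copies in a temporary directory so the comparison can be demonstrated anywhere; the file names mirror the guide, but the settings shown are hypothetical:

```shell
# Sketch: find settings that exist only in your preserved backup.
# Stand-in files are created here; on a real system you would diff the
# files under /hptc_cluster/slurm/etc/ directly.
mkdir -p /tmp/slurm-merge && cd /tmp/slurm-merge
printf 'ControlMachine=n16\nFastSchedule=1\n' > slurm.conf.bak   # customized backup
printf 'ControlMachine=n16\n'                 > slurm.conf       # new default
# Lines prefixed with '+' appear only in the backup and are candidates to
# merge back into the new file by hand.
diff -u slurm.conf slurm.conf.bak || :   # diff exits 1 when files differ; not an error here
```

The `|| :` keeps the nonzero diff exit status (which simply means "files differ") from aborting a script run with `set -e`.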
6. Re-enter the monitoring line card entries in the /etc/dhcpd.conf file if your system is using a
QsNet II or Myrinet interconnect. See Appendix D (page 105) for more information about adding
these entries to the file.
Skip this step if your system is using an InfiniBand or Gigabit Ethernet interconnect.
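For orientation only, a line card entry in /etc/dhcpd.conf takes the standard ISC dhcpd host-declaration form sketched below. The host name, MAC address, and IP address here are placeholders, not values from this guide; Appendix D gives the actual entries required for your hardware.

```
# Hypothetical monitoring line card entry (illustrative values only)
host lc-example {
    hardware ethernet 00:00:00:00:00:01;
    fixed-address 172.20.0.10;
}
```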
7. Enter one of the following commands depending upon the size of your system:
On systems with fewer than 300 nodes, enter this command to image and boot all client nodes:
# startsys --image_and_boot
On systems with more than 300 nodes, enter this command to image the client nodes. Then,
proceed to step 8 to boot the nodes after they are imaged.
# startsys --image_only
8. On systems with more than 300 nodes, enter the following command to boot the client nodes,
which were not booted during the imaging operation:
# startsys --boot_group_delay=240
Note
The --boot_group_delay=240 option is used only the first time nodes are booted after being
imaged; the value 240 specifies the number of seconds to wait between groups of nodes as they
boot.
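To get a feel for what the delay implies for total boot time, the shell arithmetic below estimates the stagger the option adds across all groups. The node count and group size are hypothetical assumptions for illustration; the guide does not specify how nodes are grouped.

```shell
# Rough estimate of the extra time --boot_group_delay adds to a first boot.
# NODES and GROUP_SIZE are hypothetical values, not taken from the guide.
NODES=400
GROUP_SIZE=50
DELAY=240                                              # seconds, from --boot_group_delay=240
NGROUPS=$(( (NODES + GROUP_SIZE - 1) / GROUP_SIZE ))   # number of groups, rounded up
echo "total added delay: $(( (NGROUPS - 1) * DELAY )) seconds"
```

With these assumed values, 400 nodes form 8 groups, and the 7 inter-group waits add 1680 seconds (28 minutes) to the boot sequence.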
9. Make sure all nodes are up:
# power --status
10. If your system is configured with LSF-HPC with SLURM, run the SLURM postconfiguration utility to update
the slurm.conf file with compute node names and attributes:
# spconfig
11. Set up the LSF environment by sourcing the LSF profile file:
88 Upgrading Your HP XC System