[root@n9]# grep processor /proc/cpuinfo | wc -l
2
c. Determine the amount of real memory in megabytes:
[root@n9]# grep MemTotal /proc/meminfo
MemTotal: 2056364 kB
[root@n9]# expr 2056364 \/ 1024
2008
Note that the RealMemory value for node n9 is 2008.
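The two commands in this step can also be combined into a single command; for example, the following awk invocation (assuming awk is available on the node) prints the same value directly:
[root@n9]# awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo
2008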
d. Exit the session:
[root@n9]# exit
Connection to n9 closed.
#
3. If the system has more than one partition, determine the partition to which the new node
will be added.
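One way to list the existing partitions is to search for the PartitionName entries in the SLURM configuration file; for example (the output shown is illustrative, based on the example used in this section):
# grep PartitionName /hptc_cluster/slurm/etc/slurm.conf
PartitionName=lsf RootOnly=YES Shared=FORCE Nodes=n[1-8]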
4. Save a backup copy of the /hptc_cluster/slurm/etc/slurm.conf file.
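For example, the backup copy could be made as follows (the .bak file name is only a suggestion):
# cp -p /hptc_cluster/slurm/etc/slurm.conf /hptc_cluster/slurm/etc/slurm.conf.bak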
5. Use the editor of your choice to update the /hptc_cluster/slurm/etc/slurm.conf
file as follows:
a. Locate the line that begins with NodeName=. There may be more than one such line
because nodes that share the same characteristics are described together on a single line.
Choose the line that is appropriate for this node; that is, ensure that the number of
processors and the RealMemory= value match the values you determined for the new node.
If the characteristics match, add the node name of the new node to the list of nodes.
If the characteristics do not match, insert a new NodeName= line to describe the new
node.
For example:
NodeName=n[1-5] Procs=2 RealMemory=1994
NodeName=n[6-8] Procs=2 RealMemory=4032
These lines change to:
NodeName=n[1-5] Procs=2 RealMemory=1994
NodeName=n[6-8] Procs=2 RealMemory=4032
NodeName=n9 Procs=2 RealMemory=2008
NOTE: If the value of the RealMemory characteristic for node n9 were 4032 in this
example, that portion of the file would instead be changed to the following:
NodeName=n[1-5] Procs=2 RealMemory=1994
NodeName=n[6-9] Procs=2 RealMemory=4032
The order of the NodeName entries in this file is important because SLURM uses it to
determine the contiguity of the nodes.
b. Locate the line that begins with PartitionName=. Add the node name of the new
node to the list of nodes.
PartitionName=lsf RootOnly=YES Shared=FORCE Nodes=n[1-9]
c. Save the file and exit the editor.
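With the example values used in this section, the relevant portion of the edited file would look like the following:
NodeName=n[1-5] Procs=2 RealMemory=1994
NodeName=n[6-8] Procs=2 RealMemory=4032
NodeName=n9 Procs=2 RealMemory=2008
PartitionName=lsf RootOnly=YES Shared=FORCE Nodes=n[1-9]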
6. Ensure that SLURM is running by issuing the following command:
# service slurm status
slurmd (pid PID) is running...
where PID is the process identifier.
If SLURM is not running, start it with the following command:
# service slurm start
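After SLURM is running, you can verify that it recognizes the new node with a command such as the following (scontrol is the standard SLURM control utility; the output depends on the node's current state):
# scontrol show node n9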