HP XC System Software Administration Guide Version 3.0
removing /hptc_cluster/lsf/work...
removing /var/lsf...
In this step, you remove the LSF installation from the current LSF_TOP directory, /opt/hptc/lsf/top.
4. Log out, then log back in to clear the LSF environment settings.
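After logging back in, a quick check such as the following can confirm that no LSF variables remain in the environment (a sketch; the exact variable names depend on your LSF version, but they typically begin with LSF):

```shell
# List any remaining LSF-related environment variables;
# if none are found, report that the environment is clear.
if env | grep -q '^LSF'; then
    env | grep '^LSF'
else
    echo "No LSF environment variables set"
fi
```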
5. Use NFS to mount a new LSF_TOP tree from the non-HP XC system, plain, on the HP XC system.
In this sample case, the LSF_TOP location is /shared/lsf on the non-HP XC system.
• On plain, the non-XC system, export the directory specified by LSF_TOP to the HP XC system.
For UNIX or Linux systems, see exports(5) for instructions on exporting directories. Typically
an existing Standard LSF cluster has this location exported to the other nodes, so you just need to
add the HP XC system.
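For the sample case, the export on plain might look like the following /etc/exports entry (the host name xc and the mount options shown here are illustrative assumptions; see exports(5) for the full syntax):

```
# /etc/exports on plain -- export LSF_TOP to the HP XC system
# (host name and options are examples only; adjust for your site)
/shared/lsf    xc(rw,sync,no_root_squash)
```

After editing the file, re-export the file systems (for example, with exportfs -ra on Linux).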
• On xc (the HP XC system):
a. Mount the external file servers on a systemwide basis
See “Mounting File Systems” (page 139) and the /hptc_cluster/etc/fstab.proto
file for information on mounting external file servers systemwide.
b. Create the mount point cluster-wide
For the sample case, create the /shared/lsf directory on all the nodes:
# pdsh -a mkdir -p /shared/lsf
c. Edit the /hptc_cluster/etc/fstab.proto file; specifically, add or change the appropriate
fstab entry in the ALL section of the file.
d. Restart the cluster_fstab service systemwide.
# pdsh -a service cluster_fstab restart
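For the sample case, the entry added to the ALL section of /hptc_cluster/etc/fstab.proto in sub-step c might resemble the following (the server name, mount point, and mount options are illustrative; adjust them for your site):

```
# ALL section of /hptc_cluster/etc/fstab.proto (illustrative entry)
plain:/shared/lsf   /shared/lsf   nfs   rw,hard,intr   0 0
```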
6. Ensure that the HP XC resource management nodes have an external connection.
The HP XC resource management node, which is configured as the LSF node, must be able to
communicate with, and receive communication from, the existing LSF cluster on the external network.
Some of the options include adding network hardware to the current resource management nodes
and reassigning the resource management role. See the HP XC System Software Installation Guide
for more information on configuring and reconfiguring roles in HP XC.
Use the shownode command to ensure that each node configured as a resource management node
during the operation of the cluster_config utility also has access to the external network:
# shownode roles --role resource_management external
resource_management: xc[127-128]
external: xc[125-128]
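Every node listed under resource_management must also appear in the external list. On a larger cluster, a scripted comparison such as the following sketch can help; the node lists here are the sample values from the output above, expanded by hand:

```shell
# Sample node lists taken from the shownode output above (expanded);
# warn about any resource management node without external access.
rm_nodes="xc127 xc128"
ext_nodes="xc125 xc126 xc127 xc128"
for node in $rm_nodes; do
    case " $ext_nodes " in
        *" $node "*) ;;                                   # node has external access
        *) echo "WARNING: $node has no external connection" ;;
    esac
done
echo "Check complete"
```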
If this command is not available, check the role assignments by running the cluster_config command
and viewing the node configurations. Be sure to quit after you determine the configuration of the nodes.
Do not "proceed" with reconfiguring the cluster with any changes at this point. There will be another
opportunity to reconfigure the system with the cluster_config utility later.
7. Modify the Head Node.
These steps modify the head node and propagate those changes to the rest of the HP XC system. The
recommended method is to use the updateimage and the updateclient commands as documented
in Chapter 8: Distributing Software Throughout the System (page 79). Make the modifications first,
then propagate the following changes:
a. Lower the firewall on the HP XC external network.
LSF daemons communicate through pre-configured ports in the lsf.conf configuration file, but
the LSF commands open random ports for receiving information when they communicate with the
LSF daemons. Because an LSF cluster needs this "open" network environment, trying to maintain
a firewall becomes challenging. Security-aware customers are welcome to try to get LSF running
with firewalls, but those procedures are beyond the scope of this documentation.
For this procedure, open the unprivileged ports (1024-65535) and one privileged port (1023)
on the external network by adding the following lines to /etc/sysconfig/iptables.proto
on the head node:
HP XC Preparation 173