-A RH-Firewall-1-INPUT -i External -p tcp -m tcp --dport 1023:65535 -j ACCEPT
-A RH-Firewall-1-INPUT -i External -p udp -m udp --dport 1023:65535 -j ACCEPT
This file establishes the initial firewall rules for all nodes in the HP XC system. These new rules open
all the unprivileged ports externally, plus one privileged port (1023). Opening the privileged port
allows LSF commands run as root on the HP XC system to communicate with non-XC LSF daemons,
because LSF commands executed by root use privileged ports. If necessary, opening the privileged
port can be avoided, as shown in the sketch below.
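If you choose to avoid opening the privileged port, a minimal variation of the rules above (assuming the same External interface name) starts the range at 1024 instead, at the cost of the root-initiated LSF communication just described:
# Open only the unprivileged ports (1024 and above) externally.
-A RH-Firewall-1-INPUT -i External -p tcp -m tcp --dport 1024:65535 -j ACCEPT
-A RH-Firewall-1-INPUT -i External -p udp -m udp --dport 1024:65535 -j ACCEPT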
These new rules must be set up on every node in the HP XC system that could be selected to
run the LSF-HPC daemons. A later step in this procedure provides instructions on how to
generate a new /etc/sysconfig/iptables file on each HP XC node from the recently modified
iptables.proto file.
b. Node-to-Node Communication
LSF uses rsh and ssh (if configured) to control all the LSF daemons in the cluster. LSF expects the
selected mechanism to allow access to all nodes without a password.
The HP XC system discourages the use of rsh because it transmits unencrypted passwords through
the network, where they can be captured by any standard network-sniffing program. For this
reason, the rsh command and its related packages are not installed by default. Instead, HP
recommends the ssh command; a sketch of a passwordless ssh setup follows this step.
If you want to continue to use the rsh command within the Standard LSF cluster, install its RPM
packages on the head node now; these packages are available on the HP XC DVD.
c. Set Up the Expected LSF Environment
A typical LSF installation provides two environment setup files that, when sourced, adjust the
user's environment to enable access to the LSF binaries, man pages, and libraries. These files
are named profile.lsf and cshrc.lsf by default.
When LSF is installed locally on the HP XC system, two custom files are created that automatically
source the LSF environment setup files so that users have access to LSF as soon as they log in to
the HP XC system. These two files are /etc/profile.d/lsf.sh and /etc/profile.d/lsf.csh.
The current contents of these two files are shown below. We will replace the old LSF_TOP
location, /opt/hptc/lsf/top, with the new LSF_TOP location that is shared between the
two clusters, /shared/lsf in this example.
# cat lsf.sh
case $PATH in
*-slurm/etc:*) ;;
*:/opt/hptc/lsf/top*) ;;
*)
if [ -f /opt/hptc/lsf/top/conf/profile.lsf ]; then
. /opt/hptc/lsf/top/conf/profile.lsf
fi
esac
# cat lsf.csh
if ( "${path}" !~ *-slurm/etc* ) then
if ( -f /opt/hptc/lsf/top/conf/cshrc.lsf ) then
source /opt/hptc/lsf/top/conf/cshrc.lsf
endif
endif
The goal of these custom files is to source (only once) the appropriate LSF environment file:
$LSF_ENVDIR/cshrc.lsf for csh users, and $LSF_ENVDIR/profile.lsf for users of sh,
bash, and other shells based on sh.
Create /etc/profile.d/lsf.sh and /etc/profile.d/lsf.csh on the HP XC system to
set up the LSF environment on HP XC. Using /shared/lsf for the value of LSF_TOP as an
example, the new files would resemble these:
# cat lsf.sh
case $PATH in
*-slurm/etc:*) ;;
*:/shared/lsf/*) ;;
*)
if [ -f /shared/lsf/conf/profile.lsf.xc ]; then
. /shared/lsf/conf/profile.lsf.xc
fi
esac
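After the new files are in place, you can verify them by sourcing the sh version manually and confirming that the LSF environment resolves to the shared location. This is a quick sketch; it assumes a working LSF installation under /shared/lsf:
# Source the new setup file from a sh-compatible shell.
. /etc/profile.d/lsf.sh
# LSF_ENVDIR should now point under /shared/lsf.
echo $LSF_ENVDIR
# LSF commands such as bsub should resolve from the shared LSF_TOP.
which bsub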