5.2.3 Hosts file
The /etc/hosts file for each cluster member should contain an entry defining localhost. If the
external host name of the system is defined on the same line, the host name reference should
be removed.
Additionally, each /etc/hosts file should define the local interconnect of each cluster member.
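A minimal /etc/hosts example is shown below. The host names (ha-node1-ic, ha-node2-ic) and the 192.168.1.x interconnect addresses are placeholders only and must be replaced with the names and addresses used in your cluster:

127.0.0.1      localhost localhost.localdomain
192.168.1.11   ha-node1-ic
192.168.1.12   ha-node2-ic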
5.3 Storage Configuration
5.3.1 Multipathing
Storage hardware vendors offer different solutions for implementing a multipath failover
capability. This document focuses on the generic multipath device mapper approach. Please
consult your storage hardware vendor for the correct and supported multipath configuration.
5.3.2 Device Mapper Multipath
The device mapper multipath plugin (DM multipath) provides greater reliability and
performance by using path failover and load balancing. In HA scenarios, cluster servers can
use multiple paths to the shared storage devices. Normally, each of these paths is presented
to the operating system as a separate device file (/dev/sdXX).
DM-Multipath creates a single device that routes I/O to the underlying devices according to
the multipath configuration. It creates kernel block devices (/dev/dm-*) and corresponding
block devices (with persistent names) in the /dev/mapper directory.
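Assuming the device-mapper-multipath package is installed, the multipath daemon can be enabled and started on each cluster node with its init script, for example:

# chkconfig multipathd on
# service multipathd start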
The multipath configuration file can also be used to set storage specific attributes. These
multipath specific settings are usually obtained from the storage vendor and typically
supersede the default settings.
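The following /etc/multipath.conf fragment is only an illustrative sketch; the vendor, product and path settings are placeholders and must be replaced with the values recommended by the storage vendor:

defaults {
    user_friendly_names    yes
}

devices {
    device {
        vendor                  "EXAMPLEVENDOR"
        product                 "EXAMPLELUN"
        path_grouping_policy    group_by_prio
        path_checker            tur
        failback                immediate
    }
}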
5.3.3 CLVM
In RHCS, LVM managed shared storage must be controlled by High Availability resource
manager agents for LVM (HA-LVM) or the clustered logical volume manager daemon (clvmd/
CLVM). Single instance LVM must not be used for shared storage, as it is not cluster aware
and can result in data corruption. In this document, CLVM will be used because it allows
active/active storage configurations; this enables the use of GFS and also makes failover
scenarios easier to handle.
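Assuming the lvm2-cluster package is installed and cluster locking has not already been enabled elsewhere in this guide, LVM must be switched to cluster-wide locking (locking_type = 3 in /etc/lvm/lvm.conf) before CLVM can be used. One way to do this on each node is the lvmconf helper:

# lvmconf --enable-cluster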
Note that the qdisk partition cannot be managed by CLVM as this would overwrite the quorum
label assigned to the device.
When using CLVM, the clvmd daemon must be running on all nodes. This can be accomplished by
enabling the clvmd init script. Please note that the core cluster must be up and running
before clvmd can be started.
# chkconfig clvmd on
# service clvmd start
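With clvmd running on all nodes, shared volumes can then be created as clustered volume groups. The following commands are only a sketch; the multipath device (/dev/mapper/mpath1) and the volume names (vg_shared, lv_data) are placeholder examples:

# pvcreate /dev/mapper/mpath1
# vgcreate -cy vg_shared /dev/mapper/mpath1
# lvcreate -n lv_data -l 100%FREE vg_shared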