Installation guide

Only single site clusters are fully supported at this time. Clusters spread across multiple
physical locations are not formally supported. For more details and to discuss multi-site
clusters, please speak to your Red Hat sales or support representative.
GFS2
Although a GFS2 file system can be implemented in a standalone system or as part of a
cluster configuration, Red Hat does not support the use of GFS2 as a single-node file
system. Red Hat does support a number of high-performance single-node file systems that
are optimized for single-node use and therefore have generally lower overhead than a cluster file
system. Red Hat recommends using those file systems in preference to GFS2 in cases where
only a single node needs to mount the file system. Red Hat will continue to support single-
node GFS2 file systems for existing customers.
When you configure a GFS2 file system as a cluster file system, you must ensure that all
nodes in the cluster have access to the shared file system. Asymmetric cluster
configurations in which some nodes have access to the file system and others do not are
not supported. This does not require that all nodes actually mount the GFS2 file system
itself.
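As an illustration, a clustered GFS2 file system is created with mkfs.gfs2 and then mounted
on each node that needs it. The cluster name (mycluster), file system name (mygfs2),
logical volume path, journal count, and mount point below are placeholder assumptions,
not values taken from this guide; adjust them to your environment.

    # Create the file system with the DLM lock protocol; the lock table name is
    # <clustername>:<fsname>, and -j sets one journal per node that will mount it.
    mkfs.gfs2 -p lock_dlm -t mycluster:mygfs2 -j 2 /dev/vg_cluster/lv_gfs2

    # Mount the file system on each cluster node (typically managed through
    # /etc/fstab or a cluster resource rather than by hand).
    mount -t gfs2 /dev/vg_cluster/lv_gfs2 /mnt/mygfs2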
No-single-point-of-failure hardware configuration
Clusters can include a dual-controller RAID array, multiple bonded network channels,
multiple paths between cluster members and storage, and redundant un-interruptible power
supply (UPS) systems to ensure that no single failure results in application down time or
loss of data.
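For the storage-path component of such a configuration, redundant paths are typically
managed with DM Multipath. The commands below are a minimal sketch of enabling it on
Red Hat Enterprise Linux 6; the appropriate /etc/multipath.conf settings depend on your
storage array and are not specified in this guide.

    # Install and enable DM Multipath with a default /etc/multipath.conf
    yum install device-mapper-multipath
    mpathconf --enable --with_multipathd y

    # Verify that each LUN is reachable over more than one path
    multipath -ll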
Alternatively, a low-cost cluster can be set up to provide less availability than a no-single-
point-of-failure cluster. For example, you can set up a cluster with a single-controller RAID
array and only a single Ethernet channel.
Certain low-cost alternatives, such as host RAID controllers, software RAID without cluster
support, and multi-initiator parallel SCSI configurations, are not compatible or appropriate
for use as shared cluster storage.
Data integrity assurance
To ensure data integrity, only one node can run a cluster service and access cluster-
service data at a time. The use of power switches in the cluster hardware configuration
enables a node to power-cycle another node before restarting that node's HA services
during a failover process. This prevents two nodes from simultaneously accessing the
same data and corrupting it. Fence devices (hardware or software solutions that remotely
power, shutdown, and reboot cluster nodes) are used to guarantee data integrity under all
failure conditions.
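As an example of how a power fence device is exercised, many fence agents can be run
manually from the command line to confirm that a node can be power-cycled before you
rely on fencing during failover. The IPMI address, credentials, and choice of agent below
are illustrative assumptions; use the fence agent that matches your hardware.

    # Manually reboot a node through its IPMI management interface
    fence_ipmilan -a 10.0.0.11 -l admin -p password -o reboot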
Ethernet channel bonding
Cluster quorum and node health are determined by communication of messages among
cluster nodes via Ethernet. In addition, cluster nodes use Ethernet for a variety of other
critical cluster functions (for example, fencing). With Ethernet channel bonding, multiple
Ethernet interfaces are configured to behave as one, reducing the risk of a single-point-of-
failure in the typical switched Ethernet connection among cluster nodes and other cluster
hardware.
As of Red Hat Enterprise Linux 6.4, bonding modes 0, 1, and 2 are supported.
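On Red Hat Enterprise Linux 6, a bonded interface is defined with ifcfg files under
/etc/sysconfig/network-scripts. The interface names, addressing, and choice of mode 1
(active-backup) below are placeholder assumptions for illustration only.

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none
    BONDING_OPTS="mode=1 miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for each slave interface)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none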
IPv4 and IPv6