Host hardware should be provisioned with enough capacity to absorb relocated guests
from multiple other failed hosts without overcommitting memory or severely
overcommitting virtual CPUs. If enough failures occur to cause overcommitment of either memory or
virtual CPUs, severe performance degradation and potentially cluster failure can result.
Directly using the xm or libvirt tools (virsh, virt-manager) to manage (live migrate, stop, start)
virtual machines that are under rgmanager control is not supported or recommended, since this
would bypass the cluster management stack. Use the rgmanager tools instead, as sketched below.
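For example, a VM that is defined as a vm service in cluster.conf should be started, stopped, and
live migrated with the rgmanager command-line tools. A minimal sketch, assuming a VM resource
named guest1 and cluster members node1 and node2 (all three names are placeholders):

    # Show rgmanager's view of services and their current owners
    clustat

    # Start (enable) the VM service on a particular cluster member
    clusvcadm -e vm:guest1 -m node1

    # Live migrate the VM service to another member
    clusvcadm -M vm:guest1 -m node2

    # Stop (disable) the VM service
    clusvcadm -d vm:guest1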
Each VM name must be unique cluster-wide, including local-only/non-clustered VMs, because libvirtd
only enforces unique names on a per-host basis. If you clone a VM by hand, you must change the
name in the clone's configuration file, for example as shown below.
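One way to do this with the libvirt tools, assuming an existing guest named guest1 and a clone to
be called guest2 (both placeholder names):

    # Export the source guest's definition to a file
    virsh dumpxml guest1 > guest2.xml

    # Edit guest2.xml: give the clone a new <name> that is unique across the
    # whole cluster, remove or change the <uuid> and MAC address, and point
    # the disk definitions at the clone's own storage
    vi guest2.xml

    # Register the clone under its new name
    virsh define guest2.xml

Alternatively, virt-clone can perform the copy and rename in one step, for example
virt-clone --original guest1 --name guest2 --auto-clone.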
7.2. Guest Clusters
This refers to RHEL Cluster/HA running inside virtualized guests on a variety of virtualization
platforms. In this use case, RHEL Clustering/HA is primarily used to make the applications running
inside the guests highly available. This use case is similar to how RHEL Clustering/HA has always
been used on traditional bare-metal hosts; the difference is that Clustering runs inside guests
instead.
The following is a list of virtualization platforms and the level of support currently available for
running guest clusters using RHEL Cluster/HA. In the list below, RHEL 6 guests encompass both the
High Availability (core clustering) and Resilient Storage Add-Ons (GFS2, clvmd, and cmirror).
RHEL 5.3+ Xen hosts fully support running guest clusters where the guest operating systems are
also RHEL 5.3 or above:
Xen guest clusters can use either fence_xvm or fence_scsi for guest fencing (a cluster.conf
sketch for fence_xvm follows this list).
Usage of fence_xvm/fence_xvmd requires a host cluster to be running (to support fence_xvmd),
and fence_xvm must be used as the guest fencing agent on all clustered guests.
Shared storage can be provided by either iSCSI or Xen shared block devices backed by either
host block storage or by file-backed storage (raw images).
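As an illustration of fence_xvm inside a guest cluster, the following is a minimal sketch of the
relevant parts of /etc/cluster/cluster.conf in the guests. The node name guest1 and the device
name xvmfence are placeholders, and the shared key is assumed to have already been distributed
to the hosts and guests (by default /etc/cluster/fence_xvm.key):

    <clusternode name="guest1" nodeid="1">
        <fence>
            <method name="1">
                <!-- domain is the name of the virtual machine as known to the host -->
                <device name="xvmfence" domain="guest1"/>
            </method>
        </fence>
    </clusternode>
    ...
    <fencedevices>
        <fencedevice agent="fence_xvm" name="xvmfence"/>
    </fencedevices>

From inside a guest, fence_xvm -o list can be used to verify that the guest can reach fence_xvmd
on the host.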
RHEL 5.5+ KVM hosts do not support running guest clusters.
RHEL 6.1+ KVM hosts support running guest clusters where the guest operating systems are either
RHEL 6.1+ or RHEL 5.6+. RHEL 4 guests are not supported.
Mixing bare metal cluster nodes with cluster nodes that are virtualized is permitted.
RHEL 5.6+ guest clusters can use either fence_xvm or fence_scsi for guest fencing.
RHEL 6.1+ guest clusters can use either fence_xvm (in the fence-virt package) or
fence_scsi for guest fencing.
The RHEL 6.1+ KVM hosts must use fence_virtd if the guest cluster is using fence_virt or
fence_xvm as the fence agent. If the guest cluster is using fence_scsi, then fence_virtd on the
hosts is not required.
fence_virtd can operate in three modes (a host-side configuration sketch follows this list):
Standalone mode, where the host-to-guest mapping is hard coded and live migration of
guests is not allowed
Using the OpenAIS Checkpoint service to track live migrations of clustered guests. This
requires a host cluster to be running.
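As a rough illustration, fence_virtd reads /etc/fence_virt.conf on the host, and the backend
setting selects the mode; the file can also be generated interactively with fence_virtd -c. A
minimal sketch, assuming a host bridge named br0 and the default multicast listener (the libvirt
backend corresponds to the standalone mode, while the checkpoint backend corresponds to the
OpenAIS Checkpoint mode):

    fence_virtd {
        # listener carries fencing requests from the guests;
        # backend decides how the host acts on them
        listener = "multicast";
        backend = "libvirt";
    }

    listeners {
        multicast {
            # shared key distributed to the host and to all clustered guests
            key_file = "/etc/cluster/fence_xvm.key";
            interface = "br0";
        }
    }

    backends {
        libvirt {
            uri = "qemu:///system";
        }
    }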