The iSCSI server software shipped with RHEL does not support SCSI-3 persistent reservations and
therefore cannot be used with fence_scsi. It can, however, be used as a shared storage solution in
conjunction with other fence devices such as fence_vmware or fence_rhevm.
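Whether a given iSCSI target actually supports SCSI-3 persistent reservations can be verified with
the sg_persist utility from the sg3_utils package. A minimal sketch follows; the device path
/dev/sdb and the key value are illustrative:

    # Query the device's persistent reservation capabilities
    sg_persist --in --report-capabilities --device=/dev/sdb

    # Functional test: register a key, list the registered keys, then
    # unregister by re-registering with a zero service action key
    sg_persist --out --register --param-sark=0x123abc --device=/dev/sdb
    sg_persist --in --read-keys --device=/dev/sdb
    sg_persist --out --register --param-rk=0x123abc --param-sark=0 --device=/dev/sdb

If the first command fails, or the registered key does not appear in the key listing, the target
cannot support fence_scsi.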
If fence_scsi is used on all guests, a host cluster is not required (in the RHEL 5 Xen/KVM and RHEL
6 KVM host use cases).
If fence_scsi is used as the fence agent, all shared storage must be over iSCSI; mixing iSCSI
and native shared storage is not permitted.
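For reference, a minimal cluster.conf sketch of a guest-cluster node fenced with fence_scsi over
iSCSI; the node name, fence device name, and device path are illustrative. Note that fence_scsi
requires an unfence block so that each node re-registers its key when it (re)joins the cluster:

    <clusternode name="guest1" nodeid="1">
      <fence>
        <method name="scsi">
          <device name="iscsi-fence"/>
        </method>
      </fence>
      <!-- re-register this node's key on startup -->
      <unfence>
        <device name="iscsi-fence" action="on"/>
      </unfence>
    </clusternode>

    <fencedevices>
      <!-- devices= lists the shared iSCSI LUN(s); the path is illustrative -->
      <fencedevice name="iscsi-fence" agent="fence_scsi"
                   devices="/dev/disk/by-id/scsi-SHARED_LUN"/>
    </fencedevices>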
7.2.2. General Recommendations
As stated above, it is recommended to upgrade both the hosts and the guests to the latest RHEL
packages before using virtualization capabilities, as there have been many enhancements and
bug fixes.
Mixing virtualization platforms (hypervisors) underneath guest clusters is not supported. All
underlying hosts must use the same virtualization technology.
Running all guests of a guest cluster on a single physical host is not supported, as this provides
no high availability in the event of a single host failure. This configuration can, however, be
used for prototyping or development purposes.
Best practices include the following:
It is not necessary to have a single host per guest, but this configuration does provide the
highest level of availability, since a host failure affects only a single node in the cluster.
With a 2-to-1 mapping (two guests of a single cluster per physical host), a single host failure
results in two guest failures. It is therefore advisable to get as close to a 1-to-1 mapping as
possible.
Mixing multiple independent guest clusters on the same set of physical hosts is not supported
at this time when using the fence_xvm/fence_xvmd or fence_virt/fence_virtd fence agents. It
does work when using fence_scsi with iSCSI storage, or when using fence_vmware with VMware
(ESX/ESXi and vCenter).
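As an illustration of the VMware case, each independent guest cluster defines its own fence
device pointing at the same vCenter. The fragment below is a sketch using fence_vmware_soap,
the SOAP-based agent for ESX/ESXi and vCenter; the hostname, credentials, and virtual machine
name (port) are illustrative:

    <clusternode name="guest1" nodeid="1">
      <fence>
        <method name="vmware">
          <!-- port= is the virtual machine name known to vCenter -->
          <device name="vmware-fence" port="guest1"/>
        </method>
      </fence>
    </clusternode>

    <fencedevices>
      <fencedevice name="vmware-fence" agent="fence_vmware_soap"
                   ipaddr="vcenter.example.com" login="fenceuser"
                   passwd="fencepass" ssl="on"/>
    </fencedevices>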
Running non-clustered guests on the same set of physical hosts as a guest cluster is supported;
however, if a host cluster is configured, the hosts will physically fence each other, and those
other guests will also be terminated during a host fencing operation.
Host hardware should be provisioned so that memory and virtual CPU overcommit are avoided.
Overcommitting memory or virtual CPUs degrades performance, and if the degradation becomes
severe enough to delay the cluster heartbeat, the result can be cluster failure.
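A quick way to compare host capacity against the sum of guest allocations is virsh; the sketch
below assumes all guests are defined in libvirt on the host being checked (flag availability may
vary with the libvirt version):

    # Physical CPUs and memory available on this host
    virsh nodeinfo

    # Per-guest vCPU count and maximum memory; sum these and compare
    # against the host totals reported above
    for dom in $(virsh list --all --name); do
        echo "== $dom =="
        virsh dominfo "$dom" | grep -E 'CPU\(s\)|Max memory'
    done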