
Prerequisites
Verify that you have sets of four or more ESXi hosts that are hosting powered-on fault tolerant virtual
machines. If the virtual machines are powered off, the Primary and Secondary VMs can instead be
relocated to hosts with different builds.
Note This upgrade procedure is for a minimum four-node cluster. The same instructions can be followed
for a smaller cluster, though the unprotected interval will be slightly longer.
Procedure
1 Using vMotion, migrate the fault tolerant virtual machines off of two hosts.
2 Upgrade the two evacuated hosts to the same ESXi build.
3 Suspend Fault Tolerance on the Primary VM.
4 Using vMotion, move the Primary VM for which Fault Tolerance has been suspended to one of the
upgraded hosts.
5 Resume Fault Tolerance on the Primary VM that was moved.
6 Repeat Step 1 to Step 5 for as many fault tolerant virtual machine pairs as can be accommodated on
the upgraded hosts.
7 Using vMotion, redistribute the fault tolerant virtual machines.
All ESXi hosts in a cluster are upgraded.
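The ordering constraints of the steps above can be sketched as a small simulation. The host and VM names are illustrative, and the migrate, upgrade, suspend, and resume actions are stand-ins for the corresponding vSphere Client or PowerCLI operations, not real vSphere API calls; the point is that FT is suspended before each Primary moves to an upgraded host and resumed immediately after.

```python
def upgrade_cluster(hosts, pairs, new_build):
    """Simulate the rolling upgrade. hosts: dict host name -> build;
    pairs: dict VM name -> {'primary': host, 'secondary': host}.
    Returns (hosts, log), where log records each simulated action in order."""
    log = []
    names = list(hosts)
    batch, rest = names[:2], names[2:]
    # Steps 1-2: vMotion FT VMs off two hosts, then upgrade those hosts.
    for vm, placement in pairs.items():
        for role in ('primary', 'secondary'):
            if placement[role] in batch:
                placement[role] = rest[0] if role == 'primary' else rest[-1]
                log.append(('vmotion', vm, role))
    for host in batch:
        hosts[host] = new_build
        log.append(('upgrade', host))
    # Steps 3-5: suspend FT, vMotion the Primary to an upgraded host, resume FT.
    for vm, placement in pairs.items():
        log.append(('suspend_ft', vm))
        placement['primary'] = batch[0]
        log.append(('vmotion', vm, 'primary'))
        log.append(('resume_ft', vm))
    # Evacuate the Secondary VMs from the remaining hosts, then upgrade them
    # (steps 6-7, redistribution abbreviated here).
    for vm, placement in pairs.items():
        if placement['secondary'] in rest:
            placement['secondary'] = batch[1]
            log.append(('vmotion', vm, 'secondary'))
    for host in rest:
        hosts[host] = new_build
        log.append(('upgrade', host))
    return hosts, log
```

After the run, every host carries the new build and each FT pair sits on two different upgraded hosts, mirroring the result statement above.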
Best Practices for Fault Tolerance
To ensure optimal Fault Tolerance results, you should follow certain best practices.
The following recommendations for host and networking configuration can help improve the stability and
performance of your cluster.
Host Configuration
Hosts running the Primary and Secondary VMs should operate at approximately the same processor
frequencies, otherwise the Secondary VM might be restarted more frequently. Platform power
management features that do not adjust based on workload (for example, power capping and enforced
low frequency modes to save power) can cause processor frequencies to vary greatly. If Secondary VMs
are being restarted on a regular basis, disable all power management modes on the hosts running fault
tolerant virtual machines or ensure that all hosts are running in the same power management modes.
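The frequency guidance above can be turned into a simple pre-check that flags host pairs whose nominal CPU frequencies diverge. This is a minimal sketch: the 5% tolerance is an illustrative assumption, not a VMware-documented threshold, and the per-host frequencies would in practice come from your inventory tooling.

```python
def frequency_mismatches(host_mhz, tolerance=0.05):
    """host_mhz: dict host name -> nominal CPU frequency in MHz.
    Return the host pairs whose frequencies differ by more than `tolerance`
    (relative to the faster host); such pairs are poor FT placement choices."""
    flagged = []
    items = sorted(host_mhz.items())
    for i, (host_a, freq_a) in enumerate(items):
        for host_b, freq_b in items[i + 1:]:
            if abs(freq_a - freq_b) / max(freq_a, freq_b) > tolerance:
                flagged.append((host_a, host_b))
    return flagged
```

An empty result means any two hosts in the set are reasonable candidates for a Primary/Secondary pair under this (assumed) tolerance.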
Host Networking Configuration
The following guidelines allow you to configure your host's networking to support Fault Tolerance with
different combinations of traffic types (for example, NFS) and numbers of physical NICs.
-  Distribute each NIC team over two physical switches, ensuring L2 domain continuity for each VLAN
   between the two physical switches.
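A validation sketch for this guideline: confirm that a team's uplinks land on two physical switches and that every VLAN the team carries is trunked on both, so the L2 domain is continuous. The data shapes (NIC-to-switch map, per-switch VLAN sets) are illustrative assumptions rather than a vSphere API.

```python
def check_team(team_uplinks, switch_vlans, team_vlans):
    """team_uplinks: dict NIC name -> physical switch; switch_vlans: dict
    switch -> set of trunked VLAN IDs; team_vlans: set of VLANs the team
    must carry. Return a list of problems; an empty list means the
    guideline is satisfied."""
    problems = []
    switches = set(team_uplinks.values())
    if len(switches) < 2:
        problems.append('team is not distributed over two physical switches')
    for vlan in sorted(team_vlans):
        missing = [s for s in sorted(switches)
                   if vlan not in switch_vlans.get(s, set())]
        if missing:
            problems.append(f'VLAN {vlan} not trunked on: {", ".join(missing)}')
    return problems
```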
vSphere Availability
VMware, Inc. 60