Using HP Serviceguard for Linux with VMware virtual machines - Technical white paper

Scope
This document describes how to configure Serviceguard for Linux clusters using physical machines and VMware virtual
machines running on ESX Server, so as to provide high availability for applications. As new versions of ESX Server or
Linux distributions are certified, they will be listed in the HP Serviceguard for Linux Certification Matrix at
hp.com/info/sglx.
Note: Serviceguard is certified on VMware ESX guests, not on ESX hosts, and provides high availability for applications, not for the
virtual machines themselves.
Reasonable expertise in the installation and configuration of ESX Server, and familiarity with its capabilities and
limitations, is assumed. This document explains how to deploy and configure Serviceguard for Linux in this environment.
No special expertise is required to install and configure Serviceguard in a VMware virtual environment. However, issues
with heartbeat links can arise unless they are configured properly; these issues are discussed in the section on NIC teaming.
Note: Except as noted in this white paper, all the Serviceguard configuration options documented in the Managing HP Serviceguard for
Linux manual are supported for VMware guests, and all the documented requirements apply. You can find the latest version of the
manual, Managing Serviceguard A.11.20.00 for Linux, at hp.com/go/linux-serviceguard-docs.
Considering virtual machine configuration
Refer to the VMware Server Configuration Guide (see documents 1 and 2 on page 15) for details on configuring
virtual machines. The resources to be allocated to virtual machines depend on the complexity of the applications deployed
on them. For VMware limits, refer to the Configuration Maximums documents for VMware Infrastructure 3
(see document 3) and vSphere 5 (see document 4).
VMware documents describe how to manage performance in a virtual machine environment. How many virtual machines
you can deploy on a given server depends on the capacity of that server and the resource requirements of the
applications running on it.
Timer: The timer function of virtual machines (see document 6) is implemented in software, whereas physical
machines implement it in hardware. If too many virtual machines, along with their applications, run on a single physical
machine, timer interrupts can be missed, and this can result in Serviceguard missing heartbeats. In that case, the heartbeat
interval needs to be increased on every cluster that includes such a virtual machine as a node. While there is no specific
limit on the number of virtual machines running on a physical machine, administrators should be aware of this behavior
and set limits on a case-by-case basis.
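As an illustration, the timeout that determines how many heartbeats may be missed before a node is declared failed can be
raised by editing the cluster configuration file and reapplying it. The excerpt below is a sketch only: it assumes Serviceguard
A.11.19 or later, where the MEMBER_TIMEOUT parameter (in microseconds) replaces the older HEARTBEAT_INTERVAL and
NODE_TIMEOUT parameters; the value shown is an example, and the correct parameter and range for your release should be
taken from the Managing HP Serviceguard for Linux manual.

    # Excerpt from the cluster ASCII configuration file (illustrative values only)
    CLUSTER_NAME      sgvm_cluster
    # Raise the member timeout so that occasional missed timer interrupts
    # on busy virtual machines do not cause premature node failures.
    # The value is in microseconds (25 seconds shown here).
    MEMBER_TIMEOUT    25000000

    # Validate and apply the modified configuration
    cmcheckconf -C cluster.ascii
    cmapplyconf -C cluster.ascii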
Logical NICs: There can be practical difficulties in allocating more than a certain number of logical NICs in a virtual
machine; at the time of writing, vSphere 5 allows up to 10 NICs to be configured in a virtual machine. The limit varies
depending on the VMware ESX version; refer to documents 3 and 4 for more details. Serviceguard configuration requires
at least two heartbeat links, so if the applications need multiple data networks, you may have to share the logical NICs
between data and heartbeat traffic, as in the sketch below.
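For example, a node entry in the cluster configuration file could declare both virtual NICs as heartbeat networks, with one
of them also carrying application data. This is an illustrative sketch only; the interface names and IP addresses are
placeholders to be replaced with your own.

    NODE_NAME             sgvm-node1
      NETWORK_INTERFACE   eth0
      # Dedicated heartbeat network
      HEARTBEAT_IP        192.168.10.11
      NETWORK_INTERFACE   eth1
      # Shared network: carries application data as well as the second heartbeat
      HEARTBEAT_IP        192.168.20.11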
vMotion is not supported: VMware vMotion allows a virtual machine to move between physical platforms while the
virtual machine is running, for example as part of scheduled maintenance. Depending on the configuration of the virtual
machine, the time vMotion takes to complete varies, and this can lead to unforeseen interactions. For this reason, HP does
not support vMotion for virtual machines that are nodes of a Serviceguard cluster.
Using VMware NIC teaming to avoid a single point of failure
VMware virtual machines use virtual network interfaces. Because HP Serviceguard does not support channel bonding of
virtual NICs, use VMware NIC teaming instead.
How does NIC teaming work? VMware NIC teaming at the host level provides the same functionality as Linux channel
bonding, allowing you to group two or more physical NICs into a single logical network device called a bond (note that a
bond created by NIC teaming is different from a bond created by channel bonding). After the logical NIC is configured, the
virtual machine is not aware of the underlying physical NICs. Packets sent to the logical NIC are dispatched to one of the
physical NICs in the bonded team, and packets arriving at any of the physical NICs are automatically directed to the
appropriate logical interface. NIC teaming can be configured in load-balancing or fault-tolerant mode; use fault-tolerant
mode to get the benefit of high availability. A sketch of the host-side setup follows.
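As an illustration, on a classic ESX host the physical NICs that form the team are attached as uplinks to the same virtual
switch; the fault-tolerant (failover) policy is then selected on the NIC Teaming tab of the vSwitch properties in the
VI/vSphere Client. The commands below are a sketch for an ESX 3.x/4.x service console; the vSwitch, vmnic, and port group
names are placeholders, and the exact procedure for your ESX version should be verified against the VMware documents
referenced in this paper.

    # Create a virtual switch and attach two physical NICs as teamed uplinks
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch1
    # Add the port group that the virtual machines' heartbeat NICs will connect to
    esxcfg-vswitch -A "SG-Heartbeat" vSwitch1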