vMotion Enhancements
vSphere 5 enhances vMotion’s functionality, making VM migrations faster and enabling more concurrent VM migrations than were supported in previous versions of vSphere or VMware Infrastructure 3. vSphere 5 also enhances vMotion to take advantage of multiple network interfaces, further improving live migration performance.
vMotion moves the execution of a VM, relocating the CPU and memory footprint between physical servers while leaving the storage untouched. Storage vMotion builds on the idea and principle of vMotion by doing the reverse: it leaves the CPU and memory footprint untouched on the physical server and migrates a VM’s storage to another datastore while the VM is still running.
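For readers who automate vSphere, the difference between the two operations shows up clearly in the API. The following is only a minimal sketch using the open source pyVmomi Python bindings, not an example from this book; the vCenter address, credentials, and inventory names are hypothetical placeholders, and find_by_name is a simplified helper of my own rather than part of the API.

# Minimal vMotion sketch with pyVmomi: move a running VM's CPU and memory
# footprint to another host while its disks stay on shared storage.
# Hostnames, credentials, and object names below are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Simplified inventory lookup by display name (no view cleanup)."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')
content = si.RetrieveContent()

vm = find_by_name(content, vim.VirtualMachine, 'web01')
dest_host = find_by_name(content, vim.HostSystem, 'esxi02.example.com')

# vMotion: the compute footprint moves, the storage stays put.
vm.MigrateVM_Task(host=dest_host,
                  priority=vim.VirtualMachine.MovePriority.defaultPriority)

Disconnect(si)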
Deploying vSphere in your environment generally means that lots of shared storage—Fibre Channel or iSCSI SAN or NFS—is needed. What happens when you need to migrate from an older storage array to a newer storage array? What kind of downtime would be required? Or what about a situation where you need to rebalance utilization of the array, either from a capacity or performance perspective?
vSphere Storage vMotion directly addresses these situations. By providing the ability to move the storage for a running VM between datastores, Storage vMotion enables administrators to address all of these situations without downtime. This feature ensures that outgrowing datastores or moving to a new SAN does not force an outage for the affected VMs and provides administrators with yet another tool to increase their flexibility in responding to changing business needs.
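Continuing the earlier sketch, with the same hedges (pyVmomi, placeholder names, and the assumed find_by_name helper), a Storage vMotion is simply a relocation request that names a new datastore but no new host:

# Storage vMotion sketch: migrate a running VM's disks to another datastore
# while it keeps executing on its current host. Reuses si, content, vm, and
# find_by_name from the previous sketch; 'newarray-ds01' is a placeholder.
new_ds = find_by_name(content, vim.Datastore, 'newarray-ds01')

spec = vim.vm.RelocateSpec()
spec.datastore = new_ds       # only the storage location changes

vm.RelocateVM_Task(spec)      # live storage migration, no downtime for the VM

The same call underlies evacuating an aging array or rebalancing full datastores; the VM keeps running throughout.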
VSPHERE DISTRIBUTED RESOURCE SCHEDULER
vMotion is a manual operation, meaning that an administrator must initiate the vMotion operation. What if VMware vSphere could perform vMotion operations automatically? That is the basic idea behind vSphere Distributed Resource Scheduler (DRS). If you think that vMotion sounds exciting, your anticipation will only grow after learning about DRS. DRS, simply put, leverages vMotion to provide automatic distribution of resource utilization across multiple ESXi hosts that are configured in a cluster.
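The cluster-wide automation level is itself a setting you can drive through the API. As a rough illustration, and with the same hedges as the earlier sketches (pyVmomi, the assumed find_by_name helper, and a placeholder cluster name), the following enables DRS in fully automated mode:

# Sketch: enable DRS in fully automated mode on an existing cluster.
# Reuses content and find_by_name from the vMotion example; the cluster
# name 'Prod-Cluster' is a placeholder.
cluster = find_by_name(content, vim.ClusterComputeResource, 'Prod-Cluster')

drs_config = vim.cluster.DrsConfigInfo()
drs_config.enabled = True
drs_config.defaultVmBehavior = vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated
drs_config.vmotionRate = 3    # migration threshold, 1-5 (3 is the default)

spec = vim.cluster.ConfigSpecEx(drsConfig=drs_config)
cluster.ReconfigureComputeResource_Task(spec, modify=True)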
Given the prevalence of Microsoft Windows Server in today’s datacenters, the use of the term cluster often draws IT professionals into thoughts of Microsoft Windows Server clusters. Windows Server clusters are often active-passive or active-active-passive clusters. However, ESXi clusters are fundamentally different, operating in an active-active mode to aggregate and combine resources into a shared pool. Although the underlying concept of aggregating physical hardware to serve a common goal is the same, the technology, configuration, and feature sets are quite different between VMware ESXi clusters and Windows Server clusters.
Aggregate Capacity and Single Host Capacity
Although I say that a DRS cluster is an implicit aggregation of CPU and memory capacity, it’s important to keep in mind that a VM is limited to using the CPU and RAM of a single physical host at any given time. If you have two ESXi servers with 32 GB of RAM each in a DRS cluster, the cluster will correctly report 64 GB of aggregate RAM available, but any given VM will not be able to use more than approximately 32 GB of RAM at a time.
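The same arithmetic can be pulled straight from the cluster’s inventory. A small sketch, reusing the cluster object and pyVmomi connection from the DRS example above:

# Sketch: contrast aggregate cluster RAM with the largest single host,
# which is the practical ceiling for any one VM's memory.
host_ram_gb = [h.hardware.memorySize / 1024**3 for h in cluster.host]

print('Aggregate cluster RAM: %.0f GB' % sum(host_ram_gb))
print('Largest single host:   %.0f GB' % max(host_ram_gb))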