Chapter 1: Overview of Virtualization
❑ Low-level development environments, where developers may want or need to work with specific versions of tools, an operating system kernel, and a specific operating system distribution. Server virtualization makes it easy to run many different operating systems and environments without requiring dedicated hardware for each.
For more information about specific uses for server virtualization and its possible organizational advantages, see the section "Advantages of Virtualization," later in this chapter.
 Server and machine virtualization technologies work in several different ways. The differences between 
the various approaches to server or machine virtualization can be subtle, but are always significant in 
terms of the capabilities that they provide and the hardware and software requirements for the underlying 
system. The most common approaches to server and machine virtualization today are the following: 
❑ Guest OS: Each virtual server runs as a separate operating system instance within a virtualization application that itself runs on an instance of a specific operating system. Parallels Workstation, VMware Workstation, and VMware GSX Server are the most common examples of this approach to virtualization. The operating system on which the virtualization application is running is often referred to as the "Host OS" because it supplies the execution environment for the virtualization application.
❑ Parallel Virtual Machine: Some number of physical or virtual systems are organized into a single virtual machine using clustering software such as Parallel Virtual Machine (PVM) (www.csm.ornl.gov/pvm/pvm_home.html). The resulting cluster is capable of performing complex CPU- and data-intensive calculations in a cooperative fashion. This is more of a clustering concept than an alternative virtualization solution, and thus is not discussed elsewhere in this book. See the PVM home page (www.csm.ornl.gov/pvm/) for detailed information about PVM and related software.
❑ Hypervisor-based: A small virtual machine monitor (known as a hypervisor) runs on top of your machine's hardware and provides two basic functions. First, it identifies, traps, and responds to protected or privileged CPU operations made by each virtual machine. Second, it handles queuing, dispatching, and returning the results of hardware requests from your virtual machines. An administrative operating system then runs on top of the hypervisor, as do the virtual machines themselves. This administrative operating system can communicate with the hypervisor and is used to manage the virtual machine instances.
The most common approach to hypervisor-based virtualization is known as paravirtualization, which requires changes to an operating system so that it can communicate with the hypervisor. Paravirtualization can provide performance enhancements over other approaches to server and machine virtualization, because the modified operating system communicates directly with the hypervisor and thus does not incur some of the overhead associated with the emulation required by the other hypervisor-based machine and server virtualization technologies discussed in this section. Paravirtualization is the primary model used by Xen, which uses a customized Linux kernel to support its administrative environment, known as domain0. As discussed later in this section, Xen can also take advantage of hardware virtualization to run unmodified versions of operating systems on top of its hypervisor.
❑ Full virtualization: Very similar to paravirtualization, full virtualization also uses a hypervisor, but incorporates code into the hypervisor that emulates the underlying hardware when necessary, enabling unmodified operating systems to run on top of the hypervisor. Full virtualization is the model used by VMware ESX Server, which uses a customized version of Linux (known as the Service Console) as its administrative operating system.
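The hypervisor's two basic functions described above, trapping privileged CPU operations and dispatching hardware requests, can be illustrated with a small toy model. All class, method, and operation names here are hypothetical; no real hypervisor exposes an interface at this level:

```python
# Toy model of a hypervisor's two basic functions: trapping privileged
# operations issued by guests, and dispatching hardware requests on
# their behalf. Purely illustrative; not a real hypervisor API.

class TinyHypervisor:
    # Operations a guest may not execute directly on the hardware.
    PRIVILEGED = {"write_cr3", "out_port", "halt"}

    def __init__(self):
        self.log = []  # record of trapped operations and dispatched requests

    def execute(self, guest_id, op):
        if op in self.PRIVILEGED:
            # Function 1: identify and trap the privileged operation,
            # then respond to it safely on the guest's behalf.
            self.log.append((guest_id, op, "trapped"))
            return f"emulated:{op}"
        # Unprivileged work runs natively, without hypervisor involvement.
        return f"native:{op}"

    def hardware_request(self, guest_id, device, payload):
        # Function 2: queue, dispatch, and return the results of
        # hardware requests from the virtual machines.
        self.log.append((guest_id, device, "dispatched"))
        return {"guest": guest_id, "device": device, "result": f"done:{payload}"}

hv = TinyHypervisor()
print(hv.execute("vm1", "add"))        # native:add
print(hv.execute("vm1", "write_cr3"))  # emulated:write_cr3
print(hv.hardware_request("vm2", "disk", "read-block-7")["result"])  # done:read-block-7
```

In a real system the "trap" is a hardware fault that transfers control to the hypervisor; the dictionary lookup here simply stands in for that mechanism.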
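The performance difference between paravirtualization and full virtualization described above can be sketched as a toy contrast: a paravirtualized guest has been modified to call the hypervisor explicitly (a hypercall), while an unmodified guest's privileged instruction must be trapped, decoded, and emulated. The step counts below are purely illustrative of where the extra overhead comes from, not measurements:

```python
# Toy contrast of the two hypervisor-based I/O paths. Function names
# and individual steps are illustrative only.

def hypercall_path(request, steps):
    # Paravirtualized guest: modified to cooperate with the hypervisor.
    steps.append("guest issues hypercall")
    steps.append("hypervisor services request")
    return f"done:{request}"

def trap_and_emulate_path(request, steps):
    # Unmodified guest: the hypervisor must intercept and emulate.
    steps.append("guest runs privileged instruction")
    steps.append("CPU faults into hypervisor")
    steps.append("hypervisor decodes instruction")
    steps.append("hypervisor emulates device")
    steps.append("hypervisor resumes guest")
    return f"done:{request}"

para_steps, full_steps = [], []
print(hypercall_path("disk-write", para_steps), len(para_steps))         # done:disk-write 2
print(trap_and_emulate_path("disk-write", full_steps), len(full_steps))  # done:disk-write 5
```

Both paths produce the same result for the guest; the fully virtualized path simply passes through more transitions, which is the overhead the paravirtualized design avoids.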
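Whether a Linux host's CPU offers the hardware virtualization support mentioned above can be checked from /proc/cpuinfo, where the kernel reports the `vmx` flag for Intel VT-x and the `svm` flag for AMD-V. The helper function name below is illustrative:

```python
def hardware_virt_support(cpuinfo_path="/proc/cpuinfo"):
    """Return 'vmx' (Intel VT-x), 'svm' (AMD-V), or None.

    Scans the kernel's reported CPU feature flags. Returns None if the
    file is unavailable (e.g., non-Linux) or neither flag is present.
    """
    try:
        with open(cpuinfo_path) as f:
            text = f.read()
    except OSError:
        return None
    for line in text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "vmx"
            if "svm" in flags:
                return "svm"
    return None

print(hardware_virt_support())
```

Note that the flag only indicates CPU capability; the feature can still be disabled in the system BIOS, in which case a hypervisor will be unable to use it.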