Reference Guide

vConverter v4.1 Chapter 1: Introduction
lightweight as possible. The data transfer occurs over TCP port 422 to the
ESX host.
After the job is executed and the DCT starts sending data to the ESX host, the
incoming connection is detected by the xinetd service running there. xinetd then
starts the server component instance for that job. Multiple jobs can run
concurrently, and each job uses its own server component instance.
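The xinetd dispatch described above might look like the following service definition. This is an illustrative sketch only: the service name, server path, and attribute values are assumptions, not taken from the product.

```
# /etc/xinetd.d/vconverter -- illustrative only, not the shipped configuration
service vconverter
{
    type        = UNLISTED      # port is given here, not in /etc/services
    port        = 422
    socket_type = stream
    protocol    = tcp
    wait        = no            # spawn a new server instance per connection
    user        = root
    server      = /usr/sbin/vconverter-server   # hypothetical path
    disable     = no
}
```

With `wait = no`, xinetd forks a fresh server process for every incoming connection, which matches the behavior of one server component instance per job.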
1. To transfer data to the target, a new VMDK file is created. It is the same size as
the original source volume. The VMDK file is pre-allocated to avoid file growth.
You can specify a unique VMDK file and datastore for each Windows volume.
2. The server component receives block-level data from the DCT and writes it into
the VMDK file. If the DCT detects a block of zeroed data, that block is skipped:
the pre-allocated VMDK already contains zeroes, so there is no need to overwrite
them.
3. Within 50 seconds of the server component starting, the vzBoost module
activates to enable high-speed data writes to the VMFS. vzBoost monitors the
server component and terminates itself when the data transfer completes.
4. (Optional) After all data has been transferred to the VMDK file, the server
component can resize the files based on job configuration. In this case, the NTFS
partition is modified as well.
5. The server component performs the conversion and creates a bootable instance of
the VM with the proper drivers. This should prevent blue screen errors on system
startup.
6. A VM is created and registered on the ESX host based on job configuration
settings, including assigned memory, VHD, and virtual network assignment.
7. The server component terminates and enters a wait state—controlled through
xinetd—to anticipate the next job.
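The zero-skipping write in step 2 can be sketched as follows. This is a minimal illustration, not the product's implementation: the block size, the `write_blocks` name, and the flat pre-allocated file layout are all assumptions.

```python
import os

BLOCK_SIZE = 64 * 1024              # hypothetical transfer block size
ZERO_BLOCK = b"\x00" * BLOCK_SIZE

def write_blocks(blocks, vmdk_path):
    """Write received data blocks into a pre-allocated flat VMDK file,
    seeking past all-zero blocks instead of rewriting them."""
    total = 0
    with open(vmdk_path, "r+b") as vmdk:
        for block in blocks:
            if block.count(b"\x00") == len(block):
                # Block is all zeroes: the pre-allocated file already
                # holds zeroes here, so just advance the file position.
                vmdk.seek(len(block), os.SEEK_CUR)
            else:
                vmdk.write(block)
            total += len(block)
    return total
```

Seeking past zeroed blocks keeps the file position sequentially consistent while avoiding redundant I/O, which is the point of pre-allocating the VMDK up front.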
What Is Changing on the ESX Host?