CASE 2: One Virtual Machine with Eight vCPUs and VMDirectPath
The setup in this case was similar to that of the previous case, except that the 10G NIC was directly assigned to the virtual machine using VMDirectPath I/O (Figure 10).
The test team started with the synthetic benchmark netperf and then ran eight parallel streams of each file copy tool. Figures 11 and 12 compare the performance numbers from the VM with DirectPath I/O to the performance numbers of the VM with no direct assignment, as well as to the native results.
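The paper does not reproduce the exact command lines; the following sketch shows how such a test might be driven from the source server. The host name dst10g, the file names, and the stream layout are illustrative assumptions, not the test team's actual parameters.

    # Single-stream synthetic TCP throughput with netperf
    # (assumes netserver is already running on the destination).
    netperf -H dst10g -t TCP_STREAM -l 60

    # Eight parallel scp streams over standard SSH, one file per stream.
    for i in 1 2 3 4 5 6 7 8; do
        scp /data/testfile$i dst10g:/data/ &
    done
    wait

    # The same pattern with HPN-SSH's "none" cipher, which disables
    # payload encryption after authentication (HPN-SSH patch options).
    for i in 1 2 3 4 5 6 7 8; do
        scp -oNoneEnabled=yes -oNoneSwitch=yes /data/testfile$i dst10g:/data/ &
    done
    wait

    # rsync over SSH follows the same eight-stream pattern (one stream shown).
    rsync -e ssh /data/testfile1 dst10g:/data/

    # bbcp manages its own parallelism; -s sets the number of TCP streams.
    bbcp -s 8 /data/testfile1 dst10g:/data/

Backgrounding each copy and waiting for all of them approximates the aggregate eight-stream load of Figure 11; bbcp achieves the same effect internally through its -s option.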
As Figure 11 illustrates, VMDirectPath (VT-d direct assignment) of the 10G NIC to the VM increases performance to a level that is close to the native results. It is important, however, to understand the trade-offs associated with using VMDirectPath.
A number of features are not available to a VM that uses VMDirectPath, including VMotion, record/replay, fault tolerance, and high availability. Because of these limitations, the use of VMDirectPath will continue to be a niche solution awaiting future developments that remove these restrictions. It can be practical to use today for virtual security-appliance VMs, since these VMs typically do not depend on those features. VMDirectPath may also be useful for other applications that have their own clustering and failover mechanisms.
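On ESX 4.0, a PCI device that is directly assigned to a VM appears as pciPassthru entries in that VM's .vmx configuration file. The fragment below is an illustrative sketch only; the PCI address and the device and vendor IDs are placeholder values, not those of the NIC used in these tests.

    pciPassthru0.present = "TRUE"
    pciPassthru0.id = "04:00.0"        # host PCI address of the 10G NIC (placeholder)
    pciPassthru0.deviceId = "0x10fb"   # PCI device ID (placeholder)
    pciPassthru0.vendorId = "0x8086"   # PCI vendor ID (0x8086 = Intel)

The device must first be marked for passthrough on the host, and the vSphere Client records additional keys (such as pciPassthru0.systemId) when the device is assigned to the VM.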
Figure 10. Test setup: a source server and a destination server, each running VMware ESX 4.0 and hosting one VM (8 vCPU) that runs the file transfer applications; the two servers are directly connected back-to-back, with file transfers flowing from source to destination.
Figure 11. ESX 4.0 GA, one VM with 8 vCPUs and VMDirectPath I/O: receive throughput (Mbps) and average CPU utilization (%) for various file copy tools with eight streams (netperf; scp and rsync over SSH, over HPN-SSH, and over HPN-SSH with no crypto; and bbcp), shown for native, the VM without VMDirectPath I/O, and the VM with VMDirectPath I/O.