A couple of recent posts from VMware and Citrix show an interesting correlation. Both posts share a very similar context: hypervisor performance with 10 Gigabit Ethernet adapters.
The final analysis from the VMware camp:
Two virtual machines can easily saturate a 10Gbps link (the practical limit is 9.3 Gbps for packets that use the standard MTU size because of protocol overheads), and the throughput remains constant as we add more virtual machines.
Using Jumbo frames, a single virtual machine can saturate a 10Gbps link on the transmit path and can receive network traffic at rates up to 5.7 Gbps.
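To put that 9.3 Gbps figure in perspective, here is a rough back-of-envelope calculation of the protocol overhead for standard and jumbo MTUs. The header sizes and per-frame costs below are textbook values, not numbers quoted in either post, so treat this as a sketch rather than the vendors' own math:

```python
# Back-of-envelope wire efficiency of TCP over Ethernet at a given MTU.
# Assumptions (not from the original post): 20-byte IPv4 header, 20-byte TCP
# header (no options), plus the fixed per-frame Ethernet cost of
# 14 (header) + 4 (FCS) + 8 (preamble/SFD) + 12 (inter-frame gap) = 38 bytes.

LINK_GBPS = 10.0
PER_FRAME_OVERHEAD = 14 + 4 + 8 + 12   # Ethernet header, FCS, preamble, IFG
IP_TCP_HEADERS = 20 + 20               # IPv4 + TCP, no options

def goodput_gbps(mtu: int) -> float:
    """Maximum TCP payload rate on a 10 Gbps link for a given MTU."""
    payload = mtu - IP_TCP_HEADERS
    on_wire = mtu + PER_FRAME_OVERHEAD
    return LINK_GBPS * payload / on_wire

for mtu in (1500, 9000):
    print(f"MTU {mtu}: ~{goodput_gbps(mtu):.2f} Gbps of TCP payload")

# MTU 1500: ~9.49 Gbps of TCP payload
# MTU 9000: ~9.91 Gbps of TCP payload
```

The standard-MTU result lands a little above the 9.3 Gbps the VMware post cites; TCP options, ACK traffic and other real-world overheads eat up the remaining fraction.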
Citrix also announced very similar results:
we have basically maxed out bidirectional I/O on a single 10Gb/s link, with only 4 guests. The exact figures are 18.50 Gbps (bidirectional) using 4 guest machines.
I can read the question in your mind: how can the plain Linux networking stack in XenServer match the virtualization-optimized I/O stack of the ESX VMkernel? It is a very valid question, and I would point you to the dates of the two performance tests. The ESX results were published in November 2008 and the XenServer results in April 2009. So what has changed so drastically in those five months?
The answer is Intel's new Nehalem processors. The Citrix tests were run on Nehalem processors with IOV-enhanced 10 Gbps NICs using Solarflare I/O acceleration. This provides a direct hardware-to-guest acceleration path that removes the need for the hypervisor to process I/O on behalf of the guests. When the same tests were run without the I/O acceleration, the XenServer hypervisor CPU maxed out at 8.45 Gbps (bidirectional)!
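For readers curious what that hardware path looks like on a host, the sketch below lists NICs that advertise SR-IOV virtual functions, i.e. the slices of the adapter that can be mapped straight into a guest so the hypervisor stays out of the data path. It assumes a current Linux kernel that exposes SR-IOV state through sysfs; the XenServer/Solarflare setup in the tests above predates this particular interface, so this is illustrative only:

```python
# List NICs that expose SR-IOV virtual functions a hypervisor could hand
# directly to guests. Assumes a Linux host whose kernel publishes SR-IOV
# state via sysfs (sriov_totalvfs / sriov_numvfs) -- a later kernel
# convention, not part of the 2009-era tests discussed above.
import glob
import os

def read_int(path: str) -> int:
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return 0

for dev in sorted(glob.glob("/sys/class/net/*")):
    iface = os.path.basename(dev)
    total = read_int(os.path.join(dev, "device", "sriov_totalvfs"))
    if total:  # only NICs that advertise SR-IOV capability
        enabled = read_int(os.path.join(dev, "device", "sriov_numvfs"))
        print(f"{iface}: {enabled}/{total} virtual functions enabled")
```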
This result leads to an interesting conclusion: though the optimized VMkernel is far superior to the Xen kernel, much of that advantage is being offset by the hardware virtualization capabilities that vendors are building into processors and NIC adapters. This has helped XenServer match, and sometimes outperform, ESX in hypervisor performance with both CPU-intensive and I/O-intensive workloads. I believe the IOV Ethernet adapters should also help XenServer overcome its limit of 6 supported physical NICs.
For a customer, I believe it is the final numbers that matter more than how the performance is achieved. I would rather go with a hardware-optimized platform than rely on software stacks to drive performance. What do you say?
1 comment:
Like you said pal, Intel Nehalem makes all the difference... Intel Rocks!!!