Virtualization Performance: Linux VMs on ESXi

We introduced our new vApus FOS (For Open Source) server workloads in our review of the Facebook "Open Compute" servers. In a nutshell, it is a mix of four VMs with open source workloads: two PhpBB websites (Apache2, MySQL), one MySQL "Community Server" 5.1.37 OLAP database, and one VM with VMware's open source groupware Zimbra 7.1.0. Zimbra is quite a complex application, as it contains the following components:

  • Jetty, the web application server
  • Postfix, an open source mail transfer agent
  • OpenLDAP, user authentication
  • MySQL, the database
  • Lucene, a full-featured text and search engine
  • ClamAV, an anti-virus scanner
  • SpamAssassin, a mail filter
  • James/Sieve, mail filtering

All VMs are based on a minimal CentOS 5.6 setup with VMware Tools installed. All our current virtualization testing is done on top of the hypervisor we know best: ESXi 5.0. CentOS 5.6 is not ideal for the Interlagos Opteron, but we designed the benchmark a few months ago. It took us weeks to get this benchmark working and repeatable (the latter especially is hard). For example, it was not easy to get Zimbra fully configured and properly benchmarked due to its complex usage patterns and high I/O usage. Besides, the reality is that VMs often contain older operating systems. We hope to show some benchmarks based on Linux kernel version 3.0 or later in our next article.

We tested with five tiles (one tile = four VMs). Each tile needs seven vCPUs, so the test requires 35 vCPUs.
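
As a sanity check on the sizing, the small sketch below tallies the vCPU count for five tiles. Only the totals (seven vCPUs per tile, 35 in total) come from our setup; the per-VM split is an assumption for illustration, apart from the first PhpBB VM, which gets a single vCPU.

    # Sketch of the vApus FOS test sizing (5 tiles x 4 VMs).
    # The per-VM split is assumed for illustration; only the totals are stated above.
    TILE_VCPUS = {
        "phpbb1": 1,       # single-vCPU PhpBB web VM
        "phpbb2": 2,       # assumed
        "mysql_olap": 2,   # assumed
        "zimbra": 2,       # assumed
    }
    TILES = 5

    vcpus_per_tile = sum(TILE_VCPUS.values())   # 7
    total_vcpus = TILES * vcpus_per_tile        # 35
    total_vms = TILES * len(TILE_VCPUS)         # 20

    print(f"{vcpus_per_tile} vCPUs per tile, {total_vcpus} vCPUs / {total_vms} VMs in total")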

vApus FOS

The Opteron 6276 stays close to the more expensive Xeons. That makes the Opteron server the one with the best performance per dollar. Still, we feel a bit underwhelmed as the Opteron 6276 fails to outperform the previous Opteron by a tangible margin.

The benchmark above measures throughput. Response times are even more important. Let us take a look at the table below, which gives you the average response time per VM:

vApus FOS Average Response Times (ms), lower is better!

CPU                  PhpBB1   PhpBB2   MySQL OLAP   Zimbra
AMD Opteron 6276        737      587          170      567
AMD Opteron 6174        707      574          118      630
Intel Xeon X5670        645      550           63      593
Intel Xeon X5650        678      566          102      655
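
To condense those per-VM numbers into one figure per CPU, the short sketch below computes the geometric mean response time from the table above. The choice of a geometric mean is ours, purely for illustration; the article itself only reports the per-VM averages.

    from statistics import geometric_mean

    # Average response times (ms) per VM, copied from the table above:
    # PhpBB1, PhpBB2, MySQL OLAP, Zimbra.
    results = {
        "AMD Opteron 6276": [737, 587, 170, 567],
        "AMD Opteron 6174": [707, 574, 118, 630],
        "Intel Xeon X5670": [645, 550, 63, 593],
        "Intel Xeon X5650": [678, 566, 102, 655],
    }

    for cpu, times in results.items():
        # The geometric mean keeps the fast MySQL VM from being drowned
        # out by the much slower PhpBB response times.
        print(f"{cpu}: {geometric_mean(times):.0f} ms (geomean)")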

The Xeon X5670 wins a landslide victory in MySQL. MySQL has always scaled better with clock speed than with core count, so we expect that clock speed played a major role here. The same is true for our first PhpBB VM: this VM gets only one vCPU and as a result runs quicker on the Xeon. In the other applications, the Opteron's higher (integer) core count starts to show. However, AMD cannot really be satisfied with the fact that the old Opteron 6174 delivers much better MySQL performance. We suspect that the high latency L2 cache and the higher branch misprediction penalty (20 cycles vs. 12) are to blame: MySQL performance is characterized by a relatively high number of branches and a lot of L2 accesses. The Bulldozer server does manage to get the best response time on our Zimbra VM, however, so it's not a complete loss.

Performance per watt remains the most important metric for a large part of the server market. So let us check out the power consumption that we measured while we ran vApus FOS.

vApus FOS Power Consumption

The power consumption numbers are surprising, to say the least. The Opteron 6174 needs quite a bit less energy than the other two contenders. That is bad news for the newest Opteron. We found out later that some tinkering could improve the situation, as we will see further on.

Comments

  • JohanAnandtech - Thursday, November 17, 2011 - link


    1) Niagara is NOT a CMT design. It is interleaved multithreading with SMT on top.

    I haven't studied the latest Niagaras, but the T1 was a fine-grained multi-threaded CPU. It switched between threads like a gatling gun, and could not execute two threads at the same time.
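
    To make the distinction concrete, here is a purely illustrative Python sketch of fine-grained ("gatling gun") multithreading: the core issues from only one hardware thread per cycle and simply rotates to the next ready thread, whereas SMT would issue from several threads in the same cycle. The thread names and instruction mixes are made up and do not model any particular SPARC chip.

        from itertools import cycle

        # Illustrative only: four hardware threads, each with a queue of
        # pending instructions, on a core that issues one instruction per cycle.
        threads = {
            "T0": ["ld", "add", "st"],
            "T1": ["mul", "ld"],
            "T2": ["add", "add", "br"],
            "T3": ["ld"],
        }

        # Fine-grained multithreading: each cycle, rotate to the next thread
        # that still has work; never execute two threads in the same cycle.
        order = cycle(threads)
        cycle_no = 0
        while any(threads.values()):
            tid = next(order)
            if threads[tid]:
                print(f"cycle {cycle_no}: issue {threads[tid].pop(0)!r} from {tid}")
                cycle_no += 1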
  • Penti - Thursday, November 17, 2011 - link

    SPARC T2 and onwards have additional ALU/AGU resources for a half-physical two-thread (four logical) solution per core with a shared scheduler/pipeline, if I remember correctly. That's not when CMT entered the picture according to Sun and Sun engineers, anyway. They regard the T1 as CMT because it's chip-level; it's not just a CMP chip anyhow. SMT is just running multiple threads on the CPUs, while CMP works the same as SMP on separate sockets. It is not the same as AMD's solution, however.
  • Phylyp - Tuesday, November 15, 2011 - link

    Firstly, this was a very good article, with a lot of information, especially the bits about the differences between server and desktop workloads.

    Secondly, it does seem that you need to tune either the software (power management settings) or the chip (CMT) to get the best results from the processor. So, what advice is AMD offering its customers in terms of this tuning? I wouldn't want to pony up hundreds of dollars and then have to search the web for little tidbits like switching off CMT in certain cases, or enabling high-performance power management.

    Thirdly, why is the BIOS reporting 32 MB of L2 cache instead of 8 MB?
  • mino - Wednesday, November 16, 2011 - link

    No need for tuning - turbo is OS-independent (unless the OS power management explicitly disables it, as Windows does).
    Just disable the power management at the OS level (= high performance for Windows) and you are good to go.
  • JohanAnandtech - Thursday, November 17, 2011 - link

    The BIOS is simply wrong. It should have read 16 MB (two Orochi dies with 8 MB of L3 each).
  • gamoniac - Tuesday, November 15, 2011 - link

    Thanks, Johan. I run Hyper-V on Windows Server 2008 R2 SP1 on a Phenom II X6 (my workstation) and have noticed the same CPU issue. I previously fixed it by disabling AMD's Cool'n'Quiet BIOS setting. Switching to high performance increased my overall power usage by 9 watts but corrected the CPU capping issue you mentioned.

    Yet another excellent article from AnandTech. Well done. This is how I don't mind spending 1 hour of my precious evening time.
  • mczak - Tuesday, November 15, 2011 - link

    L1 data and instruction caches are swapped (instruction is 8x64 kB 2-way, data is 16x16 kB 4-way).
    L2 is 8x2 MB 16-way.
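
    Worked out, the corrected figures above give the following per-package totals for the Opteron 6276 (the sketch just does the arithmetic; the L3 figure is the 2 x 8 MB from the earlier comment about the two Orochi dies):

        # Cache totals implied by the corrected figures (per 16-core package).
        l1_instr = 8 * 64       # kB: a 64 kB 2-way L1I per module, 8 modules
        l1_data = 16 * 16       # kB: a 16 kB 4-way L1D per core, 16 cores
        l2 = 8 * 2 * 1024       # kB: a 2 MB 16-way L2 per module, 8 modules
        l3 = 2 * 8 * 1024       # kB: an 8 MB L3 per Orochi die, 2 dies

        print(f"L1I {l1_instr} kB, L1D {l1_data} kB, "
              f"L2 {l2 // 1024} MB, L3 {l3 // 1024} MB")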
  • JohanAnandtech - Thursday, November 17, 2011 - link

    Fixed. My apologies.
  • hechacker1 - Tuesday, November 15, 2011 - link

    Curious if those syscalls for virtualization were improved at all. I remember Intel touting they improved the latency each generation.

    http://www.anandtech.com/show/2480/9

    I'm guessing it's worse considering the increased general cache latency? I'm not sure how cache latency and syscall latency are related, if at all.

    Just curious, as when I do lots of compiling in a guest VM (Gentoo does a lot of checking of packages and hardware capabilities on each compile), it tends to spend the majority of its time in kernel context.
  • hechacker1 - Tuesday, November 15, 2011 - link

    Just also wanted to add: Before I had a VT-x enabled chip, it was unbearably slow to compile software in a guest VM. I remember measuring latencies of seconds for some operations.

    After getting an i7 920 with VT-x, it considerably improved, and most operations are in the hundred or so millisecond range (measured with latencytop).

    I'm not sure how the latest chips fare.
