Benchmark Configuration

Since AMD sent us a 1U Supermicro server, we went back to testing 1U servers, which is why the ASUS RS700 returns for the Xeon. That is a bit unfortunate, as 1U servers on average deliver a worse performance/watt ratio than other form factors such as 2U and blades. Of course, 1U still makes sense in low-cost, high-density HPC environments.

Supermicro A+ server 1022G-URG (1U Chassis)

CPU: Two AMD Opteron "Bulldozer" 6276 at 2.3GHz or
     Two AMD Opteron "Magny-Cours" 6174 at 2.2GHz
RAM: 64GB (8x8GB) DDR3-1600 Samsung M393B1K70DH0-CK0
Motherboard: Supermicro H8DGU-F
Internal Disks: 2 x Intel SLC X25-E 32GB or
                1 x Intel MLC SSD510 120GB
Chipset: AMD SR5670 + SP5100
BIOS version: v2.81 (10/28/2011)
PSU: Supermicro PWS-704P-1R 750 Watt

The AMD CPUs have four memory channels per CPU. The new "Interlagos" Bulldozer CPU supports DDR3-1600, and thus our dual-CPU configuration gets eight DIMMs for maximum bandwidth.
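
For context, the nominal numbers above translate into peak theoretical bandwidth as channels x transfer rate x 8 bytes per transfer. A minimal sketch of that arithmetic (nominal ratings only; sustained bandwidth is always lower):

```python
# Peak theoretical memory bandwidth: channels x MT/s x 8 bytes per 64-bit transfer.
# These are nominal ratings, not measured throughput.

def peak_bandwidth_gbs(channels: int, mt_per_s: int) -> float:
    """Theoretical peak bandwidth per socket in GB/s."""
    return channels * mt_per_s * 8 / 1000

# Opteron 6276 ("Interlagos"): four channels of DDR3-1600 per socket
print(peak_bandwidth_gbs(4, 1600))   # 51.2 GB/s

# Xeon X5670 ("Westmere-EP"): three channels of DDR3-1333 per socket
print(peak_bandwidth_gbs(3, 1333))   # ~32.0 GB/s
```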

Asus RS700-E6/RS4 1U Server

CPU: Two Intel Xeon X5670 at 2.93GHz (6 cores) or
     Two Intel Xeon X5650 at 2.66GHz (6 cores)
RAM: 48GB (12x4GB) Kingston DDR3-1333 FB372D3D4P13C9ED1
Motherboard: Asus Z8PS-D12-1U
Chipset: Intel 5520
BIOS version: 1102 (08/25/2011)
PSU: 770W Delta Electronics DPS-770AB

To speed up testing, we tested the Intel Xeon and AMD Opteron systems in parallel. As we didn't have more than eight 8GB DIMMs, the Xeon system was outfitted with our 4GB DDR3-1333 DIMMs instead. The Xeon system thus only gets 48GB, but this is no disadvantage: our benchmark with the highest memory footprint (vApus FOS, 5 tiles) uses no more than 36GB of RAM.

We measured the power difference between 12x4GB and 8x8GB of RAM and corrected our power measurements accordingly (the difference was very small). There was no alternative, as the Xeon has three memory channels per CPU and cannot be outfitted with the same DIMM configuration as our Opteron system (four channels per CPU).
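
The correction itself is simple bookkeeping: measure the same box with both DIMM populations, take the delta, and subtract it from the readings taken with the larger configuration. A minimal sketch with made-up placeholder numbers (the real delta was very small, as noted above):

```python
# Sketch of the DIMM-configuration power correction described above.
# The numbers below are placeholders for illustration, not our measured values.

def corrected_power(measured_watts: float, dimm_delta_watts: float) -> float:
    """Normalize a power reading to a common DIMM configuration."""
    return measured_watts - dimm_delta_watts

# Hypothetical example: 340 W measured at the wall, 6 W attributed to the
# difference between a 12x4GB and an 8x8GB DIMM population
print(corrected_power(340.0, 6.0))   # 334.0 W
```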

We chose the Xeons based on AMD's positioning. The Xeon X5649 is priced at the same level as the Opteron 6276, but we didn't have an X5649 in the lab. As we argued earlier, the Opteron 6276 should reach the performance of the X5650 to be attractive, so we tested with both the X5670 and the X5650. Due to time constraints, some tests were run only on the X5670.

Common Storage System

For the virtualization tests, each server gets an Adaptec 5085 PCIe x8 RAID controller (driver aacraid v1.1-5.1[2459] b 469512) connected to six Cheetah 300GB 15,000 RPM SAS disks (RAID-0) inside a Promise JBOD J300s enclosure. The virtualization testing requires more storage IOPS than our standard Promise JBOD with six SAS drives can deliver (see the rough estimate after the list below). To work around this, we added internal SSDs:

  • We installed the Oracle Swingbench VMs (vApus Mark II) on two internal X25-E SSDs (no RAID). The Oracle database is only 6GB in size. We test with two tiles; on each SSD, each OLTP VM accesses its own database data. All other VMs (web, SQL Server OLAP) are stored on the Promise JBOD (see above).
  • With vApus FOS, Zimbra is the I/O-intensive VM. We spread the Zimbra data over the two Intel X25-E SSDs (no RAID). All other VMs (web, MySQL OLAP) get their data from the Promise JBOD (see above).
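
To get a feel for why the internal SSDs were necessary, here is the back-of-the-envelope IOPS estimate referenced above. The per-device figures are typical ballpark numbers assumed for illustration, not measurements from this setup:

```python
# Back-of-the-envelope IOPS estimate; per-device figures are typical ballpark
# numbers assumed for illustration, not measurements from this test setup.

SAS_15K_RANDOM_IOPS = 180       # typical for a single 15,000 RPM SAS disk
X25E_RANDOM_READ_IOPS = 35000   # order of magnitude for an SLC SSD like the X25-E

jbod_iops = 6 * SAS_15K_RANDOM_IOPS    # six Cheetahs in RAID-0 in the Promise JBOD
ssd_iops = 2 * X25E_RANDOM_READ_IOPS   # two internal X25-E SSDs

print(f"JBOD (6x 15K SAS): ~{jbod_iops} IOPS")
print(f"Internal SSDs (2x X25-E): ~{ssd_iops} IOPS")
```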

We monitored disk activity; physical disk adapter latency (as reported by VMware vSphere) stayed between 0.5 and 2.5 ms.

Software configuration

All vApus testing was done on vSphere 5, or to be more specific, VMware ESXi 5.0.0 (build 469512 - VMkernel SMP build-348481 Jan-12-2011 x86_64). All VMDKs are thick provisioned, independent, and persistent. The power policy is "Balanced Power" unless indicated otherwise. All other testing was done on Windows 2008 R2 SP1.

Other notes

Both servers were fed by a standard European 230V (16 Amps max.) power line. The room temperature was monitored and kept at 23°C by our Airwell CRACs.

We used the Racktivity ES1008 Energy Switch PDU to measure power. Using a PDU for accurate power measurements might seem pretty insane, but this is not your average PDU. The measurement circuits of most PDUs assume that the incoming AC is a perfect sine wave, which it never is. The Racktivity PDU, however, measures true RMS current and voltage at a very high sample rate: up to 20,000 measurements per second for the complete PDU.
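
The difference is easy to illustrate: with a distorted (non-sinusoidal) current waveform, the lazy "peak divided by the square root of two" shortcut and a true RMS calculation give different answers. A small sketch with a made-up, harmonic-rich waveform (purely illustrative, not data from the Racktivity unit):

```python
import math

# Illustrative comparison of true RMS vs. the "perfect sine" shortcut (peak / sqrt(2)).
# The sample waveform below is made up to mimic a distorted current draw; it is not
# data captured from the Racktivity PDU.

def true_rms(samples):
    """Root-mean-square computed directly from the samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def sine_assumption_rms(samples):
    """What a cheap meter reports: peak value divided by sqrt(2)."""
    return max(abs(s) for s in samples) / math.sqrt(2)

# A flattened, harmonic-rich "current" waveform sampled over one period
n = 1000
samples = [math.sin(2 * math.pi * i / n) + 0.3 * math.sin(6 * math.pi * i / n)
           for i in range(n)]

print(f"true RMS:            {true_rms(samples):.3f}")
print(f"sine-assumption RMS: {sine_assumption_rms(samples):.3f}")
```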

Comments (106)

  • DigitalFreak - Tuesday, November 15, 2011 - link

    Good to see that CPU-Z correctly reports the 6276 as 8 core, 16 thread, instead of falling for AMD's marketing BS.
  • N4g4rok - Tuesday, November 15, 2011 - link

    If each module possesses two integer cores and a shared floating point core, what's to say that it can't be considered a practical 16 core?
  • phoenix_rizzen - Tuesday, November 15, 2011 - link

    Each module includes 2x integer cores, correct. But the floating point core is "shared-separate", meaning it can be used as two separate 128-bit FPUs or as a single 256-bit FPU.

    Thus, each Bulldozer module can run either 3 or 4 threads simultaneously:
    - 2x integer + 2x 128-bit FP threads, or
    - 2x integer + 1x 256-bit FP thread

    It's definitely a dual-core module. It's just that the number of threads it can run is flexible.

    The thing to remember, though, is that these are separate hardware pipelines, not mickey-moused hyperthreaded pipelines.
  • JohanAnandtech - Tuesday, November 15, 2011 - link

    You can get into a long discussion about that. The way I see it, part of the core is "logical/virtual" and the other part is real in Bulldozer. What is the difference between an SMT thread and a CMT thread when they enter the fetch-decode stages? Nothing AFAIK: both instruction streams are interleaved, and both have a "thread tag".

    The difference shows up when they are scheduled: in CMT Bulldozer, the instructions enter a real core with only one context. With SMT, the instructions enter a real core which still interleaves two logical contexts. So that core still consists of two logical cores.

    It gets even more complicated when you look at the FP "cores". AFAIK, the FP cores of Interlagos are nothing more than 8 SMT-enabled cores.
  • alpha754293 - Tuesday, November 15, 2011 - link

    I think that Johan is partially correct.

    The way I see it, the FPU on the Interlagos is this:

    It's really a 256-bit wide FPU.

    It can't really QUITE separate the ONE physical FPU into two 128-bit wide FPUs; more probably, in reality, it interleaves them (which is really just code for "FPU-starved").

    Intel's original HTT had this as a MAJOR problem, because the tests back then could range from -30% to +30% performance change. Floating-point intensive benchmarks have ALWAYS suffered. To see why, suppose you're writing a calculator using ONLY 8-byte (64-bit) double precision.

    NORMALLY, that should mean that you should be able to crunch through four DWORDs at the same time. And that's kinda/sorta true.

    Now, if you are running two programs, really...I don't think that the CPU, the compiler (well..maybe), the OS, or the program knows that it needs to compile for 128-bit-wide FPUs if you're going to run two instances or two (different) calculators.

    So it's resource starved in trying to do the calculation processes at the same time.

    For non-FPU-heavy workloads, you can get away with that. For pretty much the entire scientific/math/engineering (SME) community; it's an 8-core processor or a highly crippled 16-core processor.

    Intel's latest HTT seems to have addressed a lot of that, and in practical terms, you can see upwards of 30% performance advantage even with FPU-heavy workloads.

    So in some cases, the definition of core depends on what you're going to be doing with it. For SME/HPC; it's good cuz it can do 12-actual-cores worth of work with 8 FPUs (33% more efficient), but sucks because unless they come out with a 32-thread/16-core monolithic die; as stated, it's only marginally better than the last. It's just cheaper. And going to get incrementally faster with higher clock speeds.
  • alpha754293 - Tuesday, November 15, 2011 - link

    P.S. Also, like Anand's article about nVidia Optimus:

    Context switching even at the CPU level, while faster, is still costly. Perhaps maybe not nearly as costly as shuffling data around; but it's still pretty costly.
  • Samus - Wednesday, November 16, 2011 - link

    Ouch, this is going to be AMD's Itanium. That is, it has architecture adoption problems that people simply won't build around. Maybe less substantial than IA64, but still a huge performance loss because of underutilized integer units.
  • leexgx - Wednesday, November 16, 2011 - link

    I think the way CPU-Z reports it for BD CPUs is correct: each core has 2 FP, so 8 cores and 16 threads is correct.

    Too bad Windows does not understand how to spread the load correctly on an AMD CPU (Windows 7 with Intel HT CPUs works fine and spreads the load correctly; SP1 improves that further, but for Intel CPUs only).

    Windows 7 SP1 makes bigger use of core parking and gives better CPU use on Intel CPUs. As I have seen on 3 systems, most workloads now stay on the first 2 cores and the other 2 stay parked; on the AMD side it is still broken with Cool'n'Quiet enabled.
  • Stuka87 - Tuesday, November 15, 2011 - link

    So, what is your definition of a core?

    Bulldozer does not utilize hyperthreading, which takes a single integer core and can at times put two threads into that single integer core. A Bulldozer core has actual hardware to run two threads at the same time. This would suggest there are two physical cores.

    Does it perform like an Intel 16 core (if there were such a thing)? No. But that does not mean that it is not in fact a 16-core device, as the hardware is there. Yes, they share an FPU, but that doesn't mean they are not cores.
  • Filiprino - Tuesday, November 15, 2011 - link

    Actually, Bulldozer is 16 cores. It has two dedicated integer units per module and a floating point unit which can act as two 128-bit units or one 256-bit unit for AVX. So you can have 2 and 2 per module.
    Bulldozer does not use hyperthreading.
