Introduction to Server Benchmarking

Each time we publish a new server platform review, several of our readers inquire about HPC and rendering benchmarks. We're always willing to accommodate reasonable requests, so we're going to start expanding beyond our usual labor-intensive virtualization benchmarks. This article is our first attempt; it was a bumpy ride, but it produced some very interesting insights.

The core counts of modern servers have increased at an incredible pace, making many benchmarks useless if we want to assess maximum throughput. Just three years ago, we could still run benchmarks like Fritz Chess, WinRAR, and zVisuel to satisfy our curiosity, and we also performed real-world benchmarks like MySQL OLAP on our octal-core servers. All of these benchmarks are pretty useless now on our 48-core Magny-Cours and 80-thread Westmere-EX systems. The number of applications that can really take advantage of the core counts found in quad- and even dual-socket servers keeps shrinking.

Most servers now run a hypervisor and some form of virtualization, so we naturally focus on virtualized environments. However, many of our readers are hardware enthusiasts, so while we wait for the new server platforms such as Intel's Romley-EP (Sandy Bridge EP) and AMD's Interlagos (Bulldozer) to appear, we decided to expand our benchmark suite. Our first attempt is not very ambitious: we'll tackle Cinebench (rendering) and Stars Euler 3D CFD (HPC). Both are quick and easy benchmarks to perform... or at least that's what we expected going in. On the plus side, our testing results are a lot more interesting than we imagined they would be.

52 Comments

  • mino - Saturday, October 01, 2011

    Memory channel count has nothing to do with coherency traffic.
  • mino - Saturday, October 01, 2011

    Exactly. Actually, the optimized way would normally be to split the workload into 12-thread chunks on the Opterons and 20-thread chunks on the Xeons. That is also a reason why 4S machines are rarely seen in HPC.

    They just do not make sense for 99% of the workloads there.
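To make the per-socket chunking described in the comment above concrete, here is a minimal OpenMP sketch in C: each group of threads works only on its own socket's slice of the data. The chunk sizes (12 threads per Magny-Cours socket, 20 per Westmere-EX socket) follow the comment; run_numa_chunked() and process_chunk() are hypothetical names, and thread placement is assumed to be handled by the runtime (for example OMP_PLACES=sockets with OMP_PROC_BIND=close).

    /* Hypothetical sketch of per-socket work chunking with OpenMP.
     * process_chunk() stands in for the real solver kernel. */
    #include <omp.h>
    #include <stddef.h>

    void process_chunk(double *data, size_t begin, size_t end);   /* placeholder */

    void run_numa_chunked(double *data, size_t n, int sockets, int threads_per_socket)
    {
        size_t per_socket = n / sockets;
        int total = sockets * threads_per_socket;

        /* Bind threads so each group of threads_per_socket lands on one socket,
         * e.g. OMP_PLACES=sockets OMP_PROC_BIND=close (runtime-dependent). */
        #pragma omp parallel num_threads(total)
        {
            int tid    = omp_get_thread_num();
            int socket = tid / threads_per_socket;    /* which chunk this thread owns   */
            int local  = tid % threads_per_socket;    /* rank within the socket's chunk */

            size_t lo = (size_t)socket * per_socket;
            size_t hi = (socket == sockets - 1) ? n : lo + per_socket;

            size_t len = (hi - lo) / threads_per_socket;
            size_t b = lo + (size_t)local * len;
            size_t e = (local == threads_per_socket - 1) ? hi : b + len;

            process_chunk(data, b, e);   /* each thread touches only its socket-local slice */
        }
    }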
  • lelliott73181 - Friday, September 30, 2011

    For those of us out there that are seriously into doing distributed computing projects, it'd be cool to see a bit of information on how these systems scale in terms of programs like BOINC, Folding@home, etc.
  • MrSpadge - Friday, September 30, 2011

    Scaling is pretty much perfect there, not very interesting. It may have been different back in the days when these big iron systems were starved for memory bandwidth.

    MrS
  • fic2 - Friday, September 30, 2011

    Was hoping for some Bulldozer server benchmarks since the server chips are "released". ;o)
    Didn't really think that I would see them though.
  • rahvin - Friday, September 30, 2011

    Have you considered that the Opteron problem could be because the software is compiled with the Intel compiler, which disables advanced features if it doesn't detect an Intel processor? This is a common problem: the ICC compiler inserts checks so that if the code doesn't find an Intel processor, it turns off SSE and all the processor extensions and runs the code in plain x86 compatibility mode (very slow). Any time I see results that are that drastically off, it reads to me that the software in question is using the Intel compiler.
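For readers curious what a vendor check like the one described above looks like, here is a small, self-contained C sketch (GCC/Clang on x86) that reads the CPUID vendor string and the SSE2 feature bit. It only illustrates the general mechanism of vendor-gated dispatch; it is not taken from ICC or from the benchmark.

    /* Illustration only: a dispatcher that keys off the CPUID vendor string
     * ("GenuineIntel") instead of the feature bits that actually report SSE2. */
    #include <cpuid.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        char vendor[13] = {0};

        if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
            return 1;
        memcpy(vendor + 0, &ebx, 4);   /* leaf 0 returns the vendor string */
        memcpy(vendor + 4, &edx, 4);   /* in EBX, EDX, ECX order           */
        memcpy(vendor + 8, &ecx, 4);

        __get_cpuid(1, &eax, &ebx, &ecx, &edx);
        int has_sse2 = (edx >> 26) & 1;                      /* real capability bit */
        int is_intel = strcmp(vendor, "GenuineIntel") == 0;  /* vendor gate         */

        /* A vendor-gated dispatcher would take the fast path only when is_intel
         * is set, even though has_sse2 may also be set on an AMD processor. */
        printf("vendor=%s sse2=%d intel=%d\n", vendor, has_sse2, is_intel);
        return 0;
    }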
  • Chibimyk - Friday, September 30, 2011

    Ifort 10 is from 2007 and is not aware of the architectures of any of these machines. It doesn't support the latest SSE instructions and likely doesn't know the levels of SSE supported by the CPUs. You have no idea which math libraries it is linked to. It won't be using the latest Intel MKL, which supports the newest chips. It isn't using the AMD-optimized ACML libraries either.

    What you are comparing using these compiled binaries is the performance of both systems when running intel optimized code.

    You also have no idea of the levels of optimization used when compiling. Some of the biggest optimization speedups with the Intel compilers drop ANSI accuracy, or at least they used to. Whether this impacts results is application specific.

    Generally speaking:
    Intel chips are fastest with Intel compilers and Intel MKL.
    AMD chips are fastest with the Portland Group compilers and AMD ACML.
    Some code runs faster with the Goto BLAS libraries.

    Ideally, you want to compare benchmarks with each system running under its own best-case conditions.
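The point above about math libraries is easy to see in code: the source below never changes, only the BLAS you link against does (Intel MKL, AMD ACML, GotoBLAS, and so on). This is a minimal, hypothetical example using the generic CBLAS interface, assuming the chosen library provides one; typical link flags were along the lines of -lmkl_rt, -lacml, or -lgoto2.

    /* Minimal sketch: an n-by-n matrix multiply through the generic CBLAS
     * interface. The same source runs on top of whichever optimized BLAS you
     * link, which is where most of the performance difference comes from. */
    #include <cblas.h>

    void matmul(int n, const double *a, const double *b, double *c)
    {
        /* C = 1.0 * A * B + 0.0 * C, all row-major n-by-n matrices */
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, a, n, b, n, 0.0, c, n);
    }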
  • eachus - Saturday, October 01, 2011

    Definitely true about AMD chips and the Portland Group. I get slightly better results with GCC than the Intel compiler, partly because I know how to get it to do what I want. ;-) But Portland is still better for Fortran.

    Second, there is a way to solve the NUMA problem that all HPC programmers know: any (relatively static) data should be replicated to all processors. Arrays that will be written to by multiple threads can be duplicated with a "fire and forget" strategy, assuming that only one processor is writing to a particular element (well, cache line)* in the array between checkpoints. In this particular case, you would use (all that extra) memory to keep eight copies of the (frequently modified) data.

    Next, if your compiler doesn't use non-temporal memory references for random-access floating-point data, you are going to get clobbered just like in the benchmark. (I'm fairly sure that the Portland Group compilers use PrefetchNTA instructions by default. I tend to do my innermost loops by hand on the GCC back end, which is how I get such good results. You can too--but you really need to understand the compiler internals to write--and use--your own intrinsic routines.) PrefetchNTA does two things. First, it prefetches the data if it is not already in a local cache. This can be a big win: what kills you with Opteron NUMA fetches is not the HyperTransport bandwidth getting clogged, it is the latency. AMD CPUs hate memory latency. ;-)

    The other thing that PrefetchNTA does is to tell the caches not to cache this data. This prevents cache pollution, especially in the L1 data cache. Oh, and don't forget to use PrefetchNTA before writing to part of a cache line. This is where you can really get hit. The processor has to keep the data to be stored around until the cache line is in a local cache. (Or in the magic zeroth level cache AMD keeps in the floating point register file.) Running out of space in the register file can stall the floating point unit when no more registers are available for renaming purposes.

    Oh, and one of those "interesting" features of Bulldozer for compiler gurus is that it strongly prefers to have only one NT write stream at a time. (Reading from multiple data streams is apparently not an issue.) Just another reason we have to teach programmers to use cache-line-aligned records for their data, rather than many different arrays with the same dimensions. ;-)

    * This is another of those multi-processor gotchas that eats up address space--but there is plenty to go around now that everyone is using 64-bit (actually 48-bit) addresses. You really don't want code on two different CPU chips writing to the same cache line at about the same time, even if the memory hardware can (and will) go to extremes to make it work.

    It used to be that AMD CPUs used 64-byte cache lines and Intel always used 256-byte lines. When the hardware engineers got together for, I think, the DDR memory standard, they found that AMD fetched the "partner" 64-byte line if there was no other request waiting, and Intel cut fetches at 128 bytes if there was a waiting memory request. So it turned out that the width of the cache line inside the CPUs was different, but in practice most of the main memory accesses were 128 bytes wide no matter whose CPU you had. ;-) Anyway, a data point in fluid-flow software tends to be 48 bytes or so (six DP values: x, y, and z, plus x', y', and z'). Aligning to 64-byte boundaries is good, 128 bytes is better, and you may want to try 256 bytes on some Intel hardware...
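To make the PrefetchNTA discussion above concrete, here is a small hand-written C loop using the SSE intrinsics that compile to prefetchnta and a non-temporal store. It is only an illustration of the technique the comment describes, not the benchmark's actual code; it assumes n is even and that src and dst are 16-byte aligned.

    /* Illustration of the technique described above: prefetch streamed data with
     * the NTA hint so it bypasses most of the cache hierarchy, and write results
     * with a non-temporal store so the destination lines don't pollute the caches
     * either. Assumes n is even and src/dst are 16-byte aligned. */
    #include <xmmintrin.h>   /* _mm_prefetch, _MM_HINT_NTA, _mm_sfence */
    #include <emmintrin.h>   /* _mm_set1_pd, _mm_load_pd, _mm_mul_pd, _mm_stream_pd */

    void scale_stream(const double *src, double *dst, long n, double k)
    {
        __m128d scale = _mm_set1_pd(k);

        for (long i = 0; i < n; i += 2) {
            /* Hint: fetch data a few cache lines ahead, non-temporally. */
            _mm_prefetch((const char *)(src + i + 64), _MM_HINT_NTA);

            __m128d v = _mm_load_pd(src + i);             /* aligned load          */
            _mm_stream_pd(dst + i, _mm_mul_pd(v, scale)); /* NT (streaming) store  */
        }
        _mm_sfence();   /* make the streaming stores globally visible */
    }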
  • mino - Saturday, October 01, 2011

    You deserve the paycheck for this article!

    Howgh.
  • UrQuan3 - Monday, October 03, 2011

    I'd like to add one to the request for a compiler benchmark. It might go well with the HPC study. The hardest part would, of course, be finding an unbiased way to conduct it. There are just so many compiler flags that add their own variables. Then you need source code.

    If you do decide to give it a try, Visual Studio, GCC, Intel, and Portland would be a must. I don't know how AnandTech would do it, but I've been impressed before.
