What Makes Server Applications Different?

The large caches and high integer core (cluster) count of a single Orochi die (a Bulldozer die with four CMT modules) led quite a few people to suspect that the Bulldozer design was first and foremost created to excel at server workloads. Reviews like our own AMD FX-8150 launch article have shown that single-threaded performance has (slightly) regressed compared to the previous AMD CPUs (the Istanbul core), while the chip performs better in heavily multi-threaded benchmarks. However, high performance in multi-threaded workstation and desktop applications does not automatically mean that the architecture is server centric.

A more in-depth analysis of the Bulldozer architecture and its performance will be presented in a later article, as it is outside the scope of this one. However, many of our readers are either hardcore hardware enthusiasts or IT professionals who love to delve deeper than benchmarks that merely show whether something is faster or slower than the competition, so it is worth starting with an explanation of what makes an architecture better suited for server applications. Is the Bulldozer architecture a “server centric architecture”?

What makes a server application different anyway?

There have been extensive performance characterizations of the SPEC CPU benchmark, which contains real-world HPC (High Performance Computing), workstation, and desktop applications. Studies of commercial web and database workloads on real CPUs are less abundant, but we dug up quite a bit of interesting information. In summary, server workloads distinguish themselves from workstation and desktop workloads in the following ways.

They spend a lot more time in the kernel. Accessing the network stack and the disk subsystem, handling user connections, synchronizing large numbers of threads, demanding more memory pages for expanding caches--server workloads make the OS sweat. Server applications spend about 20 to 60% of their execution time in the kernel or hypervisor, while in contrast most desktop applications rarely exceed 5% kernel time. Kernel code tends to have very low IPC (Instructions Per Clock cycle) with lots of dependencies.
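
To make the kernel-time figure tangible, here is a minimal sketch (our illustration, not from the review) that uses Python's os.times(), which reports user and system CPU time separately. A loop of pure arithmetic barely touches the kernel, while a loop of stat() calls spends a large share of its time there:

```python
import os

def compute_loop(n):
    # Pure user-space arithmetic: the kernel is barely involved.
    total = 0
    for i in range(n):
        total += i * i
    return total

def syscall_loop(n):
    # Every iteration crosses into the kernel via the stat() syscall.
    for _ in range(n):
        os.stat("/")

def kernel_share(fn, *args):
    before = os.times()
    fn(*args)
    after = os.times()
    user = after.user - before.user
    system = after.system - before.system
    return system / (user + system) if (user + system) else 0.0

print(f"compute loop kernel share: {100 * kernel_share(compute_loop, 2_000_000):.0f}%")
print(f"syscall loop kernel share: {100 * kernel_share(syscall_loop, 200_000):.0f}%")
```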

That is why, for example, SPECjbb, which does not perform any networking or disk access, is a decent CPU benchmark but a pretty bad server benchmark. An interesting fact is that SPECjbb, thanks to the lack of I/O subsystem interaction, typically has an IPC of 0.5-0.9, almost twice as high as other server workloads (0.3-0.6), even when those server workloads are not bottlenecked by the storage subsystem.
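
As a back-of-the-envelope illustration of what those IPC ranges mean in absolute throughput (our arithmetic; the 2GHz clock is a hypothetical stand-in, and the IPC values are the midpoints of the ranges above):

```python
CLOCK_HZ = 2.0e9  # hypothetical 2GHz server CPU clock

# Midpoints of the IPC ranges quoted above.
for name, ipc in [("SPECjbb-like (no I/O)", 0.7), ("typical server workload", 0.45)]:
    print(f"{name}: {ipc * CLOCK_HZ / 1e9:.2f} billion instructions/s per core")
```

At the same clock, the I/O-free benchmark pushes roughly half again as many instructions through each core, which is why it flatters a CPU relative to real server duty.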

Another aspect of server applications is that they are prone to more instruction cache misses. Server workloads are more complex than most processing-intensive applications. Processing-intensive applications like encoders are written in C++ using a few libraries, whereas server workloads are developed on top of frameworks like .Net and make use of lots of DLLs--or in Linux terms, they have more dependencies. Not only is the "most used" instruction footprint a lot larger, dynamically compiled software (such as .Net and Java) tends to produce code that is more scattered across the memory space. As a result, server apps suffer far more L1 instruction cache misses than desktop applications, where instruction cache misses are much rarer than data cache misses.

Similar to the above, server apps also have more L2 cache misses. Modern desktop/workstation applications miss the L1 data cache frequently and need the L2 cache too, as their datasets are much larger than the L1 data cache. But once the working set fits in L2, few of them suffer significant L2 cache misses. Most server applications, in contrast, have higher L2 cache miss rates, as they tend to come with even larger memory footprints and huge datasets.
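
The cost of a dataset outgrowing the caches is easy to demonstrate. The sketch below (ours; CPython's interpreter overhead dampens the gap, but it should remain visible) walks the same 32MB array first sequentially and then in random order, so that nearly every random access misses the caches:

```python
import random
import time
from array import array

N = 1 << 22                   # 4M 8-byte integers = 32MB, bigger than the caches
data = array("q", [1]) * N    # a flat C-style array, not a list of Python objects
indices = list(range(N))

def walk(order):
    total = 0
    for i in order:
        total += data[i]
    return total

t0 = time.perf_counter(); walk(indices); t_seq = time.perf_counter() - t0
random.shuffle(indices)
t0 = time.perf_counter(); walk(indices); t_rand = time.perf_counter() - t0
print(f"sequential: {t_seq:.2f}s   random: {t_rand:.2f}s")
```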

The larger memory footprint and the constantly shrinking and expanding caches can cause more TLB misses too. Virtualized workloads in particular need large and fast TLBs, as they switch between contexts much more often.
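
A little arithmetic shows why footprint matters so much for the TLB (our numbers; the 1024-entry TLB is hypothetical but representative):

```python
TLB_ENTRIES = 1024               # hypothetical second-level data TLB

for page_kb in (4, 2048):        # standard 4KB pages vs 2MB large pages
    coverage_mb = TLB_ENTRIES * page_kb // 1024
    print(f"{page_kb:>4}KB pages -> {coverage_mb:>5,}MB mapped without a TLB miss")
```

With 4KB pages, a multi-gigabyte working set blows straight through the TLB; this is why large pages (and hardware-assisted nested paging in the virtualized case) matter so much for server workloads.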

As most server applications are easier to multi-thread (for example, one thread per connection) but are likely to work on the same data (e.g. a relational database), keeping the caches coherent tends to generate much more coherency traffic, and locks are taken much more frequently.
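
The lock contention is just as easy to sketch. In the toy example below (ours; CPython's GIL exaggerates the serialization, but the shape of the result mirrors what lock contention and coherency traffic do on real hardware), adding threads that all fight over one lock does not get the same total work done any faster:

```python
import threading
import time

lock = threading.Lock()
counter = 0

def worker(n):
    global counter
    for _ in range(n):
        with lock:           # every thread serializes on the same lock,
            counter += 1     # like transactions hitting the same hot rows

def run(num_threads, total_ops=400_000):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(total_ops // num_threads,))
               for _ in range(num_threads)]
    t0 = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - t0

for n in (1, 4, 16):
    print(f"{n:2d} threads: {run(n):.2f}s for the same total work")
```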

Some desktop workloads, such as compiling and games, have much higher branch misprediction ratios than server applications. Server applications tend to be no more branch-intensive than your average integer application.

Quick Summary

The end result is that most server applications have low IPC. Quite a few workstation applications achieve an IPC of 1.0-2.0, while many server applications execute 3 to 5 times fewer instructions per cycle on average. Performance is dominated by Memory Level Parallelism (MLP), coherency traffic, and branch prediction, in that order, and to a lesser degree by integer processing power.

So is "Bulldozer" a server centric architecture? We'll need a more in-depth analysis to answer this question properly, but from a high level perspective, yes, it does appear that way. Getting 16 threads and 32MB of cache inside a 115W TDP power consumption envelope is no easy feat. But let the hardware and benchmarks now speak.

Comments

  • neotiger - Tuesday, November 15, 2011 - link

    Most of the benchmarks are for rendering: Cinebench, 3DSMax, Maxwell, Blender, etc.

    How many enterprises actually do 3D rendering?

    Far more common enterprise applications would be RDBMS, data warehouse, OLTP, JVM, app servers, etc.

    You touched on some of that in just one virtualization benchmark, vApus. That doesn't make sense either - how many enterprises do you know that run database servers on VMs?

    A far more useful review would run separate benchmarks for OLTP, OLAP, RDBMS, JVM, etc. tpcc, tpce, tpch would be a good place to start.
  • JohanAnandtech - Tuesday, November 15, 2011 - link

    I definitely would like to stay close to what people actually use.
    In fact we did that:
    http://www.anandtech.com/show/2694

    But the exploding core counts made that all but impossible.

    1. For example, finding a website that scales easily to 32 cores is hard: most people would be amazed how many websites have trouble scaling beyond 8 cores.

    2. Getting an OLTP database to scale to 32 cores is nothing to sneeze at. If your database is small and you run most of it in memory, chances are that you'll get a lot of locks and that it won't scale anyway. If not, you'll need several parallel RAID cards with a lot of SSDs. We might pull off the SSD part, but placing several RAID cards inside a server is most of the time not possible. And once you solve the storage bottleneck, other bottlenecks will show up. Or you need an expensive SAN... which we don't have.

    We had OLAP, OLTP, and Java benchmarks. And they were excellent benchmarks, but between 8 and 16 cores they started to show decreasing CPU utilization despite using SSDs, tweaking, etc.

    Now put yourself in our place. We can either spend weeks/months getting a database/website to scale (and we are not even sure it will make a real, repeatable benchmark), or we can build upon our virtualization knowledge, knowing that most people can't make good use of a native 32-core database anyway (or are bottlenecked by I/O and don't care) and buy their servers to virtualize.

    At a certain point, we cannot justify investing loads of time in a benchmark that only interests a few people. Unless you want to pay those people :-). Have you noticed that some of the publications out there use Geekbench (!) to evaluate a server? Have you noticed how many publications run virtualization benchmarks?

    "That doesn't make sense either - how many enterprises you know run database servers on VM?"

    Lots of people. Actually, besides a few massive Oracle OLTP databases, there is no reason anymore not to virtualize your databases. SQL Server and MySQL are virtualized a lot. Just by googling you can find plenty of reports of MySQL and SQL Server running on top of ESX 4. Since vSphere 4 this has been common practice.

    "etc. tppc, tpce, tpch would be a good place to start "

    No, not really. None of the professional server buyers I know care about TPC benches. The only people that mention them are marketing people and hardware enthusiasts that like to discuss high-end hardware.

    So you prefer software that requires $300,000 of storage hardware over a very realistic virtualization benchmark driven by real logs of real people?

    Your "poor benchmark choice" title is disappoing after all the time that my fine colleagues and me have spend on getting a nice website + groupware virtualization benchmark running which is stresstested by vApus which uses real logs of real people. IMHO, the latter is much more interesting than some inflated TPC benchmarks with storage hardware that only the fortune 500 can afford. Just HMO.
  • neotiger - Tuesday, November 15, 2011 - link

    While scaling to 32 cores can be problematic for some software, it's worth keeping in mind that the vast majority of dual-socket servers don't have 32 cores.

    In fact, a dual-CPU Intel server only has *at most* 12 cores; that's a far cry from 32 cores. PostgreSQL & MySQL have no problem at all scaling to 12 cores and beyond.

    Now if AMD decided to make a CPU with crappy per-core performance but so many cores that most software can't take full advantage of them, that's their own fault. It's not like they haven't been warned. Sun tried and failed with the same approach with the T2. If AMD is hellbent on making the same mistake, they only have themselves to blame.

    My post title is a bit harsh. But it is disappointing to see a review that devotes FOUR separate benchmarks to 3D rendering, an application the vast majority of enterprises have no use for at all. Meanwhile, the workhorse applications for most enterprises (OLTP, OLAP, and such) received far too little attention.
  • tiro_uspsss - Wednesday, November 16, 2011 - link

    "In fact, a dual-CPU Intel server only has *at most* 12 cores..."

    Incorrect. There is s1567, which allows 2-8 CPUs with a max of 8C/16T per CPU... which makes me wonder why AnandTech failed to include it in this review?

    s1567 CPUs also have quad channel memory...

    I really wish s1567 had been included in this review...
  • Photubias - Wednesday, November 16, 2011 - link

    Intel's S1567?
    You mean the E7-8830 CPU from the E7-8800 series which has prices *starting* at $2280?

    -> http://ark.intel.com/products/series/53672
  • bruce24 - Wednesday, November 16, 2011 - link

    "You mean the E7-8830 CPU from the E7-8800 series which has prices *starting* at $2280?"

    I'm not sure what he meant, but there are E7-2xxx processors for dual socket servers, which are priced much lower than the E7-8xxx processors which are for 8+ socket servers.
  • Photubias - Thursday, November 17, 2011 - link

    You mean the E7-28xx series
    http://ark.intel.com/products/series/53670 ?

    They are priced a bit lower; is there a comparison you would suggest?
  • Sabresiberian - Wednesday, November 16, 2011 - link

    I have trouble understanding why people think a review should include research into every other similar product that might be used for the same purpose.

    I mean, I can understand ASKING for a review of another specific product, particularly if you've actually done some research on your own and haven't found the information you want, but to imply a review isn't complete because it didn't mention or test another piece of hardware is a bit - unrealistic.

    ;)
  • JohanAnandtech - Thursday, November 17, 2011 - link

    Sabresiberian, a very sincere thank you for being reasonable. :-)

    Frankly, I can't imagine a situation where someone would have trouble deciding between a Westmere-EX and an AMD CPU. Most people checking out the Westmere-EX go for the RAS features (dual socket) or RAS plus the ultimate in high-thread-count performance (quad socket). In all other cases, dual Xeon EP or Opteron setups make more sense power- and price-wise.
  • JustTheFacts - Thursday, November 17, 2011 - link

    Really? Is it that much trouble to understand that people want to see the latest AMD CPUs compared to the most current generation of Intel hardware? Especially when the previous Intel processor review posted on this site covered Westmere-EX performance. I have trouble understanding why people wouldn't expect it.
