Two of the previous three posts I've made about our upgraded server infrastructure have focused on performance. In the second post I talked about the performance (and reliability) benefits of going with our all-SSD architecture, while in the third post I talked about the increase in CPU performance between our old and new infrastructures. Today, however, it's time to focus on power consumption.

Our old server infrastructure came from a time when power consumption mattered, but it hadn't yet been prioritized. This was before Nehalem's 2:1 rule (a 2% performance increase for every 1% power increase), and it was before power gating. Once again I turned to our old HP DL585 server with four AMD Opteron 880s (8 cores total) as an example of just how much things have changed.

As a recap, we moved from the DL585 (and over 20 other 1U, 2U and 4U machines with similar or slightly newer-class processors) to an array of 6 Intel SR2625s (dual-socket, 6-core Westmere based platforms), with another 6 to be deployed this year. All of our previous servers used hard drives, while all of our new servers use SSDs. The combination resulted in more than a doubling of peak CPU performance, and an increase in IO performance ranging from a near tripling to over an order of magnitude.

Everything got better, but the impressive part is that power consumption went down dramatically:

AnandTech Forums DB Server Power Consumption (2006 vs 2013)
                      Off      Idle     7-Zip Bench  Cinebench  Heavy IO
HP DL585 (2006)       29.5W    524W     675W         655W       693.1W
Intel SR2625 (2013)   12.8W    105.6W   267W         247W       170W

With both machines plugged into a power outlet but completely off, the new server already draws considerably less power. The difference at idle, however, is far more impressive. Without power gating and without a clear focus on minimizing power consumption, our old DL585 pulled over 500W when completely idle. It shocked me at first, but thinking back to how things used to be, it stopped being so surprising. There was a time when even our single-socket CPU testbeds would pull over 200W at idle.

Under heavy integer (7-zip) and FP (Cinebench) workloads, the difference is still staggering. You could run 2.5 of the new servers in the same power envelope as a single one of the old machines.
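That 2.5× figure falls straight out of the table; here's a quick back-of-the-envelope check (a sketch in Python, using the 7-Zip wattages from the table above):

```python
# Power draw under the 7-Zip integer workload, from the table above (watts)
old_7zip = 675.0   # HP DL585 (2006)
new_7zip = 267.0   # Intel SR2625 (2013)

# How many new servers fit in the power budget of one old server
servers_per_envelope = old_7zip / new_7zip
print(f"{servers_per_envelope:.2f}")  # → 2.53
```

The Cinebench numbers work out to a similar ratio (655W vs. 247W, about 2.65×).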

The power consumption under heavy IO needs a bit of explaining. We were still on an all 3.5-inch HDD architecture back then, so we had to rely on a combination of internal drives and an external Promise VTrak J310s chassis to give us enough spindles to deliver the performance we needed. The 693.1W I report above includes the power consumption of the VTrak chassis (roughly 150W). In reality, all of the other tests here (idle, 7-Zip, Cinebench) should include the VTrak's power consumption as well, since the combination of the two was necessary to service the needs of the Forums alone. With the new infrastructure, everything can be handled by this one tiny 2U box. So whereas under a heavy IO load our old setup would pull nearly 700W, the new server needs only 170W.

Datacenter power pricing varies depending on the size of the customer and the location of the datacenter, but if you assume roughly $0.10 per kWh, you're talking about $459 per year (assuming a 100% idle workload) for our old server, compared to $92.50 per year for the new one. That's a considerable savings for a single box, and that's the best case scenario (it also excludes the J310s external chassis). For workloads that don't demand huge increases in performance, modernizing your infrastructure can bring significant power and space savings (not to mention a positive impact on reliability).
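The math behind those dollar figures is simple enough to sketch, assuming the $0.10/kWh rate and the idle wattages from the table (the helper name is mine; the tiny differences from the rounded figures in the text are just rounding):

```python
RATE = 0.10            # assumed electricity price, $/kWh
HOURS_PER_YEAR = 8760  # 24 * 365

def yearly_idle_cost(watts, rate=RATE):
    """Cost of running a box at a constant wattage for a full year."""
    kwh = watts * HOURS_PER_YEAR / 1000.0
    return kwh * rate

print(f"${yearly_idle_cost(524.0):.2f}")   # HP DL585 at idle → $459.02
print(f"${yearly_idle_cost(105.6):.2f}")   # Intel SR2625 at idle → $92.51
```

Add the VTrak's ~150W to the old server's idle draw and the gap roughly doubles again.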

Keep in mind that we're only looking at a single machine here. While the DL585 was probably the worst example from our old setup, there were over a dozen other offenders in our racks (e.g. dual-socket Pentium 4 based Xeons). It's no wonder that power consumption in datacenters became a big issue very quickly.

Our old infrastructure at our old datacenter had actually reached the point where we were power limited. Although we used only a rack and a half of space, we had to borrow power from adjacent racks because our requirements were so high. The new setup not only delivers better performance, it also gives us headroom on the power consumption side.

As I mentioned in my first post, we went down this path back in 2010 - there have been further power (and performance) enhancements since then. A move to 22nm based silicon could definitely help further improve things. For some workloads, this is where the impact of microservers can really be felt. While I don't see us moving to a microserver environment for our big database servers, it's entirely possible that the smaller, front-end application servers could see a power benefit. The right microprocessor architectures aren't available yet, but as Intel moves to its new 22nm Atom silicon and as ARM moves to 20nm Cortex A57/A53 things could be different.

10 Comments

View All Comments

  • DanNeely - Monday, March 18, 2013 - link

    Have your new servers been running long enough to compare the aggregate amount of power your old and new servers consume in an average month? Your old p4 servers should have consolidated many to one into the new servers (or if still 1:1 operate much closer to idle); so I suspect the total savings are much larger than the 2.5-5:1 that benching a single new server vs a single old server shows; the latter number would be even more convincing from a "this is why we should spend money on new servers even though our six year old ones haven't died yet" perspective.
  • Hrel - Monday, March 18, 2013 - link

    "The difference at idle however is far more impressive however"

    I think you have too many "howevers" however if you just really like to say however I can however understand however silly it may sound you just want to really drive home the point:)
  • MrSpadge - Monday, March 18, 2013 - link

    You're saying you'll deploy 6 more of the Westmere boxes this year. Is this a good idea? Having similar hardware is surely nice, but there's newer stuff out by now. Depends on what your sponsor is willing to give you, though.
  • marc1000 - Monday, March 18, 2013 - link

    they deployed it in 2010...

    "As I mentioned in my first post, we went down this path back in 2010"
  • Gigaplex - Monday, March 18, 2013 - link

    They're also looking to deploy 6 more this year.

    "with another 6 to be deployed this year"
  • DanNeely - Tuesday, March 19, 2013 - link

    My assumption would be they're setting up a redundant copy of the site in a different data center. Using identical hardware in both locations would make things simpler for that purpose.
  • mayankleoboy1 - Tuesday, March 19, 2013 - link

    I would hate to compare the performance/watt difference between these two systems.
  • bwanaaa - Tuesday, March 19, 2013 - link

    why did anandtech decide to roll their own and not use amazon's infrastructure? is it cheaper to roll your own?
  • pensive69 - Wednesday, May 15, 2013 - link

    we faced much the same quandary last year. the decisions were made to roll our own, with much the same performance increases and power reductions as the Anandtech article shows.
    1 - we're sort of gear heads and wanted to make this ours
    2 - there is a loss of control when you push stuff to the cloud, even to Amazon.
    3 - there may be security and data issues which mandate keeping the systems under closer control.
    4 - it's fun to do this! this is their business model and how they make the revenue side work.
    5 - they can always push data backup and replication of software or hardware and data to Amazon if they want to do that later.
  • alacard - Saturday, March 23, 2013 - link

    Great article. Of the three (ssd/cpu/power) i thought for sure i'd likely be most impressed by your SSD architecture and the improvements it brings in IO. But after reading all three it turns out i found this to be the most fascinating and thrilling example of technological improvement.

    More performance for less energy. I can't wait to see what the future brings.
