Power Consumption

With Vishera, AMD was in a difficult position: it had to drive performance up without blowing through its 125W TDP. The Piledriver cores were designed to do exactly that, and Vishera benefitted. Remember that Piledriver was predominantly built to take this architecture into mobile. I already went through the details of what makes Piledriver different from its predecessor (Bulldozer), but as far as power consumption is concerned, the key change is that AMD moved to a different type of flip-flop in Piledriver that increased complexity on the design/timing end but decreased active power considerably. In short, it meant more work for AMD, but it resulted in a more power efficient chip without moving to a dramatically different architecture or a new process node.

In mobile, AMD used those power savings to bring Piledriver into its APUs, somewhere Bulldozer never went. We saw this with Trinity, which, surprisingly enough, managed to outperform the previous Llano generation APUs while improving battery life. On the desktop, however, AMD used the power savings offered by Piledriver to drive clock speeds up, and thus performance, without increasing power consumption. Since peak power didn't go up, overall power efficiency actually improves with Vishera over Zambezi. The chart below plots total system power consumption while running both passes of the x264 HD (5.0.1) benchmark to illustrate my point:

In the first pass Vishera actually draws a little less power, but once we get to the heavier second encode pass the two curves are mostly indistinguishable (Vishera still drops below Zambezi regularly). Vishera uses its extra frequency and IPC tweaks to complete the task sooner and drop back down to idle power levels, saving energy overall. The picture doesn't look as good, though, if we toss Ivy Bridge into the mix. Intel's 77W Core i5-3570K is targeted by AMD as the FX-8350's natural competitor. The 8350 is priced lower and actually outperforms the 3570K in this test, but it draws significantly more power:
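To make the time-versus-power tradeoff concrete, here is a minimal sketch of the arithmetic with hypothetical numbers (not our measured data): over a fixed observation window, the chip that finishes sooner spends more of that window at idle, which can leave it ahead on total energy even if its load power is similar or slightly higher.

```python
# Minimal sketch with hypothetical numbers: total energy over a fixed window
# is load power x time at load plus idle power x time at idle.

def energy_wh(load_w, load_s, idle_w, window_s):
    """Total energy in watt-hours over a fixed wall-clock window."""
    idle_s = window_s - load_s
    return (load_w * load_s + idle_w * idle_s) / 3600.0

WINDOW_S = 600  # watch both systems for the same ten minutes

# Hypothetical: system A draws a bit more at load but finishes 40s sooner.
a = energy_wh(load_w=205, load_s=300, idle_w=75, window_s=WINDOW_S)
b = energy_wh(load_w=200, load_s=340, idle_w=75, window_s=WINDOW_S)
print(f"A: {a:.1f} Wh  B: {b:.1f} Wh")  # A: 23.3 Wh  B: 24.3 Wh
```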

The platforms aren't entirely comparable, but Intel maintains a huge power advantage over AMD. With the move to 22nm, Intel dropped power consumption relative to an already more power efficient 32nm Sandy Bridge. While Intel drove power consumption lower, AMD kept it constant and drove performance higher. Even if we look at the FX-8320 and toss Sandy Bridge into the mix, the situation doesn't change dramatically:

Sandy Bridge obviously consumes more than Ivy Bridge, but the gap between Vishera and either of the two Intel platforms is significant. As I mentioned earlier, this particular test does complete quicker on Vishera, but the test would have to be much longer to really give AMD the overall efficiency advantage.

If we look at average power over the course of the two x264 encode passes, the results back up what we've seen above:

Power Consumption - Load (x264 HD 5.0.1)

As more client PCs move towards smaller form factors, power consumption may become just as important as the single threaded performance gap. For those building in large cases this shouldn't be a problem, but for small form factor systems you'll want to go Ivy Bridge.

Note that idle power consumption can be competitive, but will obviously vary depending on the motherboard used (the Crosshair V Formula is hardly the lowest power AM3+ board available):

Power Consumption - Idle

Comments

  • wwwcd - Tuesday, October 23, 2012 - link

    But the consensus is that if you're still running a Phenom II X6, and you don't need 8 threads and mostly play video games, it really is throwing money into the fire to upgrade to the FX line, Piledriver or not, unless you intend to overclock the chips to 4.8GHz+, which the Phenom IIs can't reach on air.


    Yes, we don't need 8 cores/threads for gaming today, but do you have a prognosis for the near future?
  • Kisper - Tuesday, October 23, 2012 - link

    Why would you upgrade for no reason other than speculation?
    If an advantage arises in heavily threaded games in the future, upgrade at that time. You'll get more processing power / $ spent in the future than you will at present.
  • CeriseCogburn - Tuesday, October 30, 2012 - link

    amd fanboys are pennywise and pound foolish, so buying the amd crap now, and telling everyone it has the deranged amd furuteboy advantage, works for them !

    I mean really, it sucks so freaking bad, they cannot help themselves, like a crack addict they must have and promote, so heck, the last hope of the loser is telling everyone how bright they are and how on down the line in the imaginary years ahead their current pileofcrap will "truly shine!"

    LOL - oh man, funny but so true.
  • Spunjji - Tuesday, October 23, 2012 - link

    Prognosis for the near future is that having that many threads will still not be a whole lot of use for gaming. See Amdahl's law for why.
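For readers who want the math behind that comment, here is a quick sketch of Amdahl's Law with an assumed, purely hypothetical parallel fraction; the serial portion of a game's frame caps how much extra cores can help.

```python
# Amdahl's Law: if a fraction p of the work parallelizes, speedup on n cores
# is 1 / ((1 - p) + p / n); the serial fraction limits the benefit.

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Hypothetical game where 60% of each frame's CPU work parallelizes cleanly.
for cores in (2, 4, 8, 16):
    print(f"{cores:2d} cores: {amdahl_speedup(0.6, cores):.2f}x")
# 2 cores: 1.43x, 4 cores: 1.82x, 8 cores: 2.11x, 16 cores: 2.29x --
# going from 4 to 8 cores buys only about 16% here.
```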
  • Samus - Tuesday, October 23, 2012 - link

    It's safe to say all programs/games going forward will take advantage of four cores or more. Battlefield 3, released LAST year, basically requires 4 cores in order to be GPU-limited (as in, the game is CPU-limited with just about any video card unless you have 4 cores).
  • c0d1f1ed - Tuesday, October 23, 2012 - link

    Prognosis for the near future is that having that many threads will still not be a whole lot of use for gaming. See Amdahl's law for why.

    Amdahl's Law is not a reason. There is plenty of task parallelism to exploit. The real issue is ROI, and there are two aspects to that. One is that multi-threaded development is freakishly hard. Unlike single-threaded development, you cannot know exactly what each thread is doing at any given time. You need to synchronize to make certain actions deterministic. But even then you can end up with race conditions if you're not careful. The current synchronization methods are just very primitive. Intel will fix that with Haswell. The TSX technology enables hardware lock elision and hardware transactional memory. Both will make the developer's life a lot easier, and also make synchronization more efficient.

    The second aspect isn't about the costs but about the gains. It has taken quite a while for more than two cores to become the norm, so it just wasn't worth it for developers to go through all the pain of scalable, fine-grained multi-threaded development while the average CPU was still only a dual-core. Haswell's TSX technology will arrive right as quad-core becomes mainstream. Also, Haswell will have phenomenal Hyper-Threading performance thanks to two nearly symmetrical sets of two integer execution units.

    AMD needs to implement TSX and AVX2 sooner rather than later to stay in the market.
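As a rough illustration of the synchronization pain described in that comment, here is a minimal sketch using a conventional lock; TSX's hardware lock elision and transactional memory aren't modeled here, this just shows the kind of read-modify-write race that any synchronization scheme has to prevent.

```python
# Minimal sketch of a read-modify-write race and the lock that prevents it.
# Whether the unlocked version visibly loses updates depends on the
# interpreter and on timing; the locked version is always deterministic.
import threading

counter = 0
lock = threading.Lock()

def add_unlocked(n: int) -> None:
    global counter
    for _ in range(n):
        counter += 1      # load, add, store can interleave across threads

def add_locked(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:        # serialize the read-modify-write
            counter += 1

threads = [threading.Thread(target=add_locked, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # with the lock this always prints 400000
```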
  • CeriseCogburn - Tuesday, October 30, 2012 - link

    Nice post. Appreciate it.

    And ouch for amd once again.
  • surt - Tuesday, October 23, 2012 - link

    No, gaming won't need that many threads in the near future either. Nobody is going to make a game demand more than 4 threads because that's what common gamer systems support.
  • AnnihilatorX - Wednesday, October 24, 2012 - link

    I disagree. Say we have a hypothetical game that supports 8 threads. The overhead of over-threading on a quad-core system is, frankly, not very much, while it may provide improvements for people with octo-core chips or Intel processors with Hyper-Threading.
  • AnnihilatorX - Wednesday, October 24, 2012 - link

    In fact, many games nowadays already split their workload into many threads: economic simulation, background AI planning during the player's turn, physics, audio, a graphics subthread, network management, preloading and resource management. It is just that even with that parallelism, there are bound to be single-threaded bottlenecks, so an 8-core may not benefit at all compared to 4 cores.

    So I disagree; it is not about developers not spending resources on parallelism or not supporting it. It is the nature of the workload that is the determining factor.
