Cache and Memory Performance

I mentioned earlier that cache latencies are higher in order to accommodate the larger caches (8MB L2 + 8MB L3) as well as the high frequency design. We turned to our old friend cachemem to measure these latencies in clocks:

Cache/Memory Latency Comparison (in clocks)
                                     L1    L2    L3    Main Memory
AMD FX-8150 (3.6GHz)                  4    21    65        195
AMD Phenom II X4 975 BE (3.6GHz)      3    15    59        182
AMD Phenom II X6 1100T (3.3GHz)       3    14    55        157
Intel Core i5 2500K (3.3GHz)          4    11    25        148
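The sketch below is not cachemem itself; it is just a minimal pointer-chasing microbenchmark in C showing how load-to-use latency is commonly measured: link a buffer of a given size into a random cycle of pointers, then time a dependent walk through it so each load has to wait for the previous one. The buffer sizes and iteration count are arbitrary illustrative choices.

    #define _POSIX_C_SOURCE 199309L
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define ITERS (1L << 24)

    /* Time a dependent (pointer-chasing) walk through a randomly linked
     * buffer of the given size and return the average ns per load. */
    static double walk_ns_per_load(size_t bytes)
    {
        size_t n = bytes / sizeof(void *);
        void **buf = malloc(n * sizeof(void *));
        size_t *order = malloc(n * sizeof(size_t));
        size_t i;

        /* Shuffle the visit order (Fisher-Yates) so the chain defeats
         * stride-based hardware prefetchers. */
        for (i = 0; i < n; i++) order[i] = i;
        for (i = n - 1; i > 1; i--) {
            size_t j = 1 + (size_t)rand() % i;
            size_t t = order[i]; order[i] = order[j]; order[j] = t;
        }

        /* Link each element to the next one in the shuffled order,
         * closing the chain into a single cycle. */
        for (i = 0; i + 1 < n; i++) buf[order[i]] = &buf[order[i + 1]];
        buf[order[n - 1]] = &buf[order[0]];

        struct timespec t0, t1;
        void **p = &buf[order[0]];
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long k = 0; k < ITERS; k++)
            p = (void **)*p;                /* each load depends on the last */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        if (p == NULL) puts("unreachable"); /* keep the loop from being optimized out */

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        free(buf); free(order);
        return ns / ITERS;
    }

    int main(void)
    {
        /* Illustrative working-set sizes: roughly L1-, L2-, L3- and DRAM-sized. */
        size_t kb[] = { 16, 512, 6144, 131072 };
        for (int i = 0; i < 4; i++)
            printf("%8zu KB buffer: %6.2f ns per load\n",
                   kb[i], walk_ns_per_load(kb[i] * 1024));
        return 0;
    }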

Cache latencies are up significantly across the board, which is to be expected given the increase in pipeline depth as well as cache size. But is Bulldozer able to overcome the increase through higher clocks? To find out we have to convert latency in clocks to latency in nanoseconds:
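As a rough illustration of that conversion (latency in ns = latency in clocks divided by the core clock in GHz), the short C sketch below runs the main memory row of the table above through it at each chip's base clock with turbo off, as in our test setup. The figures it prints are back-of-the-envelope estimates, not the measured data plotted below.

    #include <stdio.h>

    /* Main memory latency from the table above (in clocks), converted to ns:
     * latency_ns = cycles / core_clock_GHz. Turbo is disabled, as in the text. */
    int main(void)
    {
        struct { const char *cpu; double ghz; int mem_cycles; } rows[] = {
            { "AMD FX-8150",             3.6, 195 },
            { "AMD Phenom II X4 975 BE", 3.6, 182 },
            { "AMD Phenom II X6 1100T",  3.3, 157 },
            { "Intel Core i5 2500K",     3.3, 148 },
        };
        for (int i = 0; i < 4; i++)
            printf("%-26s %5.1f ns to main memory\n",
                   rows[i].cpu, rows[i].mem_cycles / rows[i].ghz);
        return 0;
    }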

Memory Latency

We disable turbo in order to get predictable clock speeds, which lets us accurately calculate memory latency in ns. The FX-8150 at 3.6GHz has a longer trip down memory lane than its predecessor, also at 3.6GHz. The higher latency caches play a role in this as they are necessary to help drive AMD's frequency up. What happens if we turn turbo on and peg the FX-8150 at 3.9GHz? Memory latency goes down. Bulldozer still isn't able to get to main memory as quickly as Sandy Bridge, but thanks to Turbo Core it's able to do so better than the outgoing Phenom II.

L3 Cache Latency

L3 access latency is effectively a wash compared to the Phenom II thanks to the higher clock speeds enabled by Turbo Core. Latencies haven't really improved though, and Bulldozer has a long way to go before it reaches Sandy Bridge access latencies.

Comments

  • vtohthree - Wednesday, October 12, 2011 - link

    ..but because Intel doesn't have to try hard to compete, here we were sitting as consumers, waiting for a proper response from AMD that would keep Intel on its toes and push it to unleash more potential in its chips or lower prices, but this is sort of sad...
  • ET - Wednesday, October 12, 2011 - link

    I recently bought a Phenom II X6 1090T to upgrade an X3 710. Looks like a better deal now. BD's power draw in particular is disappointing. No doubt Intel is what I'd recommend to others. I've been using AMD CPUs for several years now (an Athlon XP 2100+ was my first), and I still like AMD, but I'm disappointed by BD, especially after the long wait.
  • Loki726 - Wednesday, October 12, 2011 - link

    Thanks Anand for the compiler benchmarks.

    It seems like performance on Bulldozer is highly application dependent: better on data-parallel applications and worse (even than Phenom) on irregular and sequential ones.

    I'll probably skip this one.

    I don't mind this tradeoff, but the problem is that AMD already has a good data-parallel architecture (their GPU). In my opinion they are moving their CPU in the wrong direction. No one wants an x86 throughput processor. They shouldn't be moving the CPU towards the GPU architecture.

    AMD: Don't pay the OOO/complex decoder penalty for all of your cores. If your app is running on multiple cores, it is obviously parallel. Don't add hardware that wastes power trying to rediscover it. Then, throw all your power budget at a single or a few x86 cores.
    Beat Intel at single threaded perf and then use your GPU for gaming, and throughput processing (video, encryption, etc).

    I'm not a fan of Intel, but they got this right. If they get over their x86 obsession and get their data-parallel act together they are going to completely dominate the desktop and high end.
  • dubyadubya - Wednesday, October 12, 2011 - link

    Care to share which tests are 64 bit? Each bench program used should specify whether it's 32 or 64 bit. Why do all review sites forget to include this critical info? From the limited results I can find on the net, AMD sees a large performance increase running 64 bit code over 32 bit code, while Intel sees little if any increase.
  • HangFire - Wednesday, October 12, 2011 - link

    I've got an Asus board that promises to support BD, and I've been holding off upgrading my unlocked/overclocked 550BE for literally months, and for this? I might as well just get a Phenom II quad or 6-core.

    I've said all along that AMD needs to address their clock versus instruction efficiency to be competitive. To do that they need to redesign their cores and stop dragging along their old K8 cores.

    So here we are with Bulldozer: wider front end, TurboCore that now works, decoupled floating point, 8 (int) cores, and... still flogging the same instruction efficiency as the old K8 cores (at least the integer portion of them).

    Oh, yeah, I'm sure at the right price point some server farms will be happy with them, and priced low enough, they can hold on to the value portion of the marketplace. To do both they'll have to compete aggressively on price, and be prepared to lose money, both of which they seem to be good at.

    Like Anand said, we need to see someone actually compete with Intel, but it appears that AMD has lost the ability to invent new processor cores; it can only manipulate existing designs. Instead of upgrading the CPU, it looks like I'll go for a full Intel upgrade, unless I can find an 1100T real cheap. Hmm, that's probably a real possibility. I'm sure a lot of AMD fans are going to be trading them in now that they see what their AM3 upgrade path is(n't).
  • alpha754293 - Wednesday, October 12, 2011 - link

    I think that you should clarify the difference between what you call "server" workloads (i.e. OLTP/virtualization vs. HPC).

    I suspect that with one FP unit shared between two ALUs, HPC performance is going to suffer.

    The somewhat computationally intensive benchmarks that you've shown today already give an early indication that the Bulldozer-based Opterons are going to suffer greatly in HPC benchmarks.

    On that note: I would like to see you guys run the standard NCAC LS-DYNA benchmarks or the Fluent benchmarks if you can get them. They'll give me a really good idea as to how server processors perform in HPC applications (besides the ones that you guys have been running already). Johan has my email, so feel free to ask if you've got questions about them.
  • Ananke - Wednesday, October 12, 2011 - link

    Bulldozer reminds me of Sun's (Oracle's) Niagara architecture. It seems AMD aimed at the server and professional market, which makes business sense. The profit margins there are net 50-60% (this is AFTER marketing, support, and other overhead costs), and along with high performance workstations it is the only growing market as of now. Hence the stock market lifted AMD's stock. The gaming and enthusiast market is around 0.7% of CPU revenue - yep, that's right, I work with this kind of statistics data, guys.

    This is a promising architecture (despite the fact that it is not good for home enthusiasts). AMD should focus on providing more I/O lanes through the CPU - i.e. PCIe lanes on cheaper boards without requiring additional chips. That will allow placing more GPUs on overall cheaper infrastructure - exactly the way the HPC and server market is evolving. Then they should really get a good software team and make/support/promote an SDK for general GPU computing, in line with what NVIDIA did with CUDA.

    For anything mainstream (aka Best Buy, Walmart, etc.), Llano is good enough.

    As I said, this Bulldozer chip apparently is not good for enthusiasts, and AnandTech is an enthusiast site, but unfortunately that is just a very small niche market. People should not bash a product just because it doesn't fit their particular needs. It is OK for the broader market.
  • GatorLord - Wednesday, October 12, 2011 - link

    Thanks for the VERY interesting stats. I had a hunch it made good sense, but since I don't work with these data it was just a hunch in the end. Now it's better...maybe a hunch plus. We should feel lucky that they even pay any attention to this segment...I suspect they do b/c a lot of decision influencers are also computer racers at home.
  • alpha754293 - Thursday, October 13, 2011 - link

    One thing that I will also say/add is that while people are perhaps grossly disappointed with the results, think about it this way:

    What you've really got is a quad-core (I don't count their ALUs as cores, just FPUs) processor doing a 6-core job.

    So if they went to a 6-module chip, the benefits can actually be substantial.

    And on the 8-module server processor, it can be bigger even still.

    And yes, this is very much like the UltraSPARC T-series (which was originally designed by UltraDense as a network switching chip), but even they eventually added an FPU per core, rather than just one FPU per chip.

    The downside to the 8-module chip is that a) it's going to be massive, and b) it won't be at the clock speeds it NEEDS to be to compete.
  • Icehawk - Wednesday, October 12, 2011 - link

    I quickly ran the Rage vt_benchmark and got ~0.64 @ 1 thread and 0.25 for 2-6 threads, which lines up with your Intel #s - BUT I'm running a Q6600, 4GB, and a GTS 250... shouldn't I see much worse scores compared to an i7/current gen video card? Is this something to do with Rage's *awesome* textures, or?
