Inside Our Interlagos Test System

When a new server arrives, we cannot resist checking out the hardware, of course.

The Supermicro A+ server 1022G-URF offers 16 DIMM slots, good for a maximum of 256GB of RAM.

Supermicro's motherboard is L-shaped, allowing you to add an extra "Supermicro UIO" PCIe card on top of the "normal" horizontal PCIe 2.0 x16 slot. Two redundant 80 Plus Gold PSUs are available.

The board reports a 5.2 GT/s HT link to the chipset. The interconnect between the NUMA nodes runs at 6.4 GT/s.

We configured the C-state mode to C6, as this is required to reach the highest Turbo Core frequencies. Also note that you can cap the CPU at a lower clock speed (P-state) by setting a PowerCap.
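
The PowerCap option limits the highest P-state from the BIOS, but the same idea can be applied from the OS. Below is a minimal, illustrative sketch, assuming a Linux host that exposes the cpufreq sysfs interface and a shell with root privileges; the 1.8 GHz cap is a hypothetical value, not the setting we used for testing:

    #!/usr/bin/env python3
    # Cap the maximum CPU clock (highest P-state) via the Linux cpufreq sysfs interface.
    # Sketch only: assumes /sys/devices/system/cpu/cpu*/cpufreq exists and root privileges.
    import glob

    CAP_KHZ = 1_800_000  # hypothetical 1.8 GHz cap, analogous to a BIOS PowerCap

    for policy in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq")):
        with open(f"{policy}/cpuinfo_max_freq") as f:
            hw_max = int(f.read())        # hardware maximum in kHz
        cap = min(CAP_KHZ, hw_max)        # never request more than the hardware allows
        with open(f"{policy}/scaling_max_freq", "w") as f:
            f.write(str(cap))             # the governor will not clock above this value
        print(f"{policy}: capped at {cap / 1e6:.2f} GHz (hw max {hw_max / 1e6:.2f} GHz)")

Reverting is simply a matter of writing cpuinfo_max_freq back into scaling_max_freq; for a production server the BIOS PowerCap remains the more robust choice, since it survives OS reinstalls.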


106 Comments


  • Kevin G - Tuesday, November 15, 2011 - link

    I'm curious if CPU-Z polls the hardware for this information or if it queries a database. If it is getting the core and thread count from hardware, it may be configurable. So while the chip itself does not use Hyperthreading, it may be reporting to the OS that it does by default. This would have an impact on performance scaling as well as on power consumption as load increases.
  • MrSpadge - Tuesday, November 15, 2011 - link

    They are integer cores, which share few resources besides the FPU. On the Intel side there are two threads running concurrently (always, @Stuka87) which share a few less resources.

    Arguing which one deserves the name "core" and which one doesn't is almost a moot point. However, both designs are not that different regarding integer workloads. They're just using a different amount of shared resources.

    People should also keep in mind that a core does not necessarily equal a core. Each Bulldozer core (or half module) is actually weaker than in the Athlon 64 designs. It got some improvements but lost in some other areas. On the other hand, Intel's current integer cores are quite strong and fat - and it's much easier to share resources (between 2 hyperthreaded threads) if you've got a lot of them.

    MrS
  • leexgx - Wednesday, November 16, 2011 - link

    but on the Intel side there are only 4 real cores whether HT is off or on (on an i7 920 HT seems to give a benefit, but in the results for the second-gen 2600K, HT seems less important)

    whereas on AMD there are 4 cores, with each core having 2 FPs in them (desktop CPU); the issue is the FPs are 10-30% slower than a Phenom CPU clocked at the same speed
  • anglesmith - Tuesday, November 15, 2011 - link

    Which version of Windows 2008 R2 SP1 x64 was used: Enterprise, Datacenter, or Standard?
  • Lord 666 - Tuesday, November 15, 2011 - link

    People who are purchasing SB-E will be doing similar stuff on workstations. Where are those numbers?
  • Kevin G - Tuesday, November 15, 2011 - link

    Probably waiting in the pipeline for SB-E based Xeons. Socket LGA 2011-based Xeons are still several months away.
  • Sabresiberian - Tuesday, November 15, 2011 - link

    I'm not so sure I'd fault AMD too much, because 95% of the people using their product, in this case, won't go through the effort of upgrading their software to get a significant performance increase, at least at first. Sometimes, you have to "force" people to get out of their rut and use something that's actually better for them.

    I freely admit that I don't know much about running business apps; I build gaming computers for personal use. I can't help but think of my father, though, complaining about Vista and Win 7 and how they won't run his old, freeware apps properly. Hey, Dad, get the people that wrote those apps to upgrade them, won't you? It's not Microsoft's fault that they won't bring them up to date.

    Backwards compatibility can be a stone around the neck of progress.

    I've tended to be disappointed in AMD's recent CPU releases as well, but maybe they really do have an eye focused on the future that will bring better things for us all. If that's the case, though, they need to prove it now, and stop releasing biased press reports that don't hold up when these things are benched outside of their labs.

    ;)
  • JohanAnandtech - Tuesday, November 15, 2011 - link

    The problem is that a lot of server folks buy new servers to run their current or older software faster. It is a matter of TCO: they have invested a lot of work into getting web application x.xx to work optimally with interface y.yy and database zz.z. The vendor wants to offer a service, not the latest technology. Only if the service gets added value from the newest technology will they consider upgrading.

    And you should tell your dad to run his old software in VirtualBox :-).
  • Sabresiberian - Wednesday, November 16, 2011 - link

    Ah I hadn't thought of it in terms of services, which is obvious now that you say it. Thanks for educating me!

    ;)
  • IlllI - Tuesday, November 15, 2011 - link

    AMD was shooting to capture 25% of the market? (That was back when the first AMD64 chips came out.)
