There are two important trends in the server market: it is growing and it is evolving fast. It is growing because the number of client devices is exploding. Only one third of the world's population has access to the internet, yet the number of internet users is increasing by 8 to 12% each year.

Most of the processing now happens on the server side ("in the cloud"), so the server market is evolving fast: the more efficiently an enterprise can deliver IT services to all those smartphones, tablets and PCs, the higher its profit margins and its chances of survival.

And that is why there is so much interest in the new star: the "micro server".

The Demand for Micro-Servers

The demand for very low power servers, although high on the hype curve, is certainly not imaginary. When you run heterogeneous workloads (for example a mail server, an OLAP database and several web and file servers), some of those workloads will demand heavy processing power while others sit close to idle. In that case it is best to buy a dual "large core" CPU server (or better), install a hypervisor and carve the machine up into resource pools. As one workload demands more processing power, the hardware and hypervisor will deliver it. Single-threaded performance largely determines whether a complex SQL query is answered in a fraction of a second or in several seconds. The virtualization layer also gives you extra bonuses such as high availability and reduced unplanned downtime.

If your web-based workloads are very homogeneous and you know how much horsepower your web application needs, things are very different. The virtualization layer now just adds complexity and cost. In this case it is far more efficient to scale out than to divide a heavy server into virtual machines. Single-threaded performance still has to be good enough to answer each request quickly, but throughput demands can be handled by placing a load balancer in front of a group of low-power servers. It is much easier to scale this way.
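The scale-out idea above can be sketched in a few lines: a dispatcher that hands each incoming request to the next machine in a pool of identical low-power servers. The server names below are hypothetical; a real deployment would use a production load balancer such as HAProxy or nginx rather than this illustrative round-robin loop.

```python
# Minimal sketch of scale-out: round-robin dispatch across a pool
# of identical low-power web servers (host names are made up).
from itertools import cycle

class RoundRobinBalancer:
    """Hands each incoming request to the next server in the pool."""
    def __init__(self, servers):
        self._pool = cycle(servers)

    def next_server(self):
        return next(self._pool)

balancer = RoundRobinBalancer(["web-01", "web-02", "web-03"])
requests = [balancer.next_server() for _ in range(6)]
print(requests)
# → ['web-01', 'web-02', 'web-03', 'web-01', 'web-02', 'web-03']
```

Adding throughput then means adding another cheap node to the pool, with no hypervisor layer in between.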

The problem is that your average server is not well suited to this kind of homogeneous workload. Yes, servers have become a lot more efficient thanks to advanced power management features; the introduction of CPU C-states and more efficient PSUs are among the technologies that have saved the most power. However, even the most modern servers are still needlessly complex, with lots of disk, network and other interfaces. An unused interface wastes a few hundred milliwatts and a less efficient PHY (copper 10 Gbit Ethernet, for example) wastes a few watts, but in the end it all adds up.
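To see how those stray watts add up at scale, here is a back-of-the-envelope sketch. The fleet size, waste per server and electricity price are all illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope: small per-server waste across a whole fleet.
# All figures below are illustrative assumptions, not measured values.
servers = 1000           # servers in a hypothetical data center
waste_per_server_w = 5   # e.g. unused interfaces plus a copper PHY
hours_per_year = 24 * 365
price_per_kwh = 0.10     # assumed electricity price in dollars

wasted_kwh = servers * waste_per_server_w * hours_per_year / 1000
annual_cost = wasted_kwh * price_per_kwh
print(f"{wasted_kwh:.0f} kWh wasted per year, about ${annual_cost:.0f}")
# → 43800 kWh wasted per year, about $4380
```

And that is before counting the extra cooling load that this waste heat imposes on the facility.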

Low Power Server CPUs
80 Comments

  • name99 - Tuesday, June 18, 2013 - link

    "AMD says that using the graphics core for the heavy scalar floating point will get as easy as C++ programming and as a result, Berlin should make a few heads turn in the HPC world. It even looks like SSE-x will get less and less important over time in that market. "

    Ahh, yes, the old "New compilers will make our weird CPU architecture invisible to the programmer" gambit. How's that worked out in the past, guys?
    TriMedia? Cell? Itanium?

    But there's a sucker born every minute. Good luck to anyone foolish enough to invest today on the assumption that this magical compiler will be available tomorrow.

    [I'm not claiming this breakthrough --- compiler-transparent GPGPU --- will NEVER happen. I am claiming it ain't gonna happen during the relevant lifetime of this product.]
  • Alberto - Wednesday, June 19, 2013 - link

    This roadmap is a disaster.
    No new high-margin SKUs until 2015+ (Excavator). No medium-margin SKUs to beat Intel's single-socket offerings, since the Excavator core will be born in an unknown year.
    The low-margin segment is dominated by a NON-x86 core...vanilla from ARM, not custom à la Qualcomm.
    The process side of things is even worse. The ARM core (H2 2014, according to a more accurate AMD official slide reported by X-bit) is stuck on 28nm; a funny thing, considering that Qualcomm will be on 20nm in Q1 2014. AMD does not even have the money to work with TSMC to deliver a competitive ARM SoC!
    Seattle already looks like a failure on the specs: the process does not allow eight cores with a decent TDP to match Intel's Avoton on 22nm Tri-Gate. Recent implementations of the A15 show that the 28nm node is not the best thing around even for a decent low-power quad-core device, let alone an eight-core one.
    Anyway, Seattle is late, i.e. in the same time frame as 14nm Airmont.

    The last part of the article is stunning: "It looks like the Intel Avoton will have a very potent challenger in Q1 2014"...too bad Seattle is an H2 2014 device.

    "So there is good chance that AMD will make a big comeback in 2014 in the server market"

    What server market??? The microserver market??? With a NON-x86 core??? From an x86 company???
    As I said: there is a good chance that AMD will make a so-so new entry in 2014 into a 10%, low-margin niche of the server market, along with many other contenders, some of them with custom and optimized x86/ARM cores.
    I love your articles Johan, but this still seems very strange to me.
  • PCpowerman - Wednesday, June 19, 2013 - link

    You guys on here sound like incompetent investors on Wall Street who know nothing about technology. Let me give you investors some advice: if you do not know the field you invest in very well, then you should refrain from commenting as if you know who will be more competitive.

    When AMD goes all hUMA-aware with their new generation APUs, then SSE instructions, AVX instructions and other such floating point instructions will be utterly destroyed by a program that takes advantage of the GCN cores on these APUs. That is an UNDENIABLE FACT!! A program written to take full advantage of the best floating point instructions that x86 has to offer will not come ANYWHERE near the same program written to take advantage of the GCN cores on the next generation APUs.

    That is why the server Kaveri variant does not need to be 2P or 4P. Database programs that leverage GCN cores will outperform the floating point instructions in Intel's processors, even in 2P or 4P configs. It takes a whole lot of CPUs to equal the floating point computation power of the GCN architecture. CPUs are only great at serial code and branch prediction. We need more programmers to comment on here rather than you investor types. I feel like the only technical person on here. Geez...
  • Alberto - Wednesday, June 19, 2013 - link

    Too bad the most common server workloads are not purely floating point based, and a crude GPU with its repetitive layout cannot substitute for a CPU. The bulk of the software is optimized serial code.
    Kaveri can be nice in low-end HPC, but you forget that Intel is shipping quite powerful integrated GPUs these days, so AMD is no longer alone in this segment.

    And yes, Kaveri would need to be 2P, but it is not.
  • andrewaggb - Wednesday, June 19, 2013 - link

    There will be certain operations that can potentially be made many times faster, but not everything. Databases are interesting, but at least in my use cases they are limited more by memory capacity and disk/storage I/O than by CPU performance.

    Most of the things I code are office and management/ordering/billing systems. Nothing particularly cpu intensive (other than video compression). Just lots of business rules and interop.
  • Klimax - Wednesday, June 19, 2013 - link

    See Iris Pro and what it does to GCN...
  • Alberto - Wednesday, June 19, 2013 - link

    Moreover, Intel graphics are now fully OpenGL 4 and OpenCL 1.2 capable...
  • Calinou__ - Thursday, June 20, 2013 - link

    ...on Windows.
  • 1008anan - Wednesday, June 19, 2013 - link

    PCpowerman,

    Please comment on Intel's Broadwell integrated graphics and 14 nm tock (maybe Goldstone?) integrated graphics.

    Intel is getting closer to truly fused application processors, with many different types of cores (both fixed-function and general-purpose) working together.
  • wumpus - Friday, July 5, 2013 - link

    Wake me up when those microserver GPUs can use protected memory. As far as I know, all that memory is wide open to any process on the server. I can't imagine many uses of a microserver that could accept that (Google and other single-owner datacenters, maybe, but I tend to see these things as something you would want for VPS hosting).

    GPUs appear perfect for cryptographic uses, but they are completely unacceptable as long as they can't protect their own memory (just sift through the entire GPU memory looking for keys; you will find them quickly). I suppose there exists the odd ECC workload you might want to run on your server, but that is sufficiently exotic that it simply justifies adding a PCIe card.
