This week, Intel is hosting a datacenter event in San Francisco. The basic message is that the datacenter should become much more flexible and software defined: when a new software service is launched, storage, network and compute should all be adapted in a matter of minutes instead of weeks.

One example is networking. Configuring the network for a new service takes a lot of time and manual intervention: think of router access lists, gateway/firewall configurations and so on. It requires a lot of very specialized people: the Netfilter expert does not necessarily master the intricacies of Cisco's IOS. And even if you do master all the skills needed to administer a network, logging in to all those different devices still takes a lot of time.

Intel wants the proprietary network devices to be replaced by software running on top of its Xeons. That should allow you to administer all your network devices from one centralized controller. The same approach should be applied to storage, replacing the proprietary SANs.
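To make the idea a bit more concrete, here is a minimal, purely hypothetical sketch of what centralized, software-based control could look like. The Device class, push_acl function and the rule format are our own illustrative names, not an actual Intel or SDN vendor API; the point is simply that one controller process applies the same policy to every device instead of an admin logging in to each box by hand.

```python
# Hypothetical sketch of a centralized "software defined" network controller.
# Device, push_acl and the rule format are illustrative only; real SDN stacks
# (OpenFlow controllers, vendor APIs) differ, but the workflow is the same:
# define the policy once, then push it to every device from one place.

class Device:
    def __init__(self, name, address):
        self.name = name
        self.address = address

    def apply_rule(self, rule):
        # In a real controller this would talk to the device over an API;
        # here we just print what would be configured.
        print(f"{self.name} ({self.address}): allow {rule['proto']} "
              f"to {rule['dst']}:{rule['port']}")

def push_acl(devices, rule):
    """Apply one access rule to every managed device."""
    for device in devices:
        device.apply_rule(rule)

if __name__ == "__main__":
    fleet = [
        Device("edge-router-1", "10.0.0.1"),
        Device("firewall-1", "10.0.0.2"),
        Device("tor-switch-7", "10.0.7.1"),
    ]
    # A new service is launched: open HTTPS to its front end in one step,
    # instead of logging in to each device separately.
    push_acl(fleet, {"proto": "tcp", "dst": "10.1.2.3", "port": 443})
```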

If this "software defined datacenter" sounds very familiar to you, you have been paying attention to the professional IT market. That is also what VMWare, HP and even Cisco have been preaching.  We all know that, at this point in time, it is nothing more than a holy grail, a mysterious and hard to reach goal. Intel and others have been showing a few pieces of the puzzle, but the puzzle is not complete at all. We will get into more detail in later articles.

But there were some interesting news tidbits we would like to share with you.

First of all, there was the announcement of the new Broadwell SoC. Broadwell is the successor to Haswell, but Intel also decided to introduce a highly integrated SoC version. So we get the "brawny" Broadwell cores inside a SoC that integrates networking, storage and so on, just like the Avoton SoC. As this might be a very powerful SoC for microservers, it will be interesting to see how much room is left for the Denverton SoC - the successor of the Atom-based Avoton SoC - and for the ARM server SoCs.

Jason Waxman, General Manager of the Cloud Infrastructure Group, also showed a real Avoton SoC package.

A quick recap: the Atom Avoton is the 22 nm successor of the dual-core Atom S1260 Centerton.

The Avoton SoC has up to 8 cores and integrates SATA, Gigabit Ethernet, USB and PCIe.

Intel promises up to 4x better performance per watt, but no details were given at the conference. The interesting details that we hardware enthusiasts love can be found at the end of the PDF, though. Performance per watt was measured with the SPEC CPU2006 integer rate benchmark. The dual-core Atom S1260 (2 GHz, HT enabled) scored 18.7 (base), while the Atom C2xxx (clock speed 1.5 GHz?, Turbo disabled) on an alpha motherboard (Intel Mohon) reached 69. Both platforms included a 250 GB hard disk and a small motherboard. The Atom "Avoton" had twice as much memory (16 vs. 8 GB), but the whole platform needed 19 W while the S1260 platform needed 20 W. Doubling the amount of memory is not unfair if you have four times as many cores (and thus four times as many SPEC CPU instances). So from these numbers it is clear that Intel's Avoton is a great step forward: Intel is able to fit four times more cores in the same power envelope without (tangibly) lowering single-threaded performance, as the lower clock speed is compensated by the IPC improvements in Silvermont.
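As a quick sanity check on the "up to 4x" claim, here is the back-of-the-envelope math from those platform-level numbers (a rough sketch; it assumes the reported wall power is representative of the whole SPEC run):

```python
# Rough perf/W calculation from the numbers Intel reported.
# Assumption: the quoted platform power (including disk, memory and
# motherboard) is representative for the entire SPEC int rate run.

s1260_score, s1260_watt = 18.7, 20    # dual-core Atom S1260 platform
avoton_score, avoton_watt = 69, 19    # 8-core Atom C2xxx (Avoton) platform

s1260_perf_per_watt = s1260_score / s1260_watt     # ~0.94
avoton_perf_per_watt = avoton_score / avoton_watt  # ~3.63

print(f"S1260:  {s1260_perf_per_watt:.2f} SPEC int rate per watt")
print(f"Avoton: {avoton_perf_per_watt:.2f} SPEC int rate per watt")
print(f"Improvement: {avoton_perf_per_watt / s1260_perf_per_watt:.1f}x")
# Improvement: ~3.9x, in line with Intel's "up to 4x" claim.
```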

Intel does not stop at integrating more features inside a SoC; it also wants to make the server and rack infrastructure more efficient. Today, several vendors already offer racks with shared cooling and power. Intel is currently working on servers connected by a rack fabric with optical interconnects. And in the future we might see processors with embedded RAM but without a memory controller, placed together inside a compute node and connected to a large memory node over a very fast interconnect. The idea is to have very flexible, centralized pools of compute, memory and storage.
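In that world, provisioning a server becomes a software operation against those pools. The sketch below is purely illustrative (the ResourcePool class and the capacity numbers are ours, not anything Intel has announced): a logical node is composed by reserving capacity from shared compute, memory and storage pools instead of deploying a fixed box.

```python
# Illustrative sketch of composing a server from centralized resource pools.
# Nothing here reflects an actual Intel product or API; it only shows the
# "pools of compute, memory and storage" idea in code form.

class ResourcePool:
    def __init__(self, name, capacity, unit):
        self.name, self.capacity, self.unit = name, capacity, unit

    def reserve(self, amount):
        if amount > self.capacity:
            raise RuntimeError(f"{self.name} pool exhausted")
        self.capacity -= amount
        return amount

# Rack-level pools (capacities are made-up examples).
compute = ResourcePool("compute", 240, "cores")
memory = ResourcePool("memory", 3072, "GB")
storage = ResourcePool("storage", 200, "TB")

def compose_node(cores, ram_gb, disk_tb):
    """Carve a logical server out of the shared pools."""
    return {
        "cores": compute.reserve(cores),
        "ram_gb": memory.reserve(ram_gb),
        "disk_tb": storage.reserve(disk_tb),
    }

# Launching a new service: provision its node in software, in minutes.
web_node = compose_node(cores=8, ram_gb=16, disk_tb=1)
print(web_node, "| remaining cores:", compute.capacity)
```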

The Avoton server at the conference showed off some of these server and rack innovations. Not only did it have 30 small compute nodes...

... it also did not have any PSU of its own, drawing power from a centralized PSU instead.

In summary, it looks like the components in the rack will be very different in the near future: multi-node servers without PSUs, SANs replaced by storage pools, and proprietary network gear replaced by specialized x86 servers running networking software.

Comments
  • yun - Monday, July 22, 2013 - link

    When are they doing a home version of this? Have 70TB a month to shift on FiOS!
  • A5 - Tuesday, July 23, 2013 - link

    How/why are you moving 70TB on a home connection? That's just dumb.
  • Hrel - Tuesday, July 23, 2013 - link

    your dumb, we should all be able to move a billion PT/month if we want.
  • Hrel - Tuesday, July 23, 2013 - link

    you're*
  • p1esk - Tuesday, July 23, 2013 - link

    Networking software could always be run on x86 servers, yet there are many good reasons specialized networking hardware from Cisco, Juniper, or similar vendors is the standard in any decent datacenter.
    I don't see anything from Intel that can change that in the nearest future.
  • watersb - Tuesday, July 23, 2013 - link

    Ah. So Thunderbolt isn't just an Apple thing, then. Still a very long way to go before memory can go in a dedicated, external box. Wow.
  • iwod - Tuesday, July 23, 2013 - link

    Surely those nodes look like they have lots of wasted space. Maybe they could double the node count to 60?

    How much would these be? I would sure love to use Avoton as my NAS/home server.

    I can see there being lots of VPSes running on 8-core Avotons with 16GB RAM: 1 core, 2 GB RAM each. These nodes will even replace most of those dedicated servers where CPU performance isn't critical.

    Can't wait to see this come out.
  • Rocket321 - Wednesday, July 24, 2013 - link

    The server cards and backbone appear to be non-functional "display only" units. In a "real" product there will be transistors and other things on there.

    So it looks like a lot of wasted space, but only because these are not fully built units.
  • ShieTar - Tuesday, July 23, 2013 - link

    Am I the only one who cringes when reading "Re-Architecting"? I think Intel need to re-design their vocabulary.
  • A5 - Tuesday, July 23, 2013 - link

    Intel aren't the only ones who use that term.
