Consolidation to the Rescue

The solution is “I/O convergence” or “I/O consolidation”, the latest buzzwords for combining all the I/O streams into one cable. The result is a single I/O infrastructure (Ethernet cards, cables, and switches) that supports all I/O streams. Instead of using many different physical interfaces and cables, you consolidate all VMotion, console, VM, and storage traffic on a single card (a dual card for fail-over). This should significantly lower complexity, power consumption, and management overhead, and thus cost. If that sounds like marketing speak making it seem a lot easier than it is, you are right: it is indeed hard to accomplish.

If all that traffic runs through the same cable, the I/O traffic of a VM migration or a backup routine kicking in could choke the life out of your storage traffic. And once that happens, the whole virtualized cluster comes to a grinding halt, as storage traffic is the beginning and end of every operation in that cluster. So it is critical that you reserve some bandwidth for storage I/O, and thankfully that is pretty easy to do in modern virtualization platforms. VMware calls this traffic shaping, and it allows you to limit the peak and average bandwidth that a certain group of VMs can get. Simply add the VMs to a port group and limit the traffic of that port group. The same can be done for VMotion traffic: just “shape” the traffic of the vSwitch that is linked to the VMotion kernel port group.
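To make that concrete, here is a minimal sketch of setting such a shaping policy on a standard vSwitch port group through the vSphere API, using pyVmomi (VMware's open-source Python SDK). The vCenter and host names, the "BackupVMs" port group and the bandwidth figures are invented for the example, and error and certificate handling are left out.

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Connect to vCenter and look up the ESX(i) host (certificate handling omitted).
    si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret")
    host = si.content.searchIndex.FindByDnsName(dnsName="esx01.example.com", vmSearch=False)
    net_sys = host.configManager.networkSystem

    # Locate the existing port group whose VMs we want to throttle.
    pg = next(p for p in net_sys.networkInfo.portgroup if p.spec.name == "BackupVMs")
    spec = pg.spec

    # Shaping policy for the port group: average/peak bandwidth are in bits
    # per second, the burst size is in bytes.
    shaping = vim.host.NetworkPolicy.TrafficShapingPolicy()
    shaping.enabled = True
    shaping.averageBandwidth = 2 * 10**9      # 2 Gbit/s average
    shaping.peakBandwidth = 4 * 10**9         # 4 Gbit/s peak
    shaping.burstSize = 100 * 1024 * 1024     # 100 MB burst
    spec.policy.shapingPolicy = shaping

    # Push the updated specification back to the host.
    net_sys.UpdatePortGroup(pgName="BackupVMs", portgrp=spec)
    Disconnect(si)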


Traffic shaping is very useful for outbound traffic. Outbound traffic originates from memory space that is managed by, and under the control of, the hypervisor. It is an entirely different story when it comes to receive/inbound traffic: that kind of traffic is under the control of the NIC hardware first. If the NIC drops packets before the hypervisor even sees them, "ingress traffic shaping" won't do any good. And there is more.

Outbound traffic shaping is available in all versions of VMware vSphere; it is a feature of the standard vSwitch. Separate ingress and egress traffic shaping is only available on the newly introduced vNetwork Distributed Switch. This advanced virtual switch can only be used if you have the expensive Enterprise Plus license of VMware vSphere.
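For comparison, here is a rough sketch of what separate ingress and egress shaping on a dvPortgroup of a distributed switch could look like, again with pyVmomi. The type names assume pyVmomi's naming of the distributed-switch objects, and the dvPortgroup handle and bandwidth numbers are again made up for the example.

    from pyVmomi import vim

    def long_policy(value):
        # Distributed switch settings are wrapped in "policy" objects so they
        # can be marked as inherited or overridden per port group.
        p = vim.LongPolicy()
        p.inherited = False
        p.value = value
        return p

    def shaping_policy(avg_bps, peak_bps, burst_bytes):
        # Same units as the standard vSwitch: bits per second, burst in bytes.
        sp = vim.dvs.DistributedVirtualPort.TrafficShapingPolicy()
        sp.inherited = False
        sp.enabled = vim.BoolPolicy(inherited=False, value=True)
        sp.averageBandwidth = long_policy(avg_bps)
        sp.peakBandwidth = long_policy(peak_bps)
        sp.burstSize = long_policy(burst_bytes)
        return sp

    def shape_dvportgroup(dv_pg):
        # Apply separate ingress (receive) and egress (transmit) shaping to an
        # existing dvPortgroup and return the reconfiguration task.
        port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
        port_cfg.inShapingPolicy = shaping_policy(2 * 10**9, 4 * 10**9, 100 * 1024 * 1024)
        port_cfg.outShapingPolicy = shaping_policy(2 * 10**9, 4 * 10**9, 100 * 1024 * 1024)

        cfg = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
        cfg.configVersion = dv_pg.config.configVersion   # required for reconfiguration
        cfg.defaultPortConfig = port_cfg
        return dv_pg.ReconfigureDVPortgroup_Task(spec=cfg)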


Still, if we combine 10G Ethernet with the right (virtualization) software and configuration, we can consolidate console, storage (iSCSI, NFS), and “normal” network traffic onto two high-performance 10GbE NICs. Let us see what other options are available.
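As a closing illustration of what that consolidation could look like on a single host, the sketch below (again pyVmomi, with invented NIC names, VLAN IDs, and port group names) builds one standard vSwitch on two 10GbE uplinks and hangs the management, VMotion, IP storage, and VM traffic off it as VLAN-separated port groups. Adding the actual VMkernel interfaces for VMotion and IP storage would be a further step, omitted here.

    from pyVmomi import vim

    def build_consolidated_vswitch(host):
        net_sys = host.configManager.networkSystem

        # One vSwitch backed by the two 10GbE uplinks (failover handled by NIC teaming).
        vss_spec = vim.host.VirtualSwitch.Specification()
        vss_spec.numPorts = 256
        vss_spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic4", "vmnic5"])
        net_sys.AddVirtualSwitch(vswitchName="vSwitchConverged", spec=vss_spec)

        # Every traffic type becomes a VLAN-separated port group on that one switch.
        for name, vlan in [("Management", 10), ("VMotion", 20),
                           ("IPStorage", 30), ("VM Network", 40)]:
            pg_spec = vim.host.PortGroup.Specification()
            pg_spec.name = name
            pg_spec.vlanId = vlan
            pg_spec.vswitchName = "vSwitchConverged"
            pg_spec.policy = vim.host.NetworkPolicy()
            net_sys.AddPortGroup(portgrp=pg_spec)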

Comments

  • fr500 - Wednesday, November 24, 2010 - link

    I guess there is LACP or PAgP and some proprietary solution.

    A quick google told me it's called cross-module trunking.
  • mlambert - Wednesday, November 24, 2010 - link

    FCoE, iSCSI (*not that you would, but you could), FC, and IP all across the same link. Cisco offers VCP LACP with CNA as well. 2 links per server, 2 links per storage controller, that's not many cables.
  • mlambert - Wednesday, November 24, 2010 - link

    I meant VPC and Cisco is the only one that offers it today. I'm sure Brocade will in the near future.
  • Zok - Friday, November 26, 2010 - link

    Brocade's been doing this for a while with the Brocade 8000 (similar to the Nexus 5000), but their new VDX series takes it a step further for FCoE.
  • Havor - Wednesday, November 24, 2010 - link

    These network adapters are really nice for servers, but I don't need a managed NIC; I just really want affordable 10Gbit over UTP or STP.

    Even if it's only 30~40m / 100ft, because just like with 100Mbit networking in the old days, my HDs are more than a little outperforming my network.

    Wondering when 10Gbit will become common on mobos.
  • Krobar - Thursday, November 25, 2010 - link

    Hi Johan,

    Wanted to say nice article first of all, you pretty much make the IT/Pro section what it is.

    In the descriptions of the cards and the conclusion you didn't mention Solarflare's "Legacy" Xen netfront support. This only works for paravirtualized Linux VMs and requires a couple of extra options at kernel compile time, but it runs like a train and requires no special hardware support from the motherboard at all. None of the other brands support this.
  • marraco - Thursday, November 25, 2010 - link

    I once made a summary of the total cost of the network in the building where I work.

    The total cost of the network cables was far larger than the cost of the equipment (at least at my country's prices). Also, solving any cable-related problem was a complete hell. There were hundreds of cables, all entangled above the false ceiling.

    I would happily replace all that with two or three cables and cheap switches at the end. Selling the cables would pay for the new equipment and even turn a profit.

    Each computer has its own cable to the central switch. A crazy design.
  • mino - Thursday, November 25, 2010 - link

    If you go 10G for cable consolidation, you had better forget about cheap switches.

    The real savings are in the manpower, not the cables themselves.
  • myxiplx - Thursday, November 25, 2010 - link

    If you're using a Supermicro Twin2, why don't you use the option for the on board Mellanox ConnectX-2? Supermicro have informed me that with a firmware update these will act as 10G Ethernet cards, and Mellanox's 10G Ethernet range has full support for SR-IOV:

    Main product page:
    http://www.mellanox.com/content/pages.php?pg=produ...

    Native support in XenServer 5:
    http://www.mellanox.com/content/pages.php?pg=produ...
  • AeroWB - Thursday, November 25, 2010 - link

    Nice Article,

    It is great to see more tests around virtual environments. What surprises me a little bit is that at the start of the article you say that ESXi and Hyper-V do not support SR-IOV yet, so I was kind of expecting a test with Citrix XenServer to show the advantages of that. Unfortunately it's not there; I hope you can do that in the near future.
    I work with both VMware ESX and Citrix XenServer; we have a live setup of both. We started with ESX and later added a XenServer system, but as XenServer is getting more mature and gains more and more features, we will probably replace the ESX setup with XenServer (as it is much, much cheaper) when maintenance runs out in about one year, so I'm really interested in tests on that platform.
