Consolidating to the Rescue

The solution is "I/O convergence" or "I/O consolidation", the latest buzzwords for combining all I/O streams onto one cable, so that a single I/O infrastructure (Ethernet cards, cables, and switches) carries every stream. Instead of using many different physical interfaces and cables, you consolidate all VMotion, console, VM, and storage traffic onto a single card (or a dual card for failover). This should significantly lower complexity, power consumption, and management overhead, and thus cost. If that sounds like marketing speak that makes it all seem a lot easier than it is, you are right: it is indeed hard to accomplish.

If all that traffic runs through the same cable, the I/O traffic of a VM migration or a backup routine kicking in could choke the life out of your storage traffic. And once that happens, the whole virtualized cluster comes to a grinding halt, as storage traffic is the beginning and end of every operation in that cluster. So it is critical to reserve some bandwidth for storage I/O, and thankfully that is pretty easy to do in modern virtualization platforms. VMware calls this traffic shaping, and it allows you to limit the peak and average bandwidth that a certain group of VMs can get. Simply add the VMs to a port group and limit the traffic of that port group, as the sketch below illustrates. The same can be done for VMotion traffic: just "shape" the traffic of the vSwitch that is linked to the VMotion kernel port group.
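As an illustration, here is a minimal pyVmomi sketch of that idea: it enables egress shaping on a standard vSwitch port group. The host name, credentials, port group and vSwitch names, and the bandwidth figures are all hypothetical, and the units (bits per second for bandwidth, bytes for burst size) follow our reading of the vSphere API reference, so double-check them before using anything like this.

```python
# Hypothetical sketch using pyVmomi (the Python bindings for the vSphere API).
# All names and numbers below are made up for illustration.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only: skips certificate checks
si = SmartConnect(host="esx01.lab.local", user="root",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Pick the first ESX(i) host and grab its network configuration manager.
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view
net_sys = hosts[0].configManager.networkSystem

# Build the shaping policy: cap the average bandwidth, allow short bursts.
shaping = vim.host.NetworkTrafficShapingPolicy()
shaping.enabled = True
shaping.averageBandwidth = 4 * 10**9   # ~4 Gbit/s average (assumed bits/s)
shaping.peakBandwidth = 8 * 10**9      # ~8 Gbit/s peak (assumed bits/s)
shaping.burstSize = 100 * 1024 * 1024  # ~100 MB burst (assumed bytes)

# Re-apply the port group specification with the shaping policy attached.
spec = vim.host.PortGroup.Specification()
spec.name = "VMotion"                  # hypothetical port group name
spec.vswitchName = "vSwitch1"          # hypothetical vSwitch name
spec.vlanId = 0
spec.policy = vim.host.NetworkPolicy(shapingPolicy=shaping)
net_sys.UpdatePortGroup(pgName=spec.name, portgrp=spec)

Disconnect(si)
```

The same values can of course be set by hand in the vSphere Client under the port group's traffic shaping settings; the script is only meant to show which knobs are involved.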


Traffic shaping is very useful for outbound traffic. Outbound traffic originates from memory space that is managed by and under the control of the hypervisor. It is an entirely different story when it comes to receive/inbound traffic: that kind of traffic is handled by the NIC hardware first. If the NIC drops packets before the hypervisor even sees them, "ingress traffic shaping" won't do any good. And there is more.

Outbound traffic shaping is available in all versions of VMware vSphere, as it is a feature of the standard vSwitch. Separate ingress and egress traffic shaping (sketched below) is only available on the newly introduced vNetwork Distributed Switch, and this advanced virtual switch can only be used if you have the expensive Enterprise Plus license of VMware vSphere.
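To show the difference, here is a rough pyVmomi sketch that applies separate ingress and egress shaping to an existing distributed port group. It assumes a connected ServiceInstance ("si", as in the earlier sketch), an Enterprise Plus environment with a vNetwork Distributed Switch, and a hypothetical port group called "dv-Storage"; the data object paths follow our reading of the vSphere API and may need adjusting for your version.

```python
# Hypothetical sketch: ingress + egress shaping on a distributed port group.
# On the distributed switch every setting is an "inheritable policy",
# so plain numbers are wrapped in BoolPolicy/LongPolicy objects.
from pyVmomi import vim

def shaping_policy(avg_bps, peak_bps, burst_bytes):
    # Build one traffic shaping policy; units assumed bits/s and bytes.
    policy = vim.dvs.DistributedVirtualPort.TrafficShapingPolicy(inherited=False)
    policy.enabled = vim.BoolPolicy(inherited=False, value=True)
    policy.averageBandwidth = vim.LongPolicy(inherited=False, value=avg_bps)
    policy.peakBandwidth = vim.LongPolicy(inherited=False, value=peak_bps)
    policy.burstSize = vim.LongPolicy(inherited=False, value=burst_bytes)
    return policy

content = si.RetrieveContent()
dvpg = next(pg for pg in content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True).view
    if pg.name == "dv-Storage")                   # hypothetical port group name

port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_cfg.inShapingPolicy = shaping_policy(4 * 10**9, 8 * 10**9, 100 * 2**20)
port_cfg.outShapingPolicy = shaping_policy(4 * 10**9, 8 * 10**9, 100 * 2**20)

spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
spec.configVersion = dvpg.config.configVersion    # required when reconfiguring
spec.defaultPortConfig = port_cfg
dvpg.ReconfigureDVPortgroup_Task(spec)            # returns a vCenter task
```

The key point is simply that the distributed switch exposes an inShapingPolicy next to the outShapingPolicy, which the standard vSwitch does not.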


Still, if we combine 10G Ethernet with the right (virtualization) software and configuration, we can consolidate console, storage (iSCSI, NFS), and "normal" network traffic onto two high-performance 10GbE NICs. Let us see what other options are available.

Comments

  • Kahlow - Friday, November 26, 2010 - link

    Great article! The argument between fiber and 10GigE is interesting, but from what I have seen it is extremely application and workload dependent; you would need a 100-page review to figure out which medium is better for which workload.
    Also, in most cases your disk arrays are the real bottleneck, and maxing out your 10GigE or your FC isn't the issue.

    It is good to have a reference point, though, and to see what 10GigE translates to under testing.

    Thanks for the review,
  • JohanAnandtech - Friday, November 26, 2010 - link

    Thanks.

    I agree that it highly depends on the workload. However, there are lots and lots of smaller setups out there that are now using unnecessarily complicated and expensive setups (several physically separated GbE and FC networks). One of our objectives was to show that there is an alternative. As many readers have confirmed, a dual 10GbE setup can be a great solution if you're not running some massive databases.
  • pablo906 - Friday, November 26, 2010 - link

    It's free and you can get it up and running in no time. It's gaining a tremendous amount of users because of the recent Virtual Desktop licensing program Citrix pushed. You could double your XenApp (MetaFrame Presentation Server) license count and upgrade them to XenDesktop for a very low price, cheaper than buying additional XenApp licenses. I know of at least 10 very large organizations that are testing XenDesktop and preparing rollouts right now.

    What gives? VMware is not the only hypervisor out there.
  • wilber67 - Sunday, November 28, 2010 - link

    Am I missing something in some of the comments?
    Many are discussing FCoE, and I do not believe any of the NICs tested were CNAs, just 10GbE NICs.
    FCoE requires a CNA (Converged Network Adapter). Also, you cannot connect them to a garden-variety 10GbE switch and use FCoE. And don't forget that you cannot route FCoE.
  • gdahlm - Sunday, November 28, 2010 - link

    You can use software initiators on switches which support 802.3x flow control. Many web-managed switches do support 802.3x, as do most 10GbE adapters.

    I am unsure how that would affect performance in a virtualized, shared environment, as I believe it pauses at the port level.

    If your workload is not storage or network bound it would work, but I am betting that when you hit that hard knee in your performance curve, things get ugly pretty quickly.
  • DyCeLL - Sunday, December 5, 2010 - link

    Too bad HP Virtual Connect couldn't be tested (a blade option).
    It splits the 10Gb NICs into a maximum of 8 NICs for the blades. It can do this for Fibre Channel and Ethernet.
    Check: http://h18004.www1.hp.com/products/blades/virtualc...
  • James5mith - Friday, February 18, 2011 - link

    I still think that 40Gbps InfiniBand is the best solution. By far it seems to offer the best $/Gbps ratio of any of the platforms. Not to mention it can pass pretty much any traffic type you want.
  • saah - Thursday, March 24, 2011 - link

    I loved the article.

    I just reminded myself that VMware recently published official drivers for ESX 4: http://downloads.vmware.com/d/details/esx4x_intel_...
    The ixgbe version is 3.1.17.1.
    Since the post says that it "enables support for products based on the Intel 82598 and 82599 10 Gigabit Ethernet Controllers," I would like to see the test redone with an 82599-based card and recent drivers.
    Would it be feasible?
