"New" Virtualization vs. "Old" Virtualization

The recent buzz around the word "virtualization" may give anyone the impression that it is something relatively new. Nothing could be further from the truth: virtualization has been an integral part of server and personal computing almost from the very beginning. Using the single term "virtualization" for each of its countless branches and offshoot technologies ends up being quite confusing, so we'll try to shed some light on the differences.

How to Define Virtualization

To define it in a general sense, we could state that virtualization encompasses any technology - either software or hardware - that adds an extra layer of isolation or extra flexibility to a standard system. While it typically increases the number of steps a job takes to complete, that slowdown is repaid with increased simplicity or flexibility for the part of the system affected. In other words, the overall system complexity increases, but in return, manipulating certain subsystems becomes a lot easier. In many cases, virtualization has been implemented to make a software developer's job a lot less aggravating.

Most modern-day software has become dependent on this: it makes use of virtual memory for vastly simplified memory management, virtual disks to allow for partitioning and RAID arrays, and sometimes even pre-installed "virtual machines" (think of Java and .NET) for better software portability. In a sense, the entire point of an operating system is to give software foolproof use of the computer's hardware, taking control of almost every bit of communication with the actual machinery in an attempt to reduce complexity and increase stability for the software itself.
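To make the "software depends on virtualization layers" point concrete, here is a minimal sketch using Python's standard `mmap` module: memory-mapping a file lets a program address its contents like an ordinary in-memory buffer, with the operating system's virtual-memory machinery paging the bytes in behind the scenes. The file name is a temporary one created for the example.

```python
import mmap
import os
import tempfile

# Create a small file to map; the OS's virtual-memory system backs
# the mapping, so the bytes appear in our address space on demand.
fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, "wb") as f:
        f.write(b"virtualized!")

    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            # The file's contents are addressable like a byte buffer,
            # even though we never issued an explicit read() for them.
            print(mm[:12])  # b'virtualized!'
finally:
    os.remove(path)
```

The program never manages disk blocks or physical pages itself; the virtual-memory layer hides all of that, which is exactly the kind of simplification described above.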

So if this is the general gist behind virtualization (and we can tell you it has been around for almost 50 years), what is this recent surge in popularity all about?


16 Comments


  • steveyballme - Sunday, November 16, 2008 - link

    Two licenses, one machine!
    I could cry!


    http://fakesteveballmer.blogspot.com
  • Ralphik - Wednesday, October 29, 2008 - link

    Hello everybody,

    I have installed a virtual Win98 on my computer, which is running WinXP. The problem I have is that there are no GeForce7 and higher drivers available for such old Windows platforms - has anyone got a tip or a cracked driver that I could use? It now has a completely useless S3 Virge driver installed . . .
  • Jovec - Friday, October 31, 2008 - link

    Unless I'm missing something (new), the Win98 running in your VM will not see your GeForce video card, or indeed any of the actual hardware in your computer. It just sees the virtual hardware provided by your VM software - typically an emulated basic VGA video adapter and AC'97 sound. VM software emulates an entire virtual computer on your host PC, and does not use the physical hardware natively.

    In short, you are not going to get GeForce-level graphics power in your Win98 VM.
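The comment above captures why the guest can't see the host's GPU: the guest's hardware probes are intercepted and answered from an emulated device table. The toy sketch below illustrates the idea; the class and device names are purely illustrative and do not correspond to any real VMM's API.

```python
# Toy model of device emulation in a hypervisor. A guest driver's
# hardware query is intercepted by the VMM and answered from a table
# of *emulated* devices, so the guest never sees the host's real GPU.

HOST_GPU = "NVIDIA GeForce 7900"  # what the host OS sees (illustrative)

class VirtualMachine:
    # The VMM presents a fixed set of emulated devices to the guest.
    EMULATED_DEVICES = {
        "display": "Generic VGA adapter",
        "audio": "AC'97 codec",
    }

    def guest_probe(self, device_class):
        # The guest's probe traps into the VMM, which answers from the
        # emulated device table - not from the host's hardware.
        return self.EMULATED_DEVICES.get(device_class, "not present")

vm = VirtualMachine()
print(vm.guest_probe("display"))  # Generic VGA adapter, not the GeForce
```

However powerful the host's video card is, the guest only ever sees the generic emulated adapter - which is exactly why there is no way to load a GeForce driver inside the Win98 VM.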
  • stmok - Wednesday, October 29, 2008 - link

    "Could it be that these two pieces of software are using related techniques for their 3D acceleration? Stay tuned, as we will definitely be looking into this in further research!"

    => Parallels took Wine's 3D acceleration component. More specifically, they took the translator that allows OpenGL calls to be translated to DirectX and vice versa.

    There was a minor controversy about this, because Parallels was initially not compliant with Wine's open source license. But that was settled when Parallels complied with the LGPL two weeks later.
    => http://parallelsvirtualization.blogspot.com/2007/0...
    => http://en.wikipedia.org/wiki/Parallels_Desktop_for...

    What annoys me is that they never bothered to add 3D acceleration support to the Linux version of Parallels. The only option there is the most recent release of VMware Workstation. (Version 6.5 has technology implemented from their VMware Fusion product.)
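The translator the comment above refers to maps calls between two graphics APIs. The sketch below is a deliberately tiny illustration of that idea - a lookup from OpenGL entry points to their rough Direct3D 9 counterparts. The two-entry table is purely illustrative; a real translator such as Wine's also converts state, coordinate conventions, and shader code.

```python
# Toy illustration of API-call translation between OpenGL and Direct3D.
# The mapping covers just two calls and ignores arguments entirely.

GL_TO_D3D = {
    "glClear": "IDirect3DDevice9::Clear",
    "glDrawArrays": "IDirect3DDevice9::DrawPrimitive",
}

def translate(call_name):
    """Return the rough Direct3D 9 counterpart of an OpenGL entry point."""
    try:
        return GL_TO_D3D[call_name]
    except KeyError:
        raise NotImplementedError(f"no translation for {call_name}")

print(translate("glClear"))  # IDirect3DDevice9::Clear
```

Running such a translation layer per call is what lets Windows Direct3D applications render through a host's OpenGL stack (or vice versa), at the cost of the translation overhead.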
  • duploxxx - Tuesday, October 28, 2008 - link

    btw is this a teaser for the long announced virtualization performance review?
  • steveyballme - Tuesday, October 28, 2008 - link

    We get to sell multiple licenses of the same software on the same hardware!
    Life is beautiful!

    http://fakesteveballmer.blogspot.com
  • Vidmo - Tuesday, October 28, 2008 - link

    I was hoping this article would get into some of the latest hardware technologies designed for better virtualization. It's still quite confusing trying to determine which hardware platforms and CPUs support VT-d for example.

    The article is a nice software overview, but seems incomplete without getting into the hardware side of the issues.
  • solusstultus - Tuesday, October 28, 2008 - link

    Hardware support for VT is not used by most (any?) commercial hypervisors (VMware doesn't use it), and has been shown to actually have lower performance in many cases than binary translation:

    http://www.vmware.com/pdf/asplos235_adams.pdf
  • duploxxx - Tuesday, October 28, 2008 - link

    Unfortunately, your link is two years old.

    The current recommendation for VMware ESX is to use the hardware virtualization layer whenever you run a 64-bit OS, and to use second-generation hardware virtualization - aka NPT from AMD (EPT when Intel launches Nehalem next year) - whenever it is available.
  • solusstultus - Wednesday, October 29, 2008 - link

    While I don't claim to be an expert, that's the most recent study that I have seen that actually lists performance results from both techniques.

    If you have seen more recent results, do you have a link? I would be interested in reading it.

    From what I have seen, NPT addresses the overhead of switching from the guest to the VMM during page table updates (which can occur frequently when using small pages). However, the other main source of overhead cited in the paper I referenced was traps into the VMM on system calls, which binary translation can replace with less expensive direct links to VMM routines in the translated code. So unless the newer hardware-assisted virtualization implementations address this (they might, I haven't looked at the documentation), it seems translation could still be faster for some apps, and an ideal implementation would make use of both in different situations.
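The trade-off discussed in this thread - expensive traps into the VMM versus cheap direct calls inserted by a binary translator - can be caricatured in a few lines. The instruction names and cycle costs below are made up for illustration; the point is only the shape of the comparison, not real numbers.

```python
# Toy comparison of trap-and-emulate vs. binary translation.
# Costs are illustrative, not measured.

TRAP_COST = 1000  # cycles: guest -> VMM -> guest world switch
CALL_COST = 50    # cycles: direct call into a translated VMM routine

PRIVILEGED = {"cli", "sti", "mov_to_cr3"}

def run_trap_and_emulate(instructions):
    # Every privileged instruction faults and traps into the VMM.
    return sum(TRAP_COST if i in PRIVILEGED else 1 for i in instructions)

def run_binary_translated(instructions):
    # The translator has replaced each privileged instruction with a
    # cheap direct call to a VMM routine, avoiding the trap entirely.
    return sum(CALL_COST if i in PRIVILEGED else 1 for i in instructions)

code = ["mov", "cli", "add", "mov_to_cr3", "sti", "ret"]
print(run_trap_and_emulate(code))   # 3003
print(run_binary_translated(code))  # 153
```

This is why a hybrid approach looks attractive: hardware assists like NPT remove the page-table exits, while translation can still win on trap-heavy paths such as frequent system calls.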
