Uses of Container-Based OS Virtualization

When would we generally suggest containers as a viable solution? Looking at the situations in which the technology is most often applied, the importance of density keeps coming up. Density can be defined as the number of containers we are able to place on a single hardware platform. Many hosting companies use containers to provide their customers with a heavily customizable environment in which to run their web servers, and hardware platforms running over 100 such environments are no exception.

We need to keep in mind that the total memory and CPU footprint of a container is little more than the sum of its processes' footprints, so density scales inversely with the combined "weight" of the containers. In plain terms: the lighter each container's workload, the more of them we can place on a single machine. When high density is a priority, containers offer a solid alternative to a hypervisor-based solution, which despite its flexibility is still rather limited in terms of scalability.
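To make that scaling concrete, below is a toy estimate of how many containers fit into a given amount of host RAM. The per-container footprints and the host overhead are made-up illustrative numbers, not measurements from OpenVZ or Virtuozzo.

```python
# Toy density estimate: how many containers fit in a given amount of host RAM?
# All footprints below are hypothetical round numbers, purely for illustration.
def max_containers(host_ram_mb, per_container_mb, host_overhead_mb=256):
    """Rough ceiling on container count for a given per-container footprint."""
    return (host_ram_mb - host_overhead_mb) // per_container_mb

# The lighter the workload per container, the higher the achievable density.
for footprint_mb in (16, 64, 256):
    count = max_containers(4096, footprint_mb)
    print(f"{footprint_mb:3d}MB/container -> {count} containers on a 4GB host")
```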

Compare the memory footprint of a simple process group to that of a full-fledged OS running that same process group. Considering that Windows Server 2003 easily fills up about 350MB of RAM by itself on a fairly standard configuration, the graph below would barely allow for two such systems on the same amount of memory, without even adding an actual workload!


A graph from the OpenVZ wiki, depicting the number of containers that, each running Apache, init, syslogd, crond and sshd, still give reasonable response times on a 768MB RAM box: they put the number at 120, with RAM being the limiting factor beyond that point.

Admittedly, the comparison is not entirely fair: the OS used in the graph is undoubtedly a very lightweight Linux-based system. Nonetheless, it gives a sense of scale. Even at the absolute minimum configuration allowed for Windows Server 2003 (128MB of RAM), 120 hypervisor-based systems would together consume about 15GB of RAM; using containers here would theoretically cut the OS footprint by some 14.8GB.

Containers can also be very interesting when performance, rather than sheer density, is the priority. Naturally, as the workload per container increases, the maximum number of containers on a single hardware system decreases, but even then they provide a solid solution. Since all containers' processes have direct access to the kernel, the native I/O subsystems remain intact, allowing the full power of the hardware platform to be used. This gives the technology another edge over its "rivals". Another interesting detail is that in Virtuozzo (OpenVZ's "bigger brother") applications can share their libraries across several containers: the application is loaded into memory only once, while different users can still use it concurrently, each requiring only a small amount of extra memory for their own unique data.

Another situation where containers come in handy is when many user space environments need to be rolled out in a very short time span. We are thinking mostly of classrooms and hands-on sessions, where quick and simple deployment is of the essence. Demos of certain pieces of software often require careful planning and a controlled environment, so applications like Virtual PC or VMware Server/Workstation are used with "temporary" full-sized virtual machines to make sure everybody is able to run them correctly. Since creating a container takes about as long as decompressing its template environment and starting its processes, a single server can easily let participants or students log in over SSH or even VNC, giving each user an easily administered environment while still allowing them to experiment with high-level permissions.
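As a rough illustration of how such a mass roll-out could be scripted on an OpenVZ host, here is a minimal sketch. The template name, container ID range and IP addresses are hypothetical, and the vzctl invocations follow common OpenVZ usage but should be verified against the version actually installed.

```python
# Hypothetical classroom roll-out on an OpenVZ host: create N containers from
# one cached OS template, assign each an IP and start it. All names, IDs and
# IPs below are made up for illustration.
import subprocess

TEMPLATE = "centos-5-x86"     # assumed OS template cache present on the host
FIRST_ID, COUNT = 101, 20     # create containers 101 through 120

for i in range(COUNT):
    ctid = str(FIRST_ID + i)
    ip = f"192.168.1.{100 + i}"
    subprocess.run(["vzctl", "create", ctid, "--ostemplate", TEMPLATE], check=True)
    subprocess.run(["vzctl", "set", ctid, "--ipadd", ip, "--save"], check=True)
    subprocess.run(["vzctl", "start", ctid], check=True)
```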

In this case, it is also beneficial that everything happens on a single system that is completely accessible to the root user of the "host" container. In this sense, containers reduce both server sprawl and OS sprawl, allowing for high manageability. For example, having only a single kernel to maintain cuts down the time spent patching every individual virtual machine separately, and a plain shell script can take care of updating each container automatically. "Broken" containers are easily removed and recreated on the fly, as the use of templates removes the need to go through a fresh OS installation every time a new environment is needed.
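As a sketch of what that automated patching could look like (written in Python rather than plain shell, and assuming OpenVZ's vzlist/vzctl tools with yum-based guests; adjust to taste):

```python
# Hypothetical "patch every container" pass on an OpenVZ host.
# Assumes vzlist/vzctl are installed and the guests use yum; adjust as needed.
import subprocess

# List the IDs of all running containers (-H: no header, -o ctid: IDs only).
result = subprocess.run(["vzlist", "-H", "-o", "ctid"],
                        capture_output=True, text=True, check=True)

for ctid in result.stdout.split():
    # Run the package update inside each container directly from the host.
    subprocess.run(["vzctl", "exec", ctid, "yum -y update"], check=True)
```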

Thinking further along this line brings us to the next point of interest: using containers as a base for VDI. Combining all of the above factors, it is easy to see the technology's merits as a means of bundling desktop environments onto a central server. Since Virtuozzo is able to provide high container density on Windows as well, it is possible to give a very large number of users a full-fledged environment to work with while keeping everything perfectly manageable. At a time when desktop virtualization tempts many a system administrator but still seems out of reach due to its impractical resource demands, containers seem to provide an ideal solution.
