We would like to answer this question in the following blog post. To illustrate the difference between KVM and container virtualisation, we’ll start by explaining the topic of virtualisation in general.


Why is virtualisation used in IT?

These days, the virtualisation of servers, network components, storage solutions and applications is unavoidable. Virtualisation aims to make optimal use of high-performance server hardware by consolidating workloads onto fewer physical machines, which in turn reduces energy, data centre and other costs.

However, as virtualisation adds another layer to the technical landscape, additional specialists are required to operate it. While the introduction of virtualisation does reduce operational costs, it also increases complexity at the same time.


When did virtualisation technology start to become popular?

Around 10 years ago, the first companies began to run production systems on virtual hosts. Some large IT corporations had already been using virtualisation solutions before then, but those solutions only worked with special server types, expensive operating systems and licence packages.

With the integration of virtualisation instruction sets (Intel VT-x, AMD-V, etc.) into server and workstation processors, as well as the newly-acquired stability of hypervisors, virtualisation was finally able to prevail in IT.

What is a hypervisor?

A hypervisor is the software layer that enables communication between the virtual machines and the hardware resources. In server virtualisation, hypervisors are divided into two types.

1. Type 1 hypervisor

This hypervisor type is mainly known through the products VMware ESX and Xen. With a type 1 hypervisor, no full-fledged host operating system is required; the virtual machines need the appropriate ESX or Xen drivers in order to run. Type 1 hypervisors are also commonly referred to as “bare metal” hypervisors.

2. Type 2 hypervisor

A type 2 hypervisor is installed on top of an operating system, e.g. Ubuntu, Debian or another Linux distribution. In the Unix/Linux environment, KVM (Kernel-based Virtual Machine) is the product primarily used. There are other products, such as Linux-VServer or LXC, but the difference from KVM must be noted: LXC provides only an isolated virtual environment (a container), not a full virtual machine.



How can the usage of containers support you and what possibilities does this technology offer you for process optimization? Find the answers in our new whitepaper.

Download Whitepaper now


What technology does nine use and how does the provisioning work?

Our Managed VServer and Root VServer products run on a type 2 hypervisor model. For our virtual server product palette, we use the following technologies:

  • Host operating system: Ubuntu 16.04 LTS (Xenial)
  • Hypervisor technology: KVM

It’s important to mention that this is full virtualisation: the kernel of the host operating system and the kernels of the virtual machines are separate, and each VM is fully isolated from the other VMs.

How does the provisioning of the VMs work?

In nine’s internal virtualisation management tool, the maximum resources (number of processors and amount of memory and storage) for the different products are stored as a template. Depending on the product, the management tool loads the necessary template and creates the new VM with the corresponding resource parameters.
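As a hedged illustration of this template mechanism (the product names, resource values and the `virt-install` step are assumptions for the sake of the example, not nine’s actual tooling), the selection logic might look like:

```shell
# Hypothetical template lookup: the product determines the resource limits.
PRODUCT="managed-vserver-m"
case "$PRODUCT" in
  managed-vserver-s) VCPUS=2; MEM_MIB=4096  ;;  #  4 GiB RAM
  managed-vserver-m) VCPUS=6; MEM_MIB=12288 ;;  # 12 GiB RAM
  *) echo "unknown product: $PRODUCT" >&2; exit 1 ;;
esac
echo "creating VM with $VCPUS vCPUs and $MEM_MIB MiB RAM"
# prints: creating VM with 6 vCPUs and 12288 MiB RAM

# The VM itself could then be created with a standard tool, e.g.:
#   virt-install --name new-vm --vcpus "$VCPUS" --memory "$MEM_MIB" ...
```

The point of the template is that a product upgrade only means loading a different set of parameters, not building a new VM definition by hand.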

But what happens when a VM is upgraded or downgraded? In that case, nine changes the configuration file of the virtual machine. The hypervisor allocates the new resource values only after a clean stop/start of the virtual machine.

<memory unit='KiB'>12582912</memory>               <!-- memory reservation for the VM: 12 GiB -->
<currentMemory unit='KiB'>12582912</currentMemory>
<vcpu placement='static'>6</vcpu>                  <!-- number of virtual processors -->
  <type arch='x86_64' machine='pc-i440fx-trusty'>hvm</type>
  <boot dev='network'/>
  <boot dev='hd'/>
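Note that libvirt expects memory sizes in KiB, which is where the 12582912 comes from. A minimal sketch of the conversion and of the resize workflow (the domain name `myvm` is an assumption):

```shell
# libvirt domain XML expects memory in KiB: 12 GiB = 12 * 1024 * 1024 KiB.
MEM_GIB=12
MEM_KIB=$((MEM_GIB * 1024 * 1024))
echo "$MEM_KIB"
# prints: 12582912

# Hypothetical resize workflow (domain name is an assumption):
#   virsh edit myvm        # adjust <memory>, <currentMemory> and <vcpu>
#   virsh shutdown myvm    # clean stop ...
#   virsh start myvm       # ... then start, so the new values take effect
```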


KVM virtualisation (full virtualisation) vs. container virtualisation

Following the general explanations we have just provided, we are now going to turn to the actual subject of our blog post.

What is a container?

A container is a separate environment that encapsulates a piece of software or an application. The container is based on an image which includes the already installed and configured software. However, a kernel and drivers are missing from this image, as these components are provided by the host kernel. Images are kept relatively lean, as they contain only the necessary binaries, libraries, configuration and application data. The term application virtualisation is also associated with container virtualisation.
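As a minimal sketch of such an image definition (the base image, package and file names are assumptions for illustration), a Dockerfile brings together exactly these pieces: binaries and libraries from a lean base image, configuration, and the application itself — but no kernel and no drivers:

```dockerfile
# Hypothetical image: no kernel and no drivers (those come from the host),
# just the application and its dependencies on top of a lean base image.
FROM debian:stable-slim
RUN apt-get update && apt-get install -y --no-install-recommends python3 \
    && rm -rf /var/lib/apt/lists/*
# The application itself and its configuration:
COPY app.py /opt/app/app.py
COPY app.conf /etc/app/app.conf
CMD ["python3", "/opt/app/app.py"]
```

Such an image could then be built and started with `docker build -t myapp .` and `docker run myapp` (the image name is likewise an assumption).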

Thanks to the clearly-defined container software layer, porting an image from a development system to a production system is simpler in several respects and involves fewer dependencies than with VMs. In a container management solution such as OpenShift, Kubernetes or Docker Swarm, the customer takes on the creation of the container image and the corresponding application landscape and initiates the container creation process. This way, the customer can create the desired application environment and infrastructure from the previously constructed images.

Difference in architecture between the two environments

In the following image you can see the differences between the two technologies.

[Figure: architecture of VMs vs. containers]


Areas of application for the different technologies

KVM:
  • Systems that require dedicated resources
  • Systems that require their own running kernel

Container:
  • Application environments that often need to be scaled
  • Local application development
  • Microservice architectures

Advantages and disadvantages of the virtualisation technologies


KVM (full virtualisation)

Advantages:
  • Better security thanks to the isolation of system resources
  • Clear allocation of resources

Disadvantages:
  • The whole virtual entity has to be administered
  • Longer set-up time for a new entity, even when automation tools are used
  • Higher operational costs (virtual machine, system administration and application development)



Container

Advantages:
  • Faster scalability of the environment
  • Quick creation and distribution of the application
  • Low storage requirement for images
  • Portability from the development machine (desktop & server) to testing or production
  • Lower operational costs (container procurement & container image development)

Disadvantages:
  • Container isolation is not at the same level as VM isolation
  • Increased complexity
  • Handling of persistent data and logs needs to be thought through (for data safety, a storage pool where the container has read and write permissions is required)



Choosing the most suitable virtualisation solution depends on security, preparation time and resources.

For applications or systems that have to fulfil high security standards, we would recommend a VM, as the extra virtualisation layer increases security.

Porting an application from the development to the production environment, on the other hand, can be done much more quickly and cheaply with a container solution. This virtualisation option is also well suited to applications that must be deployed quickly and scalably and therefore cannot depend on dedicated system resources; there is no need to provision and configure VMs.

From a technical viewpoint, a mix of container and VM solutions is the optimum set-up, although this is more expensive and complex than a purely container version (set-up, monitoring, and lifecycle management). The replication of the final applications and environments is most stable and secure with a mixed landscape. Databases and particularly sensitive applications or systems should be operated on dedicated systems or VMs.
