The impact of virtualization on modern computing is profound. It enables businesses to dramatically improve the cost-effectiveness and flexibility of their IT resources.
Why containers rather than VMs?
But virtualization comes at a cost, particularly in the hypervisor and the guest operating systems, each of which requires memory and often expensive licenses. This overhead increases the size of each virtual machine and limits the number of VMs a server can host. The resurgence of containerization aims to virtualize applications without this burden on the system.
The idea is not new: for years, projects such as OpenVZ, FreeBSD jails, Solaris Containers, and Linux-VServer have supported this approach as a core element of cloud scalability. But it is the recent arrival of open platforms such as Docker that has put containerization, and its potential for scalable distributed applications, back in the limelight.
The Linux 2.6.24 kernel integrated basic containerization support to provide OS-level virtualization, allowing a single host to run multiple isolated Linux instances, known as Linux containers, or LXC (LinuX Containers).
LXC is built on the notion of Linux control groups, or cgroups. Each control group isolates and limits the resources available to its applications (including processor, memory, and I/O access) without the need for full virtual machines.
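To see cgroups in action, one need not look further than the `/proc` filesystem: every Linux process records its control-group membership there. The sketch below (Linux-only; the exact controllers listed depend on the kernel and on whether cgroup v1 or the unified v2 hierarchy is in use) simply parses that record.

```python
# Sketch: inspect the control-group membership of the current process
# via /proc/self/cgroup (Linux-only; returns [] where /proc is absent).
from pathlib import Path

def current_cgroups():
    """Parse /proc/self/cgroup into (hierarchy_id, controllers, path) tuples."""
    proc_file = Path("/proc/self/cgroup")
    if not proc_file.exists():
        return []
    entries = []
    for line in proc_file.read_text().splitlines():
        # Each line has the form "<id>:<controllers>:<path>",
        # e.g. "0::/user.slice" on a cgroup v2 system.
        hierarchy_id, controllers, path = line.split(":", 2)
        entries.append((hierarchy_id, controllers, path))
    return entries

for hid, controllers, path in current_cgroups():
    print(f"hierarchy {hid}: controllers={controllers or '(v2 unified)'} path={path}")
```

Inside a container, the paths shown here reflect the slice of the host's resources the container has been granted.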
Linux containers also provide namespace isolation. Resources such as file systems, network IDs, and user IDs, as well as any other element typically associated with an operating system, therefore appear "unique" from the point of view of each container.
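This per-container uniqueness is directly observable: the kernel exposes each process's namespaces as symlinks under `/proc/<pid>/ns`. Two processes in the same container share these IDs; processes in different containers see different ones. A minimal sketch (Linux-only):

```python
# Sketch: list the namespace IDs of the current process.  On non-Linux
# systems the /proc links are absent and an empty dict is returned.
import os

NS_TYPES = ("uts", "pid", "net", "mnt", "ipc", "user")

def namespace_ids(pid="self"):
    """Map each namespace type to its kernel identifier, e.g. 'uts:[4026531838]'."""
    ids = {}
    for ns in NS_TYPES:
        link = f"/proc/{pid}/ns/{ns}"
        if os.path.exists(link):
            ids[ns] = os.readlink(link)
    return ids

print(namespace_ids())
```

Comparing this output for a process on the host and one inside a container shows exactly which namespaces the container runtime has unshared.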
Docker: LXC, augmented
Application containerization platforms such as Docker do not replace Linux containers. The idea is to use LXC as a base and add higher-level capabilities on top.
For example, a platform like Docker allows portability between machines (which also run Docker), letting an application and its components exist as a single mobile object. LXC alone allows mobility, but the build is tied to the system configuration, so moving it to another machine can introduce differences that prevent the application's container from running identically, or from running at all.
Docker also offers automated build tools. These help developers move more easily from source code to containerized applications, and they integrate with companion tools such as Chef, Maven, and Puppet to automate or streamline the build process.
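The centerpiece of Docker's build tooling is the Dockerfile, a short declarative recipe from which the platform produces an image. A minimal, hypothetical example (the base image, package, and file names below are illustrative assumptions, not taken from the article):

```dockerfile
# Hypothetical Dockerfile for illustration only.
FROM debian:stable-slim
# Install the runtime the application needs, then trim the package cache
# to keep the image layer small.
RUN apt-get update \
    && apt-get install -y --no-install-recommends python3 \
    && rm -rf /var/lib/apt/lists/*
# Add the application's own files as a new layer on top of the base.
COPY app.py /opt/app/app.py
CMD ["python3", "/opt/app/app.py"]
```

Each instruction produces a cached, reusable layer, which is what makes repeated builds fast and images easy to share.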
Versioning allows developers to track container version changes, understand differences, and even revert to earlier versions as needed. And because any container image can serve as the base for another, components shared via a public (or private) registry are all the easier to reuse.
The goal of platforms such as Docker is therefore to make it fast to package applications into containers, and to maintain and update those containers, not to create the containers themselves; that role belongs to the Linux kernel.