Last week, in Part 1: Linux Virtualization, I gave you an overview of the spectrum of Linux virtualization technologies and explained the simplest and highest-performance end of it. The problem with doing everything with chroot is that it takes a lot of work to set things up, and there is only limited separation between the virtualized environment and the host OS.
Containers are easy to use and they provide more isolation. Docker makes container management even more efficient, both in the work you must do and in the storage space your virtualized systems occupy.
The users and groups within a container are isolated to that environment. You can define users and groups within the container that don’t exist out in the host environment, and you can (and often should) assign a unique root password within the container.
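For example, here is a minimal sketch of that, assuming plain LXC and a running container named web1 (the container name and the user are assumptions for illustration):

lxc-attach -n web1    # get a shell inside the running container
passwd root           # now inside web1: set a root password unique to the container
useradd -m alice      # this account exists only in web1, not out on the host
passwd alice
exit                  # back out to the host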
Unlike chroot, containers isolate processes, at least in one direction. Within a container you see a tree of processes starting with init running as PID #1. But all you see are the processes running within the container; you can’t see or send signals to processes out in the host. Going the opposite direction, the host OS can see and control the processes running within the container. The kernel remaps container process IDs, so from the host you see two init processes running. The one with PID 1 is the real init started by the kernel, while the one with some high-numbered PID is the one within the container.
Pro tip: Use either of these commands to examine the tree of processes with init highlighted. The pattern matches every line (the ^ alternative matches at the start of any line), so nothing is filtered out, while egrep’s color highlighting marks each occurrence of init:

ps fax | egrep --color '^|init'
pstree -p | egrep --color '^|init'
Let’s consider human performance first: instead of going through a complete Linux installation, you can simply create a container from a template with a single command. You specify the distribution and release, give the disk about a minute to copy data, and it’s ready to start!
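Here is a sketch of what that one command might look like with LXC. The template names and options vary between LXC versions and distributions, so treat these as assumptions:

lxc-create -n web1 -t centos                                  # older style: one template per distribution
lxc-create -n web1 -t download -- -d centos -r 7 -a amd64     # newer style: pick distribution, release, architecture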
Containers run amazingly fast. Until you understand what’s going on, you are convinced that there is no way it’s really working. But it is!
Containers start and stop so quickly because they don’t have to start and stop an operating system. Containers share the running kernel. All that has to happen is for init to start within the container and launch whatever you want it to do. Maybe start a web server, plus login and getty so you could connect a terminal to its console and log in, and that’s it.
Here’s a test we have you run in Learning Tree’s new course on Linux virtualization: How long does it take to create and start a Linux container with a full CentOS installation, print the classic “hello world” message, tear down the running container, and exit?
You can run that test with one simple command, and it finishes in less than 2 seconds!
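A sketch of that one command using Docker, assuming the centos image has already been downloaded (the very first run pays a one-time download cost):

time docker run --rm centos /bin/echo 'hello world'   # --rm tears the container down on exit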
Consider this sequence of steps:

1. Build a container named web-server with your web server software installed and configured.
2. Clone web-server as site-1 and add that site’s content.
3. Repeat for the remaining sites, site-2 through site-12.

That to-do list seems practical, but it’s going to take a while and occupy a lot of storage: web-server plus site-1 through site-12, so 13 complete Linux system images, right?
Not with Docker!
When you commit images into Docker, they are stored as recipes for building them out of pre-existing pieces. While web-server is stored as a complete container image, the site-specific servers are stored as just that image plus the web site data.
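Here is a minimal sketch of that layering, assuming a base image named web-server already exists and each site’s files live under a directory like site-1/html (all of those names are assumptions for illustration):

# Record site-1 as instructions layered on the web-server image,
# rather than as a second complete system image:
cat > site-1/Dockerfile <<'EOF'
FROM web-server
COPY html/ /var/www/html/
EOF
docker build -t site-1 site-1/
# Repeat for site-2 through site-12; "docker images" will show each one
# reusing the web-server layers instead of duplicating them.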
Simple one-line commands create, start, stop, and connect to containers, whether you manage them directly or through Docker. Plus, as I will explain later, there is a nice graphical tool to monitor and manage containers plus fully virtualized systems running on the local host OS or on remote platforms.
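To give a feel for how simple, here is one possible set of those one-liners on the Docker side (the container and image names are assumptions):

docker run -dit --name c1 centos /bin/bash   # create and start a container in one step
docker attach c1                             # connect to its console (Ctrl-p Ctrl-q detaches)
docker stop c1                               # stop it
docker start c1                              # start it again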
But first I need to explain full system virtualization. Check back next time for that!