Using Linux Containers and Docker for Reliable Service

From time to time, I need to access the web server belonging to a company I do some work for. It’s only used by their employees plus consultants like me, so it’s pretty basic. Just enough to get the job done. The main page carries this warning:

This web site will be down for maintenance every Sunday from 10 AM to 11 AM ET

I have no idea if that’s really true, as I’ve never tried to access it during that hour. But the warning always catches my eye as it’s right below the user name and password fields used to authenticate.

The message looks rather outdated in this always-on and always-up era. It suggests a single server, quite complex, with a mesh of dependencies requiring the entire system to be taken down in order to update or modify any component.

Their main web server is used for marketing. It starts with an overview of their products and services and leads to detailed catalog pages. They don't want any downtime on that server, as they can't predict the time zones or personal schedules of potential customers.

They pay a web hosting company to provide high availability for the marketing site. How can you make that magic happen for services you run?


Linux containers may be part of the solution. Converting a legacy architecture into a container-based model lets you split functions into lightweight, independent modules. Docker is a container management system that makes it easy to customize container images and to replace running services quickly. Learning Tree's Linux virtualization course teaches you everything you need to know in order to get started.
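For a taste of how quick that replacement can be, here's a rough sketch using the Docker command line (the nginx image, the name demo-web, and the port are just illustrative choices):

    # Pull a stock image and run it as a disposable background service.
    docker pull nginx
    docker run -d --name demo-web -p 8080:80 nginx

    # Replacing it takes seconds: remove the old container, run a new one.
    docker rm -f demo-web
    docker run -d --name demo-web -p 8080:80 nginx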

Is It Secure?

Nothing is perfect, but if done right, distributing the tasks across multiple containers should lead to no significant increase in risk. Let’s say you have a web front end for a database. You should already be blocking direct access to the database from outside. Keep that same firewall logic in place.
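Here's a minimal sketch of that firewall logic using Docker's own networking; mywebapp stands in for whatever hypothetical web front-end image you've built, and only the web container publishes a port:

    # An internal network is unreachable from outside the host.
    docker network create --internal backend
    docker network create frontend

    # The database joins only the internal network, publishing no ports.
    docker run -d --name db --network backend \
        -e POSTGRES_PASSWORD=changeme postgres

    # Only the web front end is exposed; it joins both networks so it
    # can reach the database while remaining the single public entry.
    docker run -d --name web --network frontend -p 80:80 mywebapp
    docker network connect backend web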

Yes, you will start with public container images. But once you have customized them for your use, you will use only your own trusted container images.
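One way that customization workflow might look from the shell, sketched with illustrative names (registry.example.com stands in for your private registry):

    # Run a public image, customize it interactively, then exit.
    docker run -it --name base-custom httpd:2.4 /bin/bash
    #   ...inside the container: adjust the configuration, then exit...

    # Commit the customized container as a new image under your own
    # registry name, and push it so hosts use only that trusted copy.
    docker commit base-custom registry.example.com/web:1.0
    docker push registry.example.com/web:1.0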

What About Efficiency?

Containers make things happen faster than you probably expect. You can start up an entire containerized operating system environment, carry out some task, and then discard the container, all within about one second. It becomes very reasonable to spawn a unique container to do the work for a single user's web page click: a fresh container generates the report, and the web server running in another container sends it to the user's browser.
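Try the one-shot pattern yourself; the --rm flag tells Docker to discard the container the moment its task finishes:

    # Spawn a throwaway container for a single task and time it.
    # On a warm host this typically completes in about a second.
    time docker run --rm alpine sh -c 'date; echo "report generated"'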

A container starts so fast because it avoids a huge amount of operating system startup infrastructure. It shares the host's already running kernel and then starts only those processes it needs. The complete process tree within the container holds only a handful of processes.
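You can verify that claim directly by comparing the container's process list to the host's:

    # Start a containerized web server, then compare process counts.
    docker run -d --name tiny nginx
    docker top tiny     # just the nginx master and worker processes
    ps -e | wc -l       # the host itself runs hundreds of processes
    docker rm -f tiny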

I Was Talking About Availability, Wasn’t I?

That’s what got me thinking about controlling containers with Docker!

Security people like to talk about the CIA triad of Confidentiality, Integrity, and Availability (possibly because the jargon sounds like an intelligence agency and nuclear weaponry at the same time, but over-the-top security jargon is another topic).

Well, don’t obsess over the C for Confidentiality — remember the crucial A for Availability!

Containers have to run somewhere, so make sure you have enough physical platforms. It is very easy to copy containers onto another physical server. Modify the new copy's definition, the XML file for libvirt-managed containers, to give it a unique identity and an appropriate network configuration. Bring up the new modules, transfer the processing to those on the new platform, and you can take down the first physical server for maintenance. NFS, the Network File System, makes this even smoother and faster, since container filesystems kept on shared storage don't have to be copied at all.
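Assuming libvirt-managed containers (which is where that XML definition lives), the move might look roughly like this; the container name, hosts, and filesystem path are all illustrative:

    # On the original host: dump the container's XML definition.
    virsh -c lxc:/// dumpxml webapp > webapp.xml

    # Copy the definition and the root filesystem to the new host.
    # With the filesystem on shared NFS, skip the rsync entirely.
    scp webapp.xml newhost:
    rsync -a /var/lib/lxc/webapp/ newhost:/var/lib/lxc/webapp/

    # On the new host: edit webapp.xml to give the copy a unique
    # <name>, <uuid>, and MAC address, then define and start it.
    virsh -c lxc:/// define webapp.xml
    virsh -c lxc:/// start webapp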

Check out the Linux virtualization course, where we show you how to do all of this and more!
