Docker: Putting things together and pulling them apart

Why microservices are a good idea

My favourite fictional scenes involve groups of people eating around a table, behaving badly. I think of Margaret Atwood's The Edible Woman, or the movie "August: Osage County". Or the tea party in Alice in Wonderland.

What works well in narrative is often the opposite of what works for computers - the worst computer mess-ups often involve a collection of badly behaving pieces that together make a much bigger mess than any one of them could on its own.

As a timely example, a client of mine who had ill-advisedly used a generic host for their WordPress site recently had the site go down a few times. The host first told them they were the victim of a denial-of-service attack, but now thinks it was due to an incompatibility between their server and some backup software that was generating an unexpectedly high load. At the same time, the contact at the service provider was trying to fly down to Brazil for a family emergency and had gotten stuck in an airport due to weather.

This is of course one of the hardest problems with complexity - the unexpected interaction between different pieces, each of which can break under the wrong circumstances.

Which brings me to Docker, and Dockerfiles, and my second installment about diving into the world of Linux containers. My previous post detailed how I came to embrace Docker and containers as the solution (in theory) to my goal of providing managed Drupal/CiviCRM hosting for non-profits.

The Dockerfile is a text file that tells the Docker engine how to build a container "image". An "image" is a set of files on disk that is used to launch the container, which is then the live copy of the image. It's live in the sense that code is running in the container, and it looks and feels like a small virtualised operating system running inside a "host" operating system - though under the hood it is really just a group of isolated processes sharing the host's kernel.

After running the usual hello-world type scenarios, my first serious Dockerfile was an attempt to replicate my aging current server. So I started with a CentOS base image, and then replicated some of the steps I'd normally go through to build a server - i.e. using yum to install various packages, adding some additional repositories, and customizing some configuration. The simplest way to do that is just to use the RUN command. My mental model of what's happening here is: Docker starts a temporary container from the base image, runs each RUN command in turn, and commits the result of each step as a new layer, ending up with a new image.
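A minimal sketch of what that kind of Dockerfile might look like - the package list and file names here are illustrative placeholders, not my actual server recipe:

    FROM centos:7

    # Add the EPEL repository, then install a basic web stack
    RUN yum -y install epel-release && \
        yum -y install httpd php php-mysqlnd php-gd php-xml && \
        yum clean all

    # Copy a customized Apache configuration from the build context
    COPY httpd.conf /etc/httpd/conf/httpd.conf

    # Run Apache in the foreground so the container has a long-running process
    EXPOSE 80
    CMD ["/usr/sbin/httpd", "-DFOREGROUND"]

Building that with "docker build -t mysite ." produces an image that any number of containers can then be started from.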

If I used my new image in place of my current server, it would be a nice improvement - I could run each site in a separate container created from this image and gain the efficiency benefits of the shared container image.

But the reason I started this blog post with my narrative examples is that Docker can and should provide more than this. The "Docker Way" is microservices, which means that instead of having one container that does everything an individual server used to do, each container is responsible for only one microservice, and these are assembled together into the application.

There are lots of good arguments for this on the Internet, but what helped me embrace the idea was to think of a Docker container as a more sophisticated analogue of a Unix "process". To go sideways for a second: recall that there was an upheaval in programming when people started to embrace object orientation. That change gave us a more sophisticated mental model for bits of code that might previously have been implemented as a procedure or function - now we could bundle together multiple functions and properties into a single object that exposed only specific pieces to the rest of the application. In a similar way, a Docker container is a more sophisticated version of what used to be a program that you'd install onto your server. Only now you can restrict the ways that it might unexpectedly interact with the rest of the things that make up your application.

So instead of one big Docker image that replicates a complex mini-server to power the website, the Docker Way is to create multiple Docker images that do the things the individual programs used to do. A minimal version of that is in fact two containers: a container for Apache and a container for MySQL. A simple development environment for Drupal can happily use just that. Of course, in a production environment, it gets more complicated.
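As a rough sketch of that minimal two-container setup - the image name "mysite", the network name, and the credentials are all placeholders for illustration:

    # A user-defined network lets the two containers find each other by name
    docker network create drupal-net

    # The database container
    docker run -d --name db --network drupal-net \
        -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=drupal \
        mysql:5.7

    # The web container, built from something like the Dockerfile above;
    # Drupal's settings.php would point at the database using the hostname "db"
    docker run -d --name web --network drupal-net -p 8080:80 mysite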

In fact, as you go down the rabbit hole of microservices, another holy grail is disposability - i.e. not depending on any one container instance for your application to continue working. If you can do that, then you can start thinking of your containers as a layer of abstraction that doesn't need to care about the host it's running on. If that seems over the top for your small website, then you're right, but remember that people are using containers to solve much bigger problems. To give you an idea: consider how, when you log into Gmail, your session is not tied to any specific machine (Google has about a zillion machines, which are constantly rotating, undergoing maintenance, breaking down, etc.).

But even at our smaller scale, disposability is a nice property that enables other cool stuff which we can talk about later. The obvious question is - if your container is disposed of, what happens to the stuff that you were working on, ah, like your website contents? That turns out to be a challenge. One tool that simplifies the problem a bit is a thing called a "volume" container - a special container that holds all the stuff you don't want to lose, and that doesn't get disposed of when you dispose of the container that uses it. That lets you worry only about the persistence of the volume, but creates the new problem of managing the volume - not only keeping track of it, but also being able to share it with multiple containers if you're doing that sort of thing.
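A minimal sketch of that data-volume-container pattern, again with placeholder names, might look like this:

    # A "volume" container that exists only to own the data directory
    docker create -v /var/lib/mysql --name drupal-data mysql:5.7 /bin/true

    # The database container borrows that volume; if the db container is
    # disposed of and recreated, the data owned by drupal-data survives
    docker run -d --name db --volumes-from drupal-data \
        -e MYSQL_ROOT_PASSWORD=secret mysql:5.7

(Newer Docker versions also offer named volumes managed with "docker volume", which address the same persistence problem.)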

The other big issue that arises with containers and microservices is connecting them together - not just the ability for them to talk to each other, but also handling the disposal and replacement of some of them, and so on. This is pleasantly called "orchestration", and it's really where the hard work starts.
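The gentlest entry point to that wiring is Docker Compose, which lets a single file declare the containers and how they connect, though it stops well short of the harder orchestration problems like moving containers between hosts. A sketch using the same placeholder names as above:

    version: "2"
    services:
      web:
        image: mysite          # built from a Dockerfile like the one above
        ports:
          - "8080:80"
        depends_on:
          - db
      db:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: secret
          MYSQL_DATABASE: drupal
        volumes:
          - db-data:/var/lib/mysql
    volumes:
      db-data: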
