Updating Docker containers in production

Docker brings new ways to manage infrastructure and applications, with its toolbox for mashing up and re-using building blocks of all kinds and at all levels.

It seems the standardisation, decoupling of dependencies and lightweight abstraction that Docker provides has a lot of appeal and fits the bill of the new continuous delivery way of doing IT.

A core concept of Docker is the way image layers are used to assemble container images, and the ability (and almost a requirement) to reuse/cache "unchanged" parent layers. This behaviour might lead to some misunderstandings about how up to date your container images really are.

In this post I'll try to shed some light on aspects of container images, the build process and the consequences of Docker's cache-by-default policy during build.

 

In a talk at Black Hat Europe 2015, Anthony Bettini discussed security challenges with Docker [1]. Some of the claims are more relevant than others, and Docker is constantly working towards improved security, both in the Docker toolbox and in their hosted public container image hub, Docker Hub [2]. One of the more relevant aspects (IMHO) is the "updatedness" of container images and the (in)ability to keep them updated.

According to Bettini, 90% of official images on Docker Hub have vulnerabilities. Why would that be? An obvious reason is that images do not self-update. They are composed of dead files (or file layers) on some persistent storage, made available through an API [3]. So unless someone or something is actively updating images by means of the API, a steady increase of vulnerabilities in the images is inevitable. Also, according to Bettini, the number of vulnerabilities in a static image over time correlates closely with the size of the image.
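As a quick illustration, an image's creation timestamp and package set can be read straight from the image itself, and neither changes until the image is rebuilt (centos:7 is just an example image here):

# When was this image actually built? Its contents have not changed since then.
docker inspect --format '{{ .Created }}' centos:7

# The packages baked into the image; they only change when the image is rebuilt
docker run --rm centos:7 rpm -qa | sort | head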

So, given that we want to limit our exposure to attack by keeping the likelihood of vulnerabilities in our application containers at an acceptable level, images need to be updated at certain intervals. The frequency of updates depends on image size, other security mechanisms in the infrastructure and, ultimately, on the cost of a container/application breach. Your mileage will vary, but it is probably wise to make a conscious judgement about these aspects before rushing your new production app into the "wonderful world of containers". In addition to updating the image, the container using the image also needs to be restarted with the new image version.

Docker containers are often compared to shipping containers and the ability to plan physical resources (ship brokering) based on the standard physical outside dimensions of the containers. Maybe the analogy to shipping containers fits better with the concept of container images than with the containers themselves. An application container is something that takes the container image as a starting point, using its contents to create "life": a "living" application/service(s). Compare it to opening a physical container of ingredients to produce new consumables, where the ingredients have an expiry date: if they lie in the container too long they will rot. And so it is with container images; they have an expiry date that varies with how much rot you are willing to accept and the durability of the image contents.

So, having established that there is a gap between what is offered as part of the Docker toolbox and what is needed to use Docker containers in production, how do we close that gap?

I've spent the last couple of days experimenting with just that, and did not find a really neat solution. Ideally we would be able to have layers up to a certain level self-update in images; a kind of rebasing of the application layer in the image onto an updated operating system stack. So the first thing I went looking for was tools and techniques to achieve this. It turns out there are few attempts to make it possible. Some are not focused directly on the update problem, but rather on combining image layers with different applications without a rebuild [4], and there are some attempts to implement rebasing of Docker images [5,6]. The design of Docker really seems to discourage updating of single layers within images, a design choice that might promote software rot depending on how frequently you rebuild. Mind you, it is not enough to rebuild an image on the same Docker host unless you pass the --no-cache option to the "docker build" command. The build operation has no means of checking whether your "yum upgrade" or "apt-get upgrade" command actually needs to run again. As long as the command itself in the Dockerfile has not changed, Docker will happily reuse the layer created earlier by the same command, hence the binary layer from X weeks back is what you will build upon unless you use --no-cache.
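A minimal illustration of that behaviour, assuming a trivial Dockerfile (the image name is just a placeholder):

# Dockerfile
FROM centos:7
RUN yum -y upgrade

# First build actually runs the upgrade and caches the resulting layer
docker build -t myapp-base .

# A rebuild next month reuses the cached layer ("Using cache") without running yum at all
docker build -t myapp-base .

# Only this forces "yum -y upgrade" to run again
docker build --no-cache -t myapp-base .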

Ok, so what about the following method (sketched right after the list)?

  • pull image to build/update host
  • run yum/apt upgrade
  • docker commit the changes from the OS update to create an updated image
  • push back to registry
  • pull down to prod 
  • restart prod-container with updated image
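A rough sketch of those steps, assuming an image with a prod tag in a private registry (registry and image names are placeholders):

# On the build/update host
docker pull registry.example.com/myimage:prod
docker run --name os-update registry.example.com/myimage:prod yum -y upgrade
docker commit os-update registry.example.com/myimage:prod
docker rm os-update
docker push registry.example.com/myimage:prod
# ...then pull the updated image on the production host and restart the container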

This actually works; however, due to the additive nature of image layers the image grows (quite a lot) every time it is updated, so you'll end up with huge images and eventually run out of available layers in the image [7]. This might be mitigated by adding docker-squash [8] into the update loop, but I have not tested that yet (since I came across it while writing this post :-) ). I'll try it out and give an update.

What ended up becoming the solution for us (until something better comes along) is the following.

  • build all images from scratch using the smallest possible FROM statement that you trust in your Dockerfile
  • have all Dockerfiles (and build deps) in a predictable name-spaced directory structure (from VCS)
  • have yum/apt-get update/upgrade in your Dockerfile (either as ONBUILD in a local base image, or in each Dockerfile)
  • build and tag images with the "dirname" of the directory containing the Dockerfile
  • allocate a tag for images that will be automatically rebuilt, "prod" for instance
  • iterate over directories with Dockerfiles in your directory tree
  • for each of your dirnames, which correspond to image repository names in your local docker-registry, check if the repository has the tag "prod"
  • if it has, then
    • rebuild with the same name:tag (prod) and the --no-cache option
    • push name:prod back to your registry (effectively replacing all layers)
    • stop the container using the image
    • pull the updated image to the container host
    • start the container with the new image (see the sketch after this list)
  • Repeat as often as needed 
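The build script and the rebuild-detection script are included at the end of this post. The container-host end of the loop is not, but it does not need to be much more than something like this (container and image names are placeholders):

# On the container host, once a new :prod image has been pushed
docker pull docker-registry.uio.no:8088/myimage:prod
docker stop myapp
docker rm myapp
docker run -d --name myapp docker-registry.uio.no:8088/myimage:prod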

All of this should be orchestrated, and Ansible fits the purpose quite well. One thing to be aware of is that replacing the image layers of a tag like this (when pushing back the updated image) will produce a set of dangling image layers on the docker-registry host (we still use the old v1 API registry, but I think this problem will also be present in the new v2 API registry). This wastes disk space on the registry server. I therefore created a script to delete unreferenced (dangling) layer directories in the "images" directory of the registry server [9], and a script to pull all images available in a docker-registry [10] in order to have a quick backup in case I messed up the tidying of image layers ;-).

Some (many?) shops have a continuous delivery (CD) workflow where images are inherently rebuilt so frequently that there is no need for such update schemes. One can also argue that Docker enables and promotes the use of unikernels and/or minimal operating systems combined with standalone Go binaries with few dependencies. I think there will be all kinds of usage scenarios with varying needs for keeping images updated. My impression is that many focus solely on the benefits of the Docker way of scaling up infrastructure, benefits I agree are real.

This post is an attempt to highlight that there are aspects of image-layer rot that we also need to be aware of, even in the CD environments some (many?) of us aspire to.

Example build script

#!/bin/bash

set -eo pipefail

# The image name is taken from the directory containing the Dockerfile
NAME=$(basename "$(pwd)")
REGISTRY='docker-registry.uio.no:8088/'
TAG=$(date +%Y%m%d-%H%M)

# Build an image with the given tag (plus any extra docker build options), then push it
function buildpush {
  docker build -t "$@" .
  docker push "$1"
}

case "$1" in
  datepush)
    # Build and push a date-stamped tag
    buildpush "${REGISTRY}${NAME}:${TAG}"
    ;;
  prod-rebuild)
    # Rebuild the prod tag from scratch, bypassing the layer cache
    buildpush "${REGISTRY}${NAME}:prod" --no-cache
    ;;
  local)
    # Build a local-only image for testing
    docker build -t "${NAME}:local" .
    ;;
  *)
    echo "Usage: $0 {datepush|prod-rebuild|local}"
    exit 1
    ;;
esac

Script to detect and initiate rebuild:

#!/bin/bash
set -eo pipefail

BUILD_DIR="/site/git/stash/docker-images-prod-autobuild"
CERT_DIR='/etc/docker/certs.d/docker-registry.uio.no:8088'
# The image/repository names are the directory names containing each Dockerfile
# (field 6 of the path, given the depth of BUILD_DIR)
ITERATE=$(find "$BUILD_DIR" -type f -name Dockerfile | awk -F"/" '{print $6}')
GET_PROD_TAG='curl -s --cert ./dockerclient.cert --key ./dockerclient.key'
cd "$CERT_DIR"
for i in $ITERATE
do
  # Ask the (v1) registry for the "prod" tag of this repository.
  # If the tag does not exist, the API answers with a "Tag not found" error.
  TAG=$($GET_PROD_TAG https://docker-registry.uio.no:8088/v1/repositories/library/$i/tags/prod)
  if [[ $TAG != *Tag* ]]
  then
    # The prod tag exists: rebuild and push it. Run in a subshell so we stay in
    # CERT_DIR for the next curl call, which uses relative certificate paths.
    ( cd "$BUILD_DIR/$i" && ./build.sh prod-rebuild )
  fi
done

 

[1] https://www.youtube.com/watch?v=77-jaeUKH7c
[2] https://hub.docker.com/
[3] https://docs.docker.com/registry/spec/api/
[4] https://github.com/docker/docker/issues/3976
[5] https://www.cyphar.com/blog/post/hackweek-13-docker-rebase
[6] https://github.com/ewindisch/docker-rebase
[7] http://www.extellisys.com/articles/docker-layer-overload
[8] http://jasonwilder.com/blog/2014/08/19/squashing-docker-images/
[9] https://github.com/JarleB/docker-tools/blob/master/docker-registry-remove-dangling-image-layers
[10] https://github.com/JarleB/docker-tools/blob/master/pull-all-images-from-registry

 

By Jarle Bjørgeengen
Published 23 March 2016 12:27 - Last modified 4 April 2016 12:41