Play With Kubernetes Quickly Using Docker



This post is based on material from Docker in Practice, available on Manning’s Early Access Program. Get 39% off with the code: 39miell



In case you don’t know, Kubernetes is a Google open source project that tackles the problem of how to orchestrate your Docker containers on a data centre.

In a sentence, it allows you to treat groups of Docker containers as single units with their own addressable IP across hosts, and to scale them as you wish. You can be declarative about services in much the same way as you can be declarative about configuration with Puppet or Chef, and let Kubernetes take care of the details.


Kubernetes has some terminology it’s worth noting here:

  • Pods: groupings of containers
  • Controllers: entities that drive the state of the Kubernetes cluster towards the desired state
  • Service: a set of pods that work together
  • Label: a simple name-value pair
  • Hyperkube: an all-in-one binary that can run any of the Kubernetes server components
  • Kubelet: an agent that runs on nodes and monitors containers, restarting them if necessary

Labels are a central concept in Kubernetes. By labelling Kubernetes entities, you can take actions across all relevant pods in your data centre. For example, you might want to ensure web server pods run only on specific nodes.


I tried to follow Kubernetes’ Vagrant stand-up, but got frustrated with its slow pace and clunkiness, which I characterized uncharitably as ‘soviet’. Amazingly, one Twitter-whinge later I got a message from Google’s lead engineer on Kubernetes saying they were ‘working on it’. Great – and it moved from great to awesome when I was presented with this, a Docker-only way to get Kubernetes running quickly.

NOTE: this code is not presented as stable, so if this walkthrough doesn’t work for you, check the central Kubernetes repo for the latest.

Step One: Start etcd

Kubernetes uses etcd to distribute information across the cluster, so as a core component we start that first:

docker run \
    --net=host \
    -d kubernetes/etcd: \
    /usr/local/bin/etcd \
        --addr=$(hostname -i):4001 \
        --bind-addr= \
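
Once etcd is up you can sanity-check it from the host with a quick version query (this assumes the default client port of 4001 used above):

curl -s http://$(hostname -i):4001/version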

Step Two: Start the Master

docker run \
    --net=host \
    -d \
    -v /var/run/docker.sock:/var/run/docker.sock \
    /hyperkube kubelet \
        --api_servers=http://localhost:8080 \
        --v=2 \
        --address= \
        --enable_server \
        --hostname_override= \

Kubernetes has a simple Master-Minion architecture (for now – I understand this may be changing). The master handles the API for running pods on the Kubernetes nodes, the scheduler (which determines what should run where, based on capacity and constraints), and the replication controller, which ensures the right number of pod replicas are running.

If you run docker ps immediately, it should look something like this:

imiell@rothko:~$ docker ps
CONTAINER ID IMAGE                              COMMAND              CREATED        STATUS        PORTS NAMES
98b25161f27f "/hyperkube kubelet  2 seconds ago  Up 1 seconds        drunk_rosalind 
57a0e18fce17 kubernetes/etcd:            "/usr/local/bin/etcd 31 seconds ago Up 29 seconds       compassionate_sinoussi

One thing to note here is that this master is run from a hyperkube kubelet call, which in turn brings up the master’s containers as a pod. That’s a bit of a mouthful, so let’s break it down.

Hyperkube, as we noted above, is an all-in-one binary for Kubernetes. It will go off and enable the services for the Kubernetes master in a pod. We’ll see what these are below.

Now that we have a running Kubernetes cluster, you can manage it from outside using the API by downloading the kubectl binary:

imiell@rothko:~$ wget
imiell@rothko:~$ chmod +x kubectl
imiell@rothko:~$ ./kubectl version
Client Version: version.Info{Major:"0", Minor:"14", GitVersion:"v0.14.1", GitCommit:"77775a61b8e908acf6a0b08671ec1c53a3bc7fd2", GitTreeState:"clean"}
Server Version: version.Info{Major:"0", Minor:"14+", GitVersion:"v0.14.1-dirty", GitCommit:"77775a61b8e908acf6a0b08671ec1c53a3bc7fd2", GitTreeState:"dirty"}
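
kubectl is just a client for the HTTP API that the master serves on localhost:8080, so you can also poke the API directly if you prefer; for example, a quick health check against the apiserver’s /healthz endpoint:

curl -s http://localhost:8080/healthz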

Let’s see how many minions we’ve got using the get sub-command:

imiell@rothko:~$ ./kubectl get minions

We have one, running on localhost. Note the LABELS column. Think how we could label this minion: we could mark this minion as “heavy_db_server=true” if it was running on the tin needed to run our db beastie, and direct db server pods there only.
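
In current Kubernetes releases (where minions are called nodes, and the v0.x commands used here differ slightly) that labelling step would look something like this sketch, assuming the single minion registered itself as ‘localhost’:

kubectl label nodes localhost heavy_db_server=true
kubectl get nodes -l heavy_db_server=true

Database pods could then be steered onto matching nodes with a nodeSelector in their definition.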

What about these pods then?

imiell@rothko:~$ ./kubectl get pods
POD       IP CONTAINER(S)       IMAGE(S)                                   HOST                LABELS STATUS  CREATED
nginx-127    controller-manager  Running 16 minutes

This ‘nginx-127’ pod has got three containers from the same Docker image running the master services: the controller-manager, the apiserver, and the scheduler.

Now that we’ve waited a bit, we should be able to see the containers using a normal docker ps:

imiell@rothko:~$ docker ps -a
CONTAINER ID IMAGE                                      COMMAND              CREATED        STATUS        PORTS NAMES
25c781d7bb93 kubernetes/etcd:                    "/usr/local/bin/etcd 4 minutes ago  Up 4 minutes        suspicious_newton 
8922d0ba9a75 "/hyperkube controll 40 seconds ago Up 39 seconds       k8s_controller-manager.bca40ef7_nginx-127_default_a8ae24cd98c73bd6d873bc54c030606b_c40c7396 
943498867bd6 "/hyperkube schedule 40 seconds ago Up 40 seconds       k8s_scheduler.b41bfb6e_nginx-127_default_a8ae24cd98c73bd6d873bc54c030606b_871c00e2 
354039df992d "/hyperkube apiserve 41 seconds ago Up 40 seconds       k8s_apiserver.c24716ae_nginx-127_default_a8ae24cd98c73bd6d873bc54c030606b_4b062320 
033edd18ff9c kubernetes/pause:latest                    "/pause"             41 seconds ago Up 41 seconds       k8s_POD.7c16d80d_nginx-127_default_a8ae24cd98c73bd6d873bc54c030606b_da72f541 
beddf250f4da "/hyperkube kubelet  43 seconds ago Up 42 seconds       kickass_ardinghelli

Step Three: Run the Service Proxy

The Kubernetes service proxy allows you to expose pods as services from a consistent address. We’ll see this in action later.

docker run \
    -d \
    --net=host \
    --privileged \ \
    /hyperkube proxy \
        --master= \

This is run separately as it requires privileged mode to manipulate iptables on your host.
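
If you’re curious, you can see the rules the proxy manages on the host yourself; the chains it creates have ‘KUBE’ in their names, so something like this will show them:

sudo iptables-save | grep -i kube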

A docker ps will show the proxy as being up:

imiell@rothko:~$ docker ps -a
CONTAINER ID IMAGE                                      COMMAND              CREATED        STATUS        PORTS NAMES
2c8a4efe0e01 "/hyperkube proxy -- 2 seconds ago  Up 1 seconds        loving_lumiere 
8922d0ba9a75 "/hyperkube controll 15 minutes ago Up 15 minutes       k8s_controller-manager.bca40ef7_nginx-127_default_a8ae24cd98c73bd6d873bc54c030606b_c40c7396 
943498867bd6 "/hyperkube schedule 15 minutes ago Up 15 minutes       k8s_scheduler.b41bfb6e_nginx-127_default_a8ae24cd98c73bd6d873bc54c030606b_871c00e2 
354039df992d "/hyperkube apiserve 16 minutes ago Up 15 minutes       k8s_apiserver.c24716ae_nginx-127_default_a8ae24cd98c73bd6d873bc54c030606b_4b062320 
033edd18ff9c kubernetes/pause:latest                    "/pause"             16 minutes ago Up 15 minutes       k8s_POD.7c16d80d_nginx-127_default_a8ae24cd98c73bd6d873bc54c030606b_da72f541 
beddf250f4da "/hyperkube kubelet  16 minutes ago Up 16 minutes       kickass_ardinghelli

Step Four: Run an Application

Now we have our Kubernetes cluster set up locally, let’s run an application with it.

imiell@rothko:~$ ./kubectl -s http://localhost:8080 run-container todopod --image=dockerinpractice/todo --port=8000
todopod todopod dockerinpractice/todo run-container=todopod 1

This creates a pod from a single image (a simple todo application)

imiell@rothko:~$ kubectl get pods
POD IP        CONTAINER(S)       IMAGE(S)                                   HOST        LABELS                 STATUS  CREATED
nginx-127     controller-manager                   Running About a minute
todopod-c8n0r todopod            dockerinpractice/todo                       run-container=todopod Pending About a minute

Lots of interesting stuff here – the HOST for our todopod (which has been given a unique name as a suffix) has not been set yet, because the provisioning is still Pending (it’s downloading the image from the Docker Hub).
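
If you’re impatient you can pre-pull the image on the host yourself (the kubelet is using the host’s Docker daemon, so the pod will pick it up):

docker pull dockerinpractice/todo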

Eventually you will see it’s running:

imiell@rothko:~$ kubectl get pods
POD           IP          CONTAINER(S)       IMAGE(S)                                   HOST                LABELS                STATUS  CREATED
nginx-127                 controller-manager                 Running About a minute
todopod-c8n0r todopod            dockerinpractice/todo             run-container=todopod Running 5 seconds

and it has an IP address. A replication controller is also set up for it, to ensure it stays replicated:

imiell@rothko:~$ ./kubectl get rc
CONTROLLER   CONTAINER(S)   IMAGE(S)                SELECTOR                REPLICAS
todopod      todopod        dockerinpractice/todo   run-container=todopod   1

We can address this service directly using the pod ip:

imiell@rothko:~$ wget -qO- | head -1

Step Six: Set up a Service

But this is not enough – we want to expose these pods as a service on port 80 somewhere:

imiell@rothko:~$ ./kubectl expose rc todopod --target-port=8000 --port=80
NAME      LABELS    SELECTOR                IP          PORT
todopod       run-container=todopod   80

So now it’s available on

imiell@rothko:~$ ./kubectl get service
NAME          LABELS                                  SELECTOR              IP        PORT
kubernetes    component=apiserver,provider=kubernetes         443
kubernetes-ro component=apiserver,provider=kubernetes         80
todopod                                         run-container=todopod 80

and we’ve successfully mapped port 8000 on the pod to port 80.
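
You can check the mapping from the host by hitting the service IP shown in the get service output (the address below is a placeholder – substitute the IP from your own output):

wget -qO- http://<service-ip>:80/ | head -1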

Let’s make things interesting by killing off the todo container:

imiell@rothko:~$ docker ps | grep dockerinpractice/todo
3724233c6637 dockerinpractice/todo:latest "npm start" 13 minutes ago Up 13 minutes k8s_todopod.6d3006f8_todopod-c8n0r_default_439950e4-dc4d-11e4-be97-d850e6c2a11c_da1467a2
imiell@rothko:~$ docker kill 3724233c6637

and then after a moment (to be sure, wait 20 seconds), call it again:

imiell@rothko:~$ wget -qO- | head -1

The service is still there even though the container isn’t! Kubernetes noticed the container had died and restored service for us (the kubelet restarts containers within a pod; the replication controller steps in if a whole pod disappears):

imiell@rothko:~$ docker ps -a | grep dockerinpractice/todo
b80728e90d3f dockerinpractice/todo:latest "npm start" About a minute ago Up About a minute k8s_todopod.6d3006f8_todopod-c8n0r_default_439950e4-dc4d-11e4-be97-d850e6c2a11c_00316aec 
3724233c6637 dockerinpractice/todo:latest "npm start" 15 minutes ago Exited (137) About a minute ago k8s_todopod.6d3006f8_todopod-c8n0r_default_439950e4-dc4d-11e4-be97-d850e6c2a11c_da1467a2

Step Seven: Make the Service Resilient

Management’s angry that the service was down momentarily. We’ve figured out this is because the container died (and the service was automatically recovered) and want to take steps to prevent a recurrence. So we decide to resize the todopod:

imiell@rothko:~$ ./kubectl resize rc todopod --replicas=2

and there are now two pods running todo containers:

imiell@rothko:~$ kubectl get pods
POD           IP          CONTAINER(S)       IMAGE(S)                                   HOST                LABELS                STATUS  CREATED
nginx-127                 controller-manager                 Running 28 minutes
todopod-c8n0r todopod            dockerinpractice/todo             run-container=todopod Running 27 minutes
todopod-pmpmt todopod dockerinpractice/todo run-container=todopod Running 3 minutes

and here’s the two containers:

imiell@rothko:~$ docker ps | grep dockerinpractice/todo
217feb6f25e8 dockerinpractice/todo:latest "npm start" 16 minutes ago Up 16 minutes k8s_todopod.6d3006f8_todopod-pmpmt_default_8e645492-dc50-11e4-be97-d850e6c2a11c_480f79b7 
b80728e90d3f dockerinpractice/todo:latest "npm start" 26 minutes ago Up 26 minutes k8s_todopod.6d3006f8_todopod-c8n0r_default_439950e4-dc4d-11e4-be97-d850e6c2a11c_00316aec
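
As an aside, the resize subcommand used above was later renamed in kubectl, so on a modern cluster the equivalent command would be something like:

kubectl scale rc todopod --replicas=2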

It’s not just the containers that are resilient – try running:

./kubectl delete pod

and see what happens!

It’s Not Magic

Management now thinks that the service is bullet-proof and perfect – but it’s wrong!

The service is still exposed to failure: if the machine that kubernetes is running on dies, the service goes down.

Perhaps more importantly, they don’t understand that the todo app keeps its state per browser session only, so their todos will not be retained across sessions. Kubernetes does not magically make applications scalable: some kind of persistent storage and authentication would be needed in the application to make this work as they want.


This only scratches the surface of Kubernetes’ power. We’ve not looked at multi-container pods and some of the patterns that can be used there, or using labels, for example.

Kubernetes is changing fast, and is being incorporated into other products (such as OpenShift), so it’s worth getting to understand the concepts underlying it. Hyperkube’s a great way to do that fast.

This post is based on material from Docker in Practice, available on Manning’s Early Access Program. Get 39% off with the code: 39miell



Play with an OpenShift PaaS using Docker

This post is based on material from Docker in Practice, available on Manning’s Early Access Program:


Get Going

1) Allow any registry insecurely


To allow applications to contact the registry internal to OpenShift, you will need to start your Docker daemon allowing insecure registries. For simplicity we’re going to allow any registry to be accessed.

Change your Docker daemon configuration script to pass the --insecure-registry flag, or add the insecure-registry argument to the end of the existing uncommented DOCKER_OPTS line.
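
On Ubuntu, for example, the line might end up looking something like this (the 0.0.0.0/0 value is just our ‘allow anything’ choice, not a requirement):

DOCKER_OPTS="$DOCKER_OPTS --insecure-registry=0.0.0.0/0"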

The file to change will depend on your distribution: Red Hat-family users will need to update /etc/sysconfig/docker; Ubuntu users, /etc/default/docker.

Once done, restart your Docker daemon with, e.g.:

sudo service docker restart

or, on systemd-based systems:

systemctl restart docker

2) Save some time by downloading some images in advance:

$ docker pull openshift/wildfly-8-centos

Run OpenShift

Run this:

$ docker run \
    -d \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --name openshift-origin-play \
    --net=host \
    --privileged \
    dockerinpractice/openshift-origin-play start --loglevel=4 && \
docker exec -ti openshift-origin-play bash

You’re now in an OpenShift origin PaaS master and node.


The above command’s a bit of a mouthful so let’s break it down.

docker run

Run up a container.


-d

Run the OpenShift origin server as a daemon.

 -v /var/run/docker.sock:/var/run/docker.sock

Use the docker server on the host from within the container. This allows us to run docker from within the origin container.

--name openshift-origin-play 

Give the container a name we can refer to; also, ensure that this can only be run one at a time (as each container name is unique on a host).


--net=host

Use the host’s network stack (ie don’t contain the network).


--privileged

Allow the container to run with extended privileges (well, we are just playing, right?).


dockerinpractice/openshift-origin-play

Use this as a base image. It’s the same as the openshift/origin image with a few things added for debugging and playing.

start &&

The default entrypoint (ie command) for this image is to run the openshift server. We pass the argument “start” to this to get the server to start up. If that’s successful…

docker exec -ti openshift-origin-play bash

we enter the container interactively (-ti) with the name we gave it earlier, running a bash process.

Look at the Console

Visit your OpenShift console at: https://localhost:8443/console and login with the username osuser and password dockerinpractice.


Following the instructions there, let’s…

Start the Infrastructure

For convenience, and to avoid some troublesome issues, we set a specific DNS server (we assume here that you have access to Google’s DNS servers):

$ echo "nameserver 8.8.8.8" > /etc/resolv.conf

Set up the services OpenShift builds need:

$ openshift ex router --create --credentials=$KUBECONFIG
$ openshift ex registry --create --credentials=$KUBECONFIG
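
The osc client is essentially kubectl with OpenShift extensions, so – assuming the familiar get subcommand – you can watch the router and registry pods come up with:

$ osc get pods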

Create a Project

In OpenShift, a project is a namespace for a collection of applications.

$ openshift ex new-project movieplex --admin=anypassword:osuser

Now, to create an application, we need to log in as the admin user for the project.

$ su - osuser
$ osc login
Please provide the server URL or just <enter> to use 'https://localhost:8443': 
The server uses a certificate signed by unknown authority. You can bypass the certificate check but it will make all connections insecure.
Use insecure connections (strongly discouraged)? [y/N] y
Authenticate for "openshift"
Username: osuser
Logged into 'https://localhost:8443' as 'osuser'.
Using project 'movieplex'.
Welcome to OpenShift v3! Use 'osc --help' for a list of commands available.
$ osc process -f | osc create -f -
$ osc start-build jee-sample-build

And wait a long time (about 15 minutes for me on my machine).

While waiting run this:

$ osc get build
jee-sample-build-1 STI Pending jee-sample-build-1

The build status will change to “Running”, and eventually “Finished”.

Then you can view the application:

$ osc get service 
frontend <none> name=frontend 8080
mysql <none> name=database 3306

We want to access the frontend, so using the above output, if you navigate to:

you’ll see the movie app :)


More to Come

There’s a lot more to this (which I’ll blog on), but this gives a taste of how scriptable deployments can be with OpenShift.


The code for the example is available here:

git clone

Scale Your Jenkins Compute With Your Dev Team: Use Docker and Jenkins Swarm

This post is based on material from Docker in Practice, available on Manning’s Early Access Program:


The Problem

At our company we had (another) problem. Our Jenkins server had apparently shrunk from what seemed like an over-spec’d monstrosity running a few jobs a day to a weedy-looking server that couldn’t cope with the hundreds of check-in-triggered jobs that were running 24/7.

Picture this:

Jenkins before

This approach clearly wouldn’t scale. Eventually servers died under the load as more and more jobs ran in parallel. Naturally this would happen when lots of check-ins were happening and the heat was on, so it was a high-visibility problem. We added more servers as a stop-gap, but that was simply putting more fingers in the dyke.

The Solution

Fortunately there’s a neat way around this problem.

Developer laptops tend to be quite powerful, so it’s only natural to consider using them. Wouldn’t it be great if you could allocate jobs to those multi-core machines that mostly lie idle while developers read Hacker News, and say “awesome” a lot?

Jenkins after

Traditionally, achieving this with VMs would be painful, and the overhead of allocating resources and running them on most machines unworkable.

With Docker it becomes easier. Docker containers can be set up to function as dynamic, relatively unobtrusive Jenkins slaves. The Jenkins swarm plugin allows Jenkins to provision jobs to slaves dynamically.


Here’s a simple proof of concept to demonstrate the idea. You’ll need Docker installed, natch, but nothing else.

$ docker run -d \
    --name jenkins_server \
    -p 8080:8080 \
    -p 50000:50000 \
    dockerinpractice/jenkins_server
$ echo "Let's wait a couple of minutes for jenkins to start" && sleep 120
$ docker run -d \
    --hostname jenkins_swarm_slave_1 \
    --name jenkins_swarm_slave_1 \
    dockerinpractice/jenkins_swarm_slave

Navigate to your Jenkins server at http://localhost:8080.

Check the swarm client has registered ok on the build executor status page.


Now set up a simple Jenkins job. I set one up to run “echo done” as a shell build step. Then tick “Restrict where this project can be run” and apply the label “swarm”.


Run the job, then check the console output and you should see that Jenkins ran the job on the swarm slave container.


There you have it!  A dynamic Jenkins slave running “anywhere”, making your Jenkins jobs scalable across your development effort.

Under the Hood

The default startup script for the jenkins_swarm_slave image is here.

This sets up these environment variables through defaults and introspection:

HOST_IP=$(ip route | grep ^default | awk '{print $3}')
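
The server location variables are derived in the same spirit; roughly, it amounts to something like this sketch (the variable names are taken from the override example below; the exact defaults are assumptions):

JENKINS_SERVER=${JENKINS_SERVER:-$HOST_IP}
JENKINS_PORT=${JENKINS_PORT:-8080}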

Overriding any of these with the docker command line for your environment is trivial. For example, if your Jenkins server is at http://jenkins.internal:12345 you could run:

$ docker run -d \
    -e JENKINS_SERVER=jenkins.internal \
    -e JENKINS_PORT=12345 \
    --hostname jenkins_swarm_slave_1 \
    --name jenkins_swarm_slave_1 \
    dockerinpractice/jenkins_swarm_slave

And to adapt this to your use case, you’ll need to modify the slave’s Dockerfile to install the software needed to run your jobs.
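
For example, if your jobs need make and Python, a derived slave image might look something like this (a sketch, assuming the slave image above and a Debian-based OS inside it):

FROM dockerinpractice/jenkins_swarm_slave
RUN apt-get update && apt-get install -y make python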

Final Thoughts

This may not be a solution fit for every organisation (security?), but the small-to-medium-sized places that can implement it are the ones most likely to be feeling this pinch.

Oh, and remember that more compute is not always needed! Just because you can kick off that memory-hungry Java cluster regression test every time someone updates the README doesn’t mean you should…

It strikes us that there’s an opportunity for further work here.

  • Why not gamify your compute, so that developers that contribute more get credit?
  • Allow engineers to shut down jobs, or even make them more available at certain times of day?
  • Maybe telemetry in the client node could indicate how busy the machine it’s on is, and help it decide whether to accept the job or not?
  • Reporting on jobs that can’t complete, or find a home?

The possibilities are quite dizzying!

Docker in Practice – A Guide for Engineers

This post is based on material from Docker in Practice, available on Manning’s Early Access Program. Get 39% off with the code: 39miell


We’re writing a book on Docker that’s focussed on its practical aspects.


Why? James Turnbull’s “The Docker Book” is a great introduction to Docker for those that want a good grounding in the basics. And other books, like Docker in Action, are in the pipeline to take you through the standard usage of Docker in detail.

But as busy engineers with day jobs we didn’t have the problem of getting to know Docker. We were sold on it, and wanted to know how to make this thing useful, and fast. That meant overcoming all sorts of challenges both big and small, from the philosophical to the mundane. And in our own time to boot.

– Do we need to adopt a microservices architecture to make this work?

– Can this solve our capacity issues?

– How do we manage the way this is changing the way we work?

– Is it secure?

– How do we get the typical engineer to understand what’s going on? Where do they get lost?

– The ecosystem is overwhelming and growing every day! How do we navigate it?

– Does this replace VMs?

– What do we do about Configuration Management?

What we lacked was a coherent guide for these real-world problems. We went out and gave talks, built tools, and wrote blog posts – lots of them – all with a slant towards Docker’s use in the real world.

We first used Docker in anger at work in the fall of 2013. So by the time we were approached to write a book we knew what we wanted to communicate and had had over a year of real-world experience (or “the name we give to our mistakes”, as Oscar Wilde put it) to write up. And there was a happy ending – our organisation embraced Docker and saved a load of money in the process.

Writing a book on Docker is an interesting challenge as it intersects with so many different parts of the software lifecycle. So we cover a broad range of subjects.

The book is divided into four parts:

1) Docker Fundamentals

Where we give a brief introduction to Docker, its use and its architecture

2) Docker and Development

How Docker can be used by developers, some of the benefits, development patterns and pitfalls

3) Docker and DevOps

How Docker fits in with the test, continuous integration, and continuous delivery cycles

4) Docker in Production

Orchestration decisions, aspects of working systems, and how to deal with troubled waters

Each part contains discrete techniques where we discuss solutions to problems we’ve come across, or ideas to maximise the benefits of Docker adoption. But the arc of the narrative is: “Docker from desk to production”.

The book’s now available to buy on the Manning Early Access Program. You can get 50% off the book with the code ‘mlmiell’ and help to shape its content in the author forum.

This is a big technological shift, and we’re excited about being able to be part of it. We’re really interested in your experiences and views on the approach we’ve taken and solutions we’ve found.

The code for the book will be published and maintained here, and Docker images (of course!) available here.



This post is based on material from Docker in Practice, available on Manning’s Early Access Program. Get 39% off with the code: 39miell


Fight Docker Package Drift!


The Problem

While Dockerfiles can help enormously with pinning down the details of your build process, you can still fall victim to package drift. This is when packages and their dependencies change behind the scenes, leaving you with nasty surprises to unpick.

You can specify the versions of your apt packages like this:

apt-get install package=version

but what about that package’s dependencies? And their dependencies? And so on…
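
You could chase these down by hand – apt-cache will show you a package’s immediate dependencies, and dpkg the versions actually installed – but that gets tedious quickly:

$ apt-cache depends vim
$ dpkg -l | grep '^ii.*vim'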


The following command installs the given package you’re concerned with, and then spits out a Dockerfile RUN instruction with the specific versions used as dependencies. You can then place this into your Dockerfile, and if the build fails in the future, you’ll know something has changed.

The following example does this for vim, but you can change that package to whatever you like.

$ docker run imiell/get-versions vim
RUN apt-get install -y vim=2:7.4.052-1ubuntu3 vim-common=2:7.4.052-1ubuntu3 vim-runtime=2:7.4.052-1ubuntu3 libacl1:amd64=2.2.52-1 libc6:amd64=2.19-0ubuntu6.5 libc6:amd64=2.19-0ubuntu6.5 libgpm2:amd64=1.20.4-6.1 libpython2.7:amd64=2.7.6-8 libselinux1:amd64=2.2.2-1ubuntu0.1 libselinux1:amd64=2.2.2-1ubuntu0.1 libtinfo5:amd64=5.9+20140118-1ubuntu1 libattr1:amd64=1:2.4.47-1ubuntu1 libgcc1:amd64=1:4.9.1-0ubuntu1 libgcc1:amd64=1:4.9.1-0ubuntu1 libpython2.7-stdlib:amd64=2.7.6-8 zlib1g:amd64=1:1.2.8.dfsg-1ubuntu1 libpcre3:amd64=1:8.31-2ubuntu2 gcc-4.9-base:amd64=4.9.1-0ubuntu1 gcc-4.9-base:amd64=4.9.1-0ubuntu1 libpython2.7-minimal:amd64=2.7.6-8 mime-support=3.54ubuntu1.1 mime-support=3.54ubuntu1.1 libbz2-1.0:amd64=1.0.6-5 libdb5.3:amd64=5.3.28-3ubuntu3 libexpat1:amd64=2.1.0-4ubuntu1 libffi6:amd64=3.1~rc1+r3.0.13-12 libncursesw5:amd64=5.9+20140118-1ubuntu1 libreadline6:amd64=6.3-4ubuntu2 libsqlite3-0:amd64=3.8.2-1ubuntu2 libssl1.0.0:amd64=1.0.1f-1ubuntu2.8 libssl1.0.0:amd64=1.0.1f-1ubuntu2.8 readline-common=6.3-4ubuntu2 debconf=1.5.51ubuntu2 dpkg=1.17.5ubuntu5.3 dpkg=1.17.5ubuntu5.3 libnewt0.52:amd64=0.52.15-2ubuntu5 libslang2:amd64=2.2.4-15ubuntu1 vim=2:7.4.052-1ubuntu3

Take the RUN line that was output and place it in your Dockerfile, e.g. where you had

FROM ubuntu:14.04
RUN apt-get update && apt-get install -y vim

you would now have:

FROM ubuntu:14.04
RUN apt-get update && apt-get install -y vim=2:7.4.052-1ubuntu3 vim-common=2:7.4.052-1ubuntu3 vim-runtime=2:7.4.052-1ubuntu3 libacl1:amd64=2.2.52-1 libc6:amd64=2.19-0ubuntu6.5 libc6:amd64=2.19-0ubuntu6.5 libgpm2:amd64=1.20.4-6.1 [...]

If you try and rebuild this and something has changed, you’re far more likely to catch and identify it early. This should be particularly useful for those paranoid about any kind of changes to their builds. It won’t solve all Docker “build drift” problems, but it is a start.


The source code for this is available here.

Note that this assumes a Debian-flavoured base image (ubuntu:14.04). If you have a different base image, fork the repo and change the Dockerfile accordingly. Then, from the same directory:

$ docker build -t get-versions .
$ docker run get-versions vim

Help Wanted!

There are, I’m sure, plenty of improvements that could be made – maybe even scripts already in existence – to make this more robust and useful.

I’d also like to know if anyone can do this with other package managers.

If you have any ideas, do let me know:

Win at 2048 with Docker and ShutIt (Redux)

This post is based on material from Docker in Practice, available on Manning’s Early Access Program. Get 39% off with the code: 39miell


Steps to win at 2048

I blogged about this before, but have revisited it and made the process much easier and more robust.

1) To play 2048:

docker run -d -p 5901:5901 -p 6080:6080 --name win2048 imiell/win2048

2) Start up the vnc session:

vncviewer localhost:1

password for vnc is:


Play for a bit.

3) When ready to “save game”:

docker tag -f $(docker commit win2048) my2048tag

Then, if you lose

4) go back to your save point with:

docker rm -f win2048
docker run -d -p 5901:5901 -p 6080:6080 --name win2048 my2048tag

and repeat steps 2) to 4) until you complete 2048!


Command Annotations

docker run -d -p 5901:5901 -p 6080:6080 --name win2048 imiell/win2048

-d – run the container in daemon mode

-p … – publish ports related to vnc to the host

--name – give the container a name we can easily reference later

vncviewer localhost:1

Access the VNC session; display :1 corresponds to port 5901 (port 6080 is exposed for browser-based access).

docker tag -f $(docker commit win2048) my2048tag

The subshell (the docker commit) commits the container’s files to a new docker image. The command returns a reference to the image, which is in turn passed to the docker tag command. This sets the tag for the image to my2048tag (or whatever you choose to call it).
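
If you want to keep several save points rather than overwriting a single one, you can simply vary the tag each time, for example:

docker tag -f $(docker commit win2048) my2048tag:save1

and later start a container from whichever saved image you like.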

docker rm -f win2048
docker run -d -p 5901:5901 -p 6080:6080 --name win2048 my2048tag

We must remove the running container so that the names don’t clash when we start it up again with docker run.

Build it Yourself

If you want to build the image yourself:

docker build

will make a local image you can tag. Code is here

Set Up a Deis (Docker-Friendly) PaaS on Digital Ocean for $0.18 Per Hour in Six Easy Steps Using ShutIt


What’s Deis?

Deis is an open source PaaS that is Docker-friendly.

What’s Digital Ocean?

A very cheap virtual private server provider like AWS, but simpler. Sign up here

What’s a PaaS?

Platform as a service, like Heroku. Get an app deployed on demand.

What’s ShutIt?

A tool for automating complex deployments. More here

The Six Steps

1) Set up a Digital Ocean account

2) Get a personal access token by clicking on “Generate New Token”.

3) Set up a free temporary domain (if you don’t have a spare one)

I used a free domain service for this, and set up a temporary domain there.

Note that this script will wipe any pre-existing settings on Digital Ocean for this domain. If you create one as described above, there’s nothing to worry about.

4) Clone the repo

git clone 

5) Copy and edit the Dockerfile subbing in your access token and domain

cd shutit-coreos-do/deis/dockerfile
cp Dockerfile
sed -i 's/YOUR_ACCESS_TOKEN/<your access token here>/' Dockerfile
sed -i 's/YOUR_DOMAIN/<your domain here>/' Dockerfile

6) Run the build

docker build --no-cache .

and wait a good while. Well done! You’ve now got your own Deis cluster! Admin user is admin/admin

Use It

Instructions for use are here:

Here are some instructions to get you going. We’re going to deploy the example from here in the deis documentation:

curl -sSL | sh
ln -fs $PWD/deis /usr/local/bin/deis  # or elsewhere on your path
deis login # <input admin/admin>
mkdir -p /tmp/example-go && cd /tmp/example-go
deis create 
#Creating application... done, created example-go
deis pull deis/example-go:latest 
#Creating build... done, v5 
curl -s http://example-go.<your domain>
#Powered by Deis

Be sure you switch it off before the hour is up or it will cost you another whole $0.18 per hour!

This method builds on the ShutIt project, and the ShutIt distro.

Any problems, please contact me below/via github issues/other social networks.