DockerConEU 2015 Talk – You Know More Than You Think

This is the prepared text of a talk I gave at DockerConEU 2015.

 

2015_Dockercon_EU_4_3_new1

‘Trust yourself. You know more than you think.’ If I had to distil this talk into one phrase, it would be that. My experience in initiating and prosecuting change within an organisation has only hardened my view that it’s you – the engineer who’s been shipping code for years, the technical leader who’s been fighting the good fight in meeting after meeting – it’s you that knows what needs to be done, and often that involves doing things that may feel, or even be, wrong. And I hope that by the end you’ll feel emboldened to trust yourself that bit more and enable Docker for your own organisation in your own way.

2015_Dockercon_EU_4_3_new2

2015_Dockercon_EU_4_3_new3

My name is Ian Miell and I’m honoured to be talking here today, given that so many great people submitted talks I’d have liked to hear.

So why should you listen to me? From a Docker perspective I’ve done all of the usual community things:

– written one of the three million docker ecosystem tools (sorry)
– had so many builds on dockerhub they had to ask me to stop (sorry)
– written a Docker and DevOps blog
– spoken at meetups
– published a video on Docker aimed at web developers
– published a book on Docker in Practice

I worked for 14 years for the leading supplier of online sports betting and casino software, and pounced on Docker as a solution to many of the problems I faced as Head of DevOps (whatever that means) and latterly when I was put in charge of IT Infrastructure as well. Using this and other experience, I then moved on somewhere else where a large chunk of my responsibility is to be a reference point for Docker.

It’s the practical angle I want to talk about today. About how getting Docker done in a living, breathing organisation meant breaking some rules, and how it worked out for me. I hope it’s useful to some of you, and if not, I hope it’s at least interesting.

 

2015_Dockercon_EU_4_3_new4

First I need to set the scene of why I decided to go all-in on Docker.

2015_Dockercon_EU_4_3_new5

In September 2013 I read in Wired magazine about a new technology called Docker.

2015_Dockercon_EU_4_3_new6

The timing couldn’t have been more perfect. I was a DevOps Manager with no budget for DevOps, in a company that couldn’t get a useful VM infrastructure going. We were a software company with 25 customers, and of those about 6 were big players. They competed with each other to throw changes out as fast as possible and were willing to accept technical debt, even wear it as a badge of commitment to delivery. I knew this because I lived daily with the consequences. I was responsible for managing outages.

2015_Dockercon_EU_4_3_new7

I liked to argue that we had exactly the wrong number of customers to avoid technical debt – if we had two customers of similar size we could do things consistently between them, if we had 200,000 we could do what we wanted, and they would vote with their feet. On top of this we were a time and materials company – and which customer would want to pay for testing that other customers get the benefit from?

As it was we had a few big players with big pockets who wanted to differentiate themselves from their rivals by pushing forking changes out faster. Not a great environment for productization, and there were subtle and significant differences between customer systems.

2015_Dockercon_EU_4_3_new8
As Live Problem Manager and DevOps Head my biggest frustration was an inability to create realistic customer-specific environments to reproduce problems. Environments were a rare commodity, hand-crafted by old hands like me based on folklore, wiki pages full of bash commands hurriedly noted, and old-fashioned grit. New features always took priority and no customer wanted to pay to sort out technical debt, but they were happy to pay to shout at me.

2015_Dockercon_EU_4_3_new9
I’d long bemoaned this and wondered what could be done about it. The standard answers – VMs plus Chef/Puppet/Ansible – were not yielding results, since no-one wanted to tackle this 15-year-old software stack and no-one had the time. As an aside, one of the commonest objections to my advocacy of Docker was: ‘you can do all that with VMs’. Which is true, but it’s far less convenient, iterations are slower, and in my experience it’s less stable and more painful to use. Tellingly, our technical presales engineer – who had to manage multiple environments in various states – went back to maintaining shell scripts for his laptop’s envs because VMs ‘were not worth the hassle’. In any case, despite the talk, no-one had managed to demonstrate this working. As I’ll discuss later, the time and resource savings that containers bring change the paradigm of development.
After reading the article I checked the project out, started using it, and on Monday went into work with a proof of concept

2015_Dockercon_EU_4_3_new10

which led to my first choice:

2015_Dockercon_EU_4_3_new11

At this point I could have gone to my management and argued the case for this new technology, and waited for a decision and a budget. I chose not to. Instead I sent an email out on the Monday asking whether anyone else had heard of it and whether anyone wanted to work on a way of solving some of our problems. I’ll talk about what happened next in a moment, but at this point I want to talk about failure.

It was failure that drove me down this path, because I’d been here before. In 2006 I and a few others advocated the use of Erlang to solve some specific engineering challenges we were going to have in the coming years to do with scalability and real-time data. We went to the CTO and asked for his support. After many months a relatively insignificant and unrelated toy project was given to someone else in the company apparently uninterested in learning a new technology, and we watched as the results withered on the vine. I still don’t know whether that was the right outcome or not, but I’d seen enough not to let the fate of my vision depend on something I had so little influence over.

As an aside, I think the parallels between Erlang and the whole ‘Data Centre as a Computer’ movement, of which Docker is a part, are under-explored. In case you don’t know, Erlang was an engineering solution to the problem of fault tolerance in telco data centres, and it has a message-passing architecture.

2015_Dockercon_EU_4_3_new14

You can’t get much more microservices than services built from millions of co-routines that take up only a few dozen bytes of memory by default, and this standard Erlang diagram is one that will look familiar to anyone who’s used Kubernetes. Anyone interested in what happens next with microservices will do well to look at the history of Erlang.

But back to the point. After sending out my email I had a few responses and a small group of people interested in taking things further. This had a number of beneficial consequences in the following months:

– the team was motivated, and the quality of engineers involved was high
– those that didn’t deliver anything found they had no voice and dropped out
– conversely, those that did deliver got a say and felt empowered to contribute more
– it was fun! solving long-standing problems one by one and pulling together was incredibly satisfying
– time was allocated naturally to where we felt it was important – there was no bureaucracy, no deliverables, no project plans, no business case

By focussing on building solutions rather than seeking support elsewhere a lot of time and energy was saved. A lot, but not all. I had to pull in a lot of favours to get resources and access to things outside the normal processes. A lot of chatter ensued in the organisation about what we were doing, and the supposed conflict between what we were working on and the more strategic solutions being posited by others.

Much of this chatter centred around our solutions not being ‘industry standard’, which leads me to my next choice:

2015_Dockercon_EU_4_3_new12
When I saw Docker I thought ‘great!’ – I can run multiple reproducible environments cheaply, save state usefully, all without much hassle and outlay. So the natural and immediate plan was to simply shove everything into a container and allow everyone to consume it as a reference.

2015_Dockercon_EU_4_3_new13

I went onto mailing lists to ask about how to achieve this, and got responses like ‘I wouldn’t start from here – you should be using microservices, that’s what Docker is for’.
Fortunately I had the confidence to decide against doing this, mainly because the task was too great. Converting a 15-year-old hub-and-spoke architecture with millions of lines of code and hundreds of apps was a project I didn’t want to take on in my spare time, and would have doomed my efforts to complete failure. I think the area of legacy is a fascinating one for Docker, and it’s going to have to deal with it. Based on experiences at these and other organisations, I’ve come to believe that the approach to Docker for legacy apps should be in three stages:

2015_Dockercon_EU_4_3_new15

– Monolithic build, the speed of which enables
– A DevOps workflow, which naturally leads to
– A break-up into microservices

The point is that real projects, real budgets (0 in my case) cannot afford to do everything properly, and even if they try, they risk running into the sand and losing momentum. An evolutionary approach is required.
Given that I had a monolith to contain, my next choice was how to build it. Again, I need to set the scene.

2015_Dockercon_EU_4_3_new16
Since environments were created by hand by experienced engineers in whatever inconsistent environments our customers supplied, there was a lack of configuration management experience where I worked. Nonetheless, I figured I should try and do the standard thing, and spent one of my precious weekend days trying to learn Chef by watching some introductory videos. A couple of hours later, time was running out and I was no nearer. At this point – and out of frustration – I whipped up a solution which I knew would work for me using tools I already knew – Python, bash and (p)expect.

I didn’t believe this was ideal, but I’d built what I needed, I had complete control over it, and I knew that our project could deliver something useful quickly. When I showed what I’d done to people at work, the response typically was: ‘you should be using industry standard tools for this’, to which my response was: ‘agreed, here’s the shell scripts, here’s how I’ve done it, please replicate what I’ve done with whichever tool you like and we’ll move to it’. No-one did this.

This approach proved to be very useful for the project for a number of reasons:

One, I’d designed it to be easy to hack on. As an engineer, all you needed to do to contribute was cut and paste code that amounted to shell commands and re-run on your laptop. As our work got taken up through the organisation, contributions were easily made by others.

Two, the tool did exactly what we needed to achieve our goal: no more, and no less. If it didn’t, we built it. This was fun, and empowering. I learned a hell of a lot about config management tooling challenges, which has helped me a great deal as I’ve moved on and picked up other ‘real’ configuration management tools as part of my work. [As an aside, I did a similar thing with CI tools like Jenkins – I implemented a minimal CI tool in bash called ‘cheapci’, available on GitHub, which also helped me understand the problems of CI.]

Three, it allowed us to defer the decision about what configuration management tool to use. Since I’d designed it to organise a series of shell scripts and run them in a defined order, one of the outputs was a list of commands that could be fed into any tool you liked, or even run by hand.
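To make this concrete, here is a minimal sketch of the idea – not ShutIt itself, and the directory layout and names are invented for illustration – in which numbered shell scripts are applied in order to a running container, and every command is also written to a plain list that could later be handed to a ‘proper’ CM tool or replayed by hand:

#!/bin/bash
# Sketch only: apply numbered step scripts in order to a container, and keep
# a replayable record of exactly what was run.
set -e
CONTAINER=${1:-dev_env}
LOG=commands_run.txt
: > "$LOG"
for step in steps/[0-9][0-9]_*.sh; do
    echo "# --- $step ---" >> "$LOG"
    cat "$step" >> "$LOG"                         # the record any CM tool could consume
    docker exec -i "$CONTAINER" bash -e < "$step" # apply the step to the running container
done
echo "All steps applied; replayable command list is in $LOG"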

So, as with monoliths, I’m not sure ‘not invented here’ deserves such a bad rep. If your aim is to deliver and control your solution and you have the skills, building your own tool can be the right choice, at least for getting your project done. And if you want to get Docker working in your organisation, getting to useful is your first priority. The tool eventually became known as ShutIt (ie not Chef, not Puppet: ShutIt), and after 4 months of legal discussion it was open-sourced, and I still maintain it as one of those three million ecosystem tools. To be clear, I don’t suggest you use ShutIt (though I welcome contributions), I’m just using it as an example of how ‘not invented here’ can be the right choice on the ground.

At this point I want to dwell on one of the points I just made in order to bring me onto

2015_Dockercon_EU_4_3_new17

As I just mentioned, one of my design goals for ShutIt was that it should not get in the way of engineers that wanted to contribute to our endeavour. I didn’t want people to have to learn both Docker and another technology to contribute.

This was part of a broader plan to get people on board with what we were doing as far as possible, to reduce the barriers to entry, and to increase cross-fertilization between different parts of the company.

2015_Dockercon_EU_4_3_new18
One of the patterns of failure I’d seen in attempts at technical change was that it was guarded and defended by a group of elite engineers, with little attempt made to persuade others – ie those that would eventually have to build on and maintain their efforts – to understand what was going on. A former colleague of mine pointed out that he’d been forced to use Maven with little support and that this caused him great resentment.

So from the very beginning I made sure I talked openly about what we were up to, both inside and outside the company.

One thing I was absolutely determined to make sure of following my experience with Erlang was to ensure that I took responsibility for knowledge sharing.

First I tried doing lectures in a room, which went OK, but I had a lucky accident which led me down a different path. I couldn’t get a room for a session, so decided to do it over Google hangout instead. This made the whole process way more efficient. People would remain at their desks as I introduced the material, and then they worked through the examples, speaking up when they got stuck. It allowed people to work at their own pace, be interrupted and feel like they had me as a helper as they learned themselves. I could even get on with other work while they worked through it. The PR effect was massive: people felt part of the change, a number of great ideas came out of it, and it made people want to smooth our path. I couldn’t recommend this more. And I’m in good company:

2015_Dockercon_EU_4_3_new19

I came across this quote coincidentally last week and couldn’t agree with it more.

The other thing I did was put myself out there at meetups and talk openly about what we were up to and how we were doing it. What I found interesting was that a pattern of thought emerged which held people back from advocating change. Consistently, people would tell me that their organisation was dysfunctional

2015_Dockercon_EU_4_3_new20

but that company X, or even all other companies seemed to have it sorted out:

2015_Dockercon_EU_4_3_new21

I haven’t seen this place. I’ve seen some places do some things better than others, but usually these are the things that those businesses exist to do; it’s what they’re optimized for.

So much for the decisions. How did we get Docker taken up and what did it do for us?

One of our number worked in a team of forty engineers, and took up the challenge of getting his colleagues to use it.

There was significant resistance at first. Believe it or not, people were happy maintaining environments by hand. The critical insight we had was that while someone is on a project they don’t want to change, but when they come to start the next project, the benefits of the ‘dev env in a can’ are obvious.
Then, as more people started to use it, a network effect was created, and once about 8 were on it, the others soon followed.

2015_Dockercon_EU_4_3_new22
There are many I could mention, but I want to talk about three benefits here that Docker facilitated. As more and more engineers embraced it, these benefits became mutually reinforcing in a virtuous circle.

2015_Dockercon_EU_4_3_new23

By having a repeatable daily build of a development environment, friction between engineers and teams was significantly reduced.
Before Docker, environments were unique, so discussions about the software often devolved into discussions of the archaeology of that environment. Since we now had a reproducible way to get to the same starting state, reproduction of state became simpler.
I ran the 3rd line support team, and with Docker we could instantly get an environment up and running to recreate problems seen on live without begging favours from environment owners. In an early win for Docker I managed to reproduce a database engine crash from a single SQL command moments after we saw it happen on live. No need to find an environment that people were using, check it was OK to crash – this was contained on my laptop and I didn’t even have to wait for an OS to boot up.
2015_Dockercon_EU_4_3_new24

Interactions with test teams were made far simpler also. The daily build of the dev environment had some automated endpoint testing added to it, and the test team were notified by email, with the logs attached. This reduced the friction of interaction between testers and developers greatly, as there was no debate or negotiation about the environments being discussed.
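To give a flavour, the nightly job amounted to something like the sketch below – not our actual script, and the image name, endpoint and mailing address are placeholders:

#!/bin/bash
# Sketch of a nightly job: rebuild the dev environment image, poke an endpoint,
# and mail the log to the test team whatever the outcome.
LOG=/tmp/dev_env_build_$(date +%F).log
{
    docker build -t dev_env:daily . &&
    docker run -d --name dev_env_test -p 8080:8080 dev_env:daily &&
    # crude wait for the app to come up before testing it
    sleep 30 &&
    curl -fsS http://localhost:8080/healthcheck
} > "$LOG" 2>&1
RESULT=$?
docker rm -f dev_env_test > /dev/null 2>&1
mail -s "Dev env daily build: $([ "$RESULT" -eq 0 ] && echo OK || echo FAILED)" \
    test-team@example.com < "$LOG"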
Speed of delivery was also facilitated. Since fixes to the environment setup were shared across the team, there was a reduction in duplicated effort, and benefits fed into the automated build. Testing these changes was much quicker thanks to the layered filesystem, which our build tool leveraged to allow quick testing before a full phoenix build.

To show I eat my own dogfood, I wrote a website a few years ago in my spare time to track mortgage rates; it’s called themortgagemeter.com. I rebuild this site from scratch daily (video here). Doing that has had a number of very useful consequences. I can quickly make changes and run very simple tests against this static system, then throw it away if it doesn’t work. Very little overhead. It also acts as a canary – I’ve caught some interesting problems very quickly that I otherwise wouldn’t have had I only rebuilt on demand.
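For what it’s worth, the ‘canary’ is little more than a cron entry along these lines (the path and script name are invented – the real rebuild is driven by ShutIt); cron’s default behaviour of emailing any output means a broken build gets noticed within a day:

# Illustrative crontab entry: rebuild and redeploy the site from scratch
# nightly; any failure output lands in my inbox via cron's normal mechanism.
30 3 * * * cd /home/imiell/themortgagemeter && ./rebuild_from_scratch.sh > /dev/null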

2015_Dockercon_EU_4_3_new25
Quality was also improved by being able to iterate faster and earlier in the cycle than before. A vivid example of this was with DB upgrades. Formerly, as we’d only had a few environments that were expensive to re-provision, DB upgrades were a haphazard and costly affair that took place on infrastructure hosted centrally.

2015_Dockercon_EU_4_3_new26

Now DB upgrades could be iterated in very tight cycles on the dev laptop, reducing the cost of failure and improving the quality by the time the customer saw it.

2015_Dockercon_EU_4_3_new27

Our CI process was also changed in two significant ways.

2015_Dockercon_EU_4_3_new28

We had a monolithic model of CI where we had an enormous Jenkins server shared across all teams, and on which changes could not easily be made – if you wanted a new version of Python, for example, that created all sorts of headaches for the central IT team, who found it hard to maintain stability while accommodating these demands. Docker threw all that out:

2015_Dockercon_EU_4_3_new29

Teams could now take ownership of their own environments and take responsibility for stability themselves by producing their own images and containing dependencies to their own isolated environments.
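In practice, ‘owning your own environment’ looked roughly like the following sketch (the image name, base image and test command are illustrative): the team keeps a small Dockerfile alongside its code, and the CI job runs the tests inside that image rather than directly on the shared server.

# The team pins its own toolchain in an image; a new Python or library version
# is a one-line change here rather than a ticket to the central IT team.
cat > Dockerfile.team-ci <<'EOF'
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python2.7 python-pip make
RUN pip install pytest
EOF
docker build -f Dockerfile.team-ci -t team-a/ci-env .

# The CI job then runs the team's tests inside the team's own environment:
docker run --rm -v "$PWD":/src -w /src team-a/ci-env make test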

What we did went beyond that, as we used the Jenkins Swarm plugin (not to be confused with the Docker product) to allow the developers’ own laptops to run CI. As one of my colleagues put it ‘why is it so hard for me to provision a VM when I have a Core i5 laptop on my desk that’s mostly idle?‘. So developers would submit their hardware to the Jenkins server as slaves, and Docker images were run on the hardware. This had the interesting property of allowing the compute to scale with the team – the more people that were in work committing changes, the more compute was available to use.
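Roughly speaking – and treat the exact flags as illustrative rather than gospel – each laptop ran the Swarm plugin’s client, which registers the machine with the Jenkins master as a labelled slave; jobs restricted to that label then run in Docker containers on whichever laptops happen to be connected:

java -jar swarm-client.jar \
    -master http://jenkins.internal:8080 \
    -username ci -password "$CI_PASSWORD" \
    -name "$(hostname)-docker-slave" \
    -labels docker \
    -executors 2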

2015_Dockercon_EU_4_3_new30

Once we’d done all this and got Docker embedded we looked for ways to measure the return on investment. We had plenty of anecdotal evidence by this point, and positive feedback from both engineers and customers.

There was one small but vivid example of the savings made. There was an escrow process that we had to go through with some customers that involved demonstrating to an auditor that in the event of a disaster the customer could reconstruct the website without us. Traditionally, this had taken a fair number of days to work through, and a good amount of negotiation with the auditor to get them to accept. In addition, it was un-repeatable – it took n days each time. With Docker and the tooling we’d built, we not only completed the task in one-fifth of the time, but also the auditor (who had never heard of Docker) was satisfied after watching one run-through that reconstruction was replicable, and the developers on that team got their environment into a container.

2015_Dockercon_EU_4_3_new31

These sorts of anecdotes were all very well, but we wanted to put real numbers on it. To this end we performed a survey of engineers that were actively using it, which boiled down to a simple question: how much time is this saving you a month? To cut to the chase, the rough figure was around 4 days for those users that actively embraced it. Interestingly, we found that engineers were reluctant to admit time was saved, as they felt that somehow this implied they’d been inefficient pre-Docker.

In any case, if we took a 4-day-per-month figure and applied that across the 600 engineers, we came up with a figure of about 130 person-years saved per year, which amounted to a lot of money, as you can imagine. And bear in mind that this was before we got to improvements in customer perception, which is a less tangible but no less important benefit, or even efficiencies in hardware usage, which were significant.
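The arithmetic behind that figure is nothing more sophisticated than this (assuming roughly 220 working days in a person-year):

# 4 days/month saved, 12 months, 600 engineers, ~220 working days per person-year
days_saved_per_month=4
engineers=600
working_days_per_year=220
echo $(( days_saved_per_month * 12 * engineers / working_days_per_year ))
# => 130 (person-years saved per year, roughly)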

Conclusion

These decisions are not advice! All of these decisions were made in the context I had worked in for over a decade. If you already have working CM tools, maybe you should use those! If your C-level have a good history of funding and delivering promising projects, maybe a skunkworks approach is needlessly hamstringing yourself. As I said at the beginning, you’re the one in your current situation and in the best place to figure out what needs to be done.

Thanks for listening.

2015_Dockercon_EU_4_3_new33

The experience discussed here informed the writing of this book: Get 39% off with the code 39miell

dip


Docker Migration In-Flight CRIU

Docker CRIU Demo

tl;dr

An automated, annotated, interactive demo of live container migration using Virtualbox, Vagrant and ShutIt.


CRIU?

CRIU (‘Checkpoint/Restore In Userspace’) is a technology designed to allow running Linux programs to be checkpointed and restored, with the work driven from userspace.

Containerization is a natural fit for this, since in theory most of a container’s dependencies are contained within it and thus easier to reason about.

Work is proceeding on this technology, and this demo gives a flavour of what’s possible. It’s based on a post CircleCI recently published.

It shows:

  • A container being checkpointed and restarted
  • A container with state being checkpointed and restarted
  • A container with state being moved from one VM to another

You can see it in action here, and the code is here.
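If you just want a feel for the underlying mechanism without running the demo, the classic CRIU exercise on a plain process looks roughly like this (run as root; the image directory is arbitrary, and this is not the demo’s own code):

setsid sleep 1000 < /dev/null &> /dev/null &   # a stand-in long-running 'service'
mkdir -p /tmp/ckpt
criu dump    -D /tmp/ckpt -t "$(pgrep -f 'sleep 1000')" && echo checkpointed
criu restore -D /tmp/ckpt -d                            && echo restored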

I think this technology is a giant leap forward for Docker. The applications for this for testing, delivery and operations are immense.

Another recent demo involving live Quake migration is here.

 

Refs:

http://criu.org/Docker

http://blog.kubernetes.io/2015/07/how-did-quake-demo-from-dockercon-work.html

 

A High Availability Phoenix and A/B Deployment Framework using Docker


tl;dr

A four-step worked example of a process for creating a phoenix-deployed service fronted by haproxy, with two swappable backends deployed using Docker containers, enabling continuous delivery with minimal effort.

Introduction

I’ve long been interested in Phoenix deployment, and in Docker as a means of achieving it. I’ve built my own Phoenix deployment framework, which I use to automate deployment and to rebuild and redeploy from scratch daily.

Phoenix Deployment?

Phoenix deployment is the principle of rebuilding ‘from scratch’ rather than updating an environment. Tools like Chef and Puppet are great for managing long-lived servers, but nothing beats regular rebuilds and deployments for ensuring you can reason about your environments.

I’ve been using Phoenix deployment to rebuild applications and services on a daily basis regardless of whether changes have been made. See here and here for previous posts on the subject.

Architecture

If you’re refreshing a service, though, you generally need to minimise downtime while doing so. To achieve this, I use HAProxy to provide a stable endpoint in front of two backends – ‘old’ and ‘new’ – as this figure shows:

Phoenix_Shutit

Image A and Image B are constructed from scratch each time using ShutIt, an automation tool designed with Phoenix deployments in mind.
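For a flavour of what sits behind the figure, the generated HAProxy configuration is conceptually along these lines (a hand-written sketch, not the file phoenix.sh actually emits): port 8080 is the stable endpoint, 8081 and 8082 are the ‘A’ and ‘B’ sides, and a redeploy simply changes which server is enabled before reloading HAProxy.

cat > haproxy.cfg <<'EOF'
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend stable_endpoint
    bind *:8080
    default_backend app

backend app
    server side_a 127.0.0.1:8081 check           # currently live
    server side_b 127.0.0.1:8082 check disabled  # swapped in on the next deploy
EOF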

 

Worked Example

Here’s a worked example. In it you will create an image that acts as a simple echo server, and then change the code and redeploy the service as a server that converts strings to hex.

Note: tested on a Digital Ocean Ubuntu 14.04 image.

 

1) Install pre-requisites:

sudo apt-get update && sudo apt-get install python-pip git docker.io
sudo pip install shutit

2) Create phoenix build

Create the skeleton script, accepting the defaults:

user@host$ shutit skeleton

# Input a new directory name for this module.
# Default: /tmp/shutit_cohort

# Input module name.
# Default: shutit_cohort

# Input a unique domain.
# Default: imiell.shutit_cohort

# Input a delivery method from: ('docker', 'dockerfile', 'ssh', 'bash').
# Default: docker

docker = build within a docker image
dockerfile = call "shutit build" from within a dockerfile
ssh = ssh to target and build
bash = run commands directly within bash

================================================================================
Run:
cd /tmp/shutit_cohort/bin && ./build.sh
to build.
An image called shutit_cohort will be created
and can be run with the run.sh command in bin/.
================================================================================

 

3) Edit build

In this step you’re going to set up your echo server application as a simple shell script embedded in a container image.

Go to the directory just created, eg for the above output:

cd /tmp/shutit_cohort

and open the python file in there:

vi shutit_cohort.py

and change the build method so it looks like this:

def build(self,shutit):
[...]
    # shutit.set_password(password, user='')
    # - Set password for a given user on target       
    shutit.install('socat')
    shutit.send(r'''echo socat tcp-l:80,fork exec:/bin/cat > /echo.sh''')
    shutit.send('chmod +x /echo.sh')

The line:

shutit.install('socat')

ensures that socat is installed on the container, and the next lines:

    shutit.send(r'''echo socat tcp-l:80,fork exec:/bin/cat > /echo.sh''')
    shutit.send('chmod +x /echo.sh')

create the file ‘/echo.sh’ on the container as an executable script that uses socat to act as an echo server.

You want your service to run the echo script, so edit the file ‘bin/run.sh’ and change the last line from this:

${DOCKER} run -d --name ${CONTAINER_NAME} ${DOCKER_ARGS} ${IMAGE_NAME} /bin/sh -c 'sleep infinity'

to this:

${DOCKER} run -d --name ${CONTAINER_NAME} ${DOCKER_ARGS} ${IMAGE_NAME} /bin/sh -c '/echo.sh'

ie replace the ‘sleep infinity’ command with your ‘/echo.sh’ command.

Note: this will use ports 8080-8082.

If these are used by other services, change the ports in phoenix.sh.

4) Build and deploy the service

OK, we’re ready to go.

cd bin
sudo ./phoenix.sh

This kicks off the build and deploys the service. It builds and runs the HAProxy server and the image that acts as backend ‘A’.

# CONTAINER_ID: 37352b9918bd08b843f2c5174266e1af199b6d05520551b4f9f0489342995618
# BUILD REPORT FOR BUILD END phoenix_imiell_1440614873.89.890216
###############################################################################

Build log file: /tmp/shutit_root/phoenix_imiell_1440614873.89.890216/shutit_build.log
/tmp/shutit_coolly/bin
f500e552cdd20445266ed4d6fa2d1ba3d55ca9845ea39744fc1c8ba1dd96a762

docker ps -a shows our two servers: haproxy taking requests on the host network, and passing them to the backend on 8081:

$ docker ps -a | grep shutit
f500e552cdd2  shutit_coolly          "/bin/sh -c /echo.sh"     0.0.0.0:8081->80/tcp   shutit_coolly
b28abcb76612  shutit_coolly_haproxy  "haproxy -f /usr/loca"                           shutit_coolly_haproxy

Now test your echo server:

imiell@phoenix:/tmp/shutit_coolly/bin$ telnet localhost 8080
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
hello phoenix
hello phoenix

Note: You only need sudo if you need sudo to run docker on your host.

5) Iterate and re-deploy

Now you’re going to change the server and redeploy. We want to use cat’s -A flag to output more details when echoing, so change the socat line in the python script to:

shutit.send(r'''echo socat tcp-l:80,fork exec:'/bin/cat -A' > /echo.sh''')

and re-run phoenix.sh as you did before. When done, docker ps -a now shows the container running on port 8082 (ie the ‘B’ port):

$ docker ps -a | grep shutit
af0abdd3abc9 shutit_coolly         "/bin/sh -c /echo.sh"  0.0.0.0:8082->80/tcp shutit_coolly
b28abcb76612 shutit_coolly_haproxy "haproxy -f /usr/loca"                      shutit_coolly_haproxy

and to verify it’s worked:

$ telnet localhost 8080
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
hello phoenix
hello phoenix^M$

Conclusion

By putting this code into git you can easily create a continuous deployment environment for your service that doesn’t interfere with other services, and is easy to maintain and keep track of.

I use this framework for various microservices I use on my home server, from databases I often want to run queries on to websites I manage. And this blog :)

Quick Intro to Kubernetes


Overview

Before we get going with Kubernetes, it’s a good idea to get a picture of Kubernetes’ high-level architecture.

kubernetes-key-concepts

Kubernetes has a master-minion architecture. Master nodes are responsible for receiving orders about what should be run on the cluster and orchestrating its resources. Each minion has Docker installed on it, and a ‘kubelet’ service, which manages the pods (sets of containers) running on each node. Information about the cluster is maintained in etcd, a distributed key-value data store, and this is the cluster’s source of truth.

What’s a Pod?

We’ll go over it again later in this article, so don’t worry about it so much now, but if you’re curious, a pod is a grouping of related containers. The concept exists to facilitate simpler management and maintenance of Docker containers.

The end goal of Kubernetes is to make running your containers at scale a simple matter of declaring what you want and letting Kubernetes take care of ensuring the cluster achieves your desires. In this article you will see how to scale a simple service to a given size by running one command.

Why was Kubernetes built?

Kubernetes was originally developed by Google as a means of managing containers at scale. Google has been running containers for over a decade at scale, and decided to develop this container orchestration system when Docker became popular. It builds on the lessons learned from this extensive experience. It is also known as ‘K8s’.

Installation

To install Kubernetes you have a choice. You can either install directly on your host, which will give you a single-minion cluster, or use Vagrant to install a multi-minion cluster managed with VMs.

To install a single-minion cluster on your host, run:

export KUBERNETES_PROVIDER=vagrant
curl -sS https://get.k8s.io | bash

Latest instructions here

If you want to install a multi-minion cluster, you have another choice. Either follow the instructions on the Kubernetes GitHub repository (see ‘Latest instructions’ above) for Vagrant, or you can try an automated script maintained by me which sets up a two-minion cluster: see here

If you have Kubernetes installed you can follow along from here. The following output will be based on a multi-node cluster. Next we’re going to start simply by creating a single container and using Kubernetes to scale it up.

Scaling a Single Container

You can start up a pod from an image stored on the Docker Hub with the ‘run-container’ subcommand to kubectl.

The following command starts up a pod, giving it the name ‘todo’, and telling Kubernetes to use the dockerinpractice/todo image from the DockerHub.

$ kubectl run-container todo --image=dockerinpractice/todo

Now if you run the ‘get pods’ subcommand you can list the pods, and see that it’s in a ‘Pending’ state, most likely because it’s downloading the image from the Docker Hub:

$ kubectl get pods | egrep "(POD|todo)"
POD        IP          CONTAINER(S)       IMAGE(S) HOST LABELS STATUS  CREATED         MESSAGE
todo-hmj8e 10.245.1.3/ run-container=todo                      Pending About a minute

After waiting a few minutes for the todo image to download, you will eventually see that its status has changed to ‘Running’:

$ kubectl get pods | egrep "(POD|todo)"
POD        IP         CONTAINER(S) IMAGE(S) HOST                  LABELS             STATUS  CREATED   MESSAGE
todo-hmj8e 10.246.1.3                       10.245.1.3/10.245.1.3 run-container=todo Running 4 minutes
                      todo dockerinpractice/todo                                     Running About a minute

This time the ‘IP’, ‘CONTAINER(S)’ and ‘IMAGE(S)’ columns are populated. The IP column gives the address of the pod (in this case ‘10.246.1.3’), the container column has one row per container in the pod (in this case we have only one, ‘todo’). You can test that the container (todo) is indeed up and running and serving requests by hitting the IP address and port directly:

$ wget -qO- 10.246.1.3:8000
[...]

Scale

At this point we’ve not seen much difference from running a Docker container directly. To get your first taste of Kubernetes you can scale up this service by running a resize command:

$ kubectl resize --replicas=3 replicationController todo
resized

This command has specified to Kubernetes that we want the todo replication controller to ensure that there are three instances of the todo app running across the cluster.

What is a replication controller?
A replication controller is a Kubernetes service that ensures that the
right number of pods are running across the cluster.

 

$ kubectl get pods | egrep "(POD|todo)"
POD        IP         CONTAINER(S) IMAGE(S)              HOST                  LABELS             STATUS  CREATED    MESSAGE
todo-2ip3n 10.246.2.2                                    10.245.1.4/10.245.1.4 run-container=todo Running 10 minutes
                      todo         dockerinpractice/todo                                          Running 8 minutes
todo-4os5b 10.246.1.3                                    10.245.1.3/10.245.1.3 run-container=todo Running 2 minutes
                      todo         dockerinpractice/todo                                          Running 48 seconds
todo-cuggp 10.246.2.3                                    10.245.1.4/10.245.1.4 run-container=todo Running 2 minutes
                      todo         dockerinpractice/todo                                          Running 2 minutes

Kubernetes has taken the resize instruction for the todo replication controller and ensured that the right number of pods is started up. Notice that it placed two on one host (10.245.1.4) and one on another (10.245.1.3). This is because Kubernetes’ default scheduler has an algorithm that spreads pods across nodes by default.

You’ve started to see how Kubernetes can make management of containers easier across multiple hosts. Next we dive into the core Kubernetes concept of pods.

Pods

A pod is a collection of containers that are designed to work together in some way and that share resources.

Each pod gets its own IP address, and its containers share the same volumes and network port range. Because a pod’s containers share a ‘localhost’, the containers can rely on the different services being available and visible wherever they are deployed.

The following figure illustrates this with two containers that share a volume.

pods

In the above figure Container1 might be a webserver that reads data files from the shared volume which is in turn updated by Container2. Both containers are therefore stateless, while state is stored in the shared volume.

This facilitates a microservices approach by allowing you to manage each part of your service separately, allowing you to upgrade one image without needing to be concerned with the others.

The following Pod specification defines a complex pod that has a container that writes random data (simplewriter) to a file every five seconds, and another container that reads from the same file (simplereader). The file is shared via a volume (pod-disk).

{
  "id": "complexpod",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
  "manifest": {
    "version": "v1beta1",
    "id": "complexpod",
    "containers": [{
      "name": "simplereader",
      "image": "dockerinpractice/simplereader",
      "volumeMounts": [{
        "mountPath": "/data",
        "name": "pod-disk"
      }]},{
        "name": "simplewriter",
        "image": "dockerinpractice/simplewriter",
        "volumeMounts": [{
          "mountPath": "/data",
          "name": "pod-disk"
        }]
      }],
      "volumes": [{
        "name": "pod-disk",
        "emptydir": {}
      }]
    }
  }
}

Have a look at the pod specification above. The mount path is the path to the volume mounted on the filesystem of the container. This could be set to a different location for each container. The volume mount name refers to the matching name in the pod manifest’s ‘volumes’ definition. The ‘volumes’ attribute defines the volumes created for this pod. The name of the volume is what’s referred to in the volumeMounts entries above. ’emptydir’ is a temporary directory that shares a pod’s lifetime. Persistent volumes are also available.

To load this pod specification, create a file with the above listing (e.g. complexpod.json) and run:

$ kubectl create -f complexpod.json
pods/complexpod

After waiting a minute for the images to download, you can see the log output of the container by running ‘kubectl log’ and specifying first the pod and then the container you are interested in.

$ kubectl log complexpod simplereader
 2015-08-04T21:03:36.535014550Z '? U
 [2015-08-04T21:03:41.537370907Z] h(^3eSk4y
 [2015-08-04T21:03:41.537370907Z] CM(@
 [2015-08-04T21:03:46.542871125Z] qm>5
 [2015-08-04T21:03:46.542871125Z] {Vv_
 [2015-08-04T21:03:51.552111956Z] KH+74 f
 [2015-08-04T21:03:56.556372427Z] j?p+!

What Next

We’ve just scratched the surface of Kubernetes’ capabilities and potential here, but this should give a flavour of what can be done with it, and how it can make orchestrating Docker containers simpler.

Take OpenShift for a spin in four commands


OpenShift

OpenShift is RedHat’s application Platform as a Service (aPaaS). It builds on Docker and Kubernetes to provide an enterprise-level service for application provisioning.

PaaSes bring a great many benefits with them: centralised resource management, quotas, isolation, and more.

OpenShift is a relatively old product in aPaaS terms. What’s changed recently is that version 3 has been substantially rebuilt around Docker and Kubernetes, and written in Go – technologies which are seen as stable building blocks for the future.

In this article you’re going to get OpenShift set up in four commands, and see an application provisioned, built and deployed using just a login and a GitHub reference.

NOTE: It will be safer/easier to run if you already have Vagrant and Virtualbox installed. The script will try to install them for you if they’re not already there, though. This has primarily been tested on Ubuntu and Mac (on tin). If you have other operating systems, please get in touch if you come across problems.

Get Going

Run these four commands (assumes you have pip and git already):

sudo pip install shutit
git clone --recursive https://github.com/ianmiell/shutit-openshift-origin
cd shutit-openshift-origin
./run.sh

And you will get a desktop with OpenShift installed.

NOTE: You’ll need a decent amount of memory free (2G+), and may need to input your password for sudo. You’ll be prompted for both. You can choose to continue with less memory, but you may go into swap or just run out.

NOTE: Assumes you have pip. If not, try this:

sudo apt-get install python-pip || yum install python-pip || yum install python || sudo easy_install python || brew install python

Open up a browser within this desktop and navigate to:

https://localhost:8443

and bypass all the security warnings until you end up at the login screen.

Openshift_login

Now log in as hal-1 with any password.

Build a nodejs app

OK, you’re now logged into OpenShift as a developer.

Openshift_project

 

Create a project by clicking ‘Create’ (a project has already been set up, but it has quotas configured to demonstrate limits). Fill out the form.

OpenShift_create_proj

and click ‘Create’ again.

Once the Project is set up, click on ‘Create’ again, in the top right hand side this time.

OpenShift_project_github

Choose a builder image (pick nodejs:0.10). This builder image defines the context in which the code will get built. See my source to image post for more on this.

OpenShift-builder_image

Now click on ‘Create’ on the nodejs page.

If you wait, then after a few minutes you should see a screen like the following:

OpenShift_start_build

and eventually, if you scroll down you will see that the build has started:

OpenShift_building

Eventually, you will see that the app is running:

OpenShift_running

and by clicking on ‘Browse’ and ‘Pods’ you can see that the pod has been deployed:

OpenShift_pods

Now, how to access it? If you look at the services tab:

OpenShift_service

you will see an IP address and port number to access. Go there, and voila, you have your nodejs app:

OpenShift_nodejs_app

Further Work

Now fork the GitHub repo, make a change, and do a build against this fork.

If you can’t be bothered, use my fork at: https://github.com/docker-in-practice/nodejs-ex

 

Conclusion

There’s a lot more to OpenShift than this. If you want to read more see here:

https://docs.openshift.org/latest/welcome/index.html

Any problems with this, raise an issue here:

https://github.com/ianmiell/shutit-openshift-origin

or leave a message

 

 

RedHat's Docker Build Method – S2I


Overview

‘Source To Image’ is a means of creating Docker images by depositing source code into a separately-defined Docker image that is responsible for building the image.

You may be wondering why such a build method was conceived. The principal reason is that it allows application developers to make changes to their code without being concerned with the details of Dockerfiles, or even Docker images. If the image is delivered to an aPaaS (application platform as a service), the individual engineer need not know about Docker at all to contribute to the project! This is very useful in an enterprise environment where there are large numbers of people that have specific areas of expertise and are not directly concerned with the details of the build.

STI

Other Benefits

Once the process is set up, the engineer need only be concerned about the changes they want to make to their source code in order to progress them to different environments.

The advantages of this approach break down into a number of areas:

Flexibility

This process can easily be plugged into any existing software delivery process, and use almost any Docker image as its base layer.

Speed

This method of building can be faster than Dockerfile builds, as any number of complex operations can be added to the build process without creating a new layer at each step. S2I also gives you the capability to re-use artifacts between builds to save time.
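Our example below has nothing worth caching, but for a project with an expensive dependency step a hypothetical ‘.sti/bin/save-artifacts’ script might look like the sketch below – the contract, as I understand it, is simply to stream anything worth keeping to stdout as a tar archive, which S2I then hands back to the next build’s assemble step:

#!/bin/bash -e
# Hypothetical save-artifacts sketch: stream a dependency cache to stdout so
# the next build can unpack it and skip the expensive install step.
cd /opt/openshift/src
tar cf - node_modules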

Separation of concerns

Since source code and Docker image are cleanly and strongly separated, developers can be concerned with code while infrastructure can be concerned with Docker images and delivery. As the base underlying image is separated from the code, upgrades and patches are more easily delivered.

Security

This process can restrict the operations performed in the build to a specific user, unlike Dockerfiles which allow arbitrary commands to be run as root.

Ecosystem

The structure of this framework allows for a shared ecosystem of image and code separation patterns for easier large-scale operations.

This post is going to show you how to build one such pattern, albeit a simple and somewhat limited one! Our application pattern will consist of:

  •  Source code that contains one shell script
  • A builder that creates an image which takes that shell script, makes it runnable, and runs it

Create Your Own S2I Image

1) Start up an S2I development environment

To help ensure a consistent experience you can use a maintained environment to develop your S2I build image and project.

docker run -ti -v /var/run/docker.sock:/var/run/docker.sock dockerinpractice/shutit-s2i

This command ensures the host’s docker daemon is available within the container by mounting the host’s Docker Unix socket into the container, and uses a maintained sti build environment (the image ‘dockerinpractice/shutit-s2i’).

Problems? SELinux enabled?

If you are running in an SELinux-enabled environment, then you may have problems running docker within a container!

2) Create your git project

This could be built elsewhere and placed on GitHub (for example), but to keep this example simple and self-contained we’re going to create it locally in our S2I development environment. As mentioned above, our source code consists of one shell script. As a trivial example, this simply outputs ‘Hello World’ to the terminal.

mkdir /root/myproject
cd /root/myproject
git init
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
cat > app.sh <<< "echo 'Hello World'"
git add .
git commit -am 'Initial commit'

3) Create the builder image

sti create sti-simple-shell /opt/sti-simple-shell
cd /opt/sti-simple-shell

This S2I command creates several files. To get our workflow working, we’re going to focus on editing just these files:

  • Dockerfile
  • .sti/bin/assemble
  • .sti/bin/run

Taking the Dockerfile first, change its contents to match the following listing:

FROM openshift/base-centos7
RUN chown -R default:default /opt/openshift
COPY ./.sti/bin /usr/local/sti
RUN chmod +x /usr/local/sti/*
USER default

This Dockerfile uses the standard openshift base-centos7 image, which has the ‘default’ user already created within it. It then changes ownership of the default openshift code location to the default user, copies the S2I scripts into the default location for an S2I build, ensures the S2I scripts are executable, and makes the builder image use the pre-created ‘default’ user by default.

Next you create the assemble script, which is responsible for taking the source code and compiling it ready to run. Below is a simplified, but feature-complete version of this bash script for you to use.

#!/bin/bash -e
cp -Rf /tmp/src/. ./
chmod +x /opt/openshift/src/app.sh

It runs as a bash script, exiting on any failure (-e), copies the application source into the default directory and builds the application from source. In this case, the ‘build’ is the simple step of making the app.sh file executable.

The ‘run’ script of your S2I build is responsible for running your application. It is the script that the image will run by default:

#!/bin/bash -e
exec /opt/openshift/src/app.sh

Now our builder is ready, run ‘make’ to build your S2I builder image. It will create a Docker image called sti-simple-shell. This image will provide the environment for your final image – the one that includes the software project we made above – to be built. The output of your ‘make’ call should look similar to this:

$ make
 imiell@osboxes:/space/git/sti-simple-shell$ make
 docker build --no-cache -t sti-simple-shell .
 Sending build context to Docker daemon 153.1 kB
 Sending build context to Docker daemon
 Step 0 : FROM openshift/base-centos7
 ---> f20de2f94385
 Step 1 : RUN chown -R default:default /opt/openshift
 ---> Running in f25904e8f204
 ---> 3fb9a927c2f1
 Removing intermediate container f25904e8f204
 Step 2 : COPY ./.sti/bin /usr/local/sti
 ---> c8a73262914e
 Removing intermediate container 93ab040d323e
 Step 3 : RUN chmod +x /usr/local/sti/*
 ---> Running in d71fab9bbae8
 ---> 39e81901d87c
 Removing intermediate container d71fab9bbae8
 Step 4 : USER default
 ---> Running in 5d305966309f
 ---> ca3f5e3edc32
 Removing intermediate container 5d305966309f
 Successfully built ca3f5e3edc32

If you run ‘docker images’ you should now see an image called sti-simple-shell stored locally on your host.

4) Build the Application Image

Looking back at the image at the top of this post, we now have the three things we need for an S2I build in place:

  • Source code
  • A builder image that provides an environment for building and running the source code
  • The sti program

These three are located in one place in this walkthrough, but the only one that needs to be local to our run is the sti program. The builder image can be fetched from a registry, and the source code can be fetched from a git repository such as GitHub.

$ sti build --force-pull=false --loglevel=1 file:///root/myproject sti-simple-shell final-image-1
 I0608 13:02:00.727125 00119 sti.go:112] Building final-image-1
 I0608 13:02:00.843933 00119 sti.go:182] Using assemble from image:///usr/local/sti
 I0608 13:02:00.843961 00119 sti.go:182] Using run from image:///usr/local/sti
 I0608 13:02:00.843976 00119 sti.go:182] Using save-artifacts from image:///usr/local/sti
 I0608 13:02:00.843989 00119 sti.go:120] Clean build will be performed
 I0608 13:02:00.844003 00119 sti.go:130] Building final-image-1
 I0608 13:02:00.844026 00119 sti.go:330] No .sti/environment provided (no evironment file found in application sources)
 I0608 13:02:01.178553 00119 sti.go:388] ---> Installing application source
 I0608 13:02:01.179582 00119 sti.go:388] ---> Building application from source
 I0608 13:02:01.294598 00119 sti.go:216] No .sti/environment provided (no evironment file found in application sources)
 I0608 13:02:01.353449 00119 sti.go:246] Successfully built final-image-1

You can now run your built image, with the source code applied to it:

$ docker run final-image-1
 Hello World

Change and rebuild

It’s easier to see the purpose of this build method now we have a working example. Imagine you are a new developer ready to contribute to the project. You can simply make changes to the git repository and run a simple command to rebuild the image without knowing anything about Docker:

cd /root/myproject
cat > app.sh <<< "echo 'Hello S2I!'"
git commit -am 'new message'
sti build --force-pull=false file:///root/myproject sti-simple-shell final-image-2

Running this image shows the new message we just set in the code:

 

$ docker run final-image-2
Hello S2I!

What Next?

This post demonstrated a simple example, but it’s easy to imagine how this framework could be adapted to your particular requirements. What you end up with is a means for developers to push changes out to other consumers of their software without caring about the details of Docker image production.

Other techniques can be used in combination with this to facilitate DevOps processes. For example, by using git post-commit hooks you can automate the S2I build call on checkin.
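As a hypothetical example (the hook and the image tag scheme are mine, not part of S2I), a git post-commit hook for the project above could be as small as this:

#!/bin/bash -e
# .git/hooks/post-commit (make it executable): rebuild the application image
# on every commit, tagged with the short commit id.
TAG=$(git rev-parse --short HEAD)
sti build --force-pull=false file:///root/myproject sti-simple-shell "myproject:${TAG}"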

 
