Take OpenShift for a spin in four commands

Currently co-authoring a book on Docker: Get 39% off with the code 39miell

dip

OpenShift

OpenShift is RedHat’s application Platform as a Service (aPaaS). It builds on Docker and Kubernetes to provide an enterprise-level service for application provisioning.

PaaSes bring a great many benefits with them: centralised resource management, quotas, isolation and more.

OpenShift is a relatively old product in aPaaS terms. What’s changed recently is that version 3 has been substantially rebuilt around Docker and Kubernetes, and rewritten in Go, which are seen as stable building blocks for the future.

In this article you’re going to get OpenShift set up in four commands, and see an application provisioned, built and deployed using just a login and a GitHub reference.

NOTE: It will be safer/easier to run if you already have Vagrant and VirtualBox installed; the script will try to install them for you if they’re not there, though. This has primarily been tested on Ubuntu and Mac (on bare metal). If you run another operating system, please get in touch if you come across problems.

Get Going

Run these four commands (assumes you have pip and git already):

sudo pip install shutit
git clone --recursive https://github.com/ianmiell/shutit-openshift-origin
cd shutit-openshift-origin
./run.sh

And you will get a desktop with OpenShift installed.

NOTE: You’ll need a decent amount of memory free (2G+), and may need to input your password for sudo. You’ll be prompted for both. You can choose to continue with less memory, but you may go into swap or just run out.

NOTE: Assumes you have pip. If not, try this:

sudo apt-get install python-pip || sudo yum install python-pip || sudo easy_install pip || brew install python

Open up a browser within this desktop and navigate to:

https://localhost:8443

and bypass all the security warnings until you end up at the login screen.

Openshift_login

Now log in as hal-1 with any password.

Build a nodejs app

OK, you’re now logged into OpenShift as a developer.

Openshift_project

 

Create a project by clicking ‘Create’. (A project has already been set up, but it has quotas applied to demonstrate limits.) Fill out the form.

OpenShift_create_proj

and click ‘Create’ again.

Once the Project is set up, click on ‘Create’ again, in the top right hand side this time.

OpenShift_project_github

Choose a builder image (pick nodejs:0.10). This builder image defines the context in which the code will get built. See my source to image post for more on this.

OpenShift-builder_image

Now click on ‘Create’ on the nodejs page.

If you wait, then after a few minutes you should see a screen like the following:

OpenShift_start_build

and eventually, if you scroll down you will see that the build has started:

OpenShift_building

Eventually, you will see that the app is running:

OpenShift_running

and by clicking on ‘Browse’ and ‘Pods’ you can see that the pod has been deployed:

OpenShift_pods

Now, how to access it? If you look at the services tab:

OpenShift_service

you will see an ip address and port number to access. Go there, and voila, you have your nodejs app:

OpenShift_nodejs_app

Further Work

Now fork the GitHub repo, make a change, and do a build against this fork.

If you can’t be bothered, use my fork at: https://github.com/docker-in-practice/nodejs-ex

 

Conclusion

There’s a lot more to OpenShift than this. If you want to read more see here:

https://docs.openshift.org/latest/welcome/index.html

Any problems with this, raise an issue here:

https://github.com/ianmiell/shutit-openshift-origin

or leave a message.

 

 


RedHat's Docker Build Method – S2I


Overview

‘Source To Image’ is a means of creating Docker images by depositing source code into a separately-defined Docker image that is responsible for building the image.

You may be wondering why such a build method was conceived. The principal reason is that it allows application developers to make changes to their code without being concerned with the details of Dockerfiles, or even Docker images. If the image is delivered to an aPaaS (application platform as a service), the individual engineer need not know about Docker at all to contribute to the project! This is very useful in an enterprise environment where there are large numbers of people who have specific areas of expertise and are not directly concerned with the details of the build.

STI

Other Benefits

Once the process is set up, the engineer need only be concerned about the changes they want to make to their source code in order to progress them to different environments.

The advantages of this approach break down into a number of areas:

Flexibility

This process can easily be plugged into any existing software delivery process, and use almost any Docker image as its base layer.

Speed

This method of building can be faster than Dockerfile builds, as any number of complex operations can be added to the build process without creating a new layer at each step. S2I also gives you the capability to re-use artifacts between builds to save time.

Separation of concerns

Since source code and Docker image are cleanly and strongly separated, developers can be concerned with code while infrastructure can be concerned with Docker images and delivery. As the base underlying image is separated from the code, upgrades and patches are more easily delivered.

Security

This process can restrict the operations performed in the build to a specific user, unlike Dockerfiles which allow arbitrary commands to be run as root.

Ecosystem

The structure of this framework allows for a shared ecosystem of image and code separation patterns for easier large-scale operations.

This post is going to show you how to build one such pattern, albeit a simple and somewhat limited one! Our application pattern will consist of:

  • Source code that contains one shell script
  • A builder that creates an image which takes that shell script, makes it runnable, and runs it

Create Your Own S2I Image

1) Start up an S2I development environment

To help ensure a consistent experience you can use a maintained environment to develop your S2I build image and project.

docker run -ti -v /var/run/docker.sock:/var/run/docker.sock dockerinpractice/shutit-s2i

This command ensures the host’s docker daemon is available within the container by mounting the host’s Docker Unix socket into the container, and uses a maintained S2I build environment (the image ‘dockerinpractice/shutit-s2i’).

Problems? SELinux enabled?

If you are running in an selinux-enabled environment, then you may have problems running docker within a container!

2) Create your git project

This could be built elsewhere and placed on GitHub (for example), but to keep this example simple and self-contained we’re going to create it locally in our S2I development environment. As mentioned above, our source code consists of one shell script. As a trivial example, this simply outputs ‘Hello World’ to the terminal.

mkdir /root/myproject
cd /root/myproject
git init
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
cat > app.sh <<< "echo 'Hello World'"
git add .
git commit -am 'Initial commit'
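
Before moving on, you can sanity-check the project outside of any container. This sketch recreates it in a throwaway directory (using repository-local git config so nothing global is touched) and runs the script directly:

```shell
# Recreate the one-script project in a throwaway directory and check it runs
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "you@example.com"   # local config only
git config user.name "Your Name"
cat > app.sh <<< "echo 'Hello World'"
git add .
git commit -qm 'Initial commit'
out=$(bash app.sh)    # the 'application' works even before any S2I build
echo "$out"
```
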

3) Create the builder image

sti create sti-simple-shell /opt/sti-simple-shell
cd /opt/sti-simple-shell

This S2I command creates several files. To get our workflow working, we’re going to focus on editing just these files:

  • Dockerfile
  • .sti/bin/assemble
  • .sti/bin/run

Taking the Dockerfile first, change its contents to match the following listing:

FROM openshift/base-centos7
RUN chown -R default:default /opt/openshift
COPY ./.sti/bin /usr/local/sti
RUN chmod +x /usr/local/sti/*
USER default

This Dockerfile uses the standard openshift base-centos7 image, which has the ‘default’ user already created within it. It then changes ownership of the default OpenShift code location to the ‘default’ user, copies the S2I scripts into the default location for an S2I build, ensures the S2I scripts are executable, and makes the builder image run as the pre-created ‘default’ user by default.

Next you create the assemble script, which is responsible for taking the source code and compiling it ready to run. Below is a simplified, but feature-complete version of this bash script for you to use.

#!/bin/bash -e
cp -Rf /tmp/src/. ./
chmod +x /opt/openshift/src/app.sh

It runs as a bash script, exiting on any failure (-e), copies the application source into the default directory and builds the application from source. In this case, the ‘build’ is the simple step of making the app.sh file executable.
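
If you want to see the assemble logic in isolation, this sketch mimics it outside any container, with temp directories standing in for /tmp/src and the application directory:

```shell
# Simulate the assemble step locally: 'src' plays /tmp/src, 'appdir' the app dir
src=$(mktemp -d)
appdir=$(mktemp -d)
cat > "$src/app.sh" <<< "echo 'Hello World'"
cd "$appdir"
cp -Rf "$src/." ./      # copy the application source in
chmod +x ./app.sh       # the 'build': make the script executable
result=$(./app.sh)      # this is what the run script will exec in the image
echo "$result"
```
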

The ‘run’ script of your S2I build is responsible for running your application. It is the script that the image will run by default:

#!/bin/bash -e
exec /opt/openshift/src/app.sh

Now our builder is ready, so run ‘make’ to build your S2I builder image. It will create a Docker image called sti-simple-shell. This image will provide the environment in which your final image (the one that includes the software project we made above) is built. The output of your ‘make’ call should look similar to this:

$ make
 imiell@osboxes:/space/git/sti-simple-shell$ make
 docker build --no-cache -t sti-simple-shell .
 Sending build context to Docker daemon 153.1 kB
 Sending build context to Docker daemon
 Step 0 : FROM openshift/base-centos7
 ---> f20de2f94385
 Step 1 : RUN chown -R default:default /opt/openshift
 ---> Running in f25904e8f204
 ---> 3fb9a927c2f1
 Removing intermediate container f25904e8f204
 Step 2 : COPY ./.sti/bin /usr/local/sti
 ---> c8a73262914e
 Removing intermediate container 93ab040d323e
 Step 3 : RUN chmod +x /usr/local/sti/*
 ---> Running in d71fab9bbae8
 ---> 39e81901d87c
 Removing intermediate container d71fab9bbae8
 Step 4 : USER default
 ---> Running in 5d305966309f
 ---> ca3f5e3edc32
 Removing intermediate container 5d305966309f
 Successfully built ca3f5e3edc32

If you run ‘docker images’ you should now see an image called sti-simple-shell stored locally on your host.

4) Build the Application Image

Looking back at the image at the top of this post, we now have the three things we need for an S2I build in place:

  • Source code
  • A builder image that provides an environment for building and running the source code
  • The sti program

These three are located in one place in this walkthrough, but the only one that needs to be local to our run is the sti program. The builder image can be fetched from a registry, and the source code can be fetched from a git repository such as GitHub.

$ sti build --force-pull=false --loglevel=1 file:///root/myproject sti-simple-shell final-image-1
 I0608 13:02:00.727125 00119 sti.go:112] Building final-image-1
 I0608 13:02:00.843933 00119 sti.go:182] Using assemble from image:///usr/local/sti
 I0608 13:02:00.843961 00119 sti.go:182] Using run from image:///usr/local/sti
 I0608 13:02:00.843976 00119 sti.go:182] Using save-artifacts from image:///usr/local/sti
 I0608 13:02:00.843989 00119 sti.go:120] Clean build will be performed
 I0608 13:02:00.844003 00119 sti.go:130] Building final-image-1
 I0608 13:02:00.844026 00119 sti.go:330] No .sti/environment provided (no evironment file found in application sources)
 I0608 13:02:01.178553 00119 sti.go:388] ---> Installing application source
 I0608 13:02:01.179582 00119 sti.go:388] ---> Building application from source
 I0608 13:02:01.294598 00119 sti.go:216] No .sti/environment provided (no evironment file found in application sources)
 I0608 13:02:01.353449 00119 sti.go:246] Successfully built final-image-1

You can now run your built image, with the source code applied to it:

$ docker run final-image-1
 Hello World

Change and rebuild

It’s easier to see the purpose of this build method now we have a working example. Imagine you are a new developer ready to contribute to the project. You can simply make changes to the git repository and run a simple command to rebuild the image without knowing anything about Docker:

cd /root/myproject
cat > app.sh <<< "echo 'Hello S2I!'"
git commit -am 'new message'
sti build --force-pull=false file:///root/myproject sti-simple-shell final-image-2

Running this image shows the new message we just set in the code:

 

$ docker run final-image-2
Hello S2I!

What Next?

This post demonstrated a simple example, but it’s easy to imagine how this framework could be adapted to your particular requirements. What you end up with is a means for developers to push changes out to other consumers of their software without caring about the details of Docker image production.

Other techniques can be used in combination with this to facilitate DevOps processes. For example, by using git post-commit hooks you can automate the S2I build call on checkin.
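
As a sketch of that idea (the hook body reuses the sti invocation from earlier in this post; adapt paths and image names to your own project), a post-commit hook might look like this:

```shell
# Install a post-commit hook that triggers an S2I rebuild after each commit
repo=$(mktemp -d)               # stand-in for your real project checkout
git init -q "$repo"
cat > "$repo/.git/hooks/post-commit" <<'EOF'
#!/bin/bash
# Rebuild the application image from the latest commit
sti build --force-pull=false file:///root/myproject sti-simple-shell final-image-2
EOF
chmod +x "$repo/.git/hooks/post-commit"
```

Git runs this script automatically after every successful commit in that repository, so contributors get a fresh image without typing any docker or sti command themselves.
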

 

Bash Shortcuts Gem


TL;DR

These commands can tell you what key bindings you have in your bash shell by default.

bind -P | grep 'can be'
stty -a | grep ' = ..;'

Background

I’d always wondered what key strokes did what in bash – I’d picked up some well-known ones (CTRL-r, CTRL-v, CTRL-d etc) from bugging people when I saw them being used, but always wondered whether there was a list of these I could easily get and comprehend. I found some, but always forgot where they were when I needed them, and couldn’t remember many of them anyway.

Then, while debugging a problem with tab completion in ‘here’ documents, I stumbled across bind.

bind and stty

‘bind’ is a bash builtin, which means it’s not a program like awk or grep, but is picked up and handled by the bash program itself.

It manages the various key bindings in the bash shell, covering everything from autocomplete to transposing two characters on the command line. You can read all about it in the bash man page (in the builtins section, near the end).

Bind is not responsible for all the key bindings in your shell – running stty will show the ones that apply to the terminal:

stty -a | grep ' = ..;'

These take precedence and can be confusing if you’ve tried to bind the same thing in your shell! Further confusion is caused by the fact that in stty output ‘^D’ means ‘CTRL and d pressed together’, whereas in bind output it would be ‘C-d’.

edit: am indebted to joepvd from hackernews for this beauty

    $ stty -a | awk 'BEGIN{RS="[;\n]+ ?"}; /= ..$/'
    intr = ^C
    quit = ^\
    erase = ^?
    kill = ^U
    eof = ^D
    swtch = ^Z
    susp = ^Z
    rprnt = ^R
    werase = ^W
    lnext = ^V
    flush = ^O

 

Breaking Down the Command

bind -P | grep 'can be'

This can be considered (almost) equivalent to a more instructive command:

bind -l | sed 's/^/bind -q /' | /bin/bash 2>&1 | grep -v warning: | grep 'can be'

‘bind -l’ lists all the available keystroke functions. For example, ‘complete’ is the auto-complete function normally triggered by hitting ‘tab’ twice. The output of this is passed to a sed command, which prefixes each function name with ‘bind -q’, the command that queries the bindings.

sed 's/^/bind -q /'

The output of this is passed for running into /bin/bash.

/bin/bash 2>&1 | grep -v warning: | grep 'can be'

Note that this invocation of bash means that locally-set bindings will revert to the default bash ones for the output.

The ‘2>&1’ puts the error output (the warnings) to the same output channel, filtering out warnings with a ‘grep -v’ and then filtering on output that describes how to trigger the function.
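
To see the sed stage in isolation, feed it a couple of sample function names (as they would appear in ‘bind -l’ output) and watch each one become a ‘bind -q’ query:

```shell
# The sed stage turns each readline function name into a 'bind -q' query
queries=$(printf 'complete\nbackward-kill-word\n' | sed 's/^/bind -q /')
echo "$queries"
# bind -q complete
# bind -q backward-kill-word
```
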

In the output of bind -q, ‘C-’ means ‘the ctrl key and’, so ‘C-c’ is the normal CTRL-c. Similarly, ‘\e’ means ‘escape’, so ‘\e\e’ means ‘press escape twice’:

$ bind -q complete
complete can be invoked via "C-i", "\e\e".

so complete is also bound to ‘C-i’ (though on my machine I appear to need to press it twice, not sure why).

Add to bashrc

I added this alias as ‘binds’ in my bashrc so I could easily get hold of this list in the future.

alias binds="bind -P | grep 'can be'"

Now whenever I forget a binding, I type ‘binds’, and have a read :)


 

The Zinger

Browsing through the bash manual, I noticed that an option to bind enables binding a key sequence to a shell command:

-x keyseq:shell-command

So now all I need to remember is one shortcut to get my list (CTRL-x, then CTRL-o):

bind -x '"\C-x\C-o": bind -P | grep can'

Of course, you can bind to a single key if you want, and any command you want. You could also use this for practical jokes on your colleagues…

Now I’m going to sort through my history to see what I type most often :)


A CoreOS Cluster in Two Minutes With Four Commands



You may have heard about CoreOS, be wondering what all the fuss is about and want to play with it.

If you have a machine with over 3G of memory to spare, you can get a shell on a CoreOS cluster in under two minutes with four commands. Here they are:

sudo pip install shutit
git clone https://github.com/ianmiell/shutit-coreos-vagrant
cd shutit-coreos-vagrant
./coreos.sh

It uses ShutIt to automate the stand-up. The script is here.


What Next?

Now get going with CoreOS’s quickstart guide

Didn’t Work?

More likely my fault than yours. Message me on twitter if you have problems: @ianmiell


The Most Pointless Docker Command Ever

What?

This article will show you how you can undo the things Docker does for you in a Docker command. Clearer now?

OK: Docker relies on Linux namespaces to isolate processes, effectively copying parts of the system so that it ends up looking like you are on a separate machine.

For example, when you run a Docker container:

$ docker run -ti busybox ps -a
PID USER COMMAND
 1 root ps -a

it only ‘sees’ its own process IDs. This is because it has its own PID namespace.

Similarly, you have your own network namespace:

$ docker run -ti busybox netstat -a
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags Type State I-Node Path

You also have your own view of inter-process communication and the filesystem.

Go on then

This is possibly the most pointless possible docker command ever run, but here goes:

docker run -ti \
    --privileged \
    --net=host --pid=host --ipc=host \
    --volume /:/host \
    busybox \
    chroot /host

The three ‘=host’ flags bypass the network, pid and ipc namespaces. The volume flag mounts the root filesystem of the host to the ‘/host’ folder in the container (you can’t mount to ‘/’ in the container). The --privileged flag gives the user full access to the root user’s capabilities.

All we need is the chroot command, so we use a small image (busybox) to chroot to the filesystem we mounted.

What we end up with is a Docker container that is running as root with full capabilities in the host’s filesystem, with full access to the network, process table and IPC constructs on the host. You can even ‘su’ to other users on the host.

If you can think of a legitimate use for this, please drop me a line!

Why?

Because you can!

Also, it’s quite instructive. And starting from this, you can imagine scenarios where you end up with something quite useful.

Imagine you have an image – called ‘filecheck’ – that runs a check on the filesystem for problematic files. Then you could run a command like this (which won’t work BTW – filecheck does not exist):

docker run --workdir /host -v /:/host:ro filecheck

This modified version of the pointless command dispenses with the chroot in favour of changing the workdir to ‘/host’, and – crucially – the mount now uses the ‘:ro’ suffix to mount the host’s filesystem read-only, preventing the image from doing damage to it.

So you can check your host’s filesystem relatively safely without installing anything.

You can imagine similar network or process checkers running for their namespaces.

Can you think of any other uses for modifications of this pointless command?

 

My Favourite Docker Tip


The Problem

To understand the problem we’re going to show you a simple scenario where not having this is just plain annoying.

Imagine you are experimenting in Docker containers, and in the midst of your work you do something interesting and reusable. Here it’s going to be a simple echo command, but it could be some long and complex concatenation of programs that results in a useful output.

docker run -ti --rm ubuntu /bin/bash
echo my amazing command
exit

Now you forget about this triumph, and after some time you want to recall the incredible echo command you ran earlier. Unfortunately you can’t recall it and you no longer have the terminal session on your screen to scroll to. Out of habit you try looking through your bash history on the host:

history | grep amazing

…but nothing comes back, as the bash history is kept within the now-removed container and not the host you were returned to.

The Solution – Manual

To share your bash history with the host, you can use a volume mount when running your docker images. Here’s an example:

docker run \
    -e HIST_FILE=/root/.bash_history \
    -v=$HOME/.bash_history:/root/.bash_history \
    -ti \
    ubuntu /bin/bash

The -e argument specifies the history file bash should use within the container.

The -v argument maps the container’s root’s bash history file to the host’s, saving its history to your user’s bash history on the host.

This is quite a handful to type every time, so to make this more user-friendly you can set an alias up by putting the above command as an alias into your ‘~/.bashrc’ file.

alias dockbash='docker run -e HIST_FILE=/root/.bash_history -v=$HOME/.bash_history:/root/.bash_history -ti ubuntu /bin/bash'

Making it Seamless

This is still not seamless as you have to remember to type ‘dockbash’ if you really wanted to perform a ‘docker run’ command. For a more seamless experience you can add this to your ‘~/.bashrc’ file:

function basher() {
    if [[ $1 = 'run' ]]
    then
        shift
        /usr/bin/docker run -e \
            HIST_FILE=/root/.bash_history \
            -v $HOME/.bash_history:/root/.bash_history \
            "$@"
    else
        /usr/bin/docker "$@"
    fi
}
alias docker=basher

It sets up an alias for docker, which by default points to the ‘real’ docker executable in /usr/bin/docker. If the first argument is ‘run’ then it adds the bash magic.
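
The interception logic itself is easy to check without Docker. In this sketch, ‘echo’ stands in for the real /usr/bin/docker binary so you can see exactly which arguments each branch produces:

```shell
# Wrapper pattern demo: intercept the 'run' subcommand, pass others through.
# 'echo' stands in for the real /usr/bin/docker binary.
function basher_demo() {
    if [[ $1 = 'run' ]]
    then
        shift
        echo run -e HIST_FILE=/root/.bash_history "$@"
    else
        echo "$@"
    fi
}
intercepted=$(basher_demo run -ti ubuntu /bin/bash)
passthrough=$(basher_demo ps -a)
echo "$intercepted"   # run -e HIST_FILE=/root/.bash_history -ti ubuntu /bin/bash
echo "$passthrough"   # ps -a
```

Only ‘run’ gets the extra history arguments; every other subcommand reaches docker untouched, which is why the alias is safe to leave permanently in your ‘~/.bashrc’.
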

Now when you next open a bash shell and run any ‘docker run’ command, the commands you run within that container will be added to your host’s bash history.

Conclusion

As a heavy Docker user, this change has reduced my frustrations considerably. No longer do I think to myself ‘I’m sure I did something like this a while ago’ without being able to recover my actions.