Bash Shortcuts Gem

Currently co-authoring a book on Docker: Get 39% off with the code 39miell


TL;DR

These commands can tell you what key bindings you have in your bash shell by default.

bind -P | grep 'can be'
stty -a | grep ' = ..;'

Background

I’d always wondered what keystrokes did what in bash – I’d picked up some well-known ones (CTRL-r, CTRL-v, CTRL-d etc) from bugging people when I saw them being used, but always wondered whether there was a list of these I could easily get and comprehend. I found some, but always forgot where they were when I needed them, and couldn’t remember many of them anyway.

Then, while debugging a problem with tab completion in ‘here’ documents, I stumbled across bind.

bind and stty

‘bind’ is a bash builtin, which means it’s not a program like awk or grep, but is picked up and handled by the bash program itself.

It manages the various key bindings in the bash shell, covering everything from autocomplete to transposing two characters on the command line. You can read all about it in the bash man page (in the builtins section, near the end).

Bind is not responsible for all the key bindings in your shell – running stty will show the ones that apply to the terminal:

stty -a | grep ' = ..;'

These take precedence, and can be confusing if you’ve tried to bind the same thing in your shell! Further confusion is caused by the fact that in stty output ‘^D’ means ‘CTRL and d pressed together’, whereas in bind output it would be ‘C-d’.
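The two notations can be translated mechanically. A quick illustration (this assumes GNU sed, for its ‘\l’ lowercasing escape):

```shell
# Turn stty's caret notation (^D) into bind's notation (C-d).
# '\l' lowercases the next character (a GNU sed extension).
printf '^D\n^U\n^R\n' | sed 's/^\^\(.\)/C-\l\1/'
```

which prints C-d, C-u and C-r, one per line.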

edit: am indebted to joepvd from hackernews for this beauty

    $ stty -a | awk 'BEGIN{RS="[;\n]+ ?"}; /= ..$/'
    intr = ^C
quit = ^\
    erase = ^?
    kill = ^U
    eof = ^D
    swtch = ^Z
    susp = ^Z
    rprnt = ^R
    werase = ^W
    lnext = ^V
    flush = ^O

 

Breaking Down the Command

bind -P | grep 'can be'

Can be considered (almost) equivalent to a more instructive command:

bind -l | sed 's/.*/bind -q &/' | /bin/bash 2>&1 | grep -v warning: | grep 'can be'

‘bind -l’ lists all the available keystroke functions. For example, ‘complete’ is the auto-complete function normally triggered by hitting ‘tab’ twice. The output of this is passed to a sed command, which turns each function name into a ‘bind -q’ invocation; ‘bind -q’ queries the bindings.

sed 's/.*/bind -q &/'

The output of this is then piped into /bin/bash to be run.
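The ‘&’ in the sed replacement stands for the whole matched line; that’s what turns each function name into a ‘bind -q’ query. You can see the effect in isolation:

```shell
# '&' in a sed replacement stands for the entire matched text, so
# each function name becomes a 'bind -q <name>' command:
printf 'complete\nbackward-kill-word\n' | sed 's/.*/bind -q &/'
```

which prints ‘bind -q complete’ and ‘bind -q backward-kill-word’.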

/bin/bash 2>&1 | grep -v warning: | grep 'can be'

Note that this invocation of bash means that locally-set bindings will revert to the default bash ones for the output.

The ‘2>&1’ redirects the error output (the warnings) to the same channel as standard output; the warnings are then filtered out with ‘grep -v’, and the remaining output is filtered for the lines that describe how to trigger each function.
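Here’s the same redirect-and-filter pattern on a toy command that, like bind in a non-interactive shell, writes a warning to stderr (the messages here are made up for illustration):

```shell
# 2>&1 merges stderr into stdout, so both streams flow through the
# greps: the first drops the warnings, the second keeps the matches.
{ echo 'bash: warning: line editing not enabled' >&2
  echo 'complete can be invoked via "C-i".'
} 2>&1 | grep -v warning: | grep 'can be'
```

Only the ‘can be invoked’ line survives; the warning never reaches the output.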

In the output of bind -q, ‘C-’ means ‘the CTRL key and’. So ‘C-c’ is the normal CTRL-c. Similarly, ‘\e’ means ‘escape’, so ‘\e\e’ means ‘press escape twice’:

$ bind -q complete
complete can be invoked via "C-i", "\e\e".

and it is also bound to ‘C-i’, ie tab (though on my machine I appear to need to press it twice – not sure why).

Add to bashrc

I added this alias as ‘binds’ in my bashrc so I could easily get hold of this list in the future.

alias binds="bind -P | grep 'can be'"

Now whenever I forget a binding, I type ‘binds’, and have a read :)


 

The Zinger

Browsing through the bash manual, I noticed an option to bind that enables binding a key sequence to an arbitrary shell command:

-x keyseq:shell-command

So now all I need to remember is one shortcut to get my list (CTRL-x, then CTRL-o):

bind -x '"\C-x\C-o": bind -P | grep can'

Of course, you can bind to a single key if you want, and any command you want. You could also use this for practical jokes on your colleagues…

Now I’m going to sort through my history to see what I type most often :)



A CoreOS Cluster in Two Minutes With Four Commands



You may have heard about CoreOS, be wondering what all the fuss is about and want to play with it.

If you have a machine with over 3GB of memory to spare, then four commands can get you a shell on a CoreOS cluster in under two minutes. Here they are:

sudo pip install shutit
git clone https://github.com/ianmiell/shutit-coreos-vagrant
cd shutit-coreos-vagrant
./coreos.sh

It uses ShutIt to automate the stand-up. The script is here.

See it in action here:

 

What Next?

Now get going with CoreOS’s quickstart guide

Didn’t Work?

More likely my fault than yours. Message me on twitter if you have problems: @ianmiell


The Most Pointless Docker Command Ever

What?

This article will show you how you can undo the things Docker does for you in a Docker command. Clearer now?

OK. Docker relies on Linux namespaces to isolate – effectively, copy – parts of the system, so that it ends up looking like you are on a separate machine.

For example, when you run a Docker container:

$ docker run -ti busybox ps -a
PID USER COMMAND
 1 root ps -a

it only ‘sees’ its own process IDs. This is because it has its own PID namespace.

Similarly, you have your own network namespace:

$ docker run -ti busybox netstat -a
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags Type State I-Node Path

You also have your own view of inter-process communication and the filesystem.
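On Linux you can see the namespaces any process belongs to under /proc. Each one shows up as a symlink whose target contains a unique inode, and a containerised process’s targets differ from the host’s:

```shell
# Each namespace appears as a symlink like 'pid:[4026531836]'; two
# processes share a namespace only if these inode numbers match.
readlink /proc/self/ns/pid /proc/self/ns/net
```

Run the same readlink inside a container and you’ll see different inode numbers from the host’s – that difference is the isolation.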

Go on then

This is possibly the most pointless docker command ever run, but here goes:

docker run -ti \
    --privileged \
    --net=host --pid=host --ipc=host \
    --volume /:/host \
    busybox \
    chroot /host

The three ‘=host’ flags bypass the network, pid and ipc namespaces. The volume flag mounts the root filesystem of the host to the ‘/host’ folder in the container (you can’t mount to ‘/’ in the container). The --privileged flag gives the user full access to the root user’s capabilities.

All we need is the chroot command, so we use a small image (busybox) to chroot to the filesystem we mounted.

What we end up with is a Docker container running as root with full capabilities in the host’s filesystem, with full access to the network, process table and IPC constructs on the host. You can even ‘su’ to other users on the host.

If you can think of a legitimate use for this, please drop me a line!

Why?

Because you can!

Also, it’s quite instructive. And starting from this, you can imagine scenarios where you end up with something quite useful.

Imagine you have an image – called ‘filecheck’ – that runs a check on the filesystem for problematic files. Then you could run a command like this (which won’t work BTW – filecheck does not exist):

docker run --workdir /host -v /:/host:ro filecheck

This modified version of the pointless command dispenses with the chroot in favour of changing the workdir to ‘/host’, and – crucially – the mount now uses the ‘:ro’ suffix to mount the host’s filesystem read-only, preventing the image from doing damage to it.

So you can check your host’s filesystem relatively safely without installing anything.

You can imagine similar network or process checkers running for their namespaces.

Can you think of any other uses for modifications of this pointless command?

 

My Favourite Docker Tip


The Problem

To understand the problem, we’re going to look at a simple scenario where not having this tip is just plain annoying.

Imagine you are experimenting in Docker containers, and in the midst of your work you do something interesting and reusable. Here it’s going to be a simple echo command, but it could be some long and complex concatenation of programs that results in a useful output.

docker run -ti --rm ubuntu /bin/bash
echo my amazing command
exit

Now you forget about this triumph, and after some time you want to recall the incredible echo command you ran earlier. Unfortunately you can’t recall it and you no longer have the terminal session on your screen to scroll to. Out of habit you try looking through your bash history on the host:

history | grep amazing

…but nothing comes back, as the bash history is kept within the now-removed container and not the host you were returned to.

The Solution – Manual

To share your bash history with the host, you can use a volume mount when running your docker images. Here’s an example:

docker run \
    -e HIST_FILE=/root/.bash_history \
    -v=$HOME/.bash_history:/root/.bash_history \
    -ti \
    ubuntu /bin/bash

The -e argument passes the history file location into the container’s environment.

The -v argument maps the container’s root’s bash history file to the host’s, saving its history to your user’s bash history on the host.
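This works because bash writes its history to whatever file its HISTFILE variable names (HIST_FILE above is just the name passed into the container’s environment; bash’s own variable is HISTFILE). A minimal sketch of the mechanism, using a throwaway temp file rather than a real history file:

```shell
# bash writes its in-memory history list to $HISTFILE; simulate the
# round-trip with a temporary file standing in for .bash_history.
histfile=$(mktemp)
HISTFILE=$histfile bash -c '
    set -o history                          # enable history keeping
    history -s "echo my amazing command"    # record a command
    history -w                              # write the list to $HISTFILE
'
grep 'my amazing command' "$histfile" && rm -f "$histfile"
```

The command survives in the file after the inner shell exits – which is exactly what the volume mount exploits across the container boundary.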

This is quite a handful to type every time, so to make it more user-friendly you can set up an alias by putting the above command into your ‘~/.bashrc’ file:

alias dockbash='docker run -e HIST_FILE=/root/.bash_history -v=$HOME/.bash_history:/root/.bash_history'

Making it Seamless

This is still not seamless as you have to remember to type ‘dockbash’ if you really wanted to perform a ‘docker run’ command. For a more seamless experience you can add this to your ‘~/.bashrc’ file:

function basher() {
    if [[ $1 = 'run' ]]
    then
        shift
        /usr/bin/docker run -e \
            HIST_FILE=/root/.bash_history \
            -v $HOME/.bash_history:/root/.bash_history \
            "$@"
    else
        /usr/bin/docker "$@"
    fi
}
alias docker=basher

It sets up an alias for docker, which by default points to the ‘real’ docker executable in /usr/bin/docker. If the first argument is ‘run’ then it adds the bash magic.

Now when you next open a bash shell and run any ‘docker run’ command, the commands you run within that container will be added to your host’s bash history.
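You can check the dispatch logic without Docker installed by swapping a stand-in (‘echo’ here) for /usr/bin/docker – a sketch for inspection, not the function above verbatim:

```shell
# Same branching as basher(), but 'echo' stands in for /usr/bin/docker
# so the final command line is printed rather than executed.
basher_demo() {
    if [ "$1" = 'run' ]
    then
        shift
        echo docker run -e HIST_FILE=/root/.bash_history \
            -v "$HOME/.bash_history:/root/.bash_history" "$@"
    else
        echo docker "$@"
    fi
}
basher_demo run ubuntu /bin/bash    # the history flags get injected
basher_demo ps                      # other subcommands pass through untouched
```

Only ‘run’ gets the extra flags; everything else is forwarded unchanged, so the wrapper is invisible for normal docker use.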

Conclusion

As a heavy Docker user, this change has reduced my frustrations considerably. No longer do I think to myself ‘I’m sure I did something like this a while ago’ without being able to recover my actions.

Convert Any Server to a Docker Container

DEPRECATED PAGE:

UPDATED HERE

 

OLD POST:


 


How and Why?

Let’s say you have a server that has been lovingly hand-crafted that you want to containerize.

Figuring out exactly what software is required on there and what config files need adjustment would be quite a task, but fortunately blueprint exists as a solution to that.

What I’ve done here is automate that process down to a few simple steps. Here’s how it works:

Blueprint_Server

You kick off a ShutIt script (as root) that automates the bash interactions required to get a blueprint copy of your server; this in turn kicks off another ShutIt script which creates a Docker container, provisions it with the right stuff, and then commits it. Got it? Don’t worry, it’s automated and only a few lines of bash.

There are therefore four main steps to getting into your container:

– Install ShutIt on the server

– Check out the copyserver ShutIt script

– Run the copy_server script

– Run your copyserver Docker image as a container

Step 1

Install ShutIt as root:

sudo su -
(apt-get update && apt-get install -y python-pip git docker) || (yum update && yum install -y python-pip git docker which)
pip install shutit

The pre-requisites are python-pip, git and docker. The exact names of these in your package manager may vary slightly (eg docker-io or docker.io) depending on your distro.

You may need to make sure the docker server is running too, eg with ‘systemctl start docker’ or ‘service docker start’.

Step 2

Check out the copyserver script:

git clone https://github.com/ianmiell/shutit_copyserver.git

Step 3

Run the copy_server script:

cd shutit_copyserver/bin
./copy_server.sh

There are a couple of prompts – one to correct perms on a config file, and another to ask what docker base image you want to use. Make sure you use one as close to the original server as possible.

Note that this requires a version of docker that has the ‘docker exec’ option.

Step 4

Run the build server:

docker run -ti copyserver /bin/bash

You are now in a practical facsimile of your server within a docker container!

This is not the finished article, so if you need help dockerizing a server, let me know what the problem is, as improvements can still be made.


A Field Guide to Docker Security Measures


Introduction

If you’re unsure of how to secure Docker for your organisation (given that security wasn’t part of its design), I thought it would be useful to itemise some of the ways in which you can reduce or help manage the risk of running it.

The Two Sides

In this context there are two sides to security from the point of view of a sysadmin, ‘outsider’ and ‘insider’:

  • ‘Outsider’ – preventing an attacker doing damage once they have access to a container
  • ‘Insider’ – preventing a malicious user with access to the docker command from doing damage

‘Outsider’ will be a familiar scenario to anyone who’s thought about security.

‘Insider’ may be a new scenario to some. Since Docker gives you the root user on the host system (albeit within a container), there is the potential to wreak havoc on the host by accident or design. A simple example (don’t run this at home kids – I’ve put a dummy flag in anyway) is:

docker run --dontpastethis --privileged -v /usr:/usr busybox rm -rf /usr

Which will delete your host’s /usr folder. If you want people to be able to run docker, but not with the ability to do this level of damage, there are some steps you can take.

Some measures, naturally, will apply to both. Also some are as much organisational as technical.

Insiders and Outsiders

  • Run the Docker daemon with --selinux-enabled

If you run your Docker daemon with the --selinux-enabled flag, it will do a great deal to prevent those inside containers from doing damage to the host system, by giving each container its own SELinux security context.

This can be set in your docker config file, which usually lives in /etc under /etc/docker or /etc/sysconfig/docker

Defending Against Outsiders

  • Remove capabilities

Capabilities are a division of the root user’s powers into around 32 categories. Many of these are disabled by default in Docker (for example, you can’t manipulate iptables rules in a Docker container by default).

To disable all of them you can run:

docker run -ti --cap-drop ALL debian /bin/bash

Or, if you want to be more fine-grained, drop each capability individually, re-introducing them (by removing the relevant --cap-drop) as needed:

docker run -ti --cap-drop=CHOWN --cap-drop=DAC_OVERRIDE \
    --cap-drop=FSETID --cap-drop=FOWNER --cap-drop=KILL \
    --cap-drop=MKNOD --cap-drop=NET_RAW --cap-drop=SETGID \
    --cap-drop=SETUID --cap-drop=SETFCAP --cap-drop=SETPCAP \
    --cap-drop=NET_BIND_SERVICE --cap-drop=SYS_CHROOT \
    --cap-drop=AUDIT_WRITE \
    debian /bin/bash

Run ‘man capabilities’ for more information.

Defending Against Insiders

The main problem with giving users access to the docker runtime is that they could run with --privileged and wreak havoc, even if you have SELinux enabled.

So if you’re sufficiently paranoid that you want to remove the ability for users to run Docker, some problems arise:

– How to prevent users from effectively running docker with privileges?

– How to allow users to build images?

udocker is a highly experimental and as-yet incomplete program which only allows you to run docker containers as your own (already logged-in) user id.

It’s small enough for security inspection (just a few lines of code: https://github.com/docker-in-practice/udocker/blob/master/udocker.go, forked from https://github.com/ewindisch/udocker) and potentially very useful where you want to lock down what can be run.

To run:

$ git clone https://github.com/docker-in-practice/udocker.git
$ apt-get install golang-go
$ cd udocker
$ go build
$ id
uid=1001(imiell) gid=1001(imiell) groups=1001(imiell),27(sudo),132(docker)
$ ./udocker fedora:20 whoami
whoami: cannot find name for user ID 1001
$ ./udocker fedora:20 build-locale-archive
permission denied
FATA[0000] Error response from daemon: Cannot start container 6ba3db7094a20c9742a3289401dcf915e03a2906d4e44dbbed42e194de13fd44: [8] System error: permission denied

Compare with normal docker:

$ docker run fedora:20 id
uid=0(root) gid=0(root) groups=0(root)

If you then lock down the docker runtime to be executable only by root, you disable much of docker’s attack surface.
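Locking the runtime down is plain file permissions. A sketch, with a scratch file standing in for the real /usr/bin/docker (don’t chmod the real binary while experimenting):

```shell
# Mode 0700 on a root-owned /usr/bin/docker would leave it executable
# by root alone; a temp file stands in for the binary here.
f=$(mktemp)
chmod 0700 "$f"
ls -l "$f" | cut -c1-10    # -rwx------ : no access for group or other
rm -f "$f"
```

With that in place, ordinary users can only reach docker through whatever root-controlled wrapper (such as udocker) you choose to expose.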

  • Docker build on audited server (and private registry)

One solution that allows building without access to the docker runtime may be to let people submit Dockerfiles via a limited web service which takes care of building the image for them.

It’s relatively easy to knock up a server with a web framework such as python-flask that takes a Dockerfile as a POST request, builds the image, and then deposits the resulting image for post-processing. Or you could even use email as a transport, and email them back a tar file of the checked image build :)

You can also do your static Dockerfile and image checking here before allowing promotion to a privately-run registry. For example you could:

  • Enforce USERs in images

If you have a build server that takes a Dockerfile and produces an image, it becomes relatively easy to do tests.

The first static check I implemented was checking that the image had a valid USER set up:

– There is at least one USER line

– The last USER line is not root/uid0
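A check like that is only a few lines of shell. This sketch is illustrative (the inline sample Dockerfile and the awk logic are mine, not the checker from the build server):

```shell
# Static check: there must be a USER line, and the last one must not
# be root or uid 0. A sample Dockerfile is created for illustration.
df=$(mktemp)
printf 'FROM debian\nUSER root\nUSER appuser\n' > "$df"
last_user=$(awk 'toupper($1) == "USER" { u = $2 } END { print u }' "$df")
if [ -n "$last_user" ] && [ "$last_user" != "root" ] && [ "$last_user" != "0" ]
then echo "OK: image runs as $last_user"
else echo "FAIL: image runs as root (or has no USER line)"
fi
rm -f "$df"
```

Here the intermediate ‘USER root’ is fine – only the last USER line determines who the image runs as, so the check passes.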

  • Run in a VM

The Google approach. Give each user a locked-down VM on which they can run and do what they like, and define ingress and egress at that level.

This can be a pragmatic approach. Some will object that you lose a lot of the benefits of running Docker at scale, but for many developers running tests or Jenkins servers and slaves this will not matter.

Future Work

  • User namespaces

Support for the mapping of users from host to container is being discussed here:

https://github.com/docker/docker/issues/7906

Further Reading

There’s lots more going on in this space. Here are some highlighted links:

Comprehensive CIS Docker security guide

Docker’s security guide

GDS Docker security guidelines

Dan Walsh (aka Mr SELinux) talk on Docker security


Docker SELinux Experimentation with Reduced Pain


Introduction

As a Docker enthusiast who works for a corp that cares about security, SELinux is going to be a big deal for me. While SELinux is in principle simple, in practice it’s difficult to get to grips with. My initial attempts involved reading out-of-date blogs about tools that were deprecated, and confusing introductions that left me wondering where to go.

Fortunately, I came across this blog, which explained how to implement an SELinux policy for apache in Docker.

I tried to apply this to a Vagrant centos image with Docker on it, but kept getting into a state where something was not working, I didn’t know what had happened, and I would then have to re-provision the box, re-install the software, remember my steps, and so on.

So I wrote a ShutIt script to automate this process, reducing the iteration time to re-provision and re-try changes to this SELinux policy.

See it in action here

Overview

This diagram illustrates the way this script works.

docker-selinux

Once ShutIt is set up, you run it as root with:

# shutit build --delivery bash

The ‘build’ argument tells ShutIt to run the commands in the script against the relevant delivery target. By default this is Docker, but here we’re using ShutIt to automate the process of delivery via bash. ssh is also an option.

Running as root is obviously a risk, so be warned if you experiment with the script.

The script is here. It’s essentially a dynamic shell script (readily comprehended in the build method), which can react to different outputs. For example:

# If the Vagrantfile exists, we assume we've already init'd appropriately.
if not shutit.file_exists('Vagrantfile'):
	shutit.send('vagrant init jdiprizio/centos-docker-io')

only calls ‘vagrant init’ if there’s no Vagrant file in the folder. Similarly, these lines:

# Query the status - if it's powered off or not created, bring it up.
if shutit.send_and_match_output('vagrant status',['.*poweroff.*','.*not created.*','.*aborted.*']):
    shutit.send('vagrant up')

send ‘vagrant status’ to the terminal, and call ‘vagrant up’ if the status output indicates the VM isn’t already up. So the script only brings up the VM when needed.
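The same idempotent pattern can be written in plain shell; here the vagrant status output is hard-coded so the branching can be seen without Vagrant installed:

```shell
# Only bring the VM up when the status output says it is down; the
# status string here is a hard-coded sample for illustration.
status='default    poweroff (virtualbox)'
if echo "$status" | grep -qE 'poweroff|not created|aborted'
then echo 'vagrant up'        # the VM is down: bring it up
else echo 'VM already running: nothing to do'
fi
```

ShutIt’s send_and_match_output does the same job, but inside a session it controls, so the result can drive the next automated step.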

And these lines:

vagrant_dir = shutit.cfg[self.module_id]['vagrant_dir']
setenforce  = shutit.cfg[self.module_id]['setenforce']

pick up the config items set in the get_config method, and use them to determine where to deploy on the host system and whether to fully enforce SELinux on the host.

Crucially, it doesn’t destroy the vagrant environment, so you can re-use the VM with all the software on it pre-installed. It ensures that the environment is cleaned up in such a way that you don’t waste time waiting for a long re-provisioning of the VM.

By setting the vagrant directory (which defaults to /tmp/vagrant_dir, see below) you can wipe it completely with an ‘rm -rf’ if you ever want to be sure you’re starting afresh.

Options

Here’s the invocation with configuration options:

# shutit build -d bash \
    -s io.dockerinpractice.docker_selinux.docker_selinux setenforce no \
    -s io.dockerinpractice.docker_selinux.docker_selinux vagrant_dir /tmp/tmp_vagrant_dir

The -s options define the options available to the docker_selinux module. Here we specify that the VM should have setenforce set to off, and the vagrant directory to use is /tmp/tmp_vagrant_dir.

Setup

Instructions on setup are kept here:

#install git
#install python-pip
#install docker
git clone https://github.com/ianmiell/shutit.git
cd shutit
pip install --user -r requirements.txt
echo "export PATH=$(pwd):${PATH}" >> ~/.bashrc
. ~/.bashrc

Then clone the docker-selinux repo and run the script:

git clone https://github.com/ianmiell/docker-selinux.git
cd docker-selinux
sudo su
shutit build --delivery bash

Troubleshooting

Note you may need to alter this line

docker_executable:docker

in the

~/.config/shutit

file to change ‘docker’ to ‘sudo docker’ or however you run docker on your host.

Conclusion

This has considerably sped up my experimentation with SELinux, and I now have a reliable and test-able set of steps to help others (you!) get to grips with SELinux and improve our understanding.
