Talk on Docker and ShutIt

An edited talk I gave a while back on Docker and ShutIt:

[Embedded video]


Using ShutIt and Docker to play with AWS (Part One)

I’m using ShutIt to play with AWS at the moment.

I can leverage the core libraries to build on top of them with my secret data, storing the result in my own source control system, and I’ll show how you can do this here.

Firstly, there’s a core aws module in the ShutIt libraries that takes care of installing the AWS command-line tool:

ShutIt AWS module

ShutIt AWS build script

It contains the ability to configure the AWS access token, but obviously we don’t want to store that in the core library.

The solution to this is to create my own module that effectively inherits from that generic module, adding my .pem files and configuring access.

# create a skeleton module named ianmiellaws under the my.domain namespace
/space/git/shutit/shutit skeleton /my/git/path/ianmiellaws ianmiellaws my.domain
cd /my/git/path/ianmiellaws
# move my private .pem files into the build context
mv /path/to/pems context/
# edit the config and the module build script (see below)
vi configs/
vi ianmiellaws.py
# run the module’s tests
./test.sh

Here is the resulting ianmiellaws.py after the vi edits above:

imiell@lp01728:/space/git/work/notes/aws/ianmiellaws$ cat ianmiellaws.py
from shutit_module import ShutItModule

class ianmiellaws(ShutItModule):

    def is_installed(self,shutit):        
        return False

    def build(self,shutit):
        shutit.send_host_file('t2.pem','context/pems/t2.pem')     
        return True


def module():
    return ianmiellaws(
        'my.domain.ianmiellaws.ianmiellaws', 1159697827.00,
        description='',
        maintainer='',
        depends=['shutit.tk.setup','shutit.tk.aws.aws']
    )

In the config file I put the following (replacing the values in caps with my own details):

[my.domain.aws.aws]
access_key_id:MYKEYHERE
secret_access_key:MYSECRETACCESSKEYHERE
region:MYREGION
output:
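
These are the same four values the AWS CLI itself asks for, and presumably the core library feeds them into the standard client configuration. For comparison, setting them up by hand would look like this:

aws configure
AWS Access Key ID [None]: MYKEYHERE
AWS Secret Access Key [None]: MYSECRETACCESSKEYHERE
Default region name [None]: MYREGION
Default output format [None]: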

Then build it:

/path/to/shutit build -m /path/to/shutit/library

Then run it:

docker run -t -i ianmiellaws /bin/bash

and you should be able to access your AWS services from wherever you have the container.
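
A quick way to verify this (assuming the aws client was installed and configured during the build) is a harmless read-only call from inside the container:

aws ec2 describe-regions --output table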

In the next posts I’ll show how to build on top of this: a module that automatically provisions an AWS instance and runs a Docker service on it.

Phoenix deployment pain (and win)

I’ve posted previously on phoenix deployment, and thought I’d share an incident that was resolved more easily because it had been caught early.

Got a ping that the site was down, so looked into it.

Tried rebuilding the site using ShutIt and discovered quickly that apt-gets were failing.

So what changed?

I checked the network (fine), and then tried to repro on a simple ubuntu:12.10 image. Same there. I did the same on a completely different network. Tried hitting the apt urls directly – white screen.
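
The repro itself was as simple as something like this (a sketch, not the exact command I ran):

docker run -t -i ubuntu:12.10 /bin/bash -c 'apt-get update'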

A quick “ask around” later and I figured out that support for these Ubuntu repos had been withdrawn.

So then I could work on an upgrade relatively quickly, and I had yesterday’s container to compare against. In the meantime Apache’s default security settings had changed, which was quite painful to work through.

But not as painful as it could have been, thanks to phoenix deployment.

Phoenix Deployment with Docker and ShutIt

I wrote a website (themortgagemeter.com) in my spare time a few years ago for a family member’s business; it tracks mortgage rates in real time.

As an experiment, I wrote it in such a way that the entire database (as well as the running code on the live box) was backed up to BitBucket daily. This allowed me to pull the repo and develop straight off master wherever I was. It was well worth the investment: it made creating a dev environment trivial, which saved a lot of time.
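
The mechanics were nothing special; a daily cron job along these lines captures the idea (a sketch only: I’m assuming a MySQL backend here, and the names and paths are hypothetical):

# run daily from cron, e.g.: 0 3 * * * /usr/local/bin/backup_site.sh
mysqldump themortgagemeter > /var/repo/db_dump.sql
cd /var/repo && git add -A && git commit -m "daily db backup" && git push origin master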

When Docker came along, I thought this would be an ideal opportunity to encapsulate the entire site in Docker to further this efficiency. Since there was configuration and interactions to manage, I used ShutIt to store the configuration and build steps.

Once it was containerized (or Shut), it was a short step to implement phoenix deployment.

Phoenix deployment is the principle of rebuilding a service from scratch rather than upgrading it in place. Fortunately ShutIt made it easy to implement this.

Here is the script:

$ cat phoenix.sh 
# rebuild the image from scratch on top of a base ubuntu image
/space/git/shutit/shutit build --shutit_module_path /space/git/shutit/library --image_tag stackbrew/ubuntu:raring
# find the freshly-built container, commit it, and tag the result
docker.io tag $(sudo docker commit $(sudo docker ps -a | grep -v themortgagemeter | grep raring | awk '{print $1}')) themortgagemeter
# remove the old container and start the new one in its place
docker.io rm -f themortgagemeter
docker.io run -t -d -h themortgagemeter --name themortgagemeter -p 80:80 themortgagemeter

So now I have the peace of mind of knowing that whatever is running there now was built from scratch today.

With some simple tests (part of the ShutIt build lifecycle) and external monitoring (using Uptime Robot) I can be sure it will not suffer from bit-rot.
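
The external monitoring boils down to a simple HTTP check; something like this (roughly what Uptime Robot does on my behalf) is enough to catch a dead site:

curl -fsS http://themortgagemeter.com/ > /dev/null && echo OK || echo DOWN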

Docker, ShutIt and the Perfect 2048 Game (3 – Brute Force Escapes)


Now that I’m getting near the highest tile in 2048, the air is getting thin.

I often get into a state like this:

[Image: 2048 board]

where I need a four – not a two – to drop in on the top.

Since fours are less frequent than twos, I have to start up, try, quit, and repeat multiple times, which gets quite tedious.

So I automated it using visgrep.

visgrep is a seemingly obscure tool (part of the xautomation package on Ubuntu) that searches for images within other images. I won’t go into its use here (it’s _not_ user-friendly, and in-depth documentation is perhaps kept by monks somewhere), but there is a man page.

I’ve used it in the commented-out sections here:

https://github.com/ianmiell/shutit/blob/master/library/win2048/resources/start_win2048.sh
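
For flavour, the basic visgrep workflow is something like the following (file names are hypothetical; the real invocation is in the commented-out script above):

# grab a screenshot of the display (ImageMagick)
import -window root screen.png
# convert the image you want to find into visgrep's .pat format
png2pat four.png > four.pat
# search screen.png for the pattern; each hit prints as "x,y pattern-index"
visgrep screen.png four.pat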

To use it, uncomment the lines, put the appropriate key code in, save the container, and run it as below:

host_prompt$ while [ 1 ]; do sudo docker run -t -i -p 5901:5901 -p 6080:6080 -e HOME=/root imiell/2048_left /root/start_vnc.sh; sleep 5; done

New 'b80055fbbeff:1 ()' desktop is b80055fbbeff:1

Starting applications specified in /root/.vnc/xstartup
Log file is /root/.vnc/b80055fbbeff:1.log

Did you expose ports 5901 and 6080?
If so, then vnclient localhost:1 should work.
18874476
307,129 0
307,130 0
306,131 0
307,131 0
308,131 0
307,132 0
307,133 0
0
FAIL
[above output loops n times]

New '3fd43021ea4a:1 ()' desktop is 3fd43021ea4a:1

Starting applications specified in /root/.vnc/xstartup
Log file is /root/.vnc/3fd43021ea4a:1.log

Did you expose ports 5901 and 6080?
If so, then vnclient localhost:1 should work.
16777324
0
OK
container_prompt$

So when I see an “OK” and a prompt within the container, I know I can VNC in and continue playing.

The time saved by doing this is significant.

I’m not really a game player, but I imagine this principle could be applied to a lot of games. Also, the process could be made much slicker, as it is still very much done by hand.
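
For example, the restart loop above could stop itself once the board is good. A sketch, assuming start_vnc.sh were changed to exit non-zero on FAIL and zero on OK:

# keep restarting the container until it reports OK, then stop looping
until sudo docker run -t -i -p 5901:5901 -p 6080:6080 -e HOME=/root imiell/2048_left /root/start_vnc.sh
do
    sleep 5
done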