Using ShutIt to Build Your Own Taiga Server

Recently someone brought this issue on GitHub to my attention.

It’s a common problem: an awesome-looking project, but how exactly do I install it so I can play with it?

This is a perfect use case for ShutIt.

I built it by setting up a skeleton directory, which creates a standalone ShutIt module:

./shutit skeleton /path/to/shutit/library/taigaio taigaio

which gave me the boilerplate to produce the build section here:

You can see the ShutIt API is fairly intuitive: you call methods like login, send, multisend, run_script and logout on the shutit object to perform actions within the bash session that’s set up for you.
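To illustrate the call pattern without the ShutIt framework installed, here is a sketch that uses a small stand-in shutit object which just records each call; the build steps shown are illustrative, not the real taigaio build section.

```python
class FakeShutIt:
    """Stand-in for the shutit object: records commands instead of
    driving a real bash session."""
    def __init__(self):
        self.log = []

    def login(self, command='bash'):
        self.log.append('login: ' + command)

    def send(self, cmd):
        self.log.append('send: ' + cmd)

    def multisend(self, cmd, responses):
        # responses maps expected prompt fragments to the reply to send
        self.log.append('multisend: ' + cmd)

    def run_script(self, script):
        # record the first line of the script being run
        self.log.append('run_script: ' + script.strip().splitlines()[0])

    def logout(self):
        self.log.append('logout')


def build(shutit):
    # Illustrative steps only; not copied from the real module.
    shutit.login(command='bash')
    shutit.send('apt-get update')
    shutit.multisend('apt-get install -y postgresql',
                     {'Do you want to continue': 'Y'})
    shutit.run_script('#!/bin/bash\necho setup complete')
    shutit.logout()
    return True


shutit = FakeShutIt()
build(shutit)
print('\n'.join(shutit.log))
```

In a real module the same method calls drive an actual bash session that ShutIt has logged into for you.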

Once written, you can test with:

$ cd /path/to/shutit/library/taigaio/bin
$ ./

Then, to run it:

$ docker run -i -t -p -p taigaio bash -c '/root/ && /root/ && echo READY! && sleep 3000d'

Navigate to http://localhost:8000 and log in as admin/123123

If you just want to get going:

$ docker run -i -t -p -p imiell/taigaio bash -c '/root/ && /root/ && echo READY! && sleep 3000d'

Wait until you see “READY!”, then navigate to http://localhost:8000 and log in as admin/123123




Using ShutIt and Docker to play with AWS (Part Two)


Previously I showed a very basic way of using ShutIt to connect to AWS.

I’ve taken this one step further, so now there is a template for automating:

  • killing any AWS instance you have running
  • provisioning a new instance
  • logging onto the new instance
  • installing docker on it
  • pulling and running your image

All you will need is a .pem file and the name of the security group you want to use.
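The bullet-point steps above map roughly onto aws CLI invocations like the following. This is a sketch for orientation only: the real template drives these through a ShutIt bash session, and everything in ALL_CAPS (plus the instance id, AMI and key names passed in) is a placeholder.

```python
# Rough shape of what the template automates, expressed as aws CLI
# commands. All identifiers passed in below are placeholders.

def aws_steps(instance_id, ami, key_name, security_group, pem_path, image):
    return [
        # kill any instance you have running
        'aws ec2 terminate-instances --instance-ids %s' % instance_id,
        # provision a new instance
        'aws ec2 run-instances --image-id %s --count 1 '
        '--instance-type t2.micro --key-name %s --security-groups %s'
        % (ami, key_name, security_group),
        # log onto the new instance
        'ssh -i %s ec2-user@NEW_INSTANCE_IP' % pem_path,
        # install docker on it
        'sudo yum install -y docker && sudo service docker start',
        # pull and run your image
        'sudo docker pull %s && sudo docker run -d %s' % (image, image),
    ]

for step in aws_steps('i-0123456789abcdef0', 'ami-XXXXXXXX',
                      'yourpemname', 'YOUR_SECURITY_GROUP',
                      '~/.ssh/yourpemname.pem', 'training/webapp'):
    print(step)
```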

I’m assuming you already have an AWS account on the free tier and have nothing running on it (or only a throwaway instance that we’re going to kill).

Here are the steps to get going, from an ubuntu:14.04 basic install:

1) Basic installs

apt-get update
apt-get install -y python-pip
git clone
cd shutit
pip install -r requirements.txt 
mkdir -p ~/.shutit && touch ~/.shutit/config && chmod 600 ~/.shutit/config 
vi ~/.shutit/config

2) Edit config for AWS

Then edit the file as here, changing the bits in CAPS as necessary:

# Your region, eg:
# region:eu-west-1
# Your pem filename, eg if your pem file is called yourpemname.pem:
# pem_name:yourpemname
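Putting the two settings together, a minimal ~/.shutit/config might look like the fragment below. Note that ShutIt config sections are named after the module they configure; the section name here is an assumption for illustration, not taken from the source.

```
# ~/.shutit/config
# Section name is an assumed module name; substitute your module's
# full name as appropriate.
[shutit.tk.aws.aws]
region:eu-west-1
pem_name:yourpemname
```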

3) Place your .pem file in the context’s pem directory:

cp /path/to/yourpemname.pem examples/aws_example/context/yourpemname.pem

4) Run it

cd examples/aws_example_provision
../../shutit build --shutit_module_path ../../library:.. --interactive 0

And wait.

Once you’ve seen that that works, you can now try changing it to automate the setup of an app on AWS.

You can start by uncommenting these lines in the build script:

 shutit.send('sudo yum install -y docker')
 shutit.send('sudo service docker start')
 # Example of what you might want to do. Note that this won't work out
 # of the box as the security policy of the VM needs to allow access to
 # the relevant port.
 #shutit.send('sudo docker pull training/webapp')
 #shutit.send('sudo docker run -d --net=host training/webapp')
 # Exit back to the "real container"

And running the build again.
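As the comment in the snippet notes, the instance’s security group must allow traffic to the app’s port before it is reachable from outside. With the aws CLI that is a one-liner; the sketch below builds it (training/webapp serves on port 5000, and the group name is a placeholder).

```python
# Build the aws CLI command that opens a port in a security group.
# The group name passed in is a placeholder.

def open_port(group_name, port):
    return ('aws ec2 authorize-security-group-ingress '
            '--group-name %s --protocol tcp --port %d --cidr 0.0.0.0/0'
            % (group_name, port))

print(open_port('YOUR_SECURITY_GROUP', 5000))
```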

Using ShutIt and Docker to play with AWS (Part One)

I’m using ShutIt to play with AWS at the moment.

I can leverage the core libraries to easily build on top of them with my secret data and store the result in my own source control system, and I’ll show how you can do this here.

Firstly, there’s a core aws module in the ShutIt libraries that takes care of installing the aws command line tool:

ShutIt AWS module

ShutIt AWS build script

It contains the ability to configure the AWS access token, but obviously we don’t want to store that in the core library.

The solution to this is to create my own module, which effectively inherits from that generic solution, adding my pems and configuring for access.

/space/git/shutit/shutit skeleton /my/git/path/ianmiellaws ianmiellaws my.domain
cd /my/git/path/ianmiellaws
mv /path/to/pems context/
cd /my/git/path/ianmiellaws/
vi configs/

The bits in bold are the parts edited in the above vi session:

imiell@lp01728:/space/git/work/notes/aws/ianmiellaws$ cat
from shutit_module import ShutItModule

class ianmiellaws(ShutItModule):

    def is_installed(self,shutit):        
        return False

    def build(self,shutit):
        return True

def module():
    return ianmiellaws(
        'my.domain.ianmiellaws.ianmiellaws', 1159697827.00,

In the config file edit I put (replacing the stuff in caps with my details):


Then build it:

/path/to/shutit build -m /path/to/shutit/library

Then run it:

docker run -t -i ianmiellaws /bin/bash

and you should be able to access your AWS services from wherever you have the container.

In the next posts I’ll show how to build on top of this to write a module to automatically provision an AWS instance and run a docker service on it.

Phoenix deployment pain (and win)

I’ve posted previously on phoenix deployment, and thought I’d share an incident that happened that was resolved more easily because it was caught early.

Got a ping that the site was down, so looked into it.

Tried rebuilding the site using ShutIt and discovered quickly that apt-gets were failing.

So what changed?

I checked the network (fine), and then tried to repro on a simple ubuntu:12.10 image. Same there. I did the same on a completely different network. Tried hitting the apt urls directly – white screen.

A quick “ask around” later and I figured out that support for these Ubuntu repos had been withdrawn.

So then I could work on an upgrade relatively quickly, and I had yesterday’s container to compare differences against. In the meantime Apache’s default security settings had changed, so that was quite painful.

But not as painful as it could have been, thanks to PD.

Phoenix Deployment with Docker and ShutIt

I wrote a website in my spare time a few years ago for a family member’s business, tracking mortgage rates in real time.

As an experiment I wrote it in such a way that the entire database would be backed up to BitBucket daily (as well as the running code on the live box). This allowed me to easily pull the repo, and dev straight off master wherever I was developing. It was well worth the investment; much time was saved in easy dev environment creation.

When Docker came along, I thought this would be an ideal opportunity to encapsulate the entire site in Docker to further this efficiency. Since there was configuration and interactions to manage, I used ShutIt to store the configuration and build steps.

Once it was containerized (or Shut), it was a short step to implement phoenix deployment.

Phoenix deployment is the principle of rebuilding rather than upgrading. Fortunately ShutIt made it easy to implement this.

Here is the script:

$ cat
/space/git/shutit/shutit build --shutit_module_path /space/git/shutit/library --image_tag stackbrew/ubuntu:raring
sudo docker tag $(sudo docker commit $(sudo docker ps -a | grep -v themortgagemeter | grep raring | awk '{print $1}')) themortgagemeter
sudo docker rm -f themortgagemeter
sudo docker run -t -d -h themortgagemeter --name themortgagemeter -p 80:80 themortgagemeter
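The script above boils down to a four-step phoenix cycle: rebuild, tag, remove, run. This sketch assembles the equivalent commands for illustration; NEW_IMAGE_ID stands in for the id produced by docker commit, and the shutit build path is abbreviated.

```python
# The phoenix cycle: rebuild the image from scratch, then replace the
# running container with one made from the fresh image. Names and the
# port mirror the themortgagemeter script; NEW_IMAGE_ID is a placeholder.

def phoenix_cycle(name, port):
    return [
        # 1. rebuild the image from scratch with ShutIt
        'shutit build --shutit_module_path /path/to/shutit/library',
        # 2. tag the freshly built image with the service name
        'docker tag NEW_IMAGE_ID %s' % name,
        # 3. remove the old running container
        'docker rm -f %s' % name,
        # 4. start a new container from the fresh image
        'docker run -t -d -h %s --name %s -p %d:%d %s'
        % (name, name, port, port, name),
    ]

for cmd in phoenix_cycle('themortgagemeter', 80):
    print(cmd)
```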

So now I have the peace of mind of knowing that whatever is running there now was built from scratch today.

With some simple tests (part of the ShutIt build lifecycle) and external monitoring (using Uptime Robot) I can be sure it will not suffer from bit-rot.