Centralise Your Bash History


Have you ever run a command on one of your hosts and then wanted to retrieve it later, but couldn’t remember where you ran it, or found it had dropped out of your history?

This happens to me all the time. The other day I was hunting for a command I was convinced I’d run, but wasn’t sure where it was or whether it was stored.

So I finally wrote a service that records my every command centrally.

Here’s an overview of its (simple) architecture.





This stores your bash history on a server in a file.

The service runs on a port of your choosing, and if you add a few lines to your ~/.bashrc file it will all work seamlessly for you.

There’s some basic authentication (shared key) to prevent abuse of the service.


To set up, see the README.
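The exact lines to add are in the README, but to give a flavour, the hook is broadly of this shape – a sketch only, with the hostname, port and key names invented for illustration:

# Illustrative ~/.bashrc snippet: after each command, ship the last history entry to the central server.
# BASHHIST_HOST, BASHHIST_PORT and BASHHIST_KEY are placeholder names, not the real ones.
BASHHIST_HOST=myserver.example.com
BASHHIST_PORT=3333
BASHHIST_KEY=mysharedsecret
record_command() {
  # Take the last history entry and strip off its number.
  local cmd
  cmd=$(history 1 | sed 's/^ *[0-9]* *//')
  # Fire and forget; don't slow the prompt down if the server is unreachable.
  echo "${BASHHIST_KEY} $(hostname) ${cmd}" | socat - TCP:${BASHHIST_HOST}:${BASHHIST_PORT},connect-timeout=1 2>/dev/null &
}
PROMPT_COMMAND=record_command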


Needs socat installed, and bash version 4+.

Related posts

Ten Things About Bash
Ten More Things About Bash


I need help to improve this – see the README.


If you ask nicely I might host it for you, without warranty etc.


If you want to learn more about bash, read my book Learn Bash the Hard Way, available for $5:



How (and Why) I Run My Own DNS Servers



Despite my woeful knowledge of networking, I run my own DNS servers for my own websites, which are served from home.

I achieved this through trial and error and now it requires almost zero maintenance, even though I don’t have a static IP at home.

Here I share how (and why) I persist in this endeavour.


This is an overview of the setup:


This is how I set up my DNS. I:

  • got a domain from an authority (a .tk domain in my case)
  • set up glue records to defer DNS queries to my nameservers
  • set up nameservers with static IPs
  • set up a dynamic DNS updater from home


Walking through how I did it, step by step:

1) Set up two Virtual Private Servers (VPSes)

You will need two stable machines with static IP addresses.

If you’re not lucky enough to have these in your possession, then you can set them up cheaply in the cloud. I used this site, but there are plenty out there. NB I asked them, and their IPs are static per VPS. I use the cheapest cloud VPS ($1/month) and set up Debian on there.

NOTE: Replace any mention of DNSIP1 and DNSIP2 below with the first and second static IP addresses you are given.

Log on and set up root password

SSH to the servers and set up a strong root password.

2) Set up domains

You will need two domains: one for your dns servers, and one for the application running on your host.

I use dot.tk to get free throwaway domains. In this case, I might set up a myuniquedns.tk DNS domain and a myuniquesite.tk site domain.

Whatever you choose, replace your DNS domain when you see YOURDNSDOMAIN below. Similarly, replace your app domain when you see YOURSITEDOMAIN below.

3) Set up a ‘glue’ record

If you use dot.tk as above, then to allow you to manage the YOURDNSDOMAIN domain you will need to set up a ‘glue’ record.

What this does is tell the current domain authority (dot.tk) to defer to your nameservers (the two servers you’ve set up) for this specific domain. Otherwise it keeps referring back to the .tk domain for the IP.

See here for a fuller explanation.

Another good explanation is here.

To do this you need to check with the authority responsible how this is done, or become the authority yourself.

dot.tk has a web interface for setting up a glue record, so I used that.

There, you need to go to ‘Manage Domains’ => ‘Manage Domain’ => ‘Management Tools’ => ‘Register Glue Records’ and fill out the form.

Your two hosts will be called ns1.YOURDNSDOMAIN and ns2.YOURDNSDOMAIN, and the glue records will point them at DNSIP1 and DNSIP2 respectively.

Note, you may need to wait a few hours (or longer) for this to take effect. If really unsure, give it a day.
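You can check whether the glue has taken effect with dig (from the dnsutils package on Debian). Until bind is running (step 4 onwards) your nameservers won’t answer queries, but the trace will at least show whether the .tk servers are delegating to ns1/ns2:

# Follow the delegation from the root downwards; the last step should hand off to ns1/ns2.YOURDNSDOMAIN
dig +trace YOURDNSDOMAIN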

If you like this post, you might be interested in my book Learn Bash the Hard Way, available here for just $5.


4) Install bind on the DNS Servers

On a Debian machine (for example), and as root, type:

apt install bind9

bind is the domain name server software you will be running.

5) Configure bind on the DNS Servers

Now, this is the hairy bit.

There are two parts to this, with two files involved: named.conf.local, and the db.YOURDNSDOMAIN file.

They are both in the /etc/bind folder. Navigate there and edit these files.

Part 1 – named.conf.local

This file lists the ‘zones’ (domains) served by your DNS servers.

It also defines whether this bind instance is the ‘master’ or the ‘slave’. I’ll assume ns1.YOURDNSDOMAIN is the ‘master’ and ns2.YOURDNSDOMAIN is the ‘slave’.

Part 1a – the master

On the master/ns1.YOURDNSDOMAIN, the named.conf.local should be changed to look like this:

zone "YOURDNSDOMAIN" {
 type master;
 file "/etc/bind/db.YOURDNSDOMAIN";
 allow-transfer { DNSIP2; };
};

zone "YOURSITEDOMAIN" {
 type master;
 file "/etc/bind/db.YOURSITEDOMAIN";
 allow-transfer { DNSIP2; };
};

zone "14.127.75.in-addr.arpa" {
 type master;
 notify no;
 file "/etc/bind/db.75";
 allow-transfer { DNSIP2; };
};

logging {
 channel query.log {
  file "/var/log/query.log";
  // Set the severity to dynamic to see all the debug messages.
  severity debug 3;
 };
 category queries { query.log; };
};

The logging at the bottom is optional (I think). I added it a while ago, and I leave it in here for interest. I don’t know what the 14.127 zone stanza is about.

Part 1b – the slave

On the slave/ns2.YOURDNSDOMAIN, the named.conf.local should be changed to look like this:

zone "YOURDNSDOMAIN" {
 type slave;
 file "/var/cache/bind/db.YOURDNSDOMAIN";
 masters { DNSIP1; };
};

zone "YOURSITEDOMAIN" {
 type slave;
 file "/var/cache/bind/db.YOURSITEDOMAIN";
 masters { DNSIP1; };
};

zone "14.127.75.in-addr.arpa" {
 type slave;
 file "/var/cache/bind/db.75";
 masters { DNSIP1; };
};



Part 2 – db.YOURDNSDOMAIN

Now we get to the meat – your DNS database is stored in this file.

On the master/ns1.YOURDNSDOMAIN the db.YOURDNSDOMAIN file looks like this:

$TTL 4800
@ IN SOA ns1.YOURDNSDOMAIN. YOUREMAIL.YOUREMAILDOMAIN. (
  2018011615 ; Serial
  604800 ; Refresh
  86400 ; Retry
  2419200 ; Expire
  604800 ) ; Negative Cache TTL
@ IN NS ns1.YOURDNSDOMAIN.
@ IN NS ns2.YOURDNSDOMAIN.
@ IN A YOURDYNAMICIP

On the slave/ns2.YOURDNSDOMAIN it’s very similar, but has ns1 in the SOA line, and the IN NS lines reversed. I can’t remember if this reversal is needed or not…:

$TTL 4800
@ IN SOA ns1.YOURDNSDOMAIN. YOUREMAIL.YOUREMAILDOMAIN. (
  2018011615 ; Serial
  604800 ; Refresh
  86400 ; Retry
  2419200 ; Expire
  604800 ) ; Negative Cache TTL
@ IN NS ns2.YOURDNSDOMAIN.
@ IN NS ns1.YOURDNSDOMAIN.
@ IN A YOURDYNAMICIP

A few notes on the above:

  • The dots at the end of lines are not typos – this is how fully-qualified domains are written in bind files. So google.com is written as google.com. (note the trailing dot).
  • The YOUREMAIL.YOUREMAILDOMAIN. part must be replaced by your own email. For example, my email address: ian.miell@gmail.com becomes ianmiell.gmail.com..  Note also that the dot between first and last name is dropped. email ignores those anyway!
  • YOURDYNAMICIP is the IP address your domain should be pointed to (ie the IP address returned by the DNS server). It doesn’t matter what it is at this point, because….

the next step is to dynamically update the DNS server with your dynamic IP address whenever it changes.
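Before moving on it’s worth sanity-checking what you’ve written. bind ships with checkers for both the config and the zone files (packaged as bind9utils or bind9-utils on Debian, depending on the release), so on each nameserver you can run something like:

# Syntax-check the bind configuration; no output means no errors
named-checkconf /etc/bind/named.conf.local
# Check the zone file parses and its records make sense
named-checkzone YOURDNSDOMAIN /etc/bind/db.YOURDNSDOMAIN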

6) Copy ssh keys

Before setting up your dynamic DNS you need to set up your ssh keys so that your home server can access the DNS servers.

NOTE: This is not security advice. Use at your own risk.

First, check whether you already have an ssh key generated:

ls ~/.ssh/id_rsa

If that returns a file, you’re all set up. Otherwise, type:

ssh-keygen

and accept the defaults.

Then, once you have a key set up, copy your ssh ID to the nameservers:

ssh-copy-id root@DNSIP1
ssh-copy-id root@DNSIP2

You will need to input your root password for each command.

7) Create an IP updater script

Now ssh to both servers and place this script in /root/update_ip.sh:

#!/bin/bash
set -o nounset
sed -i "s/^\(.*\) IN A \(.*\)$/\1 IN A $1/" /etc/bind/db.YOURDNSDOMAIN
sed -i "s/.*Serial$/ $(date +%Y%m%d%H) ; Serial/" /etc/bind/db.YOURDNSDOMAIN
/etc/init.d/bind9 restart

Make it executable by running:

chmod +x /root/update_ip.sh

Going through it line by line:

  • set -o nounset

This makes the script throw an error if any variable is unset – in particular, if the IP is not passed in as the argument to the script.

  • sed -i "s/^\(.*\) IN A \(.*\)$/\1 IN A $1/" /etc/bind/db.YOURDNSDOMAIN

Replaces the IP address with the contents of the first argument to the script.

  • sed -i "s/.*Serial$/ $(date +%Y%m%d%H) ; Serial/" /etc/bind/db.YOURDNSDOMAIN

Bumps the ‘serial number’, which signals that the zone content has changed.

  • /etc/init.d/bind9 restart

Restart the bind service on the host.
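Before wiring this into cron you can give the script a dry run from your home machine. The 203.0.113.10 here is just a stand-in for your current IP, and the dig assumes the A record you care about lives in db.YOURDNSDOMAIN as above:

ssh root@DNSIP1 "/root/update_ip.sh 203.0.113.10"
# Check the nameserver now hands out the new address
dig +short YOURDNSDOMAIN @DNSIP1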

8) Cron Your Dynamic DNS

At this point you’ve got ssh access to the nameservers and a script to do the update. The last piece is spotting when your dynamic IP changes and calling the script automatically.

Here’s the raw cron entry:

* * * * * curl ifconfig.co 2>/dev/null > /tmp/ip.tmp && (diff /tmp/ip.tmp /tmp/ip || (mv /tmp/ip.tmp /tmp/ip && ssh root@DNSIP1 "/root/update_ip.sh $(cat /tmp/ip)")); curl ifconfig.co 2>/dev/null > /tmp/ip.tmp2 && (diff /tmp/ip.tmp2 /tmp/ip2 || (mv /tmp/ip.tmp2 /tmp/ip2 && ssh root@DNSIP2 "/root/update_ip.sh $(cat /tmp/ip2)"))

Breaking this command down step by step:

curl ifconfig.co 2>/dev/null > /tmp/ip.tmp

This curls a ‘what is my IP address’ site, and deposits the output to /tmp/ip.tmp

diff /tmp/ip.tmp /tmp/ip || (mv /tmp/ip.tmp /tmp/ip && ssh root@DNSIP1 "/root/update_ip.sh $(cat /tmp/ip)")

This diffs the contents of /tmp/ip.tmp with /tmp/ip (which is yet to be created, and which holds the last-pushed IP address). If they differ (ie there is a new IP address to update on the DNS server), then the subshell is run. This overwrites the stored IP address, and then ssh’es onto the first nameserver to run /root/update_ip.sh with the new IP address as its argument.

The same process is then repeated for DNSIP2 using separate files (/tmp/ip.tmp2 and /tmp/ip2).
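If the one-liner is hard to follow, the same logic can be pulled out into a script on the home machine and called from cron instead. This is only a sketch – note it simplifies the original by sharing one state file across both nameservers rather than keeping one per server:

#!/bin/bash
# Fetch the current external IP address.
curl ifconfig.co 2>/dev/null > /tmp/ip.tmp || exit 1
# Only push an update if it differs from the last IP we recorded.
if ! diff /tmp/ip.tmp /tmp/ip >/dev/null 2>&1
then
  mv /tmp/ip.tmp /tmp/ip
  for dns in DNSIP1 DNSIP2
  do
    ssh root@${dns} "/root/update_ip.sh $(cat /tmp/ip)"
  done
fi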



You may be wondering why I do this in the age of cloud services and outsourcing. There are a few reasons.

It’s Cheap

The cost of running this stays at the cost of the two nameservers ($24/year in total), no matter how many domains I manage or what I want to do with them.


I’ve learned a lot by doing this, probably far more than any course would have taught me.

More Control

I can do what I like with these domains: set up any number of subdomains, try my hand at secure mail techniques, experiment with obscure DNS records and so on.

I could extend this into a service. If you’re interested, my rates are very low :)

If you like this post, you might be interested in my book Learn Bash the Hard Way, available here for just $5.


If you liked this post, you might also like:

Create Your Own Git Diagrams

Ten Things I Wish I’d Known About bash

Ten More Things I Wish I’d Known About bash

Project Management as Code with Graphviz

Ten Things I Wish I’d Known Before Using Jenkins Pipelines


Ten More Things I Wish I'd Known About bash


My previous post took off far more than I expected, so I thought I’d write another piece on less well-known bash features.

As I said before, because I felt bash is an often-used (and under-understood) technology, I ended up writing a book on it while researching it. It’s really gratifying to know that other people think it’s important too, despite it being un-hip.

A preview of the book is available here. It focusses more than these articles on ensuring you are drilled in and understand the concepts you need to take your bash usage to a higher level. This is written more for ‘fun’.

HN discussion here.



1) ^x^y^

A gem I use all the time.

Ever typed anything like this?

$ grp somestring somefile
-bash: grp: command not found

Sigh. Hit ‘up’, ‘left’ until at the ‘p’ and type ‘e’ and return.

Or do this:

$ ^rp^rep^
grep somestring somefile

One subtlety you may want to note though is:

$ grp rp somefile
$ ^rp^rep^
$ grep rp somefile

If you wanted rep to be searched for, then you’ll need to dig into the man page and use a more powerful history command:

$ grp rp somefile
$ !!:gs/rp/rep
grep rep somefile

I won’t try and explain this one here…


2) pushd / popd

This one comes in very handy for scripts, especially when operating within a loop.

Let’s say you’re in a for loop moving in and out of folders like this:

for d1 in $(ls -d */)
do
  # Store original working directory.
  original_wd=$(pwd)
  cd "$d1"
  for d2 in $(ls -d */)
  do
    pushd "$d2"
    # Do something
    popd
  done
  # Return to original working directory
  cd "${original_wd}"
done

You can rewrite the above using the pushd stack like this:

for d1 in $(ls -d */)
do
  pushd "$d1"
  for d2 in $(ls -d */)
  do
    pushd "$d2"
    # Do something
    popd
  done
  popd
done

Which tracks the folders you’ve pushed and popped as you go.

Note that if there’s an error in a pushd you may lose track of the stack and popd too many times. You probably want to set -e in your script as well (see the previous post).

There’s also cd -, but that doesn’t ‘stack’ – it just returns you to the previous folder:

cd ~
cd /tmp
cd blah
cd - # Back to /tmp
cd - # Back to 'blah'
cd - # Back to /tmp
cd - # Back to 'blah' ...

3) shopt vs set

This one bothered me for a while.

What’s the difference between set and shopt?

We saw set earlier, but shopt looks very similar. Just typing shopt shows a bunch of options:

$ shopt
cdable_vars    off
cdspell        on
checkhash      off
checkwinsize   on
cmdhist        on
compat31       off
dotglob        off

I found a set of answers here.

Essentially, it looks like it’s a consequence of bash (and other shells) being built on sh, and adding shopt as another way to set extra shell options.

But I’m still unsure… if you know the answer, let me know.
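Whatever the history, the mechanics are easy enough to remember: set options are toggled with -o and +o, shopt options with -s and -u:

set -o noclobber       # switch a 'set' option on
set +o noclobber       # ...and off again
shopt -s dotglob       # switch a 'shopt' option on
shopt -u dotglob       # ...and off again
shopt -p dotglob       # print an option's current state
shopt -o -s noclobber  # -o makes shopt operate on the 'set' options, bridging the two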

4) Here Docs and Here Strings

‘Here docs’ are files created inline in the shell.

The ‘trick’ is simple. Define a closing word, and the lines between that word and when it appears alone on a line become a file.

Type this:

$ cat > afile << SOMEENDSTRING
> here is a doc
> it has three lines
> SOMEENDSTRING alone on a line will save the doc
> SOMEENDSTRING
$ cat afile
here is a doc
it has three lines
SOMEENDSTRING alone on a line will save the doc

Notice that:

  • the string could be included in the file if it was not ‘alone’ on the line
  • the string SOMEENDSTRING is more normally END, but that is just convention

Lesser known is the ‘here string’:

$ cat > asd <<< 'This file has one line'
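Here strings are handy wherever a command reads stdin but all you have is a variable – for example, feeding read without a pipe (a pipe would run read in a subshell and throw the variables away). PAIR here is just an example variable:

$ PAIR='key value'
$ read -r K V <<< "${PAIR}"
$ echo "${K} and ${V}"
key and value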


5) String Variable Manipulation

You may have written code like this before, where you use tools like sed to manipulate strings:

$ VAR='HEADERMy voice is my passwordFOOTER'
$ PASS="$(echo $VAR | sed 's/^HEADER\(.*\)FOOTER/\1/')"
$ echo $PASS
My voice is my password

But you may not be aware that this is possible natively in bash.

This means that you can dispense with lots of sed and awk shenanigans.

One way to rewrite the above is:

$ VAR='HEADERMy voice is my passwordFOOTER'
$ PASS=${VAR#HEADER}
$ PASS=${PASS%FOOTER}
$ echo $PASS
  • The # means ‘match and remove the following pattern from the start of the string’
  • The % means ‘match and remove the following pattern from the end of the string

The second method is twice as fast as the first on my machine. And (to my surprise), it was roughly the same speed as a similar python script.
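If you want to check the relative speeds on your own machine, a rough harness like this will do (the loop count is arbitrary):

$ VAR='HEADERMy voice is my passwordFOOTER'
$ time for i in {1..1000}; do PASS="$(echo $VAR | sed 's/^HEADER\(.*\)FOOTER/\1/')"; done
$ time for i in {1..1000}; do PASS=${VAR#HEADER}; PASS=${PASS%FOOTER}; done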

If you want to use glob patterns that are greedy (see globbing here) then you double up:

$ VAR='HEADERMy voice is my passwordFOOTER'
$ echo ${VAR##HEADER*}

$ echo ${VAR%%*FOOTER}



6) Variable Defaults

These are very handy for knocking up scripts.

If you have a variable that’s not set, you can ‘default’ it by using this. Create a file called default.sh with these contents:

FIRST_ARG=${1:-default_first}
SECOND_ARG=${2:-default_second}
THIRD_ARG=${3:-default_third}
echo ${FIRST_ARG}
echo ${SECOND_ARG}
echo ${THIRD_ARG}

Now run chmod +x default.sh and run the script with ./default.sh first second.

Observe how the third argument’s default has been assigned, but not the first two.

You can also assign directly with ${VAR:=defaultval} (equals sign, not dash) but note that this won’t work with positional variables in scripts or functions. Try changing the above script to see how it fails.

7) Traps

The trap builtin can be used to ‘catch’ when a signal is sent to your script.

Here’s an example I use in my own cheapci script:

function cleanup() {
    rm -rf "${BUILD_DIR}"
    rm -f "${LOCK_FILE}"
    # get rid of /tmp detritus, removing anything not accessed in the last 2 days
    find "${BUILD_DIR_BASE}"/* -type d -atime +1 | xargs rm -rf
    echo "cleanup done"
}
trap cleanup TERM INT QUIT

Any attempt to CTRL-C, CTRL-\ or terminate the program using the TERM signal will result in cleanup being called first.

Be aware:

  • Trap logic can get very tricky (eg handling signal race conditions)
  • The KILL signal can’t be trapped in this way

But mostly I’ve used this for ‘cleanups’ like the above, which serve their purpose.

8) Shell Variables

It’s well worth getting to know the standard shell variables available to you. Here are some of my favourites:


RANDOM

Don’t rely on this for your cryptography stack, but you can generate random numbers, eg to create temporary files in scripts:

$ echo ${RANDOM}
$ # Not enough digits?
$ echo ${RANDOM}${RANDOM}
$ NEWFILE=/tmp/newfile_${RANDOM}
$ touch $NEWFILE


REPLY

No need to give a variable name for read:

$ read
my input
$ echo ${REPLY}


LINENO and SECONDS

Handy for debugging:

echo ${LINENO}
echo ${SECONDS}; sleep 1; echo ${SECONDS}; echo $LINENO

Note that there are two ‘lines’ above, even though you used ; to separate the commands.


TMOUT

You can time out reads, which can be really handy in some scripts:

TMOUT=5
echo You have 5 seconds to respond...
read
echo ${REPLY:-noreply}


9) Extglobs

If you’re really knee-deep in bash, then you might want to power up your globbing. You can do this by setting the extglob shell option. Here’s the setup:

shopt -s extglob
A="some string"
B="  ${A}  "

Now see if you can figure out what each of these does:

echo "B      |${B}|"
echo "B#+( ) |${B#+( )}|"
echo "B#?( ) |${B#?( )}|"
echo "B#*( ) |${B#*( )}|"
echo "B##+( )|${B##+( )}|"
echo "B##*( )|${B##*( )}|"
echo "B##?( )|${B##?( )}|"

Now, potentially useful as it is, it’s hard to think of a situation where you’d absolutely want to do it this way. Normally you’d use a tool better suited to the task (like sed) or just drop bash and go to a ‘proper’ programming language like python.

10) Associative Arrays

Talking of moving to other languages, a rule of thumb I use is that if I need arrays then I drop bash to go to python (I even created a Docker container for a tool to help with this here).

What I didn’t know until I read up on it was that you can have associative arrays in bash.

Type this out for a demo:

$ declare -A MYAA=([one]=1 [two]=2 [three]=3)
$ MYAA[one]="1"
$ MYAA[two]="2"
$ echo $MYAA
$ echo ${MYAA[one]}
$ MYAA[one]="1"
$ WANT=two
$ echo ${MYAA[$WANT]}

Note that this is only available in bashes 4.x+.
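One thing the demo doesn’t show is iteration: ${!MYAA[@]} expands to the keys and ${MYAA[@]} to the values (the order is not guaranteed):

$ for key in "${!MYAA[@]}"; do echo "${key} => ${MYAA[$key]}"; done
three => 3
two => 2
one => 1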


This is based on some of the contents of my book Learn Bash the Hard Way, available for $5:


Preview available here.

I also wrote Docker in Practice 

Get 39% off with the code: 39miell2

If you liked this post, you might also like these:

Ten Things I Wish I’d Known About bash

Centralise Your Bash History

How (and Why) I Run My Own DNS Servers

My Favourite Secret Weapon – strace

A Complete Chef Infrastructure on Your Laptop


Ten Things I Wish I’d Known About bash


Recently I wanted to deepen my understanding of bash by researching as much of it as possible. Because I felt bash is an often-used (and under-understood) technology, I ended up writing a book on it.

A preview is available here.

You don’t have to look hard on the internet to find plenty of useful one-liners in bash, or scripts. And there are guides to bash that seem somewhat intimidating through either their thoroughness or their focus on esoteric detail.

Here I’ve focussed on the things that either confused me or increased my power and productivity in bash significantly, and tried to communicate them (as in my book) in a way that emphasises getting the understanding right.



1)  `` vs $()

These two operators do the same thing. Compare these two lines:

$ echo `ls`
$ echo $(ls)

Why these two forms existed confused me for a long time.

If you don’t know, both forms substitute the output of the command contained within it into the command.

The principal difference is that nesting is simpler.

Which of these is easier to read (and write)?

    $ echo `echo \`echo \\\`echo inside\\\`\``


    $ echo $(echo $(echo $(echo inside)))

If you’re interested in going deeper, see here or here.

2) globbing vs regexps

Another one that can confuse if never thought about or researched.

While globs and regexps can look similar, they are not the same.

Consider this command:

$ rename -n 's/(.*)/new$1/' *

The two asterisks are interpreted in different ways.

The first is ignored by the shell (because it is in quotes), and is interpreted as ‘0 or more characters’ by the rename application. So it’s interpreted as a regular expression.

The second is interpreted by the shell (because it is not in quotes), and gets replaced by a list of all the files in the current working folder. It is interpreted as a glob.

So by looking at man bash can you figure out why these two commands produce different output?

$ ls *
$ ls .*

The second looks even more like a regular expression. But it isn’t!

3) Exit Codes

Not everyone knows that every time you run a shell command in bash, an ‘exit code’ is returned to bash.

Generally, if a command ‘succeeds’ you get an exit code of 0. If it doesn’t succeed, you get a non-zero code. 1 is a ‘general error’, and other codes can give you more information (eg which signal killed the process).

But these rules don’t always hold:

$ grep not_there /dev/null
$ echo $?

$? is a special bash variable that’s set to the exit code of each command after it runs.

Grep uses exit codes to indicate whether it matched or not. I have to look up every time which way round it goes: does finding a match or not return 0?
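For the record, grep exits with 0 when it finds a match, 1 when it doesn’t, and 2 on error – easy enough to verify:

$ grep not_there /dev/null; echo $?
1
$ echo not_there | grep not_there >/dev/null; echo $?
0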

Grok this and a lot will click into place in what follows.

4) if statements, [ and [[

Here’s another ‘spot the difference’ similar to the backticks one above.

What will this output?

if grep not_there /dev/null
then
    echo hi
else
    echo lo
fi

grep’s return code makes code like this work more intuitively as a side effect of its use of exit codes.

Now what will this output?

a) hihi
b) lolo
c) something else

if [ $(grep not_there /dev/null) = '' ]
then
    echo -n hi
else
    echo -n lo
fi
if [[ $(grep not_there /dev/null) = '' ]]
then
    echo -n hi
else
    echo -n lo
fi

The difference between [ and [[ was another thing I never really understood. [ is the original form for tests, and then [[ was introduced, which is more flexible and intuitive. In the first if block above, the if statement barfs because the $(grep not_there /dev/null) is evaluated to nothing, resulting in this comparison:

[ = '' ]

which makes no sense. The double bracket form handles this for you.

This is why you occasionally see comparisons like this in bash scripts:

if [ x$(grep not_there /dev/null) = 'x' ]

so that if the command returns nothing it still runs. There’s no need for it, but that’s why it exists.

5) sets

Bash has configurable options which can be set on the fly. I use two of these all the time:

set -e

exits from a script if any command returned a non-zero exit code (see above).

This outputs the commands that get run as they run:

set -x

So a script might start like this:

set -e
set -x
grep not_there /dev/null
echo $?

What would that script output?

6) <()

This is my favourite. It’s so under-used, perhaps because it can be initially baffling, but I use it all the time.

It’s similar to $() in that the output of the command inside is re-used.

In this case, though, the output is treated as a file. This file can be used as an argument to commands that take files as an argument.

Confused? Here’s an example.

Have you ever done something like this?

$ grep somestring file1 > /tmp/a
$ grep somestring file2 > /tmp/b
$ diff /tmp/a /tmp/b

That works, but instead you can write:

diff <(grep somestring file1) <(grep somestring file2)

Isn’t that neater?
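It also combines nicely with redirection: done < <(command) feeds a command’s output into a while read loop without the subshell a pipe would create, so variables set in the loop survive. Reusing the file1/somestring example from above:

count=0
while read -r line
do
  count=$((count+1))
done < <(grep somestring file1)
echo "${count} matching lines"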

7) Quoting

Quoting’s a knotty subject in bash, as it is in many software contexts.

Firstly, variables in quotes:

A=123
echo "$A"
echo '$A'

Pretty simple – double quotes dereference variables, while single quotes go literal.

So what will this output?

mkdir -p tmp
cd tmp
touch a
echo "*"
echo '*'

Surprised? I was.

8) Top three shortcuts

There are plenty of shortcuts listed in man bash, and it’s not hard to find comprehensive lists. This list consists of the ones I use most often, in order of how often I use them.

Rather than trying to memorize them all, I recommend picking one, and trying to remember to use it until it becomes unconscious. Then take the next one. I’ll skip over the most obvious ones (eg !! – repeat last command, and ~ – your home directory).


!$

I use this dozens of times a day. It repeats the last argument of the last command. If you’re working on a file, and can’t be bothered to re-type it command after command, it can save a lot of work:

grep somestring /long/path/to/some/file/or/other.txt
vi !$



!:1-$

This bit of magic takes this further. It takes all the arguments to the previous command and drops them in. So:

grep isthere /long/path/to/some/file/or/other.txt
egrep !:1-$
fgrep !:1-$

The ! means ‘look at the previous command’, the : is a separator, and the 1 means ‘take the first word’, the - means ‘until’ and the $ means ‘the last word’.

Note: you can achieve the same thing with !*. Knowing the above gives you the control to limit to a specific contiguous subset of arguments, eg with !:2-3.


!$:h

I use this one a lot too. If you put it after a filename, it will change that filename to remove everything up to the folder. Like this:

grep isthere /long/path/to/some/file/or/other.txt
cd !$:h

which can save a lot of work in the course of the day.

9) startup order

The order in which bash runs startup scripts can cause a lot of head-scratching. I keep this diagram handy (from this great page):


It shows which scripts bash decides to run from the top, based on decisions made about the context bash is running in (which decides the colour to follow).

So if you are in a local (non-remote), non-login, interactive shell (eg when you run bash itself from the command line), you are on the ‘green’ line, and this is the order of files read:

/etc/bash.bashrc
~/.bashrc
[bash runs, then terminates]

This can save you a hell of a lot of time debugging.

10) getopts (cheapci)

If you go deep with bash, you might end up writing chunky utilities in it. If you do, then getting to grips with getopts can pay large dividends.

For fun, I once wrote a script called cheapci which I used to work like a Jenkins job.

The code here implements the reading of the two required, and 14 non-required arguments. Better to learn this than to build up a bunch of bespoke code that can get very messy pretty quickly as your utility grows.
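cheapci itself is worth reading for the full picture, but the basic getopts pattern it builds on looks something like this – the flags and defaults here are invented for illustration:

#!/bin/bash
usage() { echo "Usage: $0 -r repo [-d builddir] [-q]" >&2; exit 1; }

REPO=''
BUILD_DIR=/tmp/build
QUIET=0

while getopts "r:d:qh" opt
do
  case "${opt}" in
    r) REPO="${OPTARG}" ;;       # 'required' argument (enforced below)
    d) BUILD_DIR="${OPTARG}" ;;  # optional, with a default above
    q) QUIET=1 ;;                # boolean flag
    *) usage ;;
  esac
done
shift $((OPTIND-1))

# getopts doesn't enforce required arguments, so check by hand.
if [ -z "${REPO}" ]
then
  usage
fi
echo "Building ${REPO} in ${BUILD_DIR} (quiet=${QUIET})"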

This is based on some of the contents of my book Learn Bash the Hard Way, available for $7:


Preview available here.

I also wrote Docker in Practice 

Get 39% off with the code: 39miell2


Project Management as Code with Graphviz


My team and I have been using graphviz and git to perform project management tasks.

It has numerous benefits, including:

  • Asynchronous project updates (ie fewer meetings)
  • Improved updates for users
  • Visualisation of complexity of project for stakeholders and team
  • Assumptions challenged and surfaced
  • Progress can be measured using git itself (eg log)

HackerNews Discussion here






Recently I’ve had to take on some project management tasks, managing engineering for a relatively large-scale project in a large enterprise covering a wide variety of use cases and demands.

One of the biggest challenges was how to express the dependencies that needed to be overcome to get the business outcomes the stakeholders wanted. Cries of ‘we just want x’ were answered by me with varying degrees of quality repeatedly, and generally left the stakeholders unsatisfied.

Being a software engineer – and not a project manager – by background or training, I naturally used graphviz instead to create dependency diagrams, and git to manage them.

The examples here are available in source form here (I welcome PRs), and are based on the ‘project’ of preparing for a holiday.


We start with a simple graph with a couple of dependencies:

digraph G {
 "Enjoy Holiday" -> "Book tickets"
 "Enjoy Holiday" -> "Pack suitcase night before"
 "Pack suitcase night before" -> "Buy guide book"
 "Pack suitcase night before" -> "Buy electric converter"
}

The emboldening is mine for illustration; the file is plain text.

This file can be saved as simple.gv (.gv is for ‘graphviz’) and will generate this graph as a .png if you run dot -Tpng simple.gv > simple.png:


Looking closer at simple.gv:

digraph – Tells graphviz that this is a directed graph, ie the relationships have a direction, indicated by the -> arrows. The arrow can be read as ‘depends on’.

Enjoy Holiday is the name of a node. Whenever this node is referenced in future it is the ‘same’ node. In the case of Pack suitcase night before you can see that one node depends on it, and it depends on two others. These relationships are expressed through the arrows in the file.

The dot program is part of the graphviz package. Other commands include neato, circo and others. Check man dot to read more about them.

This shows how easy it is to create a simple graph from a text file easily stored in git.


That top-down layout can be a bit restrictive to some eyes. If so, you can change the layout by using another command in the graphviz package. For example, running neato -Tpng simple.gv > simple.png produces this graph:


Note how:

  • Enjoy holiday is now nearer the ‘centre’ of the graph
  • The nodes are overlapping (we’ll deal with this later)
  • The arrows have shortened (we’ll deal with this later too)

If you’re fussy about your diagrams you can spend a lot of time fiddling with them like this, so it’s useful to get a feel for what the different commands do.

If you like this post, you might like my books Learn Git the Hard Way, Learn Bash the Hard Way or Docker in Practice




Get 39% off Docker in Practice with the code: 39miell2


We can get more project information into a node by colorizing the nodes. I do this with a simple scheme of:

  • green = done
  • orange = in progress
  • red = not started

Here’s an updated .gv file:

digraph G { 
 "EH" [label="Enjoy Holiday",color="red"] 
 "BT" [label="Book tickets",color="green"] 
 "PSNB" [label="Pack suitcase night before",color="red"] 
 "BGB" [label="Buy guide book",color="orange"] 
 "BEC" [label="Buy electric converter",color="orange"] 
 "EH" -> "BT" 
 "EH" -> "PSNB" 
 "PSNB" -> "BGB" 
 "PSNB" -> "BEC" 
}

Running the command

dot -Tpng simple_colors.gv > simple_colors.png

on this results in this graph:


Two things have changed here. Referring to the full description of the node can get tiresome, so ‘Enjoy holiday’ has been referenced with ‘EH’, and associated with a ‘label’, and ‘color’.

"EH" [label="Enjoy Holiday",color="red"]

The nodes are defined in this way at the top, and then referred to with their relationships at the end. All sorts of attributes are available.


Similarly, you can change the attributes of nodes in the graph, and their relationships in code.

I find that with a complex graph with some text in each node, a rectangular node makes for better layouts. Also, I like to specify the distance between nodes, and prevent them from overlapping (two ‘problems’ we saw before).

digraph G { 
 ranksep=2.0
 nodesep=1.0
 overlap=false
 node [color="black", shape="rectangle"] 
 "EH" [label="Enjoy Holiday",color="red"] 
 "BT" [label="Book tickets",color="green"] 
 "PSNB" [label="Pack suitcase night before",color="red"] 
 "BGB" [label="Buy guide book",color="orange"] 
 "BEC" [label="Buy electric converter",color="orange"] 
 "EH" -> "BT" 
 "EH" -> "PSNB" 
 "PSNB" -> "BGB" 
 "PSNB" -> "BEC" 
}

By adding the ranksep and nodesep attributes, we can influence the layout of the graph by specifying the distance between nodes in their rank in the hierarchy, and separation between them. Similarly, overlap prevents the problem we saw earlier with overlapping nodes.

The node line specifies the characteristics of the nodes – in this case rectangular and black by default.

Running the same dot command as above results in this graph:


which is arguably uglier than previous ones, but these changes help us as the graphs become more complex.

More Complex Graphs

Compiling this more complex graph with dot:

digraph G {

 node [color="black", shape="rectangle"]

 "EH" [label="ENJOY HOLIDAY\nWe want to have a good time",color="red"]
 "BTOW" [label="Book time off\nCheck with boss that time off is OK, put in system",color="red"]
 "BFR" [label="Book fancy restaurant\nThe one overlooking the river",color="red"]
 "BPB" [label="Buy phrase book\nThey don't speak English, so need to know how to book",color="red"]
 "BT" [label="Book tickets\nDo this using Expedia",color="green"]
 "PSNB" [label="Pack suitcase night before\nSuitcase in understairs cupboard",color="red"]
 "BGB" [label="Buy guide book\nIdeally the Time Out one",color="orange"]
 "BEC" [label="Buy electric converter\nDon't want to get ripped off at airport",color="orange"]
 "GTS" [label="Go to the shops\nNeed to go to town",color="orange"]
 "GCG" [label="Get cash (GBP)\nAbout 200 quid",color="green"]
 "GCD" [label="Get cash (DOLLARS)\nFrom bureau de change under arches",color="orange"]
 "EH" -> "BT"
 "EH" -> "BFR"
 "EH" -> "BTOW"
 "BFR" -> "BPB"
 "BPB" -> "GTS"
 "BPB" -> "GCG"
 "EH" -> "PSNB"
 "EH" -> "GCD"
 "PSNB" -> "BGB"
 "BGB" -> "GTS"
 "PSNB" -> "BEC"
 "BGB" -> "GCG"
 "BEC" -> "GCG"

}

gives this graph:


And with neato:


You can see the graphs look quite different depending on which layout engine/binary you use. Some may suit your purpose better than others.
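If you want to compare them quickly, a small loop over the layout binaries saves the fiddling (holiday.gv being whatever you’ve called your graph file):

for layout in dot neato circo twopi fdp sfdp
do
  ${layout} -Tpng holiday.gv > holiday_${layout}.png
done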

Project Planning with PRs

Now that you have a feel for graphing as code, you can check these into git and share them with your team. In our team, each node represents a JIRA ticket, and shows its ID and summary.

A big benefit of this is that project updates can be asynchronous. Like many people, I work with engineers across the world, and their ability to communicate updates by this method reduces communication friction considerably.

For example, the other day we had a graph representing our next phase of work that was looking too heavy for one sprint. Rather than calling a meeting and going over each line item, I just asked the engineer responsible to update the graph file and raise a PR for me to review.

We then workshopped the changes over the PR, and only discussed a couple of points over the phone. Fewer meetings, and more content-rich discussions.

Surface Assumptions

Beyond fewer and more effective meetings, another benefit is the objective recording of assumptions within the team. Surprisingly often, I have discovered hidden dependencies through this method that had either not been fully understood or discussed.

It’s also surfaced further items of work required to reach the solution, which has resulted in more and more clear tickets being raised that relate to the target solution. The discipline of coding these up helps force these into the open.

Happier Stakeholders

While the understanding of what needs to happen is clearer inside the team, stakeholders clamouring for updates are also clearer on what’s blocking the outcomes they want.

Another benefit is an increased confidence in the process. There’s a document that’s readily comprehensible they can dig into if they want to find out more. But the fact that there’s a transparent graph of dependencies usually suffices to persuade people that things are under control.

Alternate Views

Finally, here are some alternate views of the same graph. We’ve already seen dot and neato. Here are the others. I’m not going to explain them technically as I’ve read the man page definitions and am none the wiser. They use words like ‘outerplanar’ and ‘force-directed’. Graph rendering is a complicated affair.










The source for these examples is here.

If you know more than me about graphviz and have any improvements/interesting tweaks/suggestions then please contribute.


If you like this post, you might like my books Learn Git the Hard Way, Learn Bash the Hard Way or Docker in Practice




Get 39% off Docker in Practice with the code: 39miell2



How to Manually Clear Locks in Jenkins


Recently I hit a bug in Jenkins where Jenkinsfile locks were not released if the job was terminated.

I tried:

  • Restarting Jenkins
  • Reinstalling the plugin
  • Removing the locks manually from the top level Jenkins page
  • Raising a bug

None of these worked.

I found a solution that involved manually hacking files.


  1. Find the file named:

    In the /var/jenkins_home folder (or wherever Jenkins is installed).

    It will look like this:


  2. Within the <org.jenkins.plugins.lockableresources.LockableResource> entry for the stuck resource, remove the line containing the buildExternalizableId attribute.

  3. Change the queuingStarted item



  4. Restart Jenkins
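If you’re not sure which XML file under the Jenkins home holds the stale lock, grepping for the attribute mentioned in step 2 narrows it down:

grep -l buildExternalizableId /var/jenkins_home/*.xml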

Author is currently working on the second edition of Docker in Practice 

Get 39% off with the code: 39miell2