Git Hooks the Hard Way

This post is adapted from an advanced chapter of Learn Git the Hard Way.

Each section is self-contained, and should be typed out by hand to ensure the concepts are embedded in your mind and to force you to think. This is the Hard Way.


Git hooks allow you to control what the git repository does when certain actions are performed. They’re called ‘hooks’ because they allow you to ‘hook’ a script at a specific point in the git workflow.

In this post you will cover:

  • What git hooks are
  • Pre-commit hooks
  • Pre-receive hooks
  • The `git cat-file` command

By the end, you should be comfortable with what git hooks are, and able to use them in your own projects.

Create Repositories

To understand git hooks properly, you’re going to create a ‘bare’ repository with nothing in it, and then clone from that ‘bare’ repo.

1  $ mkdir lgthw_hooks 
2 $ cd lgthw_hooks
3 $ mkdir git_origin
4 $ cd git_origin
5 $ git init --bare
6 $ cd ..
7 $ git clone git_origin git_clone

Now you have two repositories: git_origin, which is the bare repository you will push to, and git_clone, which is the repository you will work in. You can think of them as part of a client-server git workflow where users treat the git_origin folder as the server, and clones as the client.

Next, add some content to the repository, and push it:

8  $ echo 'first commit' > file1
9 $ git add file1
10 $ git commit -m 'adding file1'
11 $ git push

Nothing surprising should have happened there. The content was added, committed and pushed to the origin repo.

Adding a ‘pre-commit’ Hook

Now imagine that you’ve set a rule for yourself that you shouldn’t work at weekends. To try and enforce this you can use a git hook in your clone.

Add a second change, and take a look at the .git/hooks folder:

12 $ echo 'second change in clone' >> file1
13 $ ls .git/hooks

In the .git/hooks folder are various examples of scripts that can be run at various points in the git content lifecycle. If you want to, you can take a look at them now to see what they might do, but this can be a bit bewildering.

What you’re going to do now is create a script that is run before any commit is accepted into your local git repository:

14 $ cat > .git/hooks/pre-commit << EOF
15 > echo NO WORKING AT WEEKENDS!
16 > exit 1
17 > EOF
18 $ chmod +x .git/hooks/pre-commit

What you have done is create a pre-commit script in the hooks folder of the repository’s local .git folder, and made it executable. All the script does is print the message about not working at weekends, and exits with a code of 1, which is a generic error code in a shell script (exit 0 would mean ‘OK’).

Now see what happens when you try to commit:

19 $ git commit -am 'Second change'

You should have seen that the commit did not work. If you’re still not sure whether it got in, run a log command and check that the diff is still there:

20 $ git log
21 $ git diff

This should confirm that no commit has taken place.

To show a reverse example that lets the commit through, replace the script with this content:

22 $ cat > .git/hooks/pre-commit << EOF
23 > echo OK
24 > exit 0
25 > EOF

This time you’ve added an ‘OK’ message, and exited with a 0 (success) code rather than a 1 for error.

Now your commit should work, and you should see an ‘OK’ message as you commit.

26 $ git commit -am 'Second change'

A More Sophisticated Example

The above pre-commit scripts were fairly limited in their usefulness, but just to give a flavour of what’s possible, we’re going to give an example that is able to choose whether to allow or reject a commit based on its content.

Imagine you’ve decided not to allow any mention of politics in your code. The following hook will reject any mention of ‘politics’ (or any word beginning with ‘politic’).

27 $ echo 'a political comment' >> file1
28 $ cat > .git/hooks/pre-commit << EOF
29 > if grep -rni politic *
30 > then
31 > echo 'no politics allowed!'
32 > exit 1
33 > fi
34 > echo OK
35 > exit 0
36 > EOF
37 $ git commit -am 'Political comment'

Again, the commit should have been rejected. If you revert that change and add a comment that doesn't mention politics instead, it will commit and push just fine.

38 $ git checkout file1 && echo 'a boring comment' >> file1
39 $ git commit -am 'Boring comment'
40 $ git push

Even more sophisticated scripts are possible, but require a deeper knowledge of bash (or other scripting languages), which is out of scope. We will, however, look at one much more realistic example in the last section of this chapter.
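
As one small taste of that, the original 'no working at weekends' rule could be enforced for real by checking the day of the week, rather than rejecting every commit. Here is a minimal sketch, assuming your date command supports the %u format (1 is Monday, 7 is Sunday):

$ cat > .git/hooks/pre-commit << 'EOF'
#!/bin/bash
# %u gives the day of the week: 6 is Saturday, 7 is Sunday
day=$(date +%u)
if [ "$day" -ge 6 ]
then
  echo NO WORKING AT WEEKENDS!
  exit 1
fi
exit 0
EOF
$ chmod +x .git/hooks/pre-commit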

Are Hooks Part of Git Content?

A question you may be asking yourself at this point is whether the hooks are part of the code or not. You won’t have seen any mention of the hooks in your commits, so does it move with the repository as you commit and push?

An easy way to check is to look at the remote bare repository directly.

41 $ cd ../git_origin
42 $ ls hooks

Examining the output of the above will show that the `pre-commit` script is not present on the bare origin remote.

This presents us with a problem if we are working in a team. If the whole team decides that they want no mention of politics in their commits, then they will have to remember to add the hook to their local clone. This isn’t very practical.

But if we (by convention) have a single origin repository, then we can prevent commits being pushed to it by implementing a `pre-receive` hook. These are a little more complex to implement, but arguably more useful as they can enforce rules per team on a canonical repository.

The `pre-commit` hook we saw before is an example of a ‘client-side hook’, that sits on the local repository. Next we’ll look at an example of a ‘server-side hook’ that is called when changes are ‘received’ from another git repository.

Pre-Receive Hooks

First type this out, and then I’ll explain what it’s doing. As best you can, try and work out what it’s doing as you go, but don’t worry if you can’t figure it out.

43 $ cat > hooks/pre-receive << 'EOF'
44 > #!/bin/bash
45 > read _oldrev newrev _branch
46 > git cat-file -p $newrev | grep '[A-Z][A-Z]*-[0-9][0-9]*'
47 > EOF

This time you created a pre-receive script, which will be run when anything is pushed to this repository. These pre-receive scripts work in a different way to the pre-commit hook scripts. Whereas the pre-commit script allowed you to grep the content that was being committed, pre-receive scripts do not. This is because the commit has been ‘packaged up’ by git, and the contents of the commit are delivered up as that package.

The read command in the above code is the key one to understand. It reads three variables: _oldrev, newrev, and _branch from standard input. The contents of these variables will match, respectively: the previous git revision reference this commit refers to; the new git revision reference this commit refers to; and the branch the commit is on. Git arranges that these references are given to the pre-receive script on standard input so that action can be taken accordingly.

Then you use the (previously unseen) git cat-file command to output details of the latest commit, whose reference is stored in the newrev variable. The output for this commit is run through a grep command that looks for a specific string format in the commit message. If the grep finds a match, then it returns no error and all is OK. If it doesn't find a match, then grep returns an error, as does the script.
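
To make that more concrete, this is roughly what git cat-file -p shows for a commit (the hashes, name and dates below are purely illustrative):

$ git cat-file -p HEAD
tree 5b8d0e...
parent 9c1d2e...
author A User <user@example.com> 1543150000 +0000
committer A User <user@example.com> 1543150000 +0000

PROJ-123 another change

The grep in the hook runs over this whole output, so a matching string anywhere in it (most usefully, in the commit message) will allow the push.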

Make the script executable:

48 $ chmod +x hooks/pre-receive

Then make a new commit and try to push it:

49 $ cd ../git_clone
50 $ echo 'another change' >> file1
51 $ git commit -am 'no mention of ticket id'
52 $ git push

That should have failed, which is what you wanted. The reason you wanted it to fail is buried in the grep you typed in:

grep '[A-Z][A-Z]*-[0-9][0-9]*'

This grep only returns successfully if it matches a string that matches the format of a JIRA ticket ID (eg PROJ-123). The end effect is to enforce that the last commit being pushed must have a reference to such a ticket ID for it to be accepted. You might want such a policy to ensure that every set of commits can be traced back to a ticket ID.
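
Since only the newest commit in the push is checked, you can get the failed change through by adding one more commit whose message contains a ticket-style ID (the ID here is made up):

$ echo 'yet another change' >> file1
$ git commit -am 'PROJ-123 refer to a ticket id'
$ git push

This time the pre-receive hook's grep matches, the script exits with success, and the push is accepted.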

Cleanup

To clean up what you just did:

53 $ cd ../..
54 $ rm -rf lgthw_hooks

What You Learned

We’ve only scratched the surface of what commit hooks can do, and their subtleties and complexities. But you should now be able to:

  • Know what a git hook is
  • Understand the difference between a client-side and server-side hook
  • Implement your own git hooks
  • Understand how GitHub/BitBucket’s hook mechanisms work

Learn Bash the Hard Way

Learn Git the Hard Way

Learn Terraform the Hard Way


Get 39% off Docker in Practice with the code: 39miell


Notes on Books Read in 2018

Here are some notes on books I read in 2018. They’re not book reviews, more notes on whatever I found interesting in them. I recommend reading all of them; the books I didn’t get much out of I won’t list here.

Turing and the Universal Machine, by Jon Agar

This concise book has quite a controversial philosophy behind it, namely that it’s not technology that shapes society, but society that shapes technology by demanding it solves its problems.

I’m not sure I buy it (don’t we all demand instant teleportation technology?), but the argument takes us through some interesting information about the advent of the computer.

To do this Agar goes back to the early days of the railways. What I didn’t know was that in 1876, the London Railway Clearing House had 1440 clerks whose job it was to work out how money should be divided between different railway companies. Computation demands were also increased by the need to ensure the safe passage of trains through the complex railway system.

Similarly, aviation required computers to perform the calculations required to drive the safe design of aeroplanes. According to this history, the first specialised programming language – the catchily-named but eminently google-able Plankalkül – was invented by Konrad Zuse. Not a name I knew before reading this book, but definitely one that needs to be considered alongside Turing.

These examples from railways and aviation, as well as the history of Bletchley Park, all suggest that it was 'crises of bureaucratic control' in industrial complexes that gave rise to the innovations that led to the modern computer.

Deep Work, by Cal Newport

In the end quite a superficial book that contains a simple insight, which is that in a distracted world it might be important to make space to do the ‘deep’ work free of distraction that gives us something of value to contribute to society.

Also makes the point that this is nothing new. Main example given is Jung, a busy therapist working in a city who made sure he went to an isolated place to work in depth on his writing and ‘deeper’ work.

However, it does emphasise the importance of rest, which led me to pick up…

Why We Sleep, by Matthew Walker

When someone’s spent decades studying something you do for eight hours a day, then you probably should listen to them.

Contains wonderful nuggets, like the fact that heart attacks spike when the clocks go back, and plummet when the clocks go forward, suggesting that even a disruption to sleep as small as that can have a significant effect on the body.

Or that PTSD sufferers can be treated by encouraging REM sleep, suggesting that a good night’s sleep can help you cope with traumatic situations big and small.

Or that the punishing US sleep-deprived doctors’ training regime was instituted by a Dr Halstead, who was himself a cocaine addict, and as most people already know, many deaths are caused by tired doctors.

Or that 19 hours awake makes you drive as badly as being at the limit of drunkenness (0.08% blood-alcohol). If you sleep for four hours, you are 11.5 times as likely to have an accident. If you’re at the legal limit for drink and have had only four hours’ sleep, then you are 30x more likely to have an accident. Being tired makes everything worse…

I gave up caffeine as a result of reading this book, and now wake up feeling refreshed every day, as well as having less need for alcohol. I no longer begrudge my over-8-hour need for sleep, and embrace it.

If you read this book and don’t take sleep more seriously as result, then you’re probably too sleep-deprived to think clearly ;-)

Sleep, by Nick Littlehales

Why We Sleep did a good job of persuading me to take sleep more seriously, but didn’t tell me much about what to do to improve it. This book is written by a ‘sleep coach’ for top athletes. His big break was with Manchester United: he wrote to Alex Ferguson and asked him what he was doing about sleep. Like most of us, the answer was ‘nothing’, but that soon changed.

Littlehales encourages us to think of sleep in 90-minute cycles. The most interesting piece of advice was a counter-intuitive one: if you come home late from work drinks (for example), don't go straight to bed. You're likely to still be buzzing, and may lie awake stressing about who said what to whom. Instead, start your standard pre-sleep wind-down, and go to bed later to pick up on your next 90-minute cycle. You'll get better sleep that way.

I have started to think of my day in 90-minute chunks – from waking up to starting work, winding down to sleep, a break every ninety minutes from thought-intensive work to go for a quick walk, and so on.

Exactly, by Simon Winchester

Very enjoyable book about precision in engineering. Among other things, a fascinating retelling of the Hubble telescope's original failure due to a tiny flaw in a measuring device, and an interesting early history of the industrial revolution and how precision engineering played a central role in it: had the ability to create pistons accurately been any worse, steam engines would have been effectively useless.

Also led me to look up Reverend Wilkins, who wrote works on cryptography, suggested standardisation of language and measurement, argued for the feasibility of travel to the moon, and invented the transparent beehive. Impressive enough at any time, but this was nearly 400 years ago!

The Square and the Tower, by Niall Ferguson

Typically for Ferguson, this combines broad historical scholarship and modish ideas. In this case, he looks at graph theory to view history through the lens of different types of networks of people that shape it.

It’s a fertile subject as he takes us through periods where the hierarchical mode of being, while mostly effective for human societies, breaks down in the face of coherent but distributed networks of people.

I found fascinating the history of British army victory in Borneo, where Walter Walker overthrew traditional notions of military command to pioneer decentralised fighting networks, and its comparison to the US army’s failure in Vietnam.

Also fascinating was Ferguson's analysis of Al Qaeda's 9/11 attack as resting on a misunderstanding of US power. Taking out the World Trade Centre did not bring down the US financial system, because the capitalist system is fundamentally a distributed network; Al Qaeda wrongly thought it was a hierarchical one. However, Lehman's collapse very nearly did bring it down, since the effect of its failure was networked to all the other banks in the system.

Sapiens, by Yuval Noah Harari

Homo Deus, by Yuval Noah Harari

I wrote about Sapiens here, but the follow-up Homo Deus was almost as good.

Before I read this book, I had never made the connection between the expulsion from Eden and the Agricultural Revolution. With no more gathering wild fruits, Adam is condemned to eat by the sweat of his brow.

I also didn't know that of the 56 million people who died in 2012, 620,000 died from human violence (20% war, 80% crime), 800,000 from suicide, and 1,500,000 from diabetes. In other words, sugar is more deadly than weapons.

The central argument of the book is that mankind has come to worship itself through humanism, perhaps just before we’re making ourselves redundant (because superintelligent AI, natch). A nice tour of pop science and history thought on where we are going.

The Tipping Point, by Malcolm Gladwell

Well known to (and probably already read by) many readers, the bit I found most interesting about this book was the brief history of the clean-up of the New York Subway.

It took six years to clean up the graffiti on the New York Subway. The cleaners would wait until the kids finished their work, then paint over it while the trains were still in the sidings. The artists would cry as their work was destroyed. After this, they started going after the fare jumpers.

In other words, small things matter, and build up to a ‘tipping point’, where the (positive) contagion is self-sustaining.

Scale, by Geoffrey West

West explains why Godzilla couldn't exist, and in doing so shows how the growth of cities, animals, or indeed anything physical at all can be explained through relatively simple mathematical models.

As someone who finds the fractal complexity of cities, software estimation, and operational scaling interesting, I was inspired by this book to write a blog post on the relationship between mathematical fractality and software project estimation. Fortunately, somebody already did a better job than I could have, so I didn’t need to.

A Brief History of Everyone Who Ever Lived, by Adam Rutherford

Enjoyable discussion of what we currently understand about our species’ lineage.

This handy table and mnemonic was in there. I’d always wondered about these:

Domain     Dumb        Eukaryota (complex life)
Kingdom    King        Animalia (animals)
Phylum     Phillip     Chordata (animals with central column)
Class      Came        Mammalia (milk-producing)
Order      Over        Primates (monkeys, apes)
Family     For         Hominidae (great apes)
Genus      Group       Homo, Gorilla
Species    Sex         Sapiens, Gorilla

I also finally found out something else that had always bugged me: the definition of species is slippery, but the most stable one is: animals who, when they reproduce, are likely to produce fertile offspring. Ligers, for example, are infertile. Of course the notion of species is a human and slippery one, since we now know Homo Neanderthalensis and Homo Sapiens reproduced. Categorisation is messy and complicated.

Another thing that had always bugged me was explaining why African villages have more genetic diversity than (say) white Londoners. It’s pretty obvious on reflection: because we all came out of Africa, however much we reproduce we’re still drawn from a subset of those Africans, until we generate more genetic diversity than existed from that original pool.

Which will take a long time, since there are typically 100 unique mutations in each person. And most of those (presumably) serve no purpose and will only be passed on (or ‘catch on’) by chance. Not all of them though: the ability to drink milk in adulthood, for example, is a mutation we now believe is only five- to ten-thousand years old.

The Deeper Genome, by John Parrington

I’m going to have to read this one over and over, as it’s an incredibly dense introduction to the latest research into genetics. I hadn’t understood epigenetics and its implications at all until I read about it here, and my previous understanding of DNA as ‘just’ a stream of binary-encodable data was exploded by the three-dimensionality hinted at by the existence of ‘action at a distance’ in the genetic code.

I think I understood about 10% of this book if I’m honest…


Learn Bash the Hard Way

Learn Git the Hard Way

Learn Terraform the Hard Way




Get 39% off Docker in Practice with the code: 39miell2


Six Ways to Level Up Your nmap Game

 

What is nmap?


nmap is a network exploration tool and security / port scanner.

If you’ve heard of it, and you’re like me, you’ve most likely used it like this:

nmap 127.0.0.1

ie, you’ve pointed it at an IP address and observed the output:

Starting Nmap 7.60 ( https://nmap.org ) at 2018-11-24 18:36 GMT
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00033s latency).
Not shown: 991 closed ports
PORT      STATE SERVICE
22/tcp    open  ssh
53/tcp    open  domain
80/tcp    open  http
443/tcp   open  https
631/tcp   open  ipp
5432/tcp  open  postgresql
8080/tcp  open  http-proxy
9002/tcp  open  dynamid
50000/tcp open  ibm-db2

Nmap done: 1 IP address (1 host up) scanned in 0.14 seconds

which tells you the open ports on a host.

I used nmap like this for years, but only recently grokked the manual to see what else it could do. Here's a quick look at some of the more useful things I found out.

1) Scan a Network

As its description implies, nmap can scan a range of IP addresses. You can do this in a couple of ways.

If you want to use a CIDR range, you can scan like this:

nmap 192.168.0.1/24

which will scan the whole range. The 192.168.0.1/24 range may be different depending on the network you are on.

Or, if you’re less comfortable with CIDR, you can use a glob like this:

nmap 192.168.1.*

I use this to work out which machines are active on my home network:

nmap -sn 192.168.1.0/24

where the -sn flag skips the default port scan.

2) Scan All Ports

One gotcha about nmap is that it doesn’t scan all ports by default. Instead it ‘scans the 1,000 most common ports for each protocol’. Quite often you might want to find _any_ open ports on the hosts. You can achieve this with:

nmap -p- localhost

where the -p flag indicates the ports to scan and the - means ‘all of them’.

Beware that this (and many other nmap activities, but especially this) can trigger all sorts of network security tripwires, so be sure that it’s OK to run this on the network, and don’t be surprised if you get booted from the network either. I get round this in the example above by running it locally.

You can also specify the specific service you want to find by its name in /etc/services. One I use commonly is:

nmap -p domain 192.168.1.0/24

which tells me all the DNS servers on the network.

3) Get service versions

You can use the -sV flag to get more information on service versions. This command tells me that I’m running a couple of dnsmasq servers on my local network, and their versions.

$ nmap -sV -p domain 192.168.1.0/24 | grep -E '(scan report for|open)'
Nmap scan report for Ians-MBP.home (192.168.1.65)
Nmap scan report for cage.home (192.168.1.66)
53/tcp open domain dnsmasq 2.79
Nmap scan report for Ians-Air-2.home (192.168.1.119)
Nmap scan report for basquiat.home (192.168.1.124)
Nmap scan report for Google-Home-Mini.home (192.168.1.127)
Nmap scan report for dali.home (192.168.1.133)
53/tcp open domain dnsmasq 2.79
Nmap scan report for Google-Home-Mini.home (192.168.1.137)
Nmap scan report for api.home (192.168.1.254)

nmap does this by having a database of versions and their behaviours; under the hood it runs various probes to interrogate the services and matches the responses against these versions.

This can be useful for figuring out whether any of your services look vulnerable to an attacker scanning your network, and may need upgrading.

4) Use -A for more data

There are further options to tune the version scan. For example, --version-all takes more time and does more probing to ensure a version match. Here it is used in addition to the -A flag, which also enables other detection techniques, such as OS detection and script scanning:

$ nmap -A -p 443 192.168.1.124 --version-all

Starting Nmap 7.60 ( https://nmap.org ) at 2018-11-25 11:55 GMT
Nmap scan report for basquiat.home (192.168.1.124)
Host is up (0.00054s latency).

PORT STATE SERVICE VERSION
443/tcp open ssl/http Apache httpd 2.4.29 ((Ubuntu))
|_http-server-header: Apache/2.4.29 (Ubuntu)
|_http-title: Site doesn't have a title (text/html).
| ssl-cert: Subject: commonName=meirionconsulting.com
| Subject Alternative Name: DNS:meirionconsulting.com
| Not valid before: 2018-09-28T01:01:51
|_Not valid after: 2018-12-27T01:01:51
|_ssl-date: TLS randomness does not represent time

Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 12.78 seconds

The amount of detail can be surprisingly rich and useful.

5) Find out what nmap is up to

nmap isn't very chatty and can take a long time to return a result, so like many other command line tools, it offers verbosity (-v) and debug (-d) flags that can tell you more about what's going on:

nmap -vv -dd -sn 192.168.0.0/24

Adding an extra v or d will make nmap more chatty if needed:

[...]
Ping Scan Timing: About 31.25% done; ETC: 12:32 (0:01:08 remaining) 
ultrascan_host_probe_update called for machine 192.168.0.1 state HOST_DOWN -> HOST_DOWN (trynum 1 time: 2002984) 
ultrascan_host_probe_update called for machine 192.168.0.2 state HOST_DOWN -> HOST_DOWN (trynum 1 time: 2002937) 
ultrascan_host_probe_update called for machine 192.168.0.3 state HOST_DOWN -> HOST_DOWN (trynum 1 time: 2002893)
[...]

6) Script your own scans with NSE

nmap uses the 'Nmap Scripting Engine' (NSE) to run these probing scripts and generate the output. It uses the Lua programming language to achieve this.

On my machine these scripts are located in /usr/share/nmap/scripts. You can call them like this:

nmap --script=http-sitemap-generator example.com

There are all sorts of cool-looking scripts in there that may be useful to you, relating to everything from apache server status to xserver access.
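
A quick way to see what you have available locally is simply to list that folder (the path may be different on your machine):

$ ls /usr/share/nmap/scripts/ | wc -l
$ ls /usr/share/nmap/scripts/ | grep http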

More information is available in the NSE documentation on the nmap website.


If you like this, you might like one of my books:

Learn Bash the Hard Way

Learn Git the Hard Way

Learn Terraform the Hard Way



If you liked this post, you might also like these:

Ten Things I Wish I’d Known About bash

Centralise Your Bash History

How (and Why) I Run My Own DNS Servers

My Favourite Secret Weapon – strace

A Complete Chef Infrastructure on Your Laptop


Five Things I Wish I’d Known About Git

Git can be utterly bewildering to someone who uses it casually, or is not interested in things like directed acyclic graphs.

For such users, the best thing you can do is buy my book (free sample available), which guides you through the usage of git in a practical way that embeds the concepts ready for daily use.

The second best thing you can do is read on. Here I briefly go through five things I wish someone had explained to me before I started using git.

1) The Four Stages

Having come from using CVS for source control (an older example of a Version Control System, or VCS), I found one of the most baffling things about git to be its different approach to the state of content.

CVS had two states of data:

  • uncommitted
  • committed

and this results in these kinds of workflows:

[Diagram: a traditional two-state VCS workflow]

Whereas git has four states:

  • Local changes
  • Staged/added changes
  • Committed
  • Pushed to remote

Here’s a diagram that illustrates the four stages:

[Diagram: the four stages of content in git]

If, like me, you use git commit -am "checkin message" to commit your work, then the second ‘adding/staging’ state is more or less invisible to you, since the -a does it for you. It’s for this reason that I encourage new users to drop the -a flag and git add by hand, so that they understand these distinctions.

One subtlety is that the -a flag doesn’t add new files to the content tracked by git – it just adds changes made.
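
As a minimal sketch of the four states in action (the file and commit message names here are arbitrary):

$ echo 'a change' > newfile     # a local change, not yet known to git
$ git commit -am 'add newfile'  # does not commit it: -a only stages changes to already-tracked files
$ git add newfile               # local change -> staged
$ git commit -m 'add newfile'   # staged -> committed
$ git push                      # committed -> pushed to the remote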

These states exist so that people can work independently and offline, syncing later. This was the driving force behind the development of git.

From this comes another key point: all git repositories are created equal. My clone of your repository is not dependent on yours for its existence. Each repository stands on its own, and is only related to others if you configure it so. This is another key difference between git and more traditional (nay, obsolete) client/server models of content history management.

This results in a workflow that looks more like this:

[Diagram: a distributed git workflow, with repositories cloning from and pushing to each other]

which is a far more flexible (and potentially more complicated) workflow.

2) What is a Reference?

Git docs and blogs keep talking about references, but what is a reference?

A reference is just this: a pointer to a commit. And a commit is a unique reference to a new state of the content.

Once this is understood, a few other concepts make more sense.

HEAD is a reference to ‘where you are’ in the content history. It’s the content you’re currently looking at in your git repo.

When you git commit, the HEAD moves to the new commit.

A git tag is a reference that can have an arbitrary name attached to it, and does not move when a new commit is made.

A git branch is a reference that moves with the HEAD whenever you commit a new change.

A couple of other confusing things then become clearer. For example, a detached HEAD is nothing to panic about despite its scary name – it just means that your HEAD is not pointed at a branch.
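
For example, checking out a commit by its hash (rather than by a branch name) detaches HEAD, and checking a branch out again re-attaches it (the hash here is illustrative):

$ git checkout 1a2b3c4    # HEAD now points directly at a commit: 'detached HEAD'
$ git status              # reports 'HEAD detached at 1a2b3c4'
$ git checkout master     # HEAD points at the master branch again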

To help cement the above, look at this diagram:

[Diagram: a series of commits A to H, with master pointing at C, and experimental and HEAD pointing at H]

It represents a series of commits.

Confusingly, with git diagrams, the arrows go backwards in time. A is the first commit, then B, and so on to the latest commit (H).

There are three references: master (which points at C), experimental (which points at H), and HEAD (which also points at H). HEAD, remember, is 'where we are'.

3) What’s a Fast-Forward?

Now that you understand what a HEAD reference is, understanding what a fast-forward is becomes pretty simple.

Usually, when you merge two branches together, you get a new commit:

[Diagram: two diverged branches, with tips H and G and common ancestor D, merged into a new commit I]

In the above diagram, I is a commit that represents the merging of H and G from its common ancestor (D). The changes made on both branches are applied together from D and the resulting state of the content after the commit is stored in a new state (I).

But consider the diagram we saw above:

[Diagram: the commit series A to H again, with master at C, and experimental and HEAD at H]

There we have two branches, but no changes were made on one of them. Let’s say we want to merge the changes on experimental (E and H) into master – we’ve experimented, and the experiment was successful.

In this case, merging E and H into master requires no reconciling of content, since there are no separate changes on master (like the F and G of the previous diagram) that need to be merged together with E and H. They are all in one line of changes.

Such a merge only requires that the master reference is picked up and moved from C to H. This is a ‘fast-forward’ – the reference just needed moving along, and no content needed to be reconciled.
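
In command terms, using the branch names from the diagram, the fast-forward looks something like this (the --ff-only flag is optional, but makes git refuse to do anything other than a fast-forward):

$ git checkout master
$ git merge --ff-only experimental   # master's pointer moves from C to H; no merge commit is created
$ git log --oneline --graph          # master, experimental and HEAD now all point at the same commit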

4) What’s a Rebase?

My manual page for git rebase says:

Reapply commits on top of another base tip

This is much more comprehensible than previous versions of this man page, but will still confuse many people.

A visual example makes it much clearer.

Consider this example:

[Diagram: a master branch, with a feature1 branch containing commit D branched off from master at C]

You could merge feature1 into the master branch, and you’d end up with a new commit (G), which makes the tree look like this:

[Diagram: the same history with a new merge commit G joining master and feature1]

You can see that you’ve retained the chronology, as both branches keep their history and order of commits.

A git rebase takes a different approach. It 'picks up' the changes on our branch (commit D on feature1 in this case) and applies them to the end of the branch we are on (HEAD is at master).

[Diagram: feature1's commit D replayed on top of the tip of master]

It’s as though we just checked out master and then made a change (D) on a new branch (feature1), rather than branched off from master some time ago at C and did our feature1 work there.

This looks a lot neater, doesn’t it? master can now be ‘fast-forwarded’ to where feature1 is by moving master‘s pointer along to D.

The downside is that we've lost something from the history by doing this: it no longer reflects the chronological order in which things actually happened. Do you care about that?
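
Expressed as commands, the rebase-then-fast-forward flow from the diagrams looks roughly like this:

$ git checkout feature1
$ git rebase master       # replay feature1's commit (D) on top of master's tip
$ git checkout master
$ git merge feature1      # now a simple fast-forward: master moves along to D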

5) The power of git log

The above concepts are all very well, but how do you grasp these in the course of your day-to-day work?

For this I highly recommend getting to grips with git’s native log command. While there are many GUIs that can display history, they all have their own opinions on how things should be displayed, and moreover are not available everywhere. As a source of truth, git log is unimpeachable and transparent.

I wrote about this in more depth here, but to give yourself a flavour, try these two commands on a repo of your choice. They cover 90% of my git log usage day-to-day:

$ git log --oneline --graph

$ git log --oneline --graph --simplify-by-decoration --all

 


Concepts explained here are taught in my book Learn Git the Hard Way.



 

 

 

Eleven bash Tips You Might Want to Know

Here are some tips that might help you be more productive with bash.

1) ^x^y^

A gem I use all the time.

Ever typed anything like this?

$ grp somestring somefile
-bash: grp: command not found

Sigh. Hit 'up', then 'left' until you're at the 'p', then type 'e' and hit return.

Or do this:

$ ^rp^rep^
grep somestring somefile
$

One subtlety you may want to note though is:

$ grp rp somefile
$ ^rp^rep^
grep rp somefile

If you wanted rep to be searched for, then you’ll need to dig into the man page and use a more powerful history command:

$ grp rp somefile
$ !!:gs/rp/rep
grep rep somefile
$

 

 

2) pushd / popd vs ‘cd -‘

This one comes in very handy for scripts, especially when operating within a loop.

Let’s say you’re in a for loop moving in and out of folders like this:

for d1 in $(ls -d */)
do
  # Store original working directory.
  original_wd="$(pwd)"
  cd "$d1"
  for d2 in $(ls -d */)
  do
    pushd "$d2"
    # Do something
    popd
  done
  # Return to original working directory
  cd "${original_wd}"
done

NOTE: I’m well aware the above code is unsafe – see here.
The code above is intended to illustrate pushd/popd without distraction
for a relative beginner.
There's a whole post to be written about the fact that people like me use $(ls -d */) all
the time without deleterious consequences 99% of the time, but
that can wait. That said, it’s well worth knowing that this
kind of issue exists in bash as it can trip you up. 

You can rewrite the above using the pushd stack like this:

for d1 in $(ls -d */)
do
  pushd "$d1"
  for d2 in $(ls  -d */)
  do
    pushd "$d2"
    # Do something
    popd
  done
  popd
done

Which tracks the folders you’ve pushed and popped as you go.

Note that if there's an error in a pushd you may lose track of the stack and popd too many times. You probably want to set -e in your script as well (see previous post).

There’s also cd -, but that doesn’t ‘stack’ – it just returns you to the previous folder:

cd ~
cd /tmp
cd blah
cd - # Back to /tmp
cd - # Back to 'blah'
cd - # Back to /tmp
cd - # Back to 'blah' ...

Material here based on material from my book
Learn Bash the Hard Way.
Free preview available here.



 

3) shopt vs set

This one bothered me for a while.

What’s the difference between set and shopt?

set we saw before, but shopt looks very similar. Just inputting shopt shows a bunch of options:

$ shopt
cdable_vars    off
cdspell        on
checkhash      off
checkwinsize   on
cmdhist        on
compat31       off
dotglob        off

I found a set of answers here.

Essentially, it looks like it's a consequence of bash (and other shells) being built on sh, with shopt added later as another way to set extra shell options.

But I’m still unsure… if you know the answer, let me know.
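
Whatever the history, the two are at least easy to tell apart in use. Here is a quick comparison of the syntax for switching options on and off with each:

$ set -o noclobber     # switch a 'set' option on
$ set +o noclobber     # ...and off again
$ shopt -s extglob     # switch a 'shopt' option on
$ shopt -u extglob     # ...and off again
$ shopt -o noclobber   # shopt -o operates on the 'set -o' options too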

4) Here Docs and Here Strings

‘Here docs’ are files created inline in the shell.

The 'trick' is simple. Define a closing word, and everything you type before that word appears alone on a line becomes a file.

Type this:

$ cat > afile << SOMEENDSTRING
> here is a doc
> it has three lines
> SOMEENDSTRING alone on a line will save the doc
> SOMEENDSTRING
$ cat afile
here is a doc
it has three lines
SOMEENDSTRING alone on a line will save the doc
$

Notice that:

  • the string could be included in the file if it was not ‘alone’ on the line
  • the string SOMEENDSTRING is more normally END, but that is just convention

Lesser known is the ‘here string’:

$ cat > asd <<< 'This file has one line'

 

5) String Variable Manipulation

You may have written code like this before, where you use tools like sed to manipulate strings:

$ VAR='HEADERMy voice is my passwordFOOTER'
$ PASS="$(echo $VAR | sed 's/^HEADER\(.*\)FOOTER/\1/')"
$ echo $PASS

But you may not be aware that this is possible natively in bash.

This means that you can dispense with lots of sed and awk shenanigans.

One way to rewrite the above is:

$ VAR='HEADERMy voice is my passwordFOOTER'
$ PASS="${VAR#HEADER}"
$ PASS="${PASS%FOOTER}"
$ echo $PASS
  • The # means ‘match and remove the following pattern from the start of the string’
  • The % means 'match and remove the following pattern from the end of the string'

The second method is twice as fast as the first on my machine. And (to my surprise), it was roughly the same speed as a similar python script.

If you want to use glob patterns that are greedy (see globbing here) then you double up:

VAR='HEADERMy voice is my passwordFOOTER'
$ echo ${VAR##HEADER*}
$ echo ${VAR%%*FOOTER}

 

6) ​Variable Defaults

These are very handy when you’re knocking up scripts quickly.

If you have a variable that's not set, you can 'default' it by using this construct. Create a file called default.sh with these contents:

#!/bin/bash
FIRST_ARG="${1:-no_first_arg}"
SECOND_ARG="${2:-no_second_arg}"
THIRD_ARG="${3:-no_third_arg}"
echo ${FIRST_ARG}
echo ${SECOND_ARG}
echo ${THIRD_ARG}

Now run chmod +x default.sh and run the script with ./default.sh first second.

Observe how the third argument's default has been assigned, but not the first two.

You can also assign directly with ${VAR:=defaultval} (equals sign, not dash) but note that this won’t work with positional variables in scripts or functions. Try changing the above script to see how it fails.
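
With an ordinary (non-positional) variable, the difference between :- and := is easy to see, since only the latter actually assigns:

$ unset MYVAR
$ echo "${MYVAR:-temporary}"   # substitutes a value, but MYVAR remains unset
$ echo "${MYVAR}"              # still empty
$ echo "${MYVAR:=assigned}"    # substitutes AND assigns
$ echo "${MYVAR}"              # now prints 'assigned'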

7) Traps

The trap builtin can be used to ‘catch’ when a signal is sent to your script.

Here’s an example I use in my own cheapci script:

function cleanup() {
    rm -rf "${BUILD_DIR}"
    rm -f "${LOCK_FILE}"
    # get rid of /tmp detritus, leaving anything accessed 2 days ago+
    find "${BUILD_DIR_BASE}"/* -type d -atime +1 | xargs rm -rf
    echo "cleanup done"                                                                                                                          
} 
trap cleanup TERM INT QUIT

Any attempt to CTRL-C (which sends INT), CTRL-\ (which sends QUIT), or terminate the program using the TERM signal will result in cleanup being called first.

Be aware:

  • Trap logic can get very tricky (eg handling signal race conditions)
  • The KILL signal can’t be trapped in this way

But mostly I’ve used this for ‘cleanups’ like the above, which serve their purpose.

8) Shell Variables

It’s well worth getting to know the standard shell variables available to you. Here are some of my favourites:

RANDOM

Don’t rely on this for your cryptography stack, but you can generate random numbers eg to create temporary files in scripts:

$ echo ${RANDOM}
16313
$ # Not enough digits?
$ echo ${RANDOM}${RANDOM}
113610703
$ NEWFILE=/tmp/newfile_${RANDOM}
$ touch $NEWFILE

REPLY

No need to give a variable name for read

$ read
my input
$ echo ${REPLY}

LINENO and SECONDS

Handy for debugging

echo ${LINENO}
115
echo ${SECONDS}; sleep 1; echo ${SECONDS}; echo $LINENO
174380
174381
116

Note that there are two ‘lines’ above, even though you used ; to separate the commands.

TMOUT

You can time out reads, which can be really handy in some scripts:

#!/bin/bash
TMOUT=5
echo You have 5 seconds to respond...
read
echo ${REPLY:-noreply}

 

9) Extglobs

If you’re really knee-deep in bash, then you might want to power up your globbing. You can do this by setting the extglob shell option. Here’s the setup:

shopt -s extglob
A="12345678901234567890"
B="  ${A}  "

Now see if you can figure out what each of these does:

echo "B      |${B}|"
echo "B#+( ) |${B#+( )}|"
echo "B#?( ) |${B#?( )}|"
echo "B#*( ) |${B#*( )}|"
echo "B##+( )|${B##+( )}|"
echo "B##*( )|${B##*( )}|"
echo "B##?( )|${B##?( )}|"

Now, potentially useful as it is, it’s hard to think of a situation where you’d absolutely want to do it this way. Normally you’d use a tool better suited to the task (like sed) or just drop bash and go to a ‘proper’ programming language like python.

10) Associative Arrays

Talking of moving to other languages, a rule of thumb I use is that if I need arrays then I drop bash to go to python (I even created a Docker container for a tool to help with this here).

What I didn’t know until I read up on it was that you can have associative arrays in bash.

Type this out for a demo:

$ declare -A MYAA=([one]=1 [two]=2 [three]=3)
$ MYAA[one]="1"
$ MYAA[two]="2"
$ echo $MYAA
$ echo ${MYAA[one]}
$ MYAA[one]="1"
$ WANT=two
$ echo ${MYAA[$WANT]}

Note that this is only available in bashes 4.x+.
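
Once you have an associative array, it's worth knowing how to get at the keys as well as the values:

$ echo ${!MYAA[@]}    # the keys (in no particular order)
$ echo ${MYAA[@]}     # the values
$ for key in "${!MYAA[@]}"; do echo "${key} => ${MYAA[$key]}"; done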

11) source vs ‘.’

This one confused me for a long time.

You can type:

$ cat > somescript.sh << END
A=11
END
$ source somescript.sh
$ echo $A

which will run the script somescript.sh, retaining the environment changes the script makes (here, the assignment to A) in your own environment.

Try this to compare:

$ cat > somescript.sh << END
A=12
END
$ chmod +x somescript.sh
$ ./somescript.sh
$ echo $A

The dot (‘.‘) command does something similar, but what’s the difference? Why does it exist?

The answer is simple: in bash they are exactly the same. The ‘.‘ was the original command, and is more portable, since it works in the sh shell as well as bash.

You may also be wondering what the difference between the dots in:

./somescript.sh

and

. ./somescript.sh

is. In the . ./somescript.sh invocation, the first dot acts as an equivalent of the source command, while the ./ after indicates that the script will be found in this folder, the dot there representing the local folder (try running cd . to see what happens).

If you didn’t use the ./, and . wasn’t in your PATH environment variable, then somescript.sh might not be found. Simple, right?



If you liked this post, you might also like these:

Ten Things I Wish I’d Known About bash

Centralise Your Bash History

How (and Why) I Run My Own DNS Servers

My Favourite Secret Weapon – strace

A Complete Chef Infrastructure on Your Laptop


 

Learn Bash Debugging Techniques the Hard Way

In this article I’m going to give you a hands-on introduction to standard bash debugging techniques.

In addition, you’ll learn some techniques to make your bash scripts more robust to failure.

This article uses the hard way method, which emphasises hands-on-keyboard work to embed the learning. You’re going to have to think and type to learn.

Syntax Checking Options

Start by creating this simple script:

$ mkdir -p lbthw_debugging
$ cd lbthw_debugging
$ cat > debug_script.sh << 'END'
#!/bin/bash
A=some value
echo "${A}
echo "${B}"
END
$ chmod +x debug_script.sh

Now run it with the -n flag like this:

$ bash -n debug_script.sh

This flag only parses the script, rather than actually running it. It’s useful for detecting basic syntax errors.

You’ll see it’s broken. Fix it. Then run it again.

If you're not sure how to fix it, contact me.

Verbose and Trace Flags

Now run with -v to see the verbose output.

$ bash -v debug_script.sh

and then run with -x to trace the output:

$ bash -x debug_script.sh

What do you notice about the output of the commands? Read them carefully.

Do you see the problem?

Using these flags together can help debug scripts where there is an elementary error, or even just work out what's going on when a script runs. I used -x only yesterday to figure out why a systemd service wasn't running or logging.

 


Material here based on the ‘advanced’ section of my book
Learn Bash the Hard Way.
Free preview available here.



 

Managing Variables

Variables are a core part of most serious bash scripts (and even one-liners!), so managing them is another important way to reduce the possibility of your script breaking.

Change your script to add the ‘set’ line immediately after the first line and see what happens:

#!/bin/bash
set -o nounset
A="some value"
echo "${A}"
echo "${B}"

Now research what the nounset option does. Which set flag does this correspond to?

Now, without running it, try and figure out what this script will do. Will it run?

#!/bin/bash
set -o nounset
A="some value"
B=
echo "${A}"
echo "${B}"

I always set nounset on my scripts as a habit. It can catch many problems before they become serious.
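
If you want to adopt the same habit, a typical defensive header for a script looks something like this (errexit is the set -e mentioned in a previous post; pipefail is a common companion to it):

#!/bin/bash
set -o nounset    # abort when an unset variable is used
set -o errexit    # abort if any command exits with a non-zero code
set -o pipefail   # a pipeline fails if any command within it fails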

Tracing Variables

If you are working with a particularly complex script, then you can get to the point where you are unsure what happened to a variable.

Try running this script and see what happens:

#!/bin/bash 
set -o nounset 
declare A="some value" 
function a { 
  echo "${BASH_SOURCE}>A A=${A} LINENO:${1}" 
} 
trap "a $LINENO" DEBUG 
B=value 
echo "${A}" 
A="another value" 
echo "${A}" 
echo "${B}"

There’s a problem with this code. The output is slightly wrong. Can you work out what is going on? If so, try and fix it.

You may need to refer to the bash man page, and make sure you understand quoting in bash properly.

It’s quite a tricky one to fix ‘properly’, so if you can’t fix it, or work out what’s wrong with it, then ask me directly and I will help.

Profiling Bash Scripts

Returning to the xtrace (or set -x) flag, we can exploit its use of a PS variable to implement profiling of a script:

#!/bin/bash
set -o nounset
set -o xtrace
declare A="some value"
PS4='$(date "+%s%N => ")'
B=
echo "${A}"
A="another value"
echo "${A}"
echo "${B}"
ls
pwd
curl -q bbc.co.uk

From this you should be able to tell what PS4 does. Have a play with it, and read up and experiment with the other PS variables to get familiar with what they do.

NOTE: If you are on a Mac, then you might only get
second-level granularity on the date!

 

Linting with Shellcheck

Finally, here is a very useful tip for understanding bash more deeply and improving any bash scripts you come across.

Shellcheck is a website and a package available on most platforms that gives you advice to help fix and improve your shell scripts. Very often, its advice has prompted me to research more deeply and understand bash better.

Here is some example output from a script I found on my laptop:

$ shellcheck shrinkpdf.sh
In shrinkpdf.sh line 44:
          -dColorImageResolution=$3             \
                                 ^-- SC2086: Double quote to prevent globbing and word splitting.
In shrinkpdf.sh line 46:
          -dGrayImageResolution=$3              \
                                ^-- SC2086: Double quote to prevent globbing and word splitting.
In shrinkpdf.sh line 48:
          -dMonoImageResolution=$3              \
                                ^-- SC2086: Double quote to prevent globbing and word splitting.
In shrinkpdf.sh line 57:
        if [ ! -f "$1" -o ! -f "$2" ]; then
                      ^-- SC2166: Prefer [ p ] || [ q ] as [ p -o q ] is not well defined.
In shrinkpdf.sh line 60:
        ISIZE="$(echo $(wc -c "$1") | cut -f1 -d\ )"
                      ^-- SC2046: Quote this to prevent word splitting.
                      ^-- SC2005: Useless echo? Instead of 'echo $(cmd)', just use 'cmd'.
In shrinkpdf.sh line 61:
        OSIZE="$(echo $(wc -c "$2") | cut -f1 -d\ )"
                      ^-- SC2046: Quote this to prevent word splitting.
                      ^-- SC2005: Useless echo? Instead of 'echo $(cmd)', just use 'cmd'.

The most common reminders are regarding potential quoting issues, but you can see other useful tips in the above output, such as preferred arguments to the test construct, and advice on “useless” echos.

 

Exercise

1) Find a large bash script on a social coding site such as GitHub, and run shellcheck over it. Contribute back any improvements you find.


 

Why Are Enterprises So Slow?

 

tl;dr

In this article I want to explain a few things about enterprises and their software, based on my experiences, and also describe what needs to be in place to make change come about.

Have you ever found yourself saying things like:

  • Why are enterprises so slow?
  • How do they decide what to buy?
  • Why is it so hard to deliver things in an enterprise?

I worked for a large ‘enterprise’ organisation for a few years trying to deliver infrastructure software change, and found myself having to explain these things to developers who worked there, salespeople, external open source engineers, software engineers who worked for enterprise vendors, and even many, many people within that organisation.

A few of those people suggested I write these explanations up so that they could pass them on to their fellow salespeople, engineers and so on.

[Image: the Polygon of Enterprise Despair]

Background

Before the enterprise, I worked for a startup that grew from a single room to 700+ people over 15 years.

'Enterprise' was a word often thrown at us when our software was rejected, usually in the sentence "your software isn't enterprise enough". I had no idea what that meant, but I have a much better idea now. It didn't help that the people saying it were usually pretty clueless about software engineering.

Like many other software developers whose experience was in an unregulated startup environment, I had little respect for the concept of enterprise software. Seems I wasn’t alone.

When I finally got sick of the startup life I took a job at a huge organisation in financial services over 200 times as large. You don’t get much more ‘enterprise’ than that, but even within that context I was working in the ‘infrastructure team’, the part of the group that got beaten up for being (supposedly) slow to deliver, and then delivering less usable software than was desired. So it was like being in the enterprise, squared.

Over the time that I worked there, I got a great insight into the constraints on delivery that cause client frustration to happen, and – worse luck – I was responsible for helping to deliver change within it.

This is quite a long post, so I’ve broken this up into several parts to make it easier to digest:

  1. Thought Experiment
    • What would happen if an enterprise acted like a startup?
  2. Reducing Risk
    • Some ways enterprises reduce risk
    • The principle underlying these methods
  3. Cumulative Constraints
    • Consequences of the culture of risk reduction
  4. A New Hope?
    • What can be done?

1) Thought Experiment

Before we start, let’s imagine a counterfactual situation – imagine an enterprise acted like a startup. Showing how this doesn’t work (and therefore why it generally doesn’t happen) will help illustrate why some of the constraints that cause the slowdowns we see in large organisations exist.

First, let’s look at what a small team might do to change some software. We’ll make it a really simple example, and one you might well do routinely at home – upgrading a Linux distribution.

In both cases, the relationship is:

  • IT person
    • Manager

Here’s how the conversation might go at a really small startup:

‘Lean’ OS Upgrade – Small Company

  • Shall we upgrade the OS?
    • Yes, ok.
  • Oh, I've hit a problem. One of the falanges has stopped working.
    • OK, do some work to fix the transpondster.
  • Might take me a few hours
    • OK
  • … OK, done. Can you test?
    • Yup, looks good.
  • Great.

‘Lean’ OS Upgrade – Enterprise

  • Shall we upgrade the OS?
    • Yes.
  • OK, done.
    • Um, you brought down the payments system.
  • Whoops. I’ll roll back
    • OK.
  • Done. We’ll look into it.
    • Hi. The regulator called. They saw something on the news about payments being down. They want to know what happened.
  • Um, OK. I’ll write something up.
    • Thanks.
    • They read your write-up and have asked for evidence of who decided what when. They want a timeline.
  • I’ll check the emails.
    • By the way, you’re going to be audited in a couple of months. We’ll have to cancel all projects until then?
  • But we’ve got so much technical debt!
    • If we don’t get this right, they’ll shut us down and we’ll be fired.
    • OK, we have the results of the audit.
    • Audit has uncovered 59 other problems you need to solve.
  • OK…
    • We’ll have to drop other projects, and maybe lose some people.
  • Um, OK…
    • Oh, and my boss is being hauled in front of the regulator to justify what happened. If it doesn’t go well he’s out of a job and his boss might go to prison if they think something fishy is going on.

Now that’s a bad release…

That’s a worst-case scenario, but let’s unpick what these regulated enterprises do to mitigate both the risk and the consequences of the above scenario.

Specifically:

  • ‘Who owns this?’
  • ‘How is this maintained?’
  • ‘Who buys it?’
  • ‘Who’s signed off the deployment?’

2) Reducing Risk

 

‘Who Owns This?’ / ‘One Throat to Choke’

This is a big one. One of the most commonly-asked questions when architecting a solution within an enterprise is: 'Who is responsible for that component/service/system?'

In our enterprise ‘Lean OS Upgrade’ scenario above one of the first questions that will be asked is: ‘Who owns the operating system?’

That group will be identifiable through some internal system which tracks ownership of tools and technologies. Those identified as owners will be responsible for some or all of the lifecycle management for that technology. This might include:

  • Upgrade management
  • Support (directly or via a vendor)
  • Security patching
  • Deciding who can and can’t use it
  • Overall policy on usage (expand/deprecate/continue usage)

This ownership results in 'one throat to choke' for audit functions. Much like the police will go after the drug dealer rather than the casual user, the audit functions of an enterprise will go after the formally responsible person or team rather than the (potentially thousands of) teams using an outdated version of a particular technology. There are richer pickings there.

From ownership comes responsibility. A lot of the political footwork in an enterprise revolves around trying not to own technologies. Who wants to be responsible for Java usage across a technology function of tens of thousands of staff, any of whom might be doing crazy stuff? You first, mate.

Enterprises and Vendors

This also explains enterprises’ love of vendor software over pure open source. If you’ve paid someone to maintain and support a technical stack, then they become responsible for that whole stack. That doesn’t solve all your problems (you still will need to integrate their software with your IT infrastructure, and things get fuzzier the closer you look at the resulting solution), but from a governance point of view you’ve successfully passed the buck.

What is governance?
IT Governance is a term that covers all the processes and structures that ensure IT is appropriately managed in a way that satisfies those who govern the organisation. Being 'out of governance' (ie not conforming to standards) is considered a dangerous place to be, because you may be forced to spend money to get back 'in' to governance.

‘How is this maintained?’

Another aspect of managing software in an enterprise context is its maintenance. In our idealised startup above ‘Dev’ and ‘Ops’ were the same thing (ie, one person). Lo and behold you have DevOps!

Unfortunately, the DevOps slogan ‘you built it, you run it’ doesn’t usually work in an Enterprise context for a few reasons.

Partly it’s historical ie ‘it’s the way things have been done’ for decades, so there is a strong institutional bias towards not changing this. Jobs and heavily-invested-in processes depend on its persistence. But further bolstering this conservatism is the regulatory framework that governs how software is managed.

Regulations

Regulations are rules created by regulators, who in turn are groups of people with power ultimately derived from government or other controlling authorities. So, effectively, they have the force of law as far as your business is concerned.

Regulators are not inclined to embrace fashionable new software deployment methods, and their paradigms are rooted in the experiences of software built in previous decades.

What does this mean? If your software is regulated, then it’s likely that your engineering (dev) and operations teams (ops) will be separate groups of people specialising in those roles, and one of the drivers of this is the regulations, which demand a separation to ensure that changes are under some kind of control and oversight.

Now, there is (arguably) a loophole here that some have exploited: regulations often talk about ‘separation of roles’ between engineering and operations, and don’t explicitly say that these roles need to be fulfilled by different people.

But if you're a really big enterprise, that might be technically correct but effectively irrelevant. Why? Because, to 'simplify' things, these large enterprises often create a set of rules that cover all the regulations that may ever apply to their business across all jurisdictions. And those rules are generally the strictest you can imagine.

Added to that, those rules develop a life and culture of their own within the organisation independent of the regulator such that they can’t easily be brought into question.

Resistance is futile. Dev and Ops must be separate because that’s what we wrote down years ago.

So you can end up in a situation where you are forced to work in a way prescribed years ago by your internal regulations, which are in turn based on interpretations of regulations which were written years before that!

And if you want to change that, it will itself likely take years and agreement from multiple parties who are unlikely to want to risk losing their job so you can deliver your app slightly faster.

Obviously, this separation slows things down as engineering must make the code more tolerant to mistakes and failure so that another team can pick it up and carry it through to production. Or you just throw it over the wall and hope for the best. Either way, parties become more resistant to change.

Change Control

That’s not the only way in which the speed of change is reduced in an enterprise.

In order to ensure that changes to systems can be attributed to responsible individuals, there is usually some kind of system that tracks and audits changes. One person will raise a ‘change record’, which will usually involve filling out an enormous form, and then this change must be ‘signed off’ by one or more other people to ensure that changes don’t happen without due oversight.

In theory, the person signing off must carefully examine the change to ensure it is sensible and valid. In reality, most of the time trust relationships build up between change raiser and change validator, which can speed things up. If the change is large and significant, then it is more likely to be closely scrutinised. There might also exist ‘standard changes’ or ‘templated changes’, which codify more routine and lower-risk updates and are pre-authorised; the templates themselves must be signed off before they can be used (usually at a higher level of responsibility, which makes that approval harder to obtain).

While in theory the change can be signed off in minutes, in reality change requests can take months as obscure fields in forms are filled out wrongly (‘you put the wrong code in field 44B! Start again.’), sign-off deadlines expire, change freezes come and go, and so on.

All this makes the effort of making changes far more onerous than it is elsewhere.

Security ‘Sign-Off’


If you’re working on something significant, such as a new product, or major release of a large-scale product, then it may become necessary to get what most people informally call ‘security sign-off’.

Processes around this vary from place to place, but essentially, one or more security experts descend at some point on your project and audit it.

I had imagined such reviews to be a very scientific process, but in reality it’s more like a medieval trial by ordeal. You get poked and prodded in various ways while questions are asked to determine weaknesses in your story.

This might involve a penetration test, a look at your code and documentation, or an interview with the engineers. There will likely be references to various ‘security standards’ you may or may not have read, which in turn are enforced with differing degrees of severity.

The outcome of this is usually some kind of report and a set of risks that have been identified. These risks (depending on their severity – I’ve never heard of there being none) may need to be ‘signed off’ by someone senior so that responsibility lies with them if there is a breach. That process is itself arduous (especially when the senior person doesn’t fully understand the risk) and can be repeated on a regular basis until the risk is sufficiently ‘mitigated’ through further engineering effort or process controls, after which it’s re-reviewed. None of this is quick.

Summary: Corporate, not Individual Responsibility

If there’s a common thread to these factors in reducing risk, it is to shift responsibility and power from the individual to the corporate entity. If you’re a regulated, systemically-significant enterprise, then the last thing you or the public wants is for one person to wield too much power, either through knowledge of a system, or ability to alter that system in their own interests.

The corollary of this is that it is very hard for one person to make change by themselves. And, as we all know, if a task is given to multiple people to achieve together, then things get complicated and change slows up pretty fast as everyone must keep each other informed as to what everyone else is doing.

Once this principle of corporate responsibility is understood, then many other processes start to make sense. An example of one of these is sourcing (aka procurement: the process of buying software or other IT services).

Example – Sourcing

Working for such an enterprise, and before I stopped answering, I would get phoned up by salespeople all the time who seemed to imagine that I had a chequebook ready to sign for any technology I happened to like. The reality could not have been further from the truth.

What many people don’t expect is that, to prevent a situation where one person gets too much power, technical people often have no direct control over the negotiation (or ‘sourcing process’) at all. What often happens is something close to this:

  • You go to a senior person to get sign-off for a budget for purpose X
  • They agree
  • You document at least two options for products that fulfil that purpose
  • The ‘sourcing team’ take that document and negotiate with the suppliers
  • Some magic happens
  • You get told which supplier ‘won’

You can see why this process helps reduce the risk that someone takes a bribe to push a particular vendor solution (there are also often strict rules around accepting so much as a coffee from a potential supplier), which is a good thing. On the other hand, this process can take months or even years, and it might need to be repeated if it takes so long that funding has disappeared or teams have been disbanded.

To complicate matters further, sourcing might have its own ‘preferred supplier lists’ of companies that have been vetted and audited in the past. If your preferred supplier isn’t on that list (and hasn’t made a deal with one that is), the process could take even longer.


3) Cumulative Constraints

What we have learned so far is that enterprises are fundamentally slowed down by attempts to reduce individual power and responsibility in favour of corporate responsibility.

This usually results in:

  • More onerous change control
  • Higher bars for change planning
  • Higher bars for buying solutions
  • Higher bars for security requirements
  • Separation of engineering and ops functions

all of which slow down delivery. It’s like entropy. You can fight it, but in the end physics wins.

Now we’ll take a step outside these individual constraints to look at what happens when you structure a large scale enterprise organisation where its component groups are all fighting these same challenges.

Dependency Constraints

When you try and deliver in an enterprise, you will find that your team has dependencies on other teams to provide you with IT services.

The classic example of this is firewall changes. You, as a developer, decide – in classic agile microservices/‘all the shiny’ fashion – to create a new service running on a particular port on a set of hosts. You gulp Coke Zero all night and daub the code together to get a working prototype.

To allow connectivity, you need to open up some ports on the firewall. You raise a change, and discover that the process involves updating a spreadsheet by hand and then raising a change request which requires at least a week’s notice. Your one night’s development now has to wait at least a week before you can even try it out. And that’s assuming you filled everything out correctly and didn’t miss anything. If you did miss something, you have to go round again…

One of the joyous things about working in an unregulated startup is that if you see a problem in one of your dependencies you have the option of taking it over and running it yourself. Don’t like your cloud provider? Switch. Think your app might work better in erlang? Rewrite. Fed up with the firewall process? Write a script to do that, and move to gitops.
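To make that last option concrete, here is a minimal sketch of what a gitops-style firewall workflow might look like in a startup that owns its whole stack. The file format, the file names (firewall_rules.txt, apply_firewall.sh) and the use of iptables are illustrative assumptions, not a prescription; the point is simply that the rules live in version control and a script applies them once a change has been reviewed and merged:

# firewall_rules.txt - desired inbound rules, versioned in git (hypothetical format)
# source_cidr       port
10.0.0.0/8          8080
192.168.1.0/24      5432

#!/bin/bash
# apply_firewall.sh - run by CI after every merge (illustrative sketch only)
set -euo pipefail

while read -r src port; do
  # skip comments and blank lines
  [[ "$src" =~ ^#|^$ ]] && continue
  # add the rule only if it is not already present
  iptables -C INPUT -p tcp -s "$src" --dport "$port" -j ACCEPT 2>/dev/null ||
    iptables -A INPUT -p tcp -s "$src" --dport "$port" -j ACCEPT
done < firewall_rules.txt

A firewall change then becomes just a pull request against firewall_rules.txt: reviewed, merged and applied in minutes, because your team owns the whole stack.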

So why not do the same in the enterprise? Why not just ‘find your dependencies and eliminate them’?

Some do indeed take this approach, and it costs them dearly. Either they have to spend great sums of money managing the processes required to maintain and stay ‘within governance’ for the technology they’ve decided to own, or they get hit with an audit sooner or later and get found out. At that point, they might go cap in hand to the infrastructure team, whose sympathy to their plight is in proportion to the amount of funding infrastructure is being offered to solve the problems for them…

The reality is that – as I said above – taking responsibility and owning a technology or layer of your stack brings with it real costs and risks that you may not be able to bear and stay in business.

So however great you are as a team, your delivery cadence is constrained to a local maximum by your external dependencies, which are (effectively) non-negotiable.

This is a scaling up of the same constraints on individuals in favour of corporate power and responsibility. Just as it is significantly harder for you to make that much difference, it is harder for your team to make much difference, for the same structural reasons.



Cultural Constraints

Now that the ingredients for slow delivery are already there in a static, structural sense, let’s look at what happens when you ‘bake’ that structure over decades into the organisation and then try to make change within it.

Calcified Paradigms

Since reasoning about technology on a corporate scale is hard, creating change within it can only work at all if there are collective paradigms around which processes and functions can reason.

These paradigms become ingrained, and surfacing and reshaping these conceptual frameworks is an effort that must be repeated over and over across an organisation if you are to successfully make change.

The two big examples of this I’ve been aware of are the ‘machine paradigm’ and charging models, but one might add ‘secrets are used manually’ or many others that may also be bubbling under my conscious awareness.

The ‘machine paradigm’

Since von Neumann outlined the architecture of the computer, the view of the fundamental unit of computation as being a single discrete physical entity has held sway. Yes, you can share workloads on a single machine (mainframes still exist, for example, and two applications might use the same physical device), but for the broad mass of applications, the idea of needing a separate physical machine to run on (for performance or security reasons) has underpinned assumptions of applications’ design, build, test, and deploy phases.

Recently (mostly in the last 10 years), this paradigm has been modified by virtual machines, many of which sit on one larger machine running a hypervisor. Ironically, this has reinforced the ‘machine paradigm’, since for backward compatibility each VM has all the trappings of a physical machine, such as network interfaces, MAC addresses, numbers of CPUs and so on. Whether you fill out a form and wait for a physical machine or a virtual machine to be provisioned makes little difference – you’re still in the machine paradigm.

More recently, aPaaSes, Kubernetes, and cloud computing have overthrown the idea that an application need sit on a ‘machine’ at all, but the penetration of this novel (or old, if you used mainframes) idea is, like the future, unevenly distributed.

Charging models

Another paradigm that’s very hard to get traction on changing is charging models. How money moves around within an enterprise is a huge subject in itself, and has all sorts of secondary effects that are of no small interest to IT.

To grossly generalise, IT is moving from a ‘capex’ model to an ‘opex’ model. Instead of buying kit and software and then running it until it wears out (capex), the ‘new’ model is to rent software and services which can be easily scaled up and down as business demand requires.

Now, if you think IT in an enterprise is conservative, then prepare to deal with those that manage and handle the money! For good reason, they are as a rule very disinclined to change payment models within an organisation, since any change in process will result in bugs (old and new) being surfaced, institutional upheaval, and who knows what else.

The end result is that moving to these new models can be painful. Trying to cross-charge within an organisation of any size can result in surreal conversations about ‘wooden dollars’ (ie non-existent money exchanged in lieu of real money) or services being charged out to other parts of the business, but never paid for due to conversations that may or may not have been had outside your control.

Learned Helplessness

After decades of these habits of thought, you end up with several consequences:

  • Those who don’t like the way of working leave
  • Those that remain calcify into whole generations of employees
  • Those that remain tend to prize and prefer those that agree with their views

Suggestions of change to these groups of people result in entire generations, nay armies of employees that resist change.

The irony is that they are completely right. Most efforts to change do fail, and therefore most efforts to do so are wasted. The reasoning is arguably circular, ie change is resisted because it won’t work, and it won’t work because it’s resisted. But it’s also quite rational, since the reasons it won’t work are rooted in the external constraints we have discussed above. Following that logic is simple game theory.

This has previously been described as the ‘square of despair’.


Although I’d prefer to call it the ‘polygon of despair’, since these four are fairly arbitrary. You could add to this list, for example:

  • Internal charging models
  • Change control
  • Institutional inertia
  • Audit
  • Regulation
  • Outdated paradigms

all of which have been discussed above.

The Decagon of Despair


4) A New Hope?

Is it all a lost cause? Is there really no hope for change? Does it always end up looking like this, at best a mass of compromises that feel like failure?

[Image: manifesto]

Well, no. But it is bloody hard. Here are the things I think will stack the deck in your favour:

Senior Leadership Support

I think this is the big one. If you’re looking to swim against habits of thought, then stiff resolve is required. If senior management aren’t willing to make sacrifices for the change, and aren’t united in favour of it, then all sorts of primary decisions, and (equally important) second-guessed decisions made by underlings from different branches of the management tree, will pull in conflicting directions.

People don’t like to talk about it, but it helps if people get fired for not constructively working with the changes. That tends to focus the mind. The classic precedent of this is point 6 of Jeff Bezos’s ‘API Mandate’, which stated that anyone who didn’t comply would be fired.

Your senior leadership will also need buckets of patience, as the work to do this is very front-loaded: the pain is felt far earlier, and the benefits far later.

Reduce Complexity

Talking of pain, you will do yourself favours if you fight tooth and nail to reduce complexity. This may involve taking some risks as you call out that the entire effort may be ruined by compromises that defeat the purpose, or create bureaucratic or technical quicksand that your project will flounder in later.

Calling out those dangers may get you a reputation, or even cost you your job. As the title of A Seat at the Table (a book I highly recommend on the subject) implies, it’s very close to a poker game.

Cross-functional Team

It might sound obvious to those who work in smaller companies, but it’s much easier to achieve change if you have a team of people that spans the functions of your organisation working together. Not only does the collaboration benefit from seeing at an earlier stage how things need to be designed to fulfil requirements, but more creative solutions are found by people who understand both their function’s needs and the requirements of the project. If you want to go the skunkworks route, then the representatives of the other functions can tell you where your MVP shortcuts are going to bite you later on.

The alternative – and this is almost invariably much, much slower – is to ‘build, then check’. So you might spend several months building your solution before you find it’s fundamentally flawed because of some corporate rule or principle that can’t be questioned.

Use Your Cynical Old Hands

The flip side of those that constitute the ‘institutional inertia’ I described above is that many of those people know the organisation inside out. These people often lose heart regarding change not because they no longer care, but because they believe that when push comes to shove the changes won’t get support.

These people can be your biggest asset. The key is to persuade them that it’s possible, and that you need their help.

That can be hard for both sides, as your enthusiasm for change hits their brick wall, cemented by their hard-won (or lost) experience. They may give you messages that are hard to hear about how hard it will be. But don’t underestimate the loyalty and resilience you get if they are heard.

 


If you liked this post, you might also like:

Five Things I Did to Change a Team’s Culture

My 20-Year Experience of Software Development Methodologies

Things I Learned Managing Site Reliability for Some of the World’s Busiest Gambling Sites

A Checklist for Docker in the Enterprise (Updated)


Or one of my books:

Learn Git the Hard Way
Learn Terraform the Hard Way
Learn Bash the Hard Way
