Create Your Own Git Diagrams

 

Ever wondered how to create your own git diagrams?

You know, the ones that look like this?

[example diagram rendered from 2.5.4.tex in the examples folder]

I’ve created a Docker image to allow you to easily create your own.

$ docker pull imiell/gitdags

The git repo is at https://github.com/ianmiell/gitdags.

How To Run

The examples folder is a good place to start.

A good way to get started is to run this:

$ git clone https://github.com/ianmiell/gitdags
$ cd gitdags/examples
$ docker run -v $(pwd):/files imiell/gitdags /convert_files.sh

It will convert the *.tex (LaTeX) files in that folder into example .png images, some simpler and some more complex than the one at the top.




How To Make Your Own

To show you how to make your own images, let’s break down this example:

\documentclass[preview]{standalone} 
\usepackage{subcaption} 
\usepackage{gitdags} 
\begin{document} 
\begin{figure} 
  \begin{subfigure}[b]{\textwidth} 
    \centering 
    \begin{tikzpicture} 
      % Commit DAG 
      \gitDAG[grow right sep = 2em]{ 
        A -- { 
          C, 
          B, 
        } 
      }; 
      % Branch 
      \gitbranch 
        {experimental} % node name and text 
        {above=of C} % node placement 
        {C} % target 
      \gitbranch 
        {master} % node name and text 
        {below=of B} % node placement 
        {B} % target 
      % HEAD reference 
      \gitHEAD 
        {below=of master} % node placement 
        {master} % target     
    \end{tikzpicture} 
  \end{subfigure} 
\end{figure} 
\end{document}

Breaking it down into chunks, the content is framed by what is more or less boilerplate:

\documentclass[preview]{standalone} 
\usepackage{subcaption} 
\usepackage{gitdags} 
\begin{document} 
\begin{figure} 
  \begin{subfigure}[b]{\textwidth} 
  \centering 
    \begin{tikzpicture} 
[...] 
    \end{tikzpicture} 
  \end{subfigure} 
\end{figure} 
\end{document}

The subcaption package might be needed by a more advanced diagram, but is not necessary for this particular one.

Next, the nodes and their links are specified in a \gitDAG section:

\gitDAG[grow right sep = 2em]{ 
   A -- { 
     C, 
     B, 
   } 
 };

The nodes are linked by a simple pair of dashes. The arrows are put in for you.

The curlies indicate a division into branches, where each line represents one line of development.

If you want to merge two branches together, give the final node of each line the same name, like this:

A -- { 
  C -- D, 
  B -- D, 
}

You can ‘grow’ the graph down, up, or left (as well as right, as above), and make the separation larger or smaller by changing the 2em value.
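For example, to lay the same DAG out downwards with tighter spacing, you might write something like this (a sketch – grow down sep is a standard TikZ graphs option that gitdags passes through):

\gitDAG[grow down sep = 1em]{ 
  A -- { 
    C, 
    B, 
  } 
};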

You can also ‘fade them out’ by marking them as ‘unreachable’:

[nodes=unreachable] D -- E

There are four main types of ‘external pointer’ node:

\gittag
\gitremotebranch
\gitbranch
\gitHEAD

The comments on the \gittag example here are mostly self-explanatory, and apply to all four (apart from \gitHEAD – see below):

\gittag
  [v0p1]       % node name
  {v0.1}       % node text
  {above=of A} % node placement
  {A}          % target

The ‘target’ line refers to where the arrow points, and the ‘node placement’ line refers to where the ‘arrow node’ is positioned – it can be above=of or below=of as well as left=of or right=of.

\gitHEAD takes no node name or text arguments – it just puts HEAD as the text.

Other types of node are available, but there’s no documentation on them that I can find, and literally no mention of them on GitHub anywhere outside the original source. I may try and figure them out later.

Credit

This is based on the great work of Chris Freeman here.

And of course the original gitdags work here.

See also here for a StackOverflow discussion.

Without the above I’d have been floundering in a sea of LaTeX ignorance.


If you want to learn more about git, read my book Learn Git the Hard Way, available for $8:



If you liked this post, you might also like these:

Five Key Git Concepts Explained the Hard Way

Ten More Things I Wish I’d Known About bash

Project Management as Code with Graphviz

A Non-Cloud Serverless Application Pattern Using Git and Docker

Power ‘git log’ graphing


 


Five Things I Did to Change a Team’s Culture

Culture – Be Specific!

People often talk about culture being the barrier to adoption of DevOps, but they are rarely specific about this.

This was succinctly put by Charity Majors here:

[tweet from Charity Majors]

What to Do?

Here I discuss a few things I did a few years ago to try to change the culture of a demoralised and dysfunctional centralised IT team, which I managed following the sudden departure of the IT Director.

Whether it worked or not I don’t know – you’d have to ask the team (and I was poached a couple of months after I started), but I felt a big difference pretty quickly.

1) Get on the Floor

The first thing I did was spend two weeks doing triage of incoming requests. This had a few useful effects.

  • I saw one of the two main pipelines of work into the team

The IT team was working on 1) Requests received via tickets and 2) Out-of-band requests from management (“Can you just implement a new video conferencing system? Thanks.”)

Getting a handle on 1) was the shortest path to get savings fast, so I started there. Number 2) was going to be a tougher nut to crack (mostly finding ways to say ‘no’ without getting fired). Improving 1) would help with 2).

  • I discovered the triage process was broken

The triage process was not serving its purpose. It had been given to a weaker member of staff because no-one else wanted to do it, and he was not adding any value by thinking about what was being presented to him.

I put some controls into the process from above and moved the duty around the team.

  • The ticket count dropped by 75%

I cut the open tickets by 75% in a week by deduplicating and applying simple call queue techniques to the backlog. Dropping that number didn’t drop the work by 75% (probably more like 30-40%), but it improved morale and focus significantly. I also implemented some of the techniques talked about here to reduce running costs.

  • I was seen as someone who wanted to get involved

While I had to be careful not to get into the weeds, by getting my hands dirty my credibility with the team grew.

More importantly, I could start to challenge them when I didn’t buy what they were saying. They had become used to pulling out certain excuses for failure. This wasn’t because there were not good reasons, but because they had felt ignored for so long they had stopped trying to engage openly. That culture needed to change, and being able to argue from within was critical to achieving that.

2) Move People to Other Teams

One of the things I’m absolutely certain of is that a critical feature of effective complex organisations is that they make people do all the jobs.

Only when people have seen things from all angles can they make real and effective adaptations to changing circumstances or effect real change within a complex organisation.

There’s an incredibly powerful talk here by John Allspaw where he discusses how the Navy does this to help solve the challenges aircraft carriers face:

‘So you want to understand an aircraft carrier. Imagine a busy day, and you shrink San Francisco airport to one short runway, one ramp, and one gate. Make planes take off and land at the same time at half the present time interval, rock the runway from side to side, and require that everyone that leaves returns that same day. Make sure the equipment is so close to the edge of the envelope that it’s fragile, then turn off the radar to avoid detection, impose strict controls on radios, fuel the aircraft in place with their engines running, have enemies in the air and scatter live bombs and rockets around. Now wet the whole thing down with salt water and oil and man it with 20 year olds, half of whom have never seen a plane close up. Oh, and by the way: try not to kill anyone.’

(See 19 minutes in for this part of the talk.)

I made the IT staff go and sit with the developers for a couple of weeks as soon as I could. The resistance I got to this idea, even among the keen ones, was deeply surprising to me. There was a profound tendency to put others on a pedestal and fear humiliation by going outside their comfort zone.

The results, however, were immediate. Relations between teams improved dramatically, and areas of tension that had been bubbling for years got resolved as IT staff had seen things ‘from the other side’, which changed their view of why blockers should be removed, and – equally important – how they could be removed by more creative means that the ‘other side’ could not see. Once staff saw the drivers of frustration, they could implement solutions for the problem itself, and not necessarily what was being asked for.

3) Remove Bad Influences

People don’t like to talk about this, but one of the most effective ways to change culture is to fire people.

There’s a probably apocryphal story about an Orson Welles trick, where he would get a stooge to show up to work on the first day on a shoot, do something Welles didn’t want, and Welles would fire him.

The message to the crew would be unambiguous: my way, or the highway.

That’s obviously an extreme example, but I’ve seen the powerful effects of removing people who are obstructing change. That doesn’t mean you don’t follow due process, or give people clear warnings, or help them to mend their ways, but nothing sends a message of ‘I disapprove of this bad behaviour’ better than dealing with it firmly.

And check point 6 on this deck about Jeff Bezos’ mandate to change the way Amazon worked:

Anyone who doesn’t do this will be fired.

One of the first questions I generally ask myself when considering the latest attempt from on high to bring cultural change to my group is: what change here would get me fired?

4) Take Responsibility for Hiring

As with firing, who comes into the team is vital. I was shocked to discover that it was not considered standard to have the overall manager personally vet new hires.

While I didn’t know my Active Directory from my LDAP, I did know the difference between a bright young thing and an irritating know-all, so I took responsibility for any new hires. I deferred to my colleagues on knowledge calls, but that was not often a deciding factor either way. Far more important was how useful they would make themselves.

5) Take Responsibility for Training

There’s a great quote about training from Andy Grove, former CEO of Intel:

Training is the manager’s job. Training is the highest leverage activity a manager can do to increase the output of an organization. If a manager spends 12 hours preparing training for 10 team members that increases their output by 1% on average, the result is 200 hours of increased output from the 10 employees (each works about 2000 hours a year). Don’t leave training to outsiders, do it yourself.

And training isn’t just about being in a room and explaining things to people – it’s about getting in the field and showing people how to respond to problems, how to think about things, and where they need to go next. The point is: take ownership of it.

I personally trained people in things like Git and Docker and basic programming whenever I got the chance to. This can demystify these skills and empower your staff to go further. It also sends a message about what’s important – if the boss spends time on triage, training and hiring, then they must be important.

Anything else?

What have you done to change culture in a group? Let me know.

 


If you want to learn more about bash, read my book Learn Bash the Hard Way, available for $5:


Or my book on Docker, Docker in Practice.

Centralise Your Bash History

Why?

Have you ever run a command on one of your hosts and then wanted to retrieve it later? Then couldn’t remember where you ran it, or found it had been lost from your history?

This happens to me all the time. The other day I was hunting for a command I was convinced I’d run, but wasn’t sure where it was or whether it was stored.

So I finally wrote a service that records my every command centrally.

Here’s an overview of its (simple) architecture.

 

[architecture diagram: history-server]

 

What?

This stores your bash history on a server in a file.

The service runs on a port of your choosing, and if you add some lines to your ~/.bashrc file then it will all work seamlessly for you.

There’s some basic authentication (shared key) to prevent abuse of the service.
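To give a flavour of the ~/.bashrc side, here’s a minimal sketch of the kind of hook involved – the variable names, port and line format below are hypothetical placeholders, so see the README for the real lines:

# Hypothetical placeholders – the real lines are in the README.
HISTHOST=myserver.example.com
HISTPORT=63666
HISTKEY=mysharedsecret
# Before each prompt, ship the latest history entry to the central server.
PROMPT_COMMAND='echo "$HISTKEY $(hostname) $USER $(history 1)" | socat - "TCP:$HISTHOST:$HISTPORT"'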

How?

To set up, see the README.

Requirements

Needs socat installed, and bash version 4+.
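For the curious, the server side can be as simple as a socat listener appending whatever arrives to a file – again a hedged sketch, not the project’s actual invocation:

# Listen on the chosen port, fork per connection, append each line received.
socat TCP-LISTEN:63666,fork,reuseaddr SYSTEM:'cat >> /var/lib/history-server/history.log'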

Related posts

Ten Things About Bash
Ten More Things About Bash

PS

I need help to improve this – see the README.

PPS

If you ask nicely I might host it for you, without warranty etc.

 


If you want to learn more about bash, read my book Learn Bash the Hard Way, available for $5:


How (and Why) I Run My Own DNS Servers

 

Introduction

Despite my woeful knowledge of networking, I run my own DNS servers for my own websites, which are served from home.

I achieved this through trial and error and now it requires almost zero maintenance, even though I don’t have a static IP at home.

Here I share how (and why) I persist in this endeavour.

Overview

This is an overview of the setup:

[diagram: DNS setup overview]

This is how I set up my DNS. I:

  • got a domain from an authority (a .tk domain in my case)
  • set up glue records to defer DNS queries to my nameservers
  • set up nameservers with static IPs
  • set up a dynamic DNS updater from home

How?

Walking through step-by-step how I did it:

1) Set up two Virtual Private Servers (VPSes)

You will need two stable machines with static IP addresses.

If you’re not lucky enough to have these in your possession, then you can set them up on the cloud. I used this site, but there are plenty out there. NB I asked them, and their IPs are static per VPS. I use the cheapest cloud VPS ($1/month) and set up Debian on there.

NOTE: Replace any mention of DNSIP1 and DNSIP2 below with the first and second static IP addresses you are given.

Log on and set up root password

SSH to the servers and set up a strong root password.

2) Set up domains

You will need two domains: one for your dns servers, and one for the application running on your host.

I use dot.tk to get free throwaway domains. In this case, I might set up a myuniquedns.tk DNS domain and a myuniquesite.tk site domain.

Whatever you choose, replace your DNS domain when you see YOURDNSDOMAIN below. Similarly, replace your app domain when you see YOURSITEDOMAIN below.

3) Set up a ‘glue’ record

If you use dot.tk as above, then to allow you to manage the YOURDNSDOMAIN domain you will need to set up a ‘glue’ record.

What this does is tell the current domain authority (dot.tk) to defer to your nameservers (the two servers you’ve set up) for this specific domain. Glue records are needed because your nameservers’ names live inside the very domain they serve, so the parent registry has to hand out their IP addresses directly – otherwise a resolver would keep getting referred back to the .tk servers for the IP.

See here for a fuller explanation.

Another good explanation is here.

To do this you need to check with the authority responsible how this is done, or become the authority yourself.

dot.tk has a web interface for setting up a glue record, so I used that.

There, you need to go to ‘Manage Domains’ => ‘Manage Domain’ => ‘Management Tools’ => ‘Register Glue Records’ and fill out the form.

Your two hosts will be called ns1.YOURDNSDOMAIN and ns2.YOURDNSDOMAIN, and their glue records will point to DNSIP1 and DNSIP2 respectively.

Note, you may need to wait a few hours (or longer) for this to take effect. If really unsure, give it a day.
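You can check whether the glue has taken effect with dig – the +trace flag follows the delegation down from the root servers, so you can see what the .tk servers hand out for your domain:

$ dig +trace NS YOURDNSDOMAIN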




4) Install bind on the DNS Servers

On a Debian machine (for example), and as root, type:

apt install bind9

bind is the domain name server software you will be running.

5) Configure bind on the DNS Servers

Now, this is the hairy bit.

There are two parts to this, with two files involved: named.conf.local, and the db.YOURDNSDOMAIN file.

They are both in the /etc/bind folder. Navigate there and edit these files.

Part 1 – named.conf.local

This file lists the ‘zones’ (domains) served by your DNS servers.

It also defines whether this bind instance is the ‘master’ or the ‘slave’. I’ll assume ns1.YOURDNSDOMAIN is the ‘master’ and ns2.YOURDNSDOMAIN is the ‘slave’.

Part 1a – the master

On the master/ns1.YOURDNSDOMAIN, the named.conf.local should be changed to look like this:

zone "YOURDNSDOMAIN" {
 type master;
 file "/etc/bind/db.YOURDNSDOMAIN";
 allow-transfer { DNSIP2; };
};
zone "YOURSITEDOMAIN" {
 type master;
 file "/etc/bind/YOURDNSDOMAIN";
 allow-transfer { DNSIP2; };
};

zone "14.127.75.in-addr.arpa" {
 type master;
 notify no;
 file "/etc/bind/db.75";
 allow-transfer { DNSIP2; };
};

logging {
 channel query.log {
 file "/var/log/query.log";
 // Set the severity to dynamic to see all the debug messages.
 severity debug 3;
 };
category queries { query.log; };
};

The logging at the bottom is optional (I think). I added it a while ago, and I leave it in here for interest. The 14.127.75.in-addr.arpa stanza is a reverse DNS zone (mapping IP addresses back to names) – it isn’t needed for serving the domains above.

Part 1b – the slave

On the slave/ns2.YOURDNSDOMAIN, the named.conf.local should be changed to look like this:

zone "YOURDNSDOMAIN" {
 type slave;
 file "/var/cache/bind/db.YOURDNSDOMAIN";
 masters { DNSIP1; };
};

zone "YOURSITEDOMAIN" {
 type slave;
 file "/var/cache/bind/db.YOURSITEDOMAIN";
 masters { DNSIP1; };
};

zone "14.127.75.in-addr.arpa" {
 type slave;
 file "/var/cache/bind/db.75";
 masters { DNSIP1; };
};

 

Part 2 – db.YOURDNSDOMAIN

Now we get to the meat – your DNS database is stored in this file.

On the master/ns1.YOURDNSDOMAIN the db.YOURDNSDOMAIN file looks like this:

$TTL 4800
@ IN SOA ns1.YOURDNSDOMAIN. YOUREMAIL.YOUREMAILDOMAIN. (
  2018011615 ; Serial
  604800 ; Refresh
  86400 ; Retry
  2419200 ; Expire
  604800 ) ; Negative Cache TTL
;
@ IN NS ns1.YOURDNSDOMAIN.
@ IN NS ns2.YOURDNSDOMAIN.
ns1 IN A DNSIP1
ns2 IN A DNSIP2
YOURSITEDOMAIN. IN A YOURDYNAMICIP

On the slave/ns2.YOURDNSDOMAIN it’s essentially the same file. I can’t remember whether any differences from the master’s version are actually needed or not…:

$TTL 4800
@ IN SOA ns1.YOURDNSDOMAIN. YOUREMAIL.YOUREMAILDOMAIN. (
  2018011615 ; Serial
  604800 ; Refresh
  86400 ; Retry
  2419200 ; Expire
  604800 ) ; Negative Cache TTL
;
@ IN NS ns1.YOURDNSDOMAIN.
@ IN NS ns2.YOURDNSDOMAIN.
ns1 IN A DNSIP1
ns2 IN A DNSIP2
YOURSITEDOMAIN. IN A YOURDYNAMICIP

A few notes on the above:

  • The dots at the end of lines are not typos – this is how fully-qualified domains are written in bind files. So google.com is written as google.com. – with a trailing dot.
  • The YOUREMAIL.YOUREMAILDOMAIN. part must be replaced by your own email address, with the @ replaced by a dot. For example, my email address ian.miell@gmail.com becomes ianmiell.gmail.com. (note that the dot between first and last name is dropped – Gmail ignores those anyway!)
  • YOURDYNAMICIP is the IP address your domain should be pointed to (ie the IP address returned by the DNS server). It doesn’t matter what it is at this point, because….

the next step is to dynamically update the DNS server with your dynamic IP address whenever it changes.

6) Copy ssh keys

Before setting up your dynamic DNS you need to set up your ssh keys so that your home server can access the DNS servers.

NOTE: This is not security advice. Use at your own risk.

First, check whether you already have an ssh key generated:

ls ~/.ssh/id_rsa

If that returns a file, you’re all set up. Otherwise, type:

ssh-keygen

and accept the defaults.

Then, once you have a key set up, copy your ssh ID to the nameservers:

ssh-copy-id root@DNSIP1
ssh-copy-id root@DNSIP2

You will be prompted for the root password on each command.

7) Create an IP updater script

Now ssh to both servers and place this script in /root/update_ip.sh:

#!/bin/bash
# Error out if the IP argument is missing (unset variables become fatal).
set -o nounset
# Replace the IP in the ' IN A ' records with the new IP passed in as $1.
sed -i "s/^\(.*\) IN A \(.*\)$/\1 IN A $1/" /etc/bind/db.YOURDNSDOMAIN
# Bump the zone's serial number so other servers know the zone has changed.
sed -i "s/.*Serial$/ $(date +%Y%m%d%H) ; Serial/" /etc/bind/db.YOURDNSDOMAIN
/etc/init.d/bind9 restart

Make it executable by running:

chmod +x /root/update_ip.sh

Going through it line by line:

  • set -o nounset

This line throws an error if the IP is not passed in as the argument to the script.

  • sed -i "s/^\(.*\) IN A \(.*\)$/\1 IN A $1/" /etc/bind/db.YOURDNSDOMAIN

Replaces the IP address with the contents of the first argument to the script.

  • sed -i "s/.*Serial$/ $(date +%Y%m%d%H) ; Serial/" /etc/bind/db.YOURDNSDOMAIN

Bumps the zone’s ‘serial number’, so that other DNS servers and caches know the zone has changed.

  • /etc/init.d/bind9 restart

Restart the bind service on the host.

8) Cron Your Dynamic DNS

At this point you’ve got access to update the IP when your dynamic IP changes, and the script to do the update.

Here’s the raw cron entry:

* * * * * curl ifconfig.co 2>/dev/null > /tmp/ip.tmp && (diff /tmp/ip.tmp /tmp/ip || (mv /tmp/ip.tmp /tmp/ip && ssh root@DNSIP1 "/root/update_ip.sh $(cat /tmp/ip)")); curl ifconfig.co 2>/dev/null > /tmp/ip.tmp2 && (diff /tmp/ip.tmp2 /tmp/ip2 || (mv /tmp/ip.tmp2 /tmp/ip2 && ssh root@DNSIP2 "/root/update_ip.sh $(cat /tmp/ip2)"))

Breaking this command down step by step:

curl ifconfig.co 2>/dev/null > /tmp/ip.tmp

This curls a ‘what is my IP address’ site, and deposits the output to /tmp/ip.tmp

(diff /tmp/ip.tmp /tmp/ip || (mv /tmp/ip.tmp /tmp/ip && ssh root@DNSIP1 "/root/update_ip.sh $(cat /tmp/ip)"))

This diffs the contents of /tmp/ip.tmp with /tmp/ip (which holds the last-updated IP address, and is yet to be created on first run). If they differ (ie your IP address has changed), then the subshell is run: it overwrites /tmp/ip with the new address, and then ssh’es onto the master DNS server to run update_ip.sh with the new IP address as its argument.

The same process is then repeated for DNSIP2 using separate files (/tmp/ip.tmp2 and /tmp/ip2).
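At this point you can check the whole chain by querying each nameserver directly for your site’s A record (+short prints just the IP):

$ dig @DNSIP1 YOURSITEDOMAIN +short
$ dig @DNSIP2 YOURSITEDOMAIN +short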

 

Why!?

You may be wondering why I do this in the age of cloud services and outsourcing. There’s a few reasons.

It’s Cheap

The cost of running this stays at the cost of the two nameservers ($24/year), no matter how many domains I manage and whatever I want to do with them.

Learning

I’ve learned a lot by doing this, probably far more than any course would have taught me.

More Control

I can do what I like with these domains: set up any number of subdomains, try my hand at secure mail techniques, experiment with obscure DNS records and so on.

I could extend this into a service. If you’re interested, my rates are very low :)


If you like this post, you might be interested in my book Learn Bash the Hard Way, available here for just $5.



If you liked this post, you might also like:

Create Your Own Git Diagrams

Ten Things I Wish I’d Known About bash

Ten More Things I Wish I’d Known About bash

Project Management as Code with Graphviz

Ten Things I Wish I’d Known Before Using Jenkins Pipelines


 

Ten More Things I Wish I'd Known About bash

Intro

My previous post took off far more than I expected, so I thought I’d write another piece on less well-known bash features.

As I said before, because I felt bash is an often-used (and under-understood) technology, I ended up writing a book on it while researching it. It’s really gratifying to know that other people think it’s important too, despite it being un-hip.

A preview of the book is available here. It focusses more than these articles do on drilling in the concepts until you understand them, to take your bash usage to a higher level. This article is written more for ‘fun’.

HN discussion here.


 

1) ^x^y^

A gem I use all the time.

Ever typed anything like this?

$ grp somestring somefile
-bash: grp: command not found

Sigh. Hit ‘up’, then ‘left’ until you’re at the ‘p’, then type ‘e’ and hit return.

Or do this:

$ ^rp^rep^
grep somestring somefile
$

One subtlety you may want to note though is:

$ grp rp somefile
$ ^rp^rep^
grep rp somefile

That only replaced the first occurrence of rp. If you wanted every rp replaced (so that rep is also the string searched for), then you’ll need to dig into the man page and use a more powerful history command:

$ grp rp somefile
$ !!:gs/rp/rep
grep rep somefile
$

Briefly: !! recalls the previous command, and :gs/rp/rep applies a global substitution to it – I’ll leave further exploration of history expansion to you…

 

2) pushd / popd

This one comes in very handy for scripts, especially when operating within a loop.

Let’s say you’re in a for loop moving in and out of folders like this:

for d1 in $(ls -d */)
do
  # Store original working directory.
  original_wd="$(pwd)"
  cd "$d1"
  for d2 in $(ls -d */)
  do
    pushd "$d2"
    # Do something
    popd
  done
  # Return to original working directory
  cd "${original_wd}"
done

You can rewrite the above using the pushd stack like this:

for d1 in $(ls -d */)
do
  pushd "$d1"
  for d2 in $(ls  -d */)
  do
    pushd "$d2"
    # Do something
    popd
  done
  popd
done

Which tracks the folders you’ve pushed and popped as you go.

Note that if there’s an error in a pushd you may lose track of the stack and popd too many times. You probably want to set -e in your script as well (see previous post).

There’s also cd -, but that doesn’t ‘stack’ – it just returns you to the previous folder:

cd ~
cd /tmp
mkdir -p blah
cd blah
cd - # Back to /tmp
cd - # Back to 'blah'
cd - # Back to /tmp
cd - # Back to 'blah' ...

3) shopt vs set

This one bothered me for a while.

What’s the difference between set and shopt?

set we saw before, but shopt looks very similar. Just inputting shopt shows a bunch of options:

$ shopt
cdable_vars    off
cdspell        on
checkhash      off
checkwinsize   on
cmdhist        on
compat31       off
dotglob        off

I found a set of answers here.

Essentially, it looks like it’s a consequence of bash (and other shells) being built on sh, and adding shopt as another way to set extra shell options.
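For example, both builtins toggle options on and off, just with different syntax:

$ set -o noclobber     # set uses -o to turn an option on...
$ set +o noclobber     # ...and (counterintuitively) +o to turn it off
$ shopt -s dotglob     # shopt uses -s to set an option...
$ shopt -u dotglob     # ...and -u to unset it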

But I’m still unsure… if you know the answer, let me know.

4) Here Docs and Here Strings

‘Here docs’ are files created inline in the shell.

The ‘trick’ is simple. Pick a delimiter word, and the lines between the << marker and the point where that word appears alone on a line become a file.

Type this:

$ cat > afile << SOMEENDSTRING
> here is a doc
> it has three lines
> SOMEENDSTRING alone on a line will save the doc
> SOMEENDSTRING
$ cat afile
here is a doc
it has three lines
SOMEENDSTRING alone on a line will save the doc
$

Notice that:

  • the string could be included in the file if it was not ‘alone’ on the line
  • the string SOMEENDSTRING is more normally END, but that is just convention

Lesser known is the ‘here string’:

$ cat > asd <<< 'This file has one line'
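The here string feeds the string to the command’s standard input, so it works with anything that reads stdin, eg:

$ wc -w <<< 'count these words'
3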

 

5) String Variable Manipulation

You may have written code like this before, where you use tools like sed to manipulate strings:

$ VAR='HEADERMy voice is my passwordFOOTER'
$ PASS="$(echo $VAR | sed 's/^HEADER\(.*\)FOOTER/\1/')"
$ echo $PASS
My voice is my password

But you may not be aware that this is possible natively in bash.

This means that you can dispense with lots of sed and awk shenanigans.

One way to rewrite the above is:

$ VAR='HEADERMy voice is my passwordFOOTER'
$ PASS="${VAR#HEADER}"
$ PASS="${PASS%FOOTER}"
$ echo $PASS
My voice is my password

  • The # means ‘match and remove the following pattern from the start of the string’
  • The % means ‘match and remove the following pattern from the end of the string’

The second method is twice as fast as the first on my machine. And (to my surprise), it was roughly the same speed as a similar python script.

If you want to use glob patterns that are greedy (see globbing here) then you double up:

$ VAR='HEADERMy voice is my passwordFOOTER'
$ echo ${VAR##HEADER*}

$ echo ${VAR%%*FOOTER}

$

 

6) Variable Defaults

These are very handy for knocking up scripts.

If you have a variable that’s not set, you can ‘default’ it by using this construct. Create a file called default.sh with these contents:

#!/bin/bash
FIRST_ARG="${1:-no_first_arg}"
SECOND_ARG="${2:-no_second_arg}"
THIRD_ARG="${3:-no_third_arg}"
echo ${FIRST_ARG}
echo ${SECOND_ARG}
echo ${THIRD_ARG}

Now run chmod +x default.sh and run the script with ./default.sh first second.

Observe how the third argument’s default has been assigned, but not the first two.

You can also assign directly with ${VAR:=defaultval} (equals sign, not dash) but note that this won’t work with positional variables in scripts or functions. Try changing the above script to see how it fails.
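With a normal (non-positional) variable, := both returns the default and assigns it:

$ unset MYVAR
$ echo ${MYVAR:=defaultval}
defaultval
$ echo $MYVAR
defaultval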

7) Traps

The trap builtin can be used to ‘catch’ when a signal is sent to your script.

Here’s an example I use in my own cheapci script:

function cleanup() {
    rm -rf "${BUILD_DIR}"
    rm -f "${LOCK_FILE}"
    # Get rid of /tmp detritus not accessed within the last 2 days.
    find "${BUILD_DIR_BASE}"/* -type d -atime +1 | xargs rm -rf
    echo "cleanup done"
}
trap cleanup TERM INT QUIT

Any attempt to CTRL-C, CTRL-\, or terminate the program using the TERM signal will result in cleanup being called first.

Be aware:

  • Trap logic can get very tricky (eg handling signal race conditions)
  • The KILL signal can’t be trapped in this way

But mostly I’ve used this for ‘cleanups’ like the above, which serve their purpose.

8) Shell Variables

It’s well worth getting to know the standard shell variables available to you. Here are some of my favourites:

RANDOM

Don’t rely on this for your cryptography stack, but you can generate random numbers eg to create temporary files in scripts:

$ echo ${RANDOM}
16313
$ # Not enough digits?
$ echo ${RANDOM}${RANDOM}
113610703
$ NEWFILE=/tmp/newfile_${RANDOM}
$ touch $NEWFILE

REPLY

No need to give a variable name for read

$ read
my input
$ echo ${REPLY}
my input

LINENO and SECONDS

Handy for debugging

echo ${LINENO}
115
echo ${SECONDS}; sleep 1; echo ${SECONDS}; echo $LINENO
174380
174381
116

Note that LINENO went up by only one for the whole second line, even though several commands separated by ; were run on it – it counts lines, not commands.

TMOUT

You can time out reads, which can be really handy in some scripts:

#!/bin/bash
TMOUT=5
echo You have 5 seconds to respond...
read
echo ${REPLY:-noreply}

 

9) Extglobs

If you’re really knee-deep in bash, then you might want to power up your globbing. You can do this by setting the extglob shell option. Here’s the setup:

shopt -s extglob
A="12345678901234567890"
B="  ${A}  "

Now see if you can figure out what each of these does:

echo "B      |${B}|"
echo "B#+( ) |${B#+( )}|"
echo "B#?( ) |${B#?( )}|"
echo "B#*( ) |${B#*( )}|"
echo "B##+( )|${B##+( )}|"
echo "B##*( )|${B##*( )}|"
echo "B##?( )|${B##?( )}|"

Now, potentially useful as it is, it’s hard to think of a situation where you’d absolutely want to do it this way. Normally you’d use a tool better suited to the task (like sed) or just drop bash and go to a ‘proper’ programming language like python.

10) Associative Arrays

Talking of moving to other languages, a rule of thumb I use is that if I need arrays then I drop bash to go to python (I even created a Docker container for a tool to help with this here).

What I didn’t know until I read up on it was that you can have associative arrays in bash.

Type this out for a demo:

$ declare -A MYAA=([one]=1 [two]=2 [three]=3)
$ MYAA[one]="1"
$ MYAA[two]="2"
$ echo $MYAA
$ echo ${MYAA[one]}
$ MYAA[one]="1"
$ WANT=two
$ echo ${MYAA[$WANT]}

Note that this is only available in bash version 4 and above.
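Also worth knowing: the ${!MYAA[@]} form gives you the keys, so you can loop over the whole array (the order of the keys is not guaranteed):

$ for key in "${!MYAA[@]}"; do echo "$key = ${MYAA[$key]}"; done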

 


This is based on some of the contents of my book Learn Bash the Hard Way, available for $5:


Preview available here.


I also wrote Docker in Practice 

Get 39% off with the code: 39miell2


If you liked this post, you might also like these:

Ten Things I Wish I’d Known About bash

Centralise Your Bash History

How (and Why) I Run My Own DNS Servers

My Favourite Secret Weapon – strace

A Complete Chef Infrastructure on Your Laptop


 

Ten Things I Wish I’d Known About bash

Intro

Recently I wanted to deepen my understanding of bash by researching as much of it as possible. Because I felt bash is an often-used (and under-understood) technology, I ended up writing a book on it.

A preview is available here.

You don’t have to look hard on the internet to find plenty of useful one-liners in bash, or scripts. And there are guides to bash that seem somewhat intimidating through either their thoroughness or their focus on esoteric detail.

Here I’ve focussed on the things that either confused me or increased my power and productivity in bash significantly, and tried to communicate them (as in my book) in a way that emphasises getting the understanding right.

Enjoy!


1)  `` vs $()

These two operators do the same thing. Compare these two lines:

$ echo `ls`
$ echo $(ls)

Why these two forms existed confused me for a long time.

If you don’t know, both forms substitute the output of the command contained within it into the command.

The principal difference is that nesting is simpler.

Which of these is easier to read (and write)?

    $ echo `echo \`echo \\\`echo inside\\\`\``

or:

    $ echo $(echo $(echo $(echo inside)))

If you’re interested in going deeper, see here or here.

2) globbing vs regexps

Another one that can confuse if never thought about or researched.

While globs and regexps can look similar, they are not the same.

Consider this command:

$ rename -n 's/(.*)/new$1/' *

The two asterisks are interpreted in different ways.

The first is ignored by the shell (because it is in quotes), and is interpreted as ‘0 or more characters’ by the rename application. So it’s interpreted as a regular expression.

The second is interpreted by the shell (because it is not in quotes), and gets replaced by a list of all the files in the current working folder. It is interpreted as a glob.

So by looking at man bash can you figure out why these two commands produce different output?

$ ls *
$ ls .*

The second looks even more like a regular expression. But it isn’t!

3) Exit Codes

Not everyone knows that every time you run a shell command in bash, an ‘exit code’ is returned to bash.

Generally, if a command ‘succeeds’ you get an exit code of 0. If it doesn’t succeed, you get a non-zero code. 1 is a ‘general error’, and others can give you more information (eg which signal killed the process).

But these rules don’t always hold:

$ grep not_there /dev/null
$ echo $?
1
$? is a special bash variable that’s set to the exit code of each command after it runs.

Grep uses exit codes to indicate whether it matched or not: finding a match returns 0, and finding no match returns 1. I have to look up which way round it goes every time.

Grok this and a lot will click into place in what follows.

4) if statements, [ and [[

Here’s another ‘spot the difference’ similar to the backticks one above.

What will this output?

if grep not_there /dev/null
then
    echo hi
else
    echo lo
fi

grep’s use of exit codes makes code like this work intuitively: the then branch runs only if grep found a match.

Now what will this output?

a) hihi
b) lolo
c) something else

if [ $(grep not_there /dev/null) = '' ]
then
    echo -n hi
else
    echo -n lo
fi
if [[ $(grep not_there /dev/null) = '' ]]
then
    echo -n hi
else
    echo -n lo
fi

The difference between [ and [[ was another thing I never really understood. [ is the original form for tests, and then [[ was introduced, which is more flexible and intuitive. In the first if block above, the if statement barfs because the $(grep not_there /dev/null) is evaluated to nothing, resulting in this comparison:

[ = '' ]

which makes no sense. The double bracket form handles this for you.

This is why you occasionally see comparisons like this in bash scripts:

if [ x$(grep not_there /dev/null) = 'x' ]

so that if the command returns nothing it still runs. There’s no need for it if you use the double-bracket form, but that’s why it exists.

5) sets

Bash has configurable options which can be set on the fly. I use two of these all the time:

set -e

exits from a script if any command returned a non-zero exit code (see above).

This outputs the commands that get run as they run:

set -x

So a script might start like this:

#!/bin/bash
set -e
set -x
grep not_there /dev/null
echo $?

What would that script output?

6) <()

This is my favourite. It’s so under-used, perhaps because it can be initially baffling, but I use it all the time.

It’s similar to $() in that the output of the command inside is re-used.

In this case, though, the output is treated as a file. This file can be used as an argument to commands that take files as an argument.

Confused? Here’s an example.

Have you ever done something like this?

$ grep somestring file1 > /tmp/a
$ grep somestring file2 > /tmp/b
$ diff /tmp/a /tmp/b

That works, but instead you can write:

diff <(grep somestring file1) <(grep somestring file2)

Isn’t that neater?

7) Quoting

Quoting’s a knotty subject in bash, as it is in many software contexts.

Firstly, variables in quotes:

A='123'  
echo "$A"
echo '$A'

Pretty simple – double quotes dereference variables, while single quotes go literal.

So what will this output?

mkdir -p tmp
cd tmp
touch a
echo "*"
echo '*'

Surprised? I was.

8) Top three shortcuts

There are plenty of shortcuts listed in man bash, and it’s not hard to find comprehensive lists. This list consists of the ones I use most often, in order of how often I use them.

Rather than trying to memorize them all, I recommend picking one, and trying to remember to use it until it becomes unconscious. Then take the next one. I’ll skip over the most obvious ones (eg !! – repeat last command, and ~ – your home directory).

!$

I use this dozens of times a day. It repeats the last argument of the last command. If you’re working on a file and can’t be bothered to re-type it command after command, it can save a lot of work:

grep somestring /long/path/to/some/file/or/other.txt
vi !$

 

!:1-$

This bit of magic takes this further. It takes all the arguments to the previous command and drops them in. So:

grep isthere /long/path/to/some/file/or/other.txt
egrep !:1-$
fgrep !:1-$

The ! means ‘look at the previous command’, the : is a separator, and the 1 means ‘take the first word’, the - means ‘until’ and the $ means ‘the last word’.

Note: you can achieve the same thing with !*. Knowing the above gives you the control to limit to a specific contiguous subset of arguments, eg with !:2-3.

:h

I use this one a lot too. If you put it after a filename, it strips the filename off, leaving the path to its folder. Like this:

grep isthere /long/path/to/some/file/or/other.txt
cd !$:h

which can save a lot of work in the course of the day.

9) startup order

The order in which bash runs startup scripts can cause a lot of head-scratching. I keep this diagram handy (from this great page):

[diagram: bash startup file order]

It shows which scripts bash decides to run from the top, based on decisions made about the context bash is running in (which decides the colour to follow).

So if you are in a local (non-remote), non-login, interactive shell (eg when you run bash itself from the command line), you are on the ‘green’ line, and these are the order of files read:

/etc/bash.bashrc
~/.bashrc
[bash runs, then terminates]
~/.bash_logout

This can save you a hell of a lot of time debugging.
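If you want to see the order for yourself, one throwaway experiment (back up your dotfiles first) is to add a marker line to each candidate file and watch which markers print when you start different kinds of shell:

$ echo 'echo sourcing ~/.bashrc' >> ~/.bashrc
$ echo 'echo sourcing ~/.bash_profile' >> ~/.bash_profile
$ bash           # non-login interactive: the ~/.bashrc marker prints
$ bash --login   # login shell: the ~/.bash_profile marker prints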

10) getopts (cheapci)

If you go deep with bash, you might end up writing chunky utilities in it. If you do, then getting to grips with getopts can pay large dividends.

For fun, I once wrote a script called cheapci which works like a Jenkins job.

The code here implements the reading of the two required and 14 optional arguments. Better to learn this than to build up a bunch of bespoke code that can get very messy pretty quickly as your utility grows.
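Here’s a minimal sketch of the getopts pattern to give you the idea – the flags and variable names are made up for illustration, not cheapci’s actual ones:

#!/bin/bash
# -r takes a value and is required; -v is an optional flag.
required=''
verbose=0
while getopts 'r:v' flag
do
  case "${flag}" in
    r) required="${OPTARG}" ;;
    v) verbose=1 ;;
    *) echo "usage: $0 -r value [-v]" >&2; exit 1 ;;
  esac
done
# Discard the parsed options, leaving any positional arguments.
shift $((OPTIND - 1))
[ -n "${required}" ] || { echo '-r is required' >&2; exit 1; }
echo "required=${required} verbose=${verbose} remaining=$*"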


This is based on some of the contents of my book Learn Bash the Hard Way, available for $7:


Preview available here.


I also wrote Docker in Practice 

Get 39% off with the code: 39miell2