Download a Free Sample of Learn Bash the Hard Way

Click here to download a free copy of Learn Bash the Hard Way.

The full book is available for $5 on Leanpub. Buying the eBook entitles you to all future updates and editions as well.

Feedback welcome @ianmiell or on LinkedIn.

See also Ten Things I Wish I’d Known About bash

Ten Things I Wish I’d Known About bash

Intro

Recently I wanted to deepen my understanding of bash by researching as much of it as possible. Because I felt bash is an often-used (and under-understood) technology, I ended up writing a book on it.

A preview is available here.

You don’t have to look hard on the internet to find plenty of useful bash one-liners and scripts. And there are plenty of guides to bash that can seem intimidating, whether through their thoroughness or their focus on esoteric detail.

Here I’ve focussed on the things that either confused me or increased my power and productivity in bash significantly, and tried to communicate them (as in my book) in a way that emphasises getting the understanding right.

Enjoy!

1) `` vs $()

These two operators do the same thing. Compare these two lines:

$ echo `ls`
$ echo $(ls)

Why these two forms existed confused me for a long time.

If you don’t know, both forms substitute the output of the command contained within them into the enclosing command.

The principal difference is that nesting is simpler with the $() form.

Which of these is easier to read (and write)?

    $ echo `echo \`echo \\\`echo inside\\\`\``

or:

    $ echo $(echo $(echo $(echo inside)))

If you’re interested in going deeper, see here or here.

2) globbing vs regexps

Another distinction that can confuse if you’ve never thought about it or researched it.

While globs and regexps can look similar, they are not the same.

Consider this command:

$ rename -n 's/(.*)/new$1/' *

The two asterisks are interpreted in different ways.

The first is left alone by the shell (because it is inside quotes) and passed through to the rename application, which interprets it as part of a regular expression: the dot means ‘any character’, and the asterisk means ‘zero or more of the preceding’.

The second is interpreted by the shell (because it is not in quotes), and gets replaced by a list of all the files in the current working folder. It is interpreted as a glob.
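To see the two interpretations side by side, try this at a prompt (a minimal sketch; the filenames are invented):

$ touch file1 file2
$ ls file*
file1  file2
$ ls | grep 'file.'
file1
file2

The first asterisk is a glob the shell expands to matching filenames; the pattern given to grep is a regular expression, where ‘file.’ means ‘file’ followed by any single character.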

So by looking at man bash can you figure out why these two commands produce different output?

$ ls *
$ ls .*

The second looks even more like a regular expression. But it isn’t!

3) Exit Codes

Not everyone knows that every time you run a shell command in bash, an ‘exit code’ is returned to bash.

Generally, if a command ‘succeeds’ you get an exit code of 0. If it doesn’t succeed, you get a non-zero code: 1 is a ‘general error’, and other codes can carry more specific information (eg which signal killed the process).

But these rules don’t always hold:

$ grep not_there /dev/null
$ echo $?

$? is a special bash variable that’s set to the exit code of each command after it runs.

Grep uses exit codes to indicate whether it matched or not. I have to look up every time which way round it goes: does finding a match return 0, or does not finding one?
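For the record, a match returns 0, no match returns 1, and 2 signals an error. A quick sketch to confirm:

$ echo hello | grep hello > /dev/null; echo $?
0
$ echo hello | grep bye > /dev/null; echo $?
1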

Grok this and a lot will click into place in what follows.

4) if statements, [ and [[

Here’s another ‘spot the difference’ similar to the backticks one above.

What will this output?

if grep not_there /dev/null
then
    echo hi
else
    echo lo
fi

grep’s use of exit codes is precisely what makes a conditional like this read so intuitively.

Now what will this output?

a) hihi
b) lolo
c) something else

if [ $(grep not_there /dev/null) = '' ]
then
    echo -n hi
else
    echo -n lo
fi
if [[ $(grep not_there /dev/null) = '' ]]
then
    echo -n hi
else
    echo -n lo
fi

The difference between [ and [[ was another thing I never really understood. [ is the original form for tests, and [[ was introduced later as a more flexible and intuitive alternative. In the first if block above, the if statement barfs because $(grep not_there /dev/null) evaluates to nothing, and, unquoted, that leaves this comparison:

[ = '' ]

which makes no sense. The double bracket form handles this for you.

This is why you occasionally see comparisons like this in bash scripts:

if [ x$(grep not_there /dev/null) = 'x' ]

so that if the command returns nothing, both sides still contain an x and the test still parses. With [[ there’s no need for the trick, but that’s why it exists.
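The more common fix with single brackets is simply to double-quote the substitution, so an empty result still leaves a valid (empty) string on the left-hand side:

if [ "$(grep not_there /dev/null)" = '' ]
then
    echo -n hi
else
    echo -n lo
fi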

5) sets

Bash has configurable options which can be set on the fly. I use two of these all the time:

set -e

exits the script if any command returns a non-zero exit code (see above).
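For example, this minimal sketch never reaches the final echo, because grep exits with a non-zero code:

#!/bin/bash
set -e
grep not_there /dev/null   # exits with code 1, so set -e stops the script here
echo "never printed"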

This outputs the commands that get run as they run:

set -x

So a script might start like this:

#!/bin/bash
set -e
set -x
grep not_there /dev/null
echo $?

What would that script output?

6) <()

This is my favourite. It’s so under-used, perhaps because it can be initially baffling, but I use it all the time.

It’s similar to $() in that the output of the command inside is re-used.

In this case, though, the output is treated as a file. This file can be used as an argument to commands that take files as an argument.

Confused? Here’s an example.

Have you ever done something like this?

$ grep somestring file1 > /tmp/a
$ grep somestring file2 > /tmp/b
$ diff /tmp/a /tmp/b

That works, but instead you can write:

diff <(grep somestring file1) <(grep somestring file2)

Isn’t that neater?
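Process substitution also works anywhere else a filename is expected. For example (a sketch), you can feed a while loop without the subshell you’d get by piping into it:

while read -r line
do
    echo "got: $line"
done < <(ls)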

7) Quoting

Quoting’s a knotty subject in bash, as it is in many software contexts.

Firstly, variables in quotes:

A='123'  
echo "$A"
echo '$A'

Pretty simple – double quotes dereference variables, while single quotes go literal.

So what will this output?

mkdir -p tmp
cd tmp
touch a
echo "*"
echo '*'

Surprised? I was. Both lines print a literal *: globbing, unlike variable expansion, is suppressed by double quotes as well as single ones, so only an unquoted * would have been expanded to the file a.

8) Top three shortcuts

There are plenty of shortcuts listed in man bash, and it’s not hard to find comprehensive lists. This list consists of the ones I use most often, in order of how often I use them.

Rather than trying to memorize them all, I recommend picking one, and trying to remember to use it until it becomes unconscious. Then take the next one. I’ll skip over the most obvious ones (eg !! – repeat last command, and ~ – your home directory).

!$

I use this dozens of times a day. It expands to the last argument of the last command. If you’re working on a file and can’t be bothered to re-type its name command after command, it can save a lot of work:

grep somestring /long/path/to/some/file/or/other.txt
vi !$

 

!:1-$

This bit of magic takes this further. It takes all the arguments to the previous command and drops them in. So:

grep isthere /long/path/to/some/file/or/other.txt
egrep !:1-$
fgrep !:1-$

The ! means ‘look at the previous command’, the : is a separator, the 1 means ‘take the first word’, the - means ‘until’, and the $ means ‘the last word’.

Note: you can achieve the same thing with !*. Knowing the above gives you the control to limit to a specific contiguous subset of arguments, eg with !:2-3.
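For example (a sketch; note that bash prints the expanded line before running it):

$ touch /tmp/a /tmp/b /tmp/c
$ ls !:2-3
ls /tmp/b /tmp/c
/tmp/b  /tmp/c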

:h

I use this one a lot too. If you put it after a filename, it chops off the filename itself, leaving the path to the containing folder. Like this:

grep isthere /long/path/to/some/file/or/other.txt
cd !$:h

which can save a lot of work in the course of the day.

9) startup order

The order in which bash runs startup scripts can cause a lot of head-scratching. I keep this diagram handy (from this great page):

[diagram: shell-startup-actual — which startup files bash reads, by context]

It shows which scripts bash decides to run, from the top, based on the context bash is running in (which determines the coloured path to follow).

So if you are in a local (non-remote), non-login, interactive shell (eg when you run bash itself from the command line), you are on the ‘green’ line, and this is the order in which the files are read:

/etc/bash.bashrc
~/.bashrc
[bash runs, then terminates]
~/.bash_logout

This can save you a hell of a lot of time debugging.
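If you want to verify the order on your own machine, a crude but effective sketch is to append a marker line to each candidate file and start a new shell:

$ echo 'echo sourced: ~/.bashrc' >> ~/.bashrc
$ bash
sourced: ~/.bashrc

(Remember to remove the markers afterwards.)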

10) getopts (cheapci)

If you go deep with bash, you might end up writing chunky utilities in it. If you do, then getting to grips with getopts can pay large dividends.

For fun, I once wrote a script called cheapci, which I used in place of a Jenkins job.

The code here implements the reading of the two required and 14 optional arguments. It’s better to learn this than to build up bespoke argument-parsing code that can get very messy as your utility grows.
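If you haven’t met getopts, the basic pattern looks like this (a minimal sketch; the -q and -f options are invented, not cheapci’s actual interface):

#!/bin/bash
quiet=0
file=''
while getopts 'qf:' opt
do
    case "$opt" in
        q) quiet=1 ;;                # a simple flag
        f) file="$OPTARG" ;;         # an option that takes a value
        *) echo "usage: $0 [-q] [-f file] [args...]" >&2; exit 1 ;;
    esac
done
shift $((OPTIND-1))                  # $@ now contains only the non-option arguments
echo "quiet=$quiet file=$file remaining: $@"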


This is based on some of the contents of my book Learn Bash the Hard Way, available for $5:

Preview available here.


I also wrote Docker in Practice 

Get 39% off with the code: 39miell2

 

Project Management as Code with Graphviz

tl;dr

My team and I have been using graphviz and git to perform project management tasks.

It has numerous benefits, including:

  • Asynchronous project updates (ie fewer meetings)
  • Improved updates for users
  • Visualisation of project complexity for stakeholders and team
  • Assumptions challenged and surfaced
  • Progress measurable using git itself (eg git log)

HackerNews Discussion here


Background

Recently I’ve had to take on some project management tasks, managing engineering for a relatively large-scale project in a large enterprise covering a wide variety of use cases and demands.

One of the biggest challenges was how to express the dependencies that needed to be overcome to get the business outcomes the stakeholders wanted. Cries of ‘we just want x’ were answered by me repeatedly, with varying degrees of quality, and generally left the stakeholders unsatisfied.

Being a software engineer – and not a project manager – by background or training, I naturally used graphviz instead to create dependency diagrams, and git to manage them.

The examples here are available in source here (PRs welcome), and are based on the ‘project’ of preparing for a holiday.

Simple

We start with a simple graph with a couple of dependencies:

digraph G {
 "Enjoy Holiday" -> "Book tickets"
 "Enjoy Holiday" -> "Pack suitcase night before"
 "Pack suitcase night before" -> "Buy guide book"
 "Pack suitcase night before" -> "Buy electric converter"
}

The file is plain text.

This file can be saved as simple.gv (.gv is for ‘graphviz’) and will generate this graph as a .png if you run dot -Tpng simple.gv > simple.png:

[image: simple.png]

Looking closer at simple.gv:

digraph – Tells graphviz that this is a directed graph, ie the relationships have a direction, indicated by the -> arrows. The arrow can be read as ‘depends on’.

Enjoy Holiday is the name of a node. Whenever this node is referenced in future it is the ‘same’ node. In the case of Pack suitcase night before, you can see that two nodes depend on it, and it depends on one. These relationships are expressed through the arrows in the file.

The dot program is part of the graphviz package. Other commands in the package include neato, circo, fdp, twopi and patchwork. Check man dot to read more about them.
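Since they all read the same input format, you can render every variant of a graph in one go with a small loop like this (a sketch):

for engine in dot neato circo fdp twopi patchwork
do
    $engine -Tpng simple.gv > simple_${engine}.png
done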

This shows how easy it is to create a simple graph from a text file that’s easily stored in git.

Layouts

That top-down layout can be a bit restrictive to some eyes. If so, you can change the layout by using another command in the graphviz package. For example, running neato -Tpng simple.gv > simple.png produces this graph:

[image: simple_undirected.png]

Note how:

  • Enjoy holiday is now nearer the ‘centre’ of the graph
  • The nodes are overlapping (we’ll deal with this later)
  • The arrows have shortened (we’ll deal with this later too)

If you’re fussy about your diagrams you can spend a lot of time fiddling with them like this, so it’s useful to get a feel for what the different commands do.

Colours

We can get more project information into a node by colourising the nodes. I do this with a simple scheme of:

  • green = done
  • orange = in progress
  • red = not started

Here’s an updated .gv file:

digraph G { 
 "EH" [label="Enjoy Holiday",color="red"] 
 "BT" [label="Book tickets",color="green"] 
 "PSNB" [label="Pack suitcase night before",color="red"] 
 "BGB" [label="Buy guide book",color="orange"] 
 "BEC" [label="Buy electric converter",color="orange"] 
 
 "EH" -> "BT" 
 "EH" -> "PSNB" 
 "PSNB" -> "BGB" 
 "PSNB" -> "BEC" 
}

Running the command

dot -Tpng simple_colors.gv > simple_colors.png

on this results in this graph:

[image: simple_colors.png]

Two things have changed here. Referring to the full description of a node gets tiresome, so ‘Enjoy Holiday’ is now referred to by the short name ‘EH’, which is associated with a ‘label’ and a ‘color’:

"EH" [label="Enjoy Holiday",color="red"]

The nodes are defined in this way at the top, and then referred to with their relationships at the end. All sorts of attributes are available.

Nodes

Similarly, you can change the attributes of the nodes in the graph, and of their relationships, in code.

I find that with a complex graph containing some text in each node, rectangular nodes make for better layouts. I also like to specify the distance between nodes, and to prevent them from overlapping (the two ‘problems’ we saw before).

digraph G { 
 ranksep=2.0 
 nodesep=2.0 
 overlap="false" 
 
 node [color="black", shape="rectangle"] 
 
 "EH" [label="Enjoy Holiday",color="red"] 
 "BT" [label="Book tickets",color="green"] 
 "PSNB" [label="Pack suitcase night before",color="red"] 
 "BGB" [label="Buy guide book",color="orange"] 
 "BEC" [label="Buy electric converter",color="orange"] 
 
 "EH" -> "BT" 
 "EH" -> "PSNB" 
 "PSNB" -> "BGB" 
 "PSNB" -> "BEC" 
}

By adding the ranksep and nodesep attributes, we can influence the layout of the graph by specifying the distance between nodes in their rank in the hierarchy, and separation between them. Similarly, overlap prevents the problem we saw earlier with overlapping nodes.

The node line specifies the characteristics of the nodes – in this case rectangular and black by default.

Running the same dot command as above results in this graph:

[image: simple_node.png]

which is arguably uglier than previous ones, but these changes help us as the graphs become more complex.

More Complex Graphs

Compiling this more complex graph with dot:

digraph G {
 ranksep=2.0
 nodesep=2.0
 overlap="false"

 node [color="black", shape="rectangle"]

 "EH" [label="ENJOY HOLIDAY\nWe want to have a good time",color="red"]
 "BTOW" [label="Book time off\nCheck with boss that time off is OK, put in system",color="red"]
 "BFR" [label="Book fancy restaurant\nThe one overlooking the river",color="red"]
 "BPB" [label="Buy phrase book\nThey don't speak English, so need to know how to book",color="red"]
 "BT" [label="Book tickets\nDo this using Expedia",color="green"]
 "PSNB" [label="Pack suitcase night before\nSuitcase in understairs cupboard",color="red"]
 "BGB" [label="Buy guide book\nIdeally the Time Out one",color="orange"]
 "BEC" [label="Buy electric converter\nDon't want to get ripped off at airport",color="orange"]
 "GTS" [label="Go to the shops\nNeed to go to town",color="orange"]
 "GCG" [label="Get cash (GBP)\nAbout 200 quid",color="green"]
 "GCD" [label="Get cash (DOLLARS)\nFrom bureau de change under arches",color="orange"]
 
 "EH" -> "BT"
 "EH" -> "BFR"
 "EH" -> "BTOW"
 "BFR" -> "BPB"
 "BPB" -> "GTS"
 "BPB" -> "GCG"
 "EH" -> "PSNB"
 "EH" -> "GCD"
 "PSNB" -> "BGB"
 "BGB" -> "GTS"
 "PSNB" -> "BEC"
 "BGB" -> "GCG"
 "BEC" -> "GCG"
}

gives this graph:

[image: complex.png]

And with neato:

[image: complex1.png]

You can see the graphs look quite different depending on which layout engine/binary you use. Some may suit your purpose better than others.

Project Planning with PRs

Now that you have a feel for graphing as code, you can check these into git and share them with your team. In our team, each node represents a JIRA ticket, and shows its ID and summary.
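So a node in one of our graphs looks something like this (the ticket ID and summary here are invented):

"PROJ-123" [label="PROJ-123\nMigrate login page to new auth service",color="orange"]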

A big benefit of this is that project updates can be asynchronous. Like many people, I work with engineers across the world, and their ability to communicate updates by this method reduces communication friction considerably.

For example, the other day we had a graph representing our next phase of work that was looking too heavy for one sprint. Rather than calling a meeting and going over each line item, I just asked the engineer responsible to update the graph file and raise a PR for me to review.

We then workshopped the changes over the PR, and only discussed a couple of points over the phone. Fewer meetings, and more content-rich discussions.

Surface Assumptions

Beyond fewer and more effective meetings, another benefit is the objective recording of assumptions within the team. Surprisingly often, I have discovered hidden dependencies through this method that had either not been fully understood or discussed.

It’s also surfaced further items of work required to reach the solution, which has resulted in more (and clearer) tickets being raised that relate to the target solution. The discipline of coding these up helps force them into the open.

Happier Stakeholders

While the team’s understanding of what needs to happen is clearer, stakeholders clamouring for updates are also clearer on what’s blocking the outcomes they want.

Another benefit is an increased confidence in the process. There’s a document that’s readily comprehensible they can dig into if they want to find out more. But the fact that there’s a transparent graph of dependencies usually suffices to persuade people that things are under control.

Alternate Views

Finally, here are some alternate views of the same graph. We’ve already seen dot and neato. Here are the others. I’m not going to explain them technically as I’ve read the man page definitions and am none the wiser. They use words like ‘outerplanar’ and ‘force-directed’. Graph rendering is a complicated affair.

circo

[image: complex_circo.png]

fdp

[image: complex_fdp.png]

twopi

[image: complex_twopi.png]

patchwork

[image: complex_patchwork.png]

Code

Is here.

If you know more than me about graphviz and have any improvements/interesting tweaks/suggestions then please contribute.


Author is currently working on the second edition of Docker in Practice 

Get 39% off with the code: 39miell2

How to Manually Clear Locks in Jenkins

Problem

Recently I hit a bug in Jenkins where locks taken out in a Jenkinsfile were not released if the job was terminated.

I tried:

  • Restarting Jenkins
  • Reinstalling the plugin
  • Removing the locks manually from the top level Jenkins page
  • Raising a bug

None of these worked.

I found a solution that involved manually hacking files.

Solution

  1. Find the file named:
    org.jenkins.plugins.lockableresources.LockableResourcesManager.xml

    in the /var/jenkins_home folder (or wherever Jenkins is installed).

    The entry for the stuck lock will look like this:

    <org.jenkins.plugins.lockableresources.LockableResource>
      <name>cookbook_openshift3_test_lock_1</name>
      <description></description>
      <labels></labels>
      <queueItemId>0</queueItemId>
      <buildExternalizableId>cookbook-openshift3/master#208</buildExternalizableId>
      <queuingStarted>1512325412</queuingStarted>
      <queuedContexts/>
    </org.jenkins.plugins.lockableresources.LockableResource>

  2. Remove the line containing the buildExternalizableId element.

  3. Change the queuingStarted item
    <queuingStarted>1512325412</queuingStarted>

    to

    <queuingStarted>0</queuingStarted>

  4. Restart Jenkins
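One safety tip: back the file up before you edit it, and check you’ve found the right resource block first (a sketch; adjust the path to your install):

$ cd /var/jenkins_home
$ cp org.jenkins.plugins.lockableresources.LockableResourcesManager.xml{,.bak}
$ grep -n buildExternalizableId org.jenkins.plugins.lockableresources.LockableResourcesManager.xml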

Author is currently working on the second edition of Docker in Practice 

Get 39% off with the code: 39miell2

How I Manage My Time

tl;dr

I see a lot of posts like this, or this, or this, on HackerNews asking about time management.

I was disorganised until my 30s.

Then I got organised and changed my life with:

  • JIRA
  • Notes in Git
  • Automating environment setup

What I’ve ‘got done’ since is listed below.

The Phone Call

About six years ago I missed something important at work. I was working in ops, and a customer had asked me to do something while I was busy; I moved on to another fire and clean forgot to do it.

On Monday I got a phone call from her: “Ian, the payments went through over the weekend. Did you remember to switch off the cron job?”

Fuck.

She had to go and sort it out. All I could offer was my apologies for the grief she was going to get.

I had the excuse that I was busy, that I’d been distracted, that I’ve got two kids and lots on.

But deep down I knew something was wrong, and that it was my fault. I’d made a contract with someone and not honoured it.

It hurt.

A Chance

A few weeks later I was on a rare day off, and happened to be in a bookshop, and in this bookshop I saw Getting Things Done. I guess my unconscious led me to it, and though I’d always mocked books like this, I picked it up and scanned it. To my surprise, the advice was flexible, human, and practicable. I inhaled it, and my life changed from there.

 

What Happened?

It’s fair to say a lot in my life has changed since that phone call six years ago. Since then, among other things, I’ve written a couple of books and kept up this blog.

Also (and no less importantly), I’ve still got a job, and am a happily married father of two (I overlook the fact that my wife refuses to use JIRA).

I also generally feel less stressed, and more productive. I don’t know if all of it can be attributed to getting organised, but it certainly feels that way. Here’s what I did.

 

What I Did – JIRA

The first thing I did was set up a home JIRA instance. I’m a dinosaur, a control freak, and a cheapskate, so I buy the license and run it from home.

It doesn’t matter that it’s JIRA, the point is that all the things that impinge on my consciousness get put in here, and get out of my mind, giving me a clear head.

When the board gets too full, I start ditching things. Most of these are articles I intended to read, or little ideas my ardour has cooled on. That stops me getting overwhelmed.

Over time, I made a few tweaks that helped me be a little more efficient:

  • I created my own workflow that matched the way I thought about tasks:
    • Open/New
    • To-Do
    • Waiting for Something
    • Reminder Set
    • Closed
  • I set up a gmail account and linked it to JIRA so I could create tickets by email
  • I use mail this link to send links I’m interested in to my JIRA
  • I use send to kindle to mail articles directly to my kindle, so I can batch-read them asynchronously

There’s no separation between work and home tasks. Tax returns and birthday reminders sit right next to work tasks I want to stay on top of.

If it takes up space in your head, it goes in one place.

What I Did – Notes

That’s what I did for tasks – I had another frustration that I wanted to address. I would work on something, then either:

  • forget it
  • make notes and forget where they were
  • make notes, remember where they were, but couldn’t find them

I did something really simple to solve this: I created a git repo for all my notes.

imiell@Ians-Air-2:/space/git/work/notes ⑂ master +  ls | head
R
X
actiona
aircrack-ng
algorithms
alpacajs
angularjs
ansible
ant
anyorigin
[...]

Then, I wrote some helper scripts. For example, mk_notes.sh creates a folder in this repo with some files pre-created:

#!/bin/bash
# Create a standard set of note files for a new topic.
if [[ $1 = '' ]]
then
 echo folder name needed
 exit 1
fi
BASE=/space/git/work
NOTES=${BASE}/notes
LEARNING=${BASE}/learning
# One folder per topic, in both the notes and learning sections
mkdir -p ${NOTES}/$1
mkdir -p ${LEARNING}/$1
# Pre-create the standard files: cheat sheet, links, related git repos,
# and an asciidoc named after the topic itself
touch ${NOTES}/$1/cheat_sheet.asciidoc
touch ${NOTES}/$1/links
touch ${NOTES}/$1/git_repos
touch ${NOTES}/$1/$1.asciidoc
touch ${LEARNING}/$1/$1.asciidoc
git add ${NOTES}/$1
git add ${LEARNING}/$1

This creates a folder and adds it to git, with a file for related links, a cheat sheet, any related git repos, and a file named after the subject itself.
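Using it is then a one-liner. For an invented topic name, ‘kubernetes’, it produces this layout:

$ ./mk_notes.sh kubernetes
$ ls /space/git/work/notes/kubernetes
cheat_sheet.asciidoc  git_repos  kubernetes.asciidoc  links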

Now if I pick up a new skill and come back to it later, I can track my notes up to where I left off. I’ve used these notes to compile blog posts like these.

I create asciidocs because I like the format, and it works well with vim.

I did try other methods (google docs, email, JIRA tickets), but this works best for me because:

  • It is available offline (git being a distributed note-taking tool)
  • It is text only
  • The current content is easily searched (grep)
  • A history is maintained that I can also search if needed
  • I can control/extend this system the way that makes sense to me

These things are important to me. They might be more or less important to you, so choose a tool accordingly. The vital thing is that it’s all in one place.

For example, here’s a link I literally just saw on Twitter while writing this: Organizing your life using GitHub

Work Environment Setup

Another constant niggle was setting up work environments. Like many people, I work on Linux servers, Mac laptops, and occasionally a Windows machine.

Mostly I dial into my home servers, but not infrequently I have to work on other servers.

To save time I wrote a ShutitFile to set up a server the way I like it. Here’s an abbreviated version of the full script:

# We assert here that we are running as root
SEND whoami
ASSERT_OUTPUT root

SEND lsb_release -d -s | awk '{print $1}'
ASSERT_OUTPUT Ubuntu

# We assert here the user imiell was set up by the OS installation process
SEND cut -d: -f1 /etc/passwd | grep imiell | wc -l
ASSERT_OUTPUT 1

# Install required packages
INSTALL openssh-server
INSTALL run-one

[...]

# Install docker
IF_NOT RUN docker version
 INSTALL apt-transport-https
 INSTALL ca-certificates
 INSTALL curl
 INSTALL software-properties-common
 RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
 RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
 RUN apt-get update
 RUN apt install -y docker-ce
ENDIF
# Add imiell to the docker user group
RUN usermod -G docker -a imiell

# Create space folder and chown it to imiell
RUN mkdir -p /space && chown imiell: /space
RUN mkdir -p /space/git

# Generate an ssh key
IF_NOT FILE_EXISTS /home/imiell/.ssh/id_rsa.pub
 RUN ssh-keygen
 # Note that the response to 'already exists' below prevents overwrite here.
 EXPECT_MULTI ['file in which=','empty for no passphrase=','Enter same passphrase again=','already exists=n']
ENDIF

# Log me in as imiell
USER imiell
# If it's not been done before, check out my dotfiles and set it up
IF_NOT FILE_EXISTS /home/imiell/.dotfiles
 RUN cd /home/imiell
 RUN git clone --depth=1 https://github.com/ianmiell/dotfiles ~imiell/.dotfiles
 RUN cd .dotfiles
 RUN ./script/bootstrap
 EXPECT_MULTI ['What is your github author name=Ian Miell','What is your github author email=ian.miell@gmail.com','verwrite=O']
ENDIF
LOGOUT

 

ShutIt is a tool I wrote for simple automation of interactive sessions. Like traditional CM tools, but simpler.

Exceptions/Difficulties/Lessons Learned

This method works, for me, but there are limitations. I can’t keep all my work Confluence notes and JIRAs on my home JIRA or Git repo (not least for security reasons), so there is some separation between work and home notes and information.

That can’t be helped, but what’s more interesting are the downsides of this approach.

Is It Productive?

Sometimes it feels like managing this is a tax on my attention. I do wonder whether sometimes I’m just shuffling tickets around rather than tackling the hard stuff that happens over much longer time periods than individual tasks.

I Have to Remember to Let Go

Managing your workload more formally like this can make it hard to let go. There’s always something to do, but sometimes you need to take time out and smell the flowers. That’s when other good things can happen. Being productive is not everything, by a long chalk.

Or, as Lennon didn’t put it: Life is what happens when you are busy grooming your backlog.

Any Suggestions?

I’m always open to improving my workflow, so please let me know below if you have any suggestions.


Author is currently working on the second edition of Docker in Practice 

Get 39% off with the code: 39miell2

Ten Things I Wish I’d Known About Chef

1) Understand How Chef Works

This sounds obvious, but is important to call out.

Chef’s structure can be bewildering to newcomers. There are so many concepts to get to grips with, all at once, and many of them may be new to you: server, chef-client, knife, chefdk, recipe, role, environment, run list, node, cookbook… the list goes on and on.

I don’t have great advice here, but I would avoid doing too many theoretical tutorials, and instead focus on getting an environment you can experiment on, to embed the concepts in your mind. I automated such an environment in Vagrant for myself here. Maybe you’ve got a test env at work you can use. Either way, unless you’re particularly gifted, you’re not going to get conversant with these things overnight.

Then keep the chef docs close to hand, and occasionally browse them to pick up things you might need to know about.

 

2) A Powerful Debugger in Two Lines

This is less well known than it should be, and has saved me a ton of time. Adding these two lines to your recipes will give you a breakpoint when you run chef-client.

require 'pry'
binding.pry

You’re presented with a ruby shell you can interact with mid-run. Here’s a typical session:

root@chefnode1:~# chef-client
Starting Chef Client, version 12.16.42
resolving cookbooks for run list: ["chef-repo"]
Synchronizing Cookbooks:
 - chef-repo (0.1.0)
Installing Cookbook Gems:
Compiling Cookbooks...

Frame number: 0/22

From: /opt/chef/embedded/lib/ruby/gems/2.3.0/gems/chef-12.16.42/lib/chef/cookbook_version.rb @ line 234 Chef::CookbookVersion#load_recipe:

220: def load_recipe(recipe_name, run_context)
 221: unless recipe_filenames_by_name.has_key?(recipe_name)
 222: raise Chef::Exceptions::RecipeNotFound, "could not find recipe #{recipe_name} for cookbook #{name}"
 223: end
 224: 
 225: Chef::Log.debug("Found recipe #{recipe_name} in cookbook #{name}")
 226: recipe = Chef::Recipe.new(name, recipe_name, run_context)
 227: recipe_filename = recipe_filenames_by_name[recipe_name]
 228: 
 229: unless recipe_filename
 230: raise Chef::Exceptions::RecipeNotFound, "could not find #{recipe_name} files for cookbook #{name}"
 231: end
 232: 
 233: recipe.from_file(recipe_filename)
 => 234: recipe
 235: end
[1] pry(#<Chef::CookbookVersion>)> 

The last line above is a prompt from which you can inspect the local state, similar to other breakpoint debuggers.

CTRL-D continues the run.

See here for more.

3) Run Locally-Modified Cookbooks

I spent a long time being frustrated by my inability to re-run chef-client with a slightly modified set of cookbooks in the local cache (in /var/chef/cache...).

Then the chef client we were using was upgraded, and the
--skip-cookbook-sync option became available. This did exactly what I wanted: use the cache, but run the recipes in exactly the same way, run list and all.

The -z flag can do something similar, but you need to specify the run list by hand.
--skip-cookbook-sync ‘just works’ if you want to keep everything exactly the same and just add a log line or something.
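So a typical debugging loop looks like this (a sketch; the cookbook path here is invented):

$ vi /var/chef/cache/cookbooks/mycookbook/recipes/default.rb   # add a log line
$ chef-client --skip-cookbook-sync                             # re-run without re-syncing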

4) Learn Ruby

Ruby is the language Chef uses, so learning it is very useful.

I used Learn Ruby the Hard Way to quickly get a feel for the language.

5) Libraries

It isn’t immediately obvious how to avoid repeating the same code, recipe after recipe.

Here’s a sample of a ‘ruby library’ used by a Chef cookbook. It handles figuring out the roles of the nodes.

One thing to note is that because you are outside the Chef recipe, you need to refer to the standard Chef functions explicitly by their namespace. For example, this line calls the standard search:

Chef::Search::Query.new.search(:node, "role:rolename")

The library is used eg here. The library object is created:

server_info = OpenShiftHelper::NodeHelper.new(node)

and then the object is referenced as items are needed, eg:

first_master = server_info.first_master
master_servers = server_info.master_servers

Note that the node object is passed in, so it’s visible within the library.

6) Logging and .to_s

If you want to ‘quickly’ log something, it’s easy:

log 'my log message' do
  level :debug
end

and then run at debug level with:

chef-client -l debug

To turn a value into a string, try the .to_s method, eg:

log 'This is a string: ' + node.to_s do
  level :debug
end

 

7) Search and Introspection Functions

The ‘search’ function in Chef is a very powerful tool that allows you to write code that switches based on queries to the Chef server.

Some examples are here, and look like this:

graphite_servers = search(:node, 'role:graphite-server')

Similarly, you can introspect the client’s node using its attributes and standard Ruby functions.

For example, to introspect a node’s run list to determine whether it has the webserver role assigned to it, you can run:

node.run_list.roles.include?("webserver")

This technique is also used in the example code mentioned above.

8) Attribute precedence and force_override

Attribute precedence becomes important pretty quickly.

Quite often I have had to refer to this section of the docs to remind myself of the order that attributes are set.

Also, force_override is something you should never have to use as it’s a filthy hack, but occasionally it can get you out of a spot. But it can’t override everything (see 10 below)!

9) Chef’s Two-Pass model

This can be the cause of great confusion. If the order of events in Chef seems counter-intuitive in a run, it’s likely that you’ve not understood the way Chef processes its code.

The best explanation of this I’ve found is here. For me, this is the key sentence:

This also means that any Ruby code in the file not explicitly delayed (ruby_block, lazy, not_if/only_if) is run when the file is run, during the compile phase.

Don’t feel you need to understand this from day one, just keep it in mind when you’re scratching your head about why things are happening in the wrong order, and come back to that page.

10) Ohai and IP Addresses

This one caused me quite a lot of grief. I needed to override the IP address that ohai (the tool that gathers information about each Chef node and places it in the node object) reports for the node.

It takes the default route’s interface’s IP address by default, but this caused me no end of trouble when using Vagrant. force_override (see 8 above) doesn’t work here, because ipaddress is an automatic ohai variable.

I am not the only one with this problem, but I never found a ‘correct’ solution.

In the end I used this hack.

Find the ruby file that sets the ip and mac address. Depending on the ohai version, the path may differ for you:

RUBYFILE='/opt/chef/embedded/lib/ruby/gems/2.4.0/gems/ohai-13.5.0/lib/ohai/plugins/linux/network.rb'

Then get the ip address and mac address of the interface you want to use (in this case the eth1 interface):

IPADDR=$(ip addr show eth1 | grep -w inet | awk '{print $2}' | sed 's/\(.*\)\/24/\1/')
MACADDRESS=$(ip addr show eth1 | grep -w 'link/ether' | awk '{print $2}')

Finally, use sed (or gsed if you are on a mac) to hard-code the values the ruby file returns:

sed -i "s/\(.*${IPADDR} \).*/\1 \"\"/" $RUBYFILE
sed -i "s/\(.*macaddress \)m.*/\1 \"${MACADDRESS}\"/" $RUBYFILE

 


Author is currently working on the second edition of Docker in Practice 

Get 39% off with the code: 39miell2

Vagrant and Ohai / Chef IP Address Hack

 

Problem

In Vagrant, ohai returns the eth0 IP address.

This is a PITA, since if you run clusters of Vagrant machines that use Chef (as I do), then Chef and ohai think the IP address of every node is:

10.0.2.15

or whatever Vagrant attaches to the default interface (usually eth0).

 

It’s Not Just Me

Plenty of people appear to have this problem:

How to change ip address of node after added to chef server?

Chef and ohai retrieving a droplets private ip address

http://chef.opscode.narkive.com/TFd0kvF0/force-main-ip-on-multi-homed-chef-nodes

OHAI-287

How to have chef use a different-ip?

Solution

I haven’t found a ‘proper’ solution for this. The most elegant (or least inelegant) one I could find involves:

  • Finding the ruby file involved in determining network information
  • Getting the IP address associated with the interface that you want Chef to ‘see’
  • (Optionally) getting the mac address associated with that interface
  • Hard-coding the IP address and macaddress values directly in the network.rb file

Here’s an example:

RUBYFILE='/opt/chef/embedded/lib/ruby/gems/2.4.0/gems/ohai-13.5.0/lib/ohai/plugins/linux/network.rb' 

IPADDR=$(ip addr show eth1 | grep -w inet | awk '{print $2}' | sed 's/\(.*\)\/24/\1/')

MACADDRESS=$(ip addr show eth1 | grep -w 'link/ether' | awk '{print $2}')

sed -i "s/\(.*${IPADDR} \).*/\1 \"\"/;s/\(.*macaddress \)m.*/\1 \"${MACADDRESS}\"/" $RUBYFILE

 

Got a Better Way?

Please let me know!


Author is currently working on the second edition of Docker in Practice 

Get 39% off with the code: 39miell2