The Lazy Person’s Guide to the Info Command

Most people who use Linux pretty quickly learn about man pages, and how to navigate them with their preferred pager (usually less these days).

Less well known are the info pages. If you’ve never come across them, these look like man pages, and contain similar information, but are invoked like this:

info grep

Over the past couple of decades I often found myself looking at an info page and wondering how to navigate it, hitting various keys and getting lost and frustrated.

What Do I Do Now?

I tried man info, but that didn’t tell me how to navigate the pages. More rarely I would try info info, but didn’t have the time or patience to follow the tutorial there and then, as I was busy trying to get some information, stat.

The other day I finally had enough and decided to take the time to sit down and learn it properly. It didn’t take that long, but I figured there was a case for writing down a helpful guide for new users that just want to get going.

The Bare Minimum

Here’s the bare minimum you need to read through an info page without ever getting lost:

  • ] – next page
  • [ – previous page
  • space – page down within page
  • b – page up within page
  • q – quit

If you want to get commands into your muscle memory as fast as possible, focus on these. They won’t get you round pages efficiently, but you won’t wonder how to get back to where you were, or how you got where you are. If you’re a very casual user, stop here and come back later when you get fed up with spinning forwards and backwards through pages to find something.

Try it with something like info sed.

Levelling Up

If you want to get to the next level with info, then these commands will help:

  • n – next page in this level
  • p – previous page in this level
  • return – jump to page ‘lower down’
  • l – go back to the last node seen
  • u – go ‘up’ a level

info has a hierarchical structure. There is a top-level page, and then ‘child’ pages that can have other pages at the same ‘level’. To go to the next page at the same level you can hit the n key. To go back to the previous page at the same level you hit p.

Occasionally you will get an item that allows you to ‘jump down’ a level by hitting the return key. For example, by placing the cursor on the ‘Definitions’ line below and hitting return, you will be taken to that page:

* Introduction::                An introduction to the shell.
* Definitions::                 Some definitions used.

To return to the page you were last on at any point, you can hit l (for ‘last page’) and you will be returned to the top of that page. Or if you want to go ‘up’ a level, type u.

Still Interested?

If you’re still interested then you might want to read through info info carefully, but before you do, here are a couple of final tips to help you avoid getting lost in that set of pages (which I have done more than once).

First, when you get stuck or want to dig in further, you can get help:

  • ? – show the info commands window
  • h – open the general help window

Confusingly, these options open up a half-window that, in the case of h at least, gives no indication of how to close it down again. Here’s how:

  • C-x 0 – close the window

Hitting CTRL and x together, followed by 0, gets you out.

Why Bother?

You might wonder what the point of learning to read info pages is.

For me, the main reasons are:

  • They are often far more detailed (and more structured) than man pages
  • They are more definitive and complete. The grep info page, for example, contains a great set of examples, a discussion on performance, and an introduction to regular expressions. In fact, they’re intended to be mini books that can be printed off when converted to the appropriate format
  • You can irritate and/or intimidate colleagues by dismissing man page usage as ‘inferior’ and asserting that real engineers use info (joke)

Aside from anything else, I find getting fluent with these pieces of relative arcana satisfying. Maybe it’s just me.



A Hot Take on GitHub Actions

A couple of days ago I got access to GitHub Actions in Beta. I felt vaguely interested in it when I briefly read up on it, but now I’m like Holt geeking out on Moneyball.

This is not a considered post, so may contain errors, both egregious and small. I’ll edit them if I’m corrected.

What is it?

GitHub Actions can be described in many ways, but for most people that use GitHub, its immediate power lies in removing the need for any separate CI tooling.

You create a YAML file in .github/workflows/ within your repo that might look like this:

name: Application
on: push
jobs:
  build:
    name: Shares run
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@master
    - uses: ./
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

It’s a pipeline definition file similar to GoCD’s, or other definition formats for Jenkins et al. You can trigger workflows based on (for example) a crontab schedule, or repository push, or repository pull-request, or when a URL is hit. I’m sure more triggers are to come, assuming they don’t exist already.

The format isn’t 100% intuitive, but it’s as easy to pick up as anything else, and I’m sure the docs will improve. Right now there seem to be two sets of docs: one more formal, in the old (deprecated) HCL format; the other less formal, in the new YAML format. I’m not entirely sure of the status of the ‘older’ documentation, but it hasn’t failed me yet.

GitHub Actions doesn’t just consist of this functionality in your repo. GitHub is providing a curated set of canned actions here that you can reference in your workflows. You needn’t use theirs, either; you can use any you can find on GitHub (or maybe anywhere else; I haven’t tried).

So What?

For me, the big deal is that this co-locates the actions with your code. So you can trigger a rebuild on a push, or on a schedule, or from an external URL. Just like CI tools do, but with less hassle and zero setup.

But it doesn’t just co-locate code and CI.

It is also threatening to take over CD, secrets management (there’s a ‘Secrets’ tab in the repo’s settings now), artifact store (there’s a supported ‘upload-artifact’ action that pushes arbitrary files to your repo), and user identity. Add in the vulnerability detection functionality and the whole package is as compelling as hell.

An Azure Gateway Drug? An AWS Killer?

When the possibilities of this start to dawn on you, it’s truly dizzying.

GitHub effectively gives you, for free, a CI/CD platform to run more or less whatever you like (but see limits, below). You can extend it to manage your code workflow in however sophisticated a way you like, as you have access to the repository’s GitHub token.

The tradeoff is that it’s all so easy that your business is soon going to depend on GitHub so much Microsoft will have a grip on you as tight as Windows used to.

I think the real trojan horse here is user identity. By re-using the identity management your business might already trust in GitHub, and extending its scope to help solve the challenges of secrets management and artifact stores, whole swathes of existing work could be cut away from your operational costs.

Some Detail

The default ‘hello-github-action’ setup demonstrates a Docker container that runs on an Ubuntu VM base. I found this quite confusing. Is access to the VM possible? If it’s not, why do I care whether it’s running on Ubuntu 18 or Ubuntu 16? I did some wrangling with this but ran into apparently undocumented requirements for an action.yml file, and haven’t had time to bottom them out.

(As an aside, the auto-created lab that GitHub makes for new users is one of the best UXes I’ve ever seen for onboarding to a new product.)

What you do get is root within the container. Nice. And you can use an arbitrary container, from DockerHub or wherever.

You also get direct access back to GitHub without any faff. By default you get access to the GITHUB_TOKEN secret used in the workflow above.

As with all these remote build environments, debugging can be a PITA. You can rig up a local Docker container to behave as it would on the server, but it’s a little fiddly to get the conventions right, as not everything about the setup is documented.

Limits and Restrictions

Limits are listed here, and include a stern warning not to use this for ‘serverless computing’, or “Any other activity unrelated to the production, testing, deployment, or publication of the software project associated with the repository where GitHub Actions are used. In other words, be cool, don’t use GitHub Actions in ways you know you shouldn’t.”

Which makes me wonder: are they missing an opportunity here? I have serverless applications I could run on here, and (depending on the cost) might be willing to pay GitHub to host them for me. I suspect that they are not going to sit on that opportunity for long.

Each virtual machine has the same hardware resources available, which I assume are freely available to the running container:

  • a 2-core CPU
  • 7 GB of RAM
  • 14 GB of SSD disk space

which seems generous to me.

The free tier gives you 2000 minutes (about a day and a half) of runtime, which also seems generous.

Conclusion

GitHub Actions is a set of features with enormous potential for using your codebase as a lever into your entire compute infrastructure. It flips the traditional view of code as just something to store, and compute where the interesting stuff happens on its head: the code is now the centre of gravity for your compute, and it’s only a matter of time before everything else follows.

I’m starting to think Microsoft got a bargain.

Links

GitHub Actions help

Curated actions

Developer Docs




Seven God-Like Bash History Shortcuts You Will Actually Use

Intro

Most guides to bash history shortcuts exhaustively list all of the shortcuts available to you.

The problem I always had with that was that I would use them once, and then glaze over as I tried out all the possibilities. Then I’d move on to my working day and completely forget them, retaining only the well-known !! trick I learned when I first started using bash.

So most never got committed to memory.

Here I outline the shortcuts I actually use every day. When people see me use them they often ask me “what the hell did you do there!?”, conferring God-like status on me with minimal effort or intelligence required.

I recommend using one a day for a week, then moving on to the next one. It’s worth taking your time to get them under your fingers, as the time you save will be significant in the long run.

1) !$ – The ‘Last Argument’ One

If you only take one shortcut from this article, make it this one.

It substitutes in the last argument of the last command into your line.

Consider this scenario:

$ mv /path/to/wrongfile /some/other/place
mv: cannot stat '/path/to/wrongfile': No such file or directory

Ach, I put the wrongfile filename in my command. I should have put rightfile instead.

You might decide to fully re-type the last command, and replace wrongfile with rightfile.

Instead, you can type:

$ mv /path/to/rightfile !$
mv /path/to/rightfile /some/other/place

and the command will work.

There are other ways to achieve the above in bash with shortcuts, but this trick of re-using the last argument of the last command is one I use the most.


2) !:2 – The ‘nth Argument’ One

Ever done anything like this?

$ tar -cvf afolder afolder.tar
tar: failed to open

Like others, I get the arguments to tar (and ln) wrong more than I would like to admit.

When you mix up arguments like that, you can run:

$ !:0 !:1 !:3 !:2
tar -cvf afolder.tar afolder

and your reputation will be saved.

The last command’s items are zero-indexed, and can be substituted in with the number after the !:.

Obviously, you can also use this to re-use specific arguments from the last command rather than all of them.

3) !:1-$ – The ‘All The Arguments’ One

Imagine you run a command, and realise that the arguments were correct, but the command itself was wrong:

$ grep '(ping|pong)' afile

I wanted to match ping or pong in a file, but I used grep rather than egrep.

I start typing egrep, but I don’t want to re-type the other arguments. So I can use the !:1-$ shortcut to ask for all the words of the previous command, from word 1 (remember they’re zero-indexed, with the command itself at 0) to the last one (represented by the $ sign):

$ egrep !:1-$
egrep '(ping|pong)' afile
ping

You don’t need to pick 1-$, you can pick a subset like 1-2, or 3-9 if you had that many arguments in the previous command.


This is based on some of the contents of my book Learn Bash the Hard Way


Preview available here.


4) !-2:$ – The ‘Last But n‘ One

The above shortcuts are great when I know immediately how to correct my last command, but often I run commands after the original one, which means that the last command is no longer the one I want to reference.

For example, using the mv example from before, if I follow up my mistake with an ls check of the folder’s contents:

$ mv /path/to/wrongfile /some/other/place
mv: cannot stat '/path/to/wrongfile': No such file or directory
$ ls /path/to/
rightfile

…I can no longer use the !$ shortcut.

In these cases, you can insert a -n: (where n is the number of commands to go back in the history) after the ! to grab the last argument from an older command:

$ mv /path/to/rightfile !-2:$
mv /path/to/rightfile /some/other/place

Again, once learned, you may be surprised at how often you need it.

5) !$:h – The ‘Get Me The Folder’ One

This one looks less promising on the face of it, but is something I use dozens of times daily.

Imagine I run a command like this:

$ tar -cvf system.tar /etc/system
 tar: /etc/system: Cannot stat: No such file or directory
 tar: Error exit delayed from previous errors. 

The first thing I might want to do is go to the /etc folder to see what’s in there and work out what I’ve got wrong.

I can do this at a stroke with:

$ cd !$:h
cd /etc

What this one does is say: get the last argument to the last command (/etc/system), and take off its last filename component, leaving only the /etc.

6) !#:1 – The ‘Current Line’ One

I spent years occasionally wondering if I could reference an argument on the current line, before finally looking it up and learning it. I wish I’d done so much earlier.

I most commonly use it to make backup files:

$ cp /path/to/some/file !#:1.bak
cp /path/to/some/file /path/to/some/file.bak

but once under the fingers it can be a very quick alternative to copying and pasting the argument by hand.

7) !!:gs – The ‘Search and Replace’ One

This one searches across the referenced command, and replaces the text between the first two / characters with the text between the second and third.

Say I want to tell the world that my s key does not work, and outputs f instead.

$ echo my f key doef not work
my f key doef not work

Then I realise that I was just wrongly hitting the f key by accident.

To replace all the fs with ss, I can type:

$ !!:gs/f /s /
echo my s key does not work
my s key does not work

It doesn’t just work on single characters. I can replace words or sentences too:

$ !!:gs/does/did/
echo my s key did not work
my s key did not work

Test

Just to show you how these shortcuts can be combined, can you work out what these toenail clippings will output?

$ ping !#:0:gs/i/o
$ vi /tmp/!:0.txt
$ ls !$:h
$ cd !-2:h
$ touch !$!-3:$ !! !$.txt
$ cat !:1-$

Learn bash interactively in the browser here.


How Long Will It Take For The Leavers To Leave?

This piece seeks to answer a simple question: how long would it take for enough people to die that the Brexit decision would be reversed?

This has been informally speculated on before, but I haven’t seen any analysis done on the numbers, so I decided to do it myself.

The tl;dr is that the turning point is around July/August 2020.

Assumptions

To arrive at this number I had to make some assumptions:

  • Everyone that voted in June 2016 would vote exactly the same way again (or not vote again)
  • Everyone that comes of age to vote would vote in the same proportions (by age group) as in June 2016

Obviously, these assumptions don’t make a realistic prediction of the result of any second referendum, not least because the question itself would likely be different.

The Numbers

To arrive at the number, I first took the raw votes from June 2016:

  • Leave: 17,410,742
  • Remain: 16,141,241

Then I got the breakdown of votes by age group, based on the figures from Lord Ashcroft’s site here:

            Leave   Remain
  18-24     0.27    0.73
  25-34     0.38    0.62
  35-44     0.52    0.48
  45-54     0.56    0.44
  55-64     0.57    0.43
  65+       0.60    0.40

From here, what we need to work out is:

  • How many people will come ‘of age’ to vote per month
  • How many people will die per month, by age group

Fortunately the ONS collects data on births and deaths by age group, so we can estimate these values.

How Many New Remainers Will There Be?

These are the population figures broken down by age group at the time of the 2016 vote, taken from ‘ukmidyearestimates.xls 2012-2016, UK population counts for mid 2016’.

  0-4           4,014,300
  5-9           4,037,500
  10-14         3,625,100
  15-19         3,778,900
  20-24         4,253,800
  25-29         4,510,600
  30-34         4,408,200
  35-39         4,179,500
  40-44         4,174,100
  45-49         4,619,100
  50-54         4,632,000
  55-59         4,066,700
  60-64         3,534,200
  65-69         3,636,500
  70-74         2,852,100
  75-79         2,154,500
  80-84         1,606,700
  85-89           993,000
  90 and over     571,200
  Total        65,648,000

Unfortunately these age groups do not align with Lord Ashcroft’s figures in the first table, but we can estimate the number of people who gain the vote every month by taking the number of people in the 15-19 age group (3,778,900) and multiplying by 3/5, to count those who could not vote in 2016 but can three years later.

This gives us a number of 2,267,340. Over the three years, this is 62,982 people per month gaining the vote.

If we assume that the proportions voting for either side remain the same for the 18-24 age group, then 46% more of these votes will go to remain than leave (73% – 27%).

This gives us a final figure of 18,754 extra remain votes per month.

How Many Leavers Die Per Month?

Deaths by age group vary little over the years, so I took the numbers recorded in 2016, 2017 and 2018:

  2016
    15-44    15,128
    45-64    62,679
    65+     442,767

  2017
    15-44    14,514
    45-64    62,517
    65+     452,329

  2018
    15-44    15,140
    45-64    63,913
    65+     456,731

Looking at these numbers gives roughly 450,000 people in the 65+ age bracket dying per year. Deaths between 15 and 64 are, relatively speaking, negligible, and the voting proportions by age group mean that votes lost and gained roughly cancel one another out (the exact numbers give a few dozen more to remain per month, but this can be ignored).

Dividing 450,000 by 12 gives a figure of 37,500 deaths per month in the 65+ age group.

Taking the net leave vote in that age group (20%) and multiplying out gives a figure of roughly 7,500 leave votes lost per month.

Taking the net of the two numbers gives a shift away from leave of about 26,000 votes per month. Set against leave’s 2016 majority of roughly 1.27 million votes, that gives a rough crossover point of mid-2020.

Conclusion

I’ve made many crude assumptions here, and one could argue on both sides for tweaks to the numbers here and there. For example, you could argue that those in the 15-18 age bracket in 2016 would be even more likely to vote remain than the 18-24 cohort.

And of course, this analysis makes assumptions that won’t hold true in reality, such as that everyone would vote the same way as in 2016, and the age group analysis of voting patterns was accurate and uniform within the groups.

Broadly, though, the demographics point to a majority for remain happening around mid-2020 if nothing else changed from 2016.


Sources

Analysis: https://docs.google.com/spreadsheets/d/1n5r6W951DDBvGhD00Ou2Q19aBbAxtOx-n3Chs6RDeek/edit#gid=13089380

ONS numbers: https://www.ons.gov.uk/peoplepopulationandcommunity/populationandmigration/populationestimates/datasets/populationestimatesforukenglandandwalesscotlandandnorthernireland

https://www.ons.gov.uk/peoplepopulationandcommunity/birthsdeathsandmarriages/deaths

Ashcroft Polls:

https://www.jrf.org.uk/report/brexit-vote-explained-poverty-low-skills-and-lack-opportunities

Goodbye Docker: Purging is Such Sweet Sorrow

After 6 years, I removed Docker from all my home servers.

apt purge -y docker-ce

Why?

This was triggered by a recurring incident in which the Docker daemon used 100% CPU on multiple cores, making the host effectively unusable.

This had happened a few times before, and was likely due to a script that had got out of hand starting up too many containers. I’d never really got to the bottom of it, as I had to run a command to kill off all the containers and restart the daemon. This time, the daemon wouldn’t restart without a kill -9, so I figured enough was enough.

Anyway, I didn’t necessarily blame Docker for it, but it did add force to an argument I’d heard before:

Why does Docker need a daemon at all?

Podman, Skopeo, and Buildah

These three tools are an effort, mostly pushed by Red Hat, that do everything I need Docker to do. They don’t require a daemon, or access to a group with root privileges.

Podman

Podman replaces the Docker command for most of its sub-commands (run, push, pull etc). Because it doesn’t need a daemon, and uses user namespacing to simulate root in the container, there’s no need to attach to a socket with root privileges, which was a long-standing concern with Docker.

Buildah

Buildah builds OCI images. Confusingly, podman build can also be used to build Docker images, but it’s incredibly slow and uses up a lot of disk space, because it defaults to the vfs storage driver. buildah bud (‘build using Dockerfile’) was much faster for me, and uses the overlay storage driver.

The user namespacing allowing rootless builds was the other killer feature that made me want to move. I wrote a piece about trying to get rootless builds going last year, and now it comes out of the box with /etc/subuid and /etc/subgid set up for you, on Ubuntu at least.

Skopeo

Skopeo is a tool that allows you to work with Docker and OCI images by pushing, pulling, and copying images.

The code for these three tools is open source and available here:

Podman

Buildah

Skopeo

Steps to Move

Installing these tools on Ubuntu was a lot easier than it was 6 months ago.

I did seem to have to install runc independently of those instructions. Not sure why it wasn’t a pre-existing dependency.

First, I replaced all instances of docker in my cron and CI jobs with podman. That was relatively easy as it’s all in my Ansible scripts, and anything else was a quick search through my GitHub repos.

Once that was bedded in, I could see if anything else was calling docker by using sysdig to catch any references to it:

sysdig | grep -w docker

This may slow down your system considerably if you’re performance-sensitive.

Once happy that nothing was trying to run docker, I could run:

apt remove -y docker-ce

I didn’t actually purge in case there was some config I needed.

Once everything was deemed stable, the final cleanup could take place:

  • Remove any left-over sources in /etc/apt/* that point to Docker apt repos
  • Remove the docker group from the system with delgroup docker
  • Remove any left-over files in /etc/docker/*, /etc/default/docker and /var/lib/docker

A few people asked what I did about Docker Compose, but I don’t use it, so that wasn’t an issue for me.

Edit: there exists a podman-compose project, but it’s not considered mature.

Differences?

So far, and aside from the ‘no daemon’ and ‘no sudo access required’, I haven’t noticed many differences.

Builds are local to my user (in ~/.local/share/containers) rather than global (in /var/lib/docker), in keeping with the general philosophy of these tools as user-oriented rather than daemon-oriented. But since my home servers have only one user using Docker, that wasn’t much of an issue.

The other big difference I noticed was that podman pull downloads all layers in parallel, in contrast to Docker’s default of pulling only a few at a time. I don’t know if this causes problems if too many images are being pulled at once, but that wasn’t a concern for me.



Seven Surprising Bash Variables

Continuing in the series of posts about lesser-known bash features, here I take you through seven variables that bash makes available that you may not have known about.

1) PROMPT_COMMAND

You might already know that you can manipulate your prompt to show all sorts of useful information, but what fewer people know is that you can run a shell command every time your prompt is displayed.

In fact many sophisticated prompt manipulators use this variable to run the commands required to gather the information to display on the prompt.

Try running this in a fresh shell to see what happens to your session:

$ PROMPT_COMMAND='echo -n "writing the prompt at " && date'

2) HISTTIMEFORMAT

If you run history in your terminal you should get a list of commands previously run by your account. Try setting this variable, like this:

$ HISTTIMEFORMAT='I ran this at: %d/%m/%y %T '

Once this variable is set, new history entries record the time along with the command, so your history output can look like this:

1871  I ran this at: 01/05/19 13:38:07 cat /etc/resolv.conf
1872  I ran this at: 01/05/19 13:38:19 curl bbc.co.uk
1873  I ran this at: 01/05/19 13:38:41 sudo vi /etc/resolv.conf
1874  I ran this at: 01/05/19 13:39:18 curl -vvv bbc.co.uk
1876  I ran this at: 01/05/19 13:39:25 sudo su -

The formatting symbols are as per the symbols found in man date.

3) CDPATH

If you’re all about saving time at the command line, then you can use this variable to change directories as easily as you can call commands.

As with the PATH variable, the CDPATH variable is a colon-separated list of paths. When you run a cd command with a relative path (ie one without a leading slash), by default the shell looks in your local folder for matching names. CDPATH will look in the paths you give it for the directory you want to change to.

If you set CDPATH up like this:

$ CDPATH=/:/lib

Then typing in:

$ cd /home
$ cd tmp

will always take you to /tmp no matter where you are.

Watch out, though, as if you don’t put the local (.) folder in the list, then you won’t be able to create any other tmp folder and move to it as you normally would:

$ cd /home
$ mkdir tmp
$ cd tmp
$ pwd
/tmp

Oops!

This is similar to the confusion I felt when I realised the dot folder was not included in my more familiar PATH variable… but you should not do that with the PATH variable, because you can get tricked into running a ‘fake’ command from some downloaded code.

Mine is set with a leading .:

CDPATH=.:/space:/etc:/var/lib:/usr/share:/opt

This is based on some of the contents of my book Learn Bash the Hard Way, available at $6.99.


4) SHLVL

Do you ever find yourself wondering whether typing exit will take you out of your current bash shell and into another ‘parent’ shell, or just close the terminal window entirely?

This variable tracks how deeply nested you are in the bash shell. If you create a fresh terminal you should see that it’s set to 1:

$ echo $SHLVL
1

Then, if you trigger another shell process, the number increments:

$ bash
$ echo $SHLVL
2

This can be very useful in scripts where you’re not sure whether you should exit or not, or keeping track of where you are in a nest of scripts.

5) LINENO

Also useful for introspection and debugging is the LINENO variable, which reports the line number you are at in the current session or script:

$ bash
$ echo $LINENO
1
$ echo $LINENO
2

This is most often used in debugging scripts. By inserting lines like: echo DEBUG:$LINENO you can quickly determine where in the script you are (or are not) getting to.

6) REPLY

If, like me, you routinely write code like this:

$ read input
echo do something with $input

then it may come as a surprise that you don’t need to bother with creating a variable at all:

$ read
echo do something with $REPLY

does exactly the same thing.

7) TMOUT

If you’re worried about staying on production servers for too long for security purposes, or worried that you’ll absent-mindedly run something harmful on the wrong terminal, then setting this variable can act as a protective factor.

If nothing is typed in for the number of seconds this is set to, then the shell will exit.

So this is an alternative to running sleep 1 && exit:

$ TMOUT=1



The Missing Readline Primer

Readline is one of those technologies that is so commonly used that many users don’t realise it’s there.

I went looking for a good primer on it so I could understand it better, but failed to find one. This is an attempt to write a primer that may help users get to grips with it, based on what I’ve managed to glean as I’ve tried to research and experiment with it over the years.

Bash Without Readline

First you’re going to see what bash looks like without readline.

In your ‘normal’ bash shell, hit the TAB key twice. You should see something like this:

    Display all 2335 possibilities? (y or n)

That’s because bash normally has an ‘autocomplete’ function that allows you to see what commands are available to you if you tap tab twice.

Hit n to get out of that autocomplete.

Another useful function that’s commonly used is that if you hit the up arrow key a few times, then the previously-run commands should be brought back to the command line.

Now type:

$ bash --noediting

The --noediting flag starts up bash without the readline library enabled.

If you hit TAB twice now you will see something different: the shell no longer ‘sees’ your tab and just sends a tab directly to the screen, moving your cursor along. Autocomplete has gone.

Autocomplete is just one of the things that the readline library gives you in the terminal. You might want to try hitting the up or down arrows as you did above to see that that no longer works as well.

Hit return to get a fresh command line, and exit your non-readline-enabled bash shell:

$ exit

Other Shortcuts

There are a great many shortcuts like autocomplete available to you if readline is enabled. I’ll quickly outline four of the most commonly-used of these before explaining how you can find out more. First, run a simple command:

$ echo 'some command'

There should not be many surprises there. Now if you hit the ‘up’ arrow, you will see you can get the last command back on your line. If you like, you can re-run the command, but there are other things you can do with readline before you hit return.

If you hold down the ctrl key and then hit a at the same time your cursor will return to the start of the line. Another way of representing this ‘multi-key’ way of inputting is to write it like this: \C-a. This is one conventional way to represent this kind of input. The \C represents the control key, and the -a represents that the a key is depressed at the same time.

Now if you hit \C-e (ctrl and e) then your cursor has moved to the end of the line. I use these two dozens of times a day.

Another frequently useful one is \C-l, which clears the screen, but leaves your command line intact.

The last one I’ll show you allows you to search your history to find matching commands while you type. Hit \C-r, and then type ec. You should see the echo command you just ran, like this:

    (reverse-i-search)`ec': echo 'some command'

Then do it again, but keep hitting \C-r over and over. You should see all the commands that have `ec` in them that you’ve input before (if you’ve only got one echo command in your history then you will only see one). As you see them you are placed at that point in your history and you can move up and down from there or just hit return to re-run if you want.

There are many more shortcuts that you can use that readline gives you. Next I’ll show you how to view these.


This is based on some of the contents of my book Learn Bash the Hard Way, available at $6.99.


Using `bind` to Show Readline Shortcuts

If you type:

$ bind -p

You will see a list of bindings that readline is capable of. There’s a lot of them!

Have a read through if you’re interested, but don’t worry about understanding them all yet.

If you type:

$ bind -p | grep C-a

you’ll pick out the ‘beginning-of-line’ binding you used before, and see the \C-a notation I showed you before.

As an exercise at this point, you might want to look for the \C-e and \C-r bindings we used previously.

If you want to look through the entirety of the bind -p output, then you will want to know that \M refers to the Meta key (which you might also know as the Alt key), and \e refers to the Esc key on your keyboard. The ‘escape’ key bindings are different in that you don’t hit it and another key at the same time, rather you hit it, and then hit another key afterwards. So, for example, typing the Esc key, and then the ? key also tries to auto-complete the command you are typing. This is documented as:

    "\e?": possible-completions

in the bind -p output.

Readline and Terminal Options

If you’ve looked over the possibilities that readline offers you, you might have seen the \C-r binding we looked at earlier:

    "\C-r": reverse-search-history

You might also have seen that there is another binding that allows you to search forward through your history too:

    "\C-s": forward-search-history

What often happens to me is that I hit \C-r over and over again, and then go too fast through the history and fly past the command I was looking for. In these cases I might try to hit \C-s to search forward and get to the one I missed.

Watch out though! Hitting \C-s to search forward through the history might well not work for you.

Why is this, if the binding is there and readline is switched on?

It’s because something picked up the \C-s before it got to the readline library: the terminal settings.

The terminal program you are running in may have standard settings that do other things on hitting some of these shortcuts before readline gets to see it.

If you type:

$ stty -e

you should get output similar to this:

speed 9600 baud; 47 rows; 202 columns;
lflags: icanon isig iexten echo echoe -echok echoke -echonl echoctl -echoprt -altwerase -noflsh -tostop -flusho pendin -nokerninfo -extproc
iflags: -istrip icrnl -inlcr -igncr ixon -ixoff ixany imaxbel -iutf8 -ignbrk brkint -inpck -ignpar -parmrk
oflags: opost onlcr -oxtabs -onocr -onlret
cflags: cread cs8 -parenb -parodd hupcl -clocal -cstopb -crtscts -dsrflow -dtrflow -mdmbuf
discard dsusp   eof     eol     eol2    erase   intr    kill    lnext
^O      ^Y      ^D      <undef> <undef> ^?      ^C      ^U      ^V
min     quit    reprint start   status  stop    susp    time    werase
1       ^\      ^R      ^Q      ^T      ^S      ^Z      0       ^W

You can see on the last four lines (discard dsusp [...]) there is a table of key bindings that your terminal will pick up before readline sees them. The ^ character (known as the ‘caret’) here represents the ctrl key that we previously represented with a \C.

If you think this is confusing I won’t disagree. Unfortunately in the history of Unix and Linux documenters did not stick to one way of describing these key combinations.

If you encounter a problem where the terminal options seem to catch a shortcut key binding before it gets to readline, then you can use the stty program to unset that binding. In this case, we want to unset the ‘stop’ binding.

If you are in the same situation, type:

$ stty stop undef

Now, if you re-run stty -e, the last two lines might look like this:

[...]
min     quit    reprint start   status  stop    susp    time    werase
1       ^\      ^R      ^Q      ^T      <undef> ^Z      0       ^W

where the stop entry now has <undef> underneath it.

Strangely, for me C-r is also bound to ‘reprint’ above (^R).

But (on my terminals at least) that gets to readline without issue as I search up the history. Why this is the case I haven’t been able to figure out. I suspect that reprint is ignored by modern terminals that don’t need to ‘reprint’ the current line.

While we are looking at this table:

discard dsusp   eof     eol     eol2    erase   intr    kill    lnext
^O      ^Y      ^D      <undef> <undef> ^?      ^C      ^U      ^V
min     quit    reprint start   status  stop    susp    time    werase
1       ^\      ^R      ^Q      ^T      <undef> ^Z      0       ^W

it’s worth noting a few other key bindings that are used regularly.

First, one you may well already be familiar with is \C-c, which interrupts a program, terminating it:

$ sleep 99
[[Hit \C-c]]
^C
$

Similarly, \C-z suspends a program, allowing you to ‘foreground’ it again and continue with the fg builtin.

$ sleep 10
[[ Hit \C-z]]
^Z
[1]+  Stopped                 sleep 10
$ fg
sleep 10

\C-d sends an ‘end of file’ character. It’s often used to indicate to a program that input is over. If you type it on a bash shell, the bash shell you are in will close.

Finally, \C-w deletes the word before the cursor.

These are the most commonly-used shortcuts that are picked up by the terminal before they get to the readline library.

