I was studying and awk came up.
Spent about an hour on it and I see some useful commands that extend past what "cut" can do. But really, once you're into printf() format statements, is anyone using awk scripts for this?
Or is everyone just using their familiar scripting language. I’d reach for Python for the problems being presented as useful for awk.
I use awk all the time, nothing too fancy, but when you need to pull out elements of text it’s usually way easier than using cut.
awk '{ print $3 }' will pull the third field based on your FS variable (field separator, default is whitespace).
awk '{ print $NF }' gets you the last field, and awk '{ print $(NF-1) }' gets you one field from the last, and so on.
Basic usage but so fast and easy for so many everyday command line things.
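For example, with some throwaway input just to show the behavior:

$ echo "alpha beta gamma delta" | awk '{ print $3 }'
gamma
$ echo "alpha beta gamma delta" | awk '{ print $NF }'
delta
$ echo "alpha beta gamma delta" | awk '{ print $(NF-1) }'
gamma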
You can also add to the output. I use it frequently to pull a list of files, etc., from another file, and then do something like generate another script from that output. This is a weak example, but one I can think of off the top of my head. Not firing up my work laptop to search for better examples until after the holidays. LOL.
awk '{print "ls -l " $1}'
And then send that to a file that I can then execute.
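End to end, that looks something like this (the file names are made up for illustration):

awk '{print "ls -l " $1}' filelist.txt > check.sh
sh check.sh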
awk will always have a soft spot for me, but I can see why not many take the time to learn it. It tends to be needed right there at the border of problem complexity where you are probably better using a full-fledged scripting tool.
But learning awk is great for that “now you’re thinking in pipes” ah-hah moment.
All the time. Not always by choice!
A lot of my work involves writing scripts for systems I do not control, using as light a touch as is realistically possible. I know for a fact Python is NOT installed on many of my targets, and it doesn’t make sense to push out a whole Python environment of my own for something as trivial as string manipulation.
awk is super powerful, but IMHO not powerful enough to justify its complexity, relative to other languages. If you have the freedom to use Python, then I suggest using that for anything advanced. Python skills will serve you better in a wider variety of use cases.
awk predates perl as well as python by a pretty large margin (1978); it's useful, of course, for processing things in a pipeline, but as it became obsolete as a general-purpose scripting language, users have had less and less of a reason to learn its syntax in detail – so nowadays it shows up in one-liners where it could be replaced by a tiny bit of cut.

I had worked through a good bit of the O'Reilly 'sed & awk' book – the first programming book I got, after being enticed by shell scripting in general. Once I learned a bit of Python, & got better at vim scripting, though, I started using it less and less; today I barely remember its syntax.
I use awk constantly at work. Very useful in my opinion, and really powerful if you dig into it.
Yes, for things too complex to do in sed but not complex enough to need a “normal” programming language like python.
That’s normally perl for me.
I use awk on the daily. It has a wider and more consistent install base than perl.

Nearly every day. There was a time when I'd reach for Ruby, but in the end, the stability, ubiquity, and portability of the traditional Unix tools - among which awk is counted - turned out to be more useful. I mainly underuse its power, though; it serves as a column aggregator or re-arranger, for the most part.
I use awk all the time. A very common and probably the simplest reason I use it is its ability to handle variable column locations for data.
If you know you always want the last field, you can do something like
awk '{print $NF}'
but usually I'm using it for performing more advanced operations all in one go without having to pipe something three times.
Sure, you can do grep, cut, grep, printf, but you can instead do the pattern matching, the field extraction, the formatting, whatever you need, all in one place.
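As a sketch of what that can look like (the pattern and field numbers here are invented):

# match, extract, and format in one awk call instead of grep | cut | printf
awk '/ERROR/ { printf "%-19s %s\n", $1, $5 }' app.log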
It's also got a bunch of more advanced uses, of course, since it's its own language. One of my favorite advanced one-liners recognizes when it's about to print a duplicate line anywhere in your output and prevents it. In cases where you don't want to sort your output but you also want to remove duplicates, it's extremely helpful and quick, rather than running post-processing on the output some other way.
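That one-liner is presumably the classic array-membership trick, something like:

# print each line only the first time it appears, preserving order
awk '!seen[$0]++' input.txt

seen[$0]++ returns the old count (0 the first time a line shows up), so the negated expression is true only for lines that haven't been printed yet.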
All that said, the main reason I use it is that I know it and it's fast; there's nothing you can do in awk that you can't do in Python or whatever else you're more comfortable with. The best tool for the job is the one that gets it done quickly and accurately. Unless your environment is limited and prevents the installation of tools you're more familiar with, there's no real reason to use this over Python.
I used to use the command line, Bash, Awk, Sed, Cut, Grep, and Find (often piped to one another) quite often. I can recall that the few times I used Awk was usually for collating lines from logs or CSV files.
But then I switched to using Emacs as my editor, and it gathers together the functionality of all of those tools into one, nice, neat little bundle of APIs that you can easily program in the Emacs Lisp programming language, either as code or by recording keystrokes as a “macro.”
Now I hardly use shell pipelines at all anymore. Mostly I run a process, buffer its output, and edit it interactively. I first edit by hand, then record a macro once I know what I want to do, then apply the macro to every line of the buffer. After that, I might save the buffer to a file, or maybe stream it to another process, recapturing its output. This technique is much more interactive, with the ability to undo mistakes, and so it is easier to manipulate data than with Awk and shell pipelines.
This is fascinating to me. Do you have any links or suggestions for this workflow to learn more?
I am glad you asked, because I actually wrote a series of blog posts on the topic of how Emacs replaced my old Tmux+Bash CLI-based workflow. The link there is to the introductory article; in the "contents" section there are links to each of the 4 articles in the series. The "Shell Basics" article (titled "Emacs as a Shell") might be of particular interest to you.
If you have any specific questions, or if you have recommendations for something you think you would like to learn from one of my blog posts, please let me know. I would like to write a few more entries in this blog series.
Yes! Awk is great, I use it all the time for text processing problems that are beyond the scope of normal filters but aren't worth writing a whole program for. It's pretty versatile, and you can split expressions up and chain them together when they get too complicated. Try piping the output into sh sometime. It can be messy though, and my awk programs tend to be write-only.

I use it multiple times a day. I only know basic usage, but it's super useful as part of an awk/grep/sort/uniq pipeline, basically just extracting a field to work on.
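A typical pipeline of that shape might look like this (the log file name and field position are made up):

# most frequent client addresses among 404s in a hypothetical access log
grep ' 404 ' access.log | awk '{ print $1 }' | sort | uniq -c | sort -rn | head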
Grep is fiiiiine.
sed is okay but a little nasty; when your sed script is longer than one search-replace command you gotta ask yourself what you're really doing (yes, sed is a full-featured Turing-complete programming language, if you go far enough into the man page).
When I see awk in any stackoverflow recipe, I just say 'fuck it' and rewrite the whole thing in Python. Python is included in the minimal system image in Debian, the same as awk, but is way less esoteric, and you can do python -c 'import os, sys; commands' for a one-liner console script.

And if you want to talk about portability, try writing scripts for the Android 4.4 ash shell. There's no [ ] command. You use switch/case to compare strings.
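Which looks roughly like this (a minimal sketch; the variable is hypothetical):

# compare strings with case instead of [ ]
case "$mode" in
  start) echo "starting" ;;
  stop)  echo "stopping" ;;
  *)     echo "unknown mode: $mode" ;;
esac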
Have you tried ripgrep?
No, and I don’t think I will learn another tool for something that I can already do using grep/sed/find commands, which I know by heart.
That’s fair
Anyone interested in awk, make sure to check out the just-published second edition of the awk book by the original authors. Kernighan's writing is a joy to read.
I think it’s pretty niche but is a great tool for parsing / converting data into a format that is more easily digested by another program.
Think, for example, of a report from an '80s system that spits out many tab-separated values in a different format based on some code. These tables are all separated by two blank lines and their order is randomized. To top that off, you need to pipe it all to a different program that only accepts a specific format.
You could do it in Python with separate parse, process, and stringify steps, but if you know awk you can do all those steps at the same time with less code.
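A rough sketch of that shape, assuming invented record codes in column 1 of a tab-separated report:

awk -F'\t' '
  NF == 0     { next }              # skip the blank lines between tables
  $1 == "T01" { print $2 "," $3 }   # code T01: emit name,value
  $1 == "T02" { print $3 "," $2 }   # code T02: same fields, swapped order
' report.txt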
Sure, in the age of REST the Python approach is better but awk is a very powerful tool for the “I have a specific output but need a specific input” problem.