
Filters

Paul Dunne

Issue #65, September 1999

This article is about filtering, a very powerful facility available to every Linux user, but one which migrants from other operating systems may find new and unusual.

At its most basic level, a filter is a program that accepts input, transforms it and outputs the transformed data. The idea of the filter is closely associated with several ideas that are part of the UNIX operating system: standard input and output, input/output redirection and pipes.

Standard input and output refer to default locations from which a program will take input and to which it will write output. The standard input (STDIN) for a program running interactively at the command line is the keyboard; the standard output (STDOUT) is the terminal screen.

With input/output redirection, a program can take input or send output using a location other than standard input or output—a file, for example. Redirection of STDIN is accomplished using the < symbol, redirection of STDOUT by >. For example,

ls > list

redirects the output of the ls command, which would normally go to the screen, into a file called list. Similarly,

cat < list

redirects the input for cat, which in the absence of a file name would be expected from the keyboard, to come from the file list, so we output the contents of that file to the screen.

Pipes are a means of connecting programs together through I/O redirection. The symbol for pipe is |. For example,

ls | less

is a common way of comfortably viewing the output from a directory listing where there are more files than will fit on the screen.

Simple programs provided as standard with your Linux system become far more powerful when used as filters for one another. I'll also show how simple programs of your own can be built to meet custom filtering needs.

One program I don't look at in this article is Perl. Perl is a programming language in its own right, and filters are language-independent.

grep

The program grep takes its name from the ed command g/re/p, “globally search for a regular expression and print”. (See “Take Command: grep” by Jan Rooijackers, March 1999.) The principle of grep is quite simple: search the input for a pattern, then output the lines that match. For example,

grep 'Linus Torvalds' *

searches all files in the current directory for Linus' name.

Various command-line switches may be used to modify grep's behaviour. For example, if we aren't sure about case, we can write

grep -y 'linus torvalds' *

The -y switch tells grep to match without considering case. In the historical implementation, any upper-case letters in the pattern would still match only upper-case. GNU grep does not reproduce that quirk: it treats -y as an obsolete synonym for -i, which ignores case throughout.

With just this bit of information about grep, it is easy to construct a practical application. For example, you could store name and address details in a file to create a searchable address book.
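
As a minimal sketch (the file name addressbook and the script name lookup are my own choices, not fixed conventions), a one-line shell script does the job:

#!/bin/sh
# lookup: case-insensitive search of the address book
grep -i "$1" "$HOME/addressbook"

Running lookup torvalds then prints every matching entry, whatever its case.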

Extended Grep

Sometimes, basic grep won't do. For instance, suppose we want to find all occurrences of a text string which could possibly be a reference to Linus. Clearly, searching for 'Linus Torvalds' is not enough—that won't find just Linus or Torvalds. We need some way of saying, “This or this or this”. Here is where egrep (extended grep) comes in. This handy program modifies standard grep to provide just such a conditional syntax by using the | character to denote “or”.

egrep 'Linus Torvalds|L\. Torvalds|Mr\. Torvalds' *

will now find most ways of naming the inventor of Linux. Note the backslash to “escape” the period. Since it is a special character in regular expressions, we must tell egrep not to interpret it as a “magic” character.

tr

tr is perhaps the epitome of filters. (See “Take Command: A Little Devil Called tr” by Hans de Vreught, September, 1998.) Short for translate, tr basically does what its full name suggests: it changes a given character or set of characters to another character or set of characters. This is done by mapping input characters to output characters. An example will make this clear:

tr A-Z a-z

changes upper-case letters to lower-case. A-Z is shorthand for “all the letters from A to Z”.
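
To see it in action, pipe some text through it:

echo 'Linux Is Not UNIX' | tr A-Z a-z

prints “linux is not unix”.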

sort

Sorting is a very basic computer operation. It is commonly used on text, to get lists in alphabetical order or to sort a numbered list. Linux has a powerful filter for sorting called, logically enough, sort.
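
For example,

sort filename

prints the lines of the file in alphabetical order. The -n switch sorts numerically instead:

sort -n filename

Without it, sort compares character by character, so 10 would sort before 9.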

head and tail

These two very simple filters have a surprising variety of uses. As their names suggest, head shows the beginning of a file and tail shows the end. By default, each shows ten lines, and tail in particular has a number of other useful options. (See the man pages.)
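
For example,

head -5 filename

shows only the first five lines of a file, while

tail -f /var/log/messages

uses tail's -f (“follow”) option to keep printing new lines as they are appended, which is handy for watching a log file grow.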

Programmable Filters

Sometimes we need to do something a bit more complex than the relatively simple command lines of the above examples. For this, we need something I'll call a “programmable filter”, that is, a filter with a scripting language that allows us to specify complex operations.

sed

sed, the stream editor, is a filter typically used to operate on lines of text as an alternative to using an interactive editor. (See “Take Command: Good Ol' sed” by Hans de Vreught, April 1999.) There are times when firing up vi or Emacs and making the change, whether manually or using vi/ex commands, is not appropriate. For example, what if you have to make the same changes to fifty files? What if you need to change a string, but are not sure exactly in which files it occurs?

As is common in the UNIX world, where tools are often duplicated in different ways, sed can do most things grep does. Here is a simple grep in sed:

sed -n '/Linus Torvalds/p' filename

All this does is read the named file (or standard input, if no file is given) and print only those lines containing the string “Linus Torvalds”.

The default with sed is to pass standard input to standard output unchanged. To make it do anything useful, you must give it instructions. In our first example, we searched for the string by enclosing it in forward slashes (//) and told sed to print any line with that string in it with the p command. The -n option ensured that no other lines would be printed. Remember, the default behaviour is to print everything.

If this were all sed could do, we would be better off sticking with grep. However, sed's forte is as a stream editor, changing text files according to the rules you supply. Let's take a simple example.

sed 's/Torvuls/Torvalds/g' filename

This uses the sed “substitute” command (s) and applies it globally (the g flag). It looks for every occurrence of “Torvuls” and changes it to “Torvalds”. Without the g flag at the end, it would change only the first occurrence of “Torvuls” on each line.
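
This also answers the fifty-file question posed above. sed writes to standard output rather than changing files in place, so a small shell loop (a sketch; the *.txt pattern is arbitrary) applies the same edit to each file in turn:

for f in *.txt
do
    sed 's/Torvuls/Torvalds/g' "$f" > "$f.new" && mv "$f.new" "$f"
done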

sed '/^From /,/^$/d' filename

This searches the file for the word “From” at the beginning of a line, followed by a space, and deletes everything from the line containing that pattern up to and including the first blank line, which is represented by ^$, i.e., a beginning of line (^) followed immediately by an end of line ($). In plain English, it strips the header from a Usenet posting you have saved in a file.

Double-spacing a text file takes just one command:

sed G filename > file.doublespaced

According to our manual page, all this does is “append the contents of the hold space to the current text buffer”. That is, for each line, we output the contents of a buffer sed uses to store text. Since we haven't put anything in there, it is empty. However, in sed, appending this buffer adds a new line, regardless of whether there is anything in the buffer. So, the effect is to add an extra new line to each line, thus double-spacing the output.
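
You can watch this happen with a couple of lines of input:

printf 'one\ntwo\n' | sed G

prints “one” and “two”, each followed by a blank line.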

AWK

Another very useful filter is the AWK programming language, named after the initials of its authors, Aho, Weinberger and Kernighan. (See “The AWK Tools” by Lou Iacona, May 1999.) Despite the weird name, it is an everyday tool.

To start with, let's look at yet another way to do a grep:

awk '/Linus Torvalds/'

Like grep and sed, AWK can search for text patterns. As with sed, each pattern can be associated with an action. If no action is supplied, as in the above example, the default is to print each line the pattern matches. Conversely, if no pattern is supplied, the action is applied to every input line. An AWK script for centering lines in a file is shown in Listing 1.

Listing 1.
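
That listing is in the FTP archive mentioned at the end of this article; as a rough sketch of the idea (assuming 80-column output, and not necessarily identical to the published listing), a centering script pads each line with enough leading spaces to push it to the middle:

awk '{ pad = int((80 - length($0)) / 2)
       if (pad < 0) pad = 0
       printf "%" pad "s%s\n", "", $0 }' filename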

AWK's strength is in its ability to treat data as tabular, that is, arranged in rows and columns. Each input line is automatically split into fields. The default field separator is “white space”, i.e., blanks and tabs, but it can be changed to any character you want.
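
For example, /etc/passwd separates its fields with colons; the -F option tells AWK this, after which $1 refers to the first field, the login name:

awk -F: '{ print $1 }' /etc/passwd

Many UNIX utilities produce this sort of tabular output. In our next section, we'll see how it can be sent as input to AWK using a shell construction we haven't seen yet.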

Pipes: When One Filter Isn't Enough

The basic principle of the pipe (|) is that it allows us to connect the standard output of one program with the standard input of another. (See “Introduction to Named Pipes” by Andy Vaught, September 1997.) A moment's thought should make the usefulness of this when combined with filters quite obvious: we can build complex “programs”, on the command line or in a shell script, simply by stringing filters together.

The filter wc (word count) puts its output in four columns by default: lines, words, characters and file name. Instead of specifying the -c switch to count only characters, give this command:

wc lj.filters | awk ' { print $3 } '

This takes the output of wc:

258    1558    8921 lj.filters

and filters it to print only the third column, the character count, to the screen:

8921

If you want to print the whole input line, use $0 instead of $3.

Another handy pipe filters the output of ls -a so that only hidden files (those whose names begin with a dot) are shown:

ls -a | grep '^[.]'

Of course, pipes greatly increase the power of programmable filters such as sed and awk.

Data stored in simple ASCII tables can be manipulated by AWK. As a simple example, consider the weights and measures converter shown in Listing 2. We have a simple text file of conversions:

From    To      Rate
---     ---     ----
kg      lb      2.20
lb      kg      0.4536
st      lb      14
lb      st      0.07
kg      st      0.15
st      kg      6.35
in      cm      2.54
cm      in      0.394

To execute the script, give the command:

weightconv 100 kg lb

The result returned is:

220

Listing 2.
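
Again, the published listing is in the FTP archive; its heart can be sketched in a few lines of shell and AWK (the table file name conversions is my own choice here):

#!/bin/sh
# weightconv: look up a conversion rate and multiply
# usage: weightconv amount from-unit to-unit
awk -v amount="$1" -v from="$2" -v to="$3" '
    $1 == from && $2 == to { print amount * $3 }
' conversions

The header lines of the table never match a real unit pair, so they are ignored without any special handling.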

Power Filters

The classic example of “filtered pipelines” is from the book The UNIX Programming Environment:

cat $* | tr -sc A-Za-z '\012' |
sort |
uniq -c |
sort -n |
tail

First, we concatenate all the input into one stream using cat. Next, we put each word on a separate line using tr: the -s squeezes repeats, and -c says to use the complement of the pattern given, i.e., anything that's not A-Za-z. Together, they replace every run of non-letter characters with a single new line, which has the effect of putting each word on a line by itself. The first sort brings identical words together, since uniq only collapses adjacent duplicates; uniq then strips them out and, with the -c argument, prefixes each word with a count of how many times it appeared. We then sort numerically on that count (-n), which gives us a list of words ordered by frequency. Finally, tail prints only the last ten lines of the output. We now have a simple word frequency counter: for any text input, it will output a list of the ten most frequently used words.
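
The $* means the pipeline expects to live in a shell script, where it stands for all the command-line arguments. Save it in an executable file (the name freq is my choice, not the book's) and it will take any number of files:

freq chapter1 chapter2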

Conclusion

The combination of filters and pipes is very powerful, because it allows you to break down tasks and then pick the best tool for each task. Many jobs that would otherwise have to be handled in a programming language can be done under Linux by stringing together a few simple filters on the command line. Even when a programming language must be used for a particularly complicated filter, you still save a lot of development effort by doing as much as possible using existing tools.

I hope this article has given you some idea of this power. Working with your Linux box should be both easier and more productive using filters and pipes.

All listings referred to in this article are available by anonymous download in the file ftp.linuxjournal.com/pub/lj/listings/issue65/2479.tgz.

Paul Dunne (paul@dunne.ie.eu.org) is an Irish writer and consultant who specializes in Linux. The only deadline he has ever met was the one for his very first article. His home page is at http://www.cix.co.uk/~dunnp/
