Now we'll zoom back in to the string I/O level and examine the print, printf, and read statements, which give the shell I/O capabilities that are more analogous to those of conventional programming languages.
As we've seen countless times in this book, print simply prints its arguments to standard output. You should use it instead of the echo command, whose functionality differs from system to system.[96] (The Korn shell's built-in version of echo emulates whatever the system's standard version of echo does.) Now we'll explore the print command in greater detail.
[96] Specifically, there is a difference between System V and BSD versions. The latter accepts options similar to those of print, while the former accepts C language-style escape sequences.
print accepts a number of options, as well as several escape sequences that start with a backslash. (You must use a double backslash if you don't surround the string that contains them with quotes; otherwise, the shell itself "steals" a backslash before passing the arguments to print.) These are similar to the escape sequences recognized by echo and the C language; they are listed in Table 7-2.
These sequences exhibit fairly predictable behavior, except for \f. On some displays, it causes a screen clear, while on others it causes a line feed. It ejects the page on most printers. \v is somewhat obsolete; it usually causes a line feed.
Sequence | Character printed |
---|---|
\a | ALERT or CTRL-G |
\b | BACKSPACE or CTRL-H |
\c | Omit final newline and discontinue processing the string |
\E | ESCAPE or CTRL-[ |
\f | FORMFEED or CTRL-L |
\n | newline (not at end of command) or CTRL-J |
\r | ENTER (RETURN) or CTRL-M |
\t | TAB or CTRL-I |
\v | VERTICAL TAB or CTRL-K |
\0n | ASCII character with octal (base-8) value n, where n is 1 to 3 digits. Unlike C, C++, and many other languages, the initial 0 is required. |
\\ | Single backslash |
The \0n sequence is even more device-dependent and can be used for complex I/O, such as cursor control and special graphics characters.
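A portable way to experiment with the \0n notation is printf's %b directive, which (like print) expands these escape sequences, including the octal form with its required leading 0. A quick sketch, assuming a POSIX-style printf:

```shell
# %b expands print-style escapes in its argument.
# Octal 101, 102, 103 are the ASCII codes for A, B, C.
printf '%b\n' '\0101\0102\0103'
```
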
print also accepts a few dash options; we've already seen -n for omitting the final newline. The options are listed in Table 7-3.
Option | Function |
---|---|
-e | Process escape sequences in the arguments (this is the default). |
-f format | Print as if via printf with the given format (see the next section). |
-n | Omit the final newline (same as the \c escape sequence). |
-p | Print on pipe to coroutine; see Chapter 8. |
-r | Raw; ignore the escape sequences listed above. |
-R | Like -r, but furthermore ignore any other options except -n. |
-s | Print to command history file (see Chapter 2). |
-un | Print to file descriptor n. |
Notice that some of these are redundant: print -n is the same as print with \c at the end of a line; print -un ... is equivalent to print ... >&n (though the former is slightly more efficient).
However, print -s is not the same as print ... >> $HISTFILE. The latter command renders the vi and emacs editing modes temporarily inoperable; you must use print -s if you want to print to your history file.
Printing to your history file is useful if you want to edit something that the shell expands when it processes a command line, for example, a complex environment variable such as PATH. If you enter the command print -s PATH=$PATH, hit ENTER, and then press CTRL-P in emacs-mode (or ESC k in vi-mode), you will see something like this:
$ PATH=/bin:/usr/bin:/etc:/usr/ucb:/usr/local/bin:/home/billr/bin
That is, the shell expands the variable (and anything else, like command substitutions, wildcards, etc.) before it writes the line to the history file. Your cursor will be at the end of the line (or at the beginning of the line in vi-mode), and you can edit your PATH without having to type in the whole thing again.
If you need to produce formatted reports, the shell's print command can be combined with formatting attributes for variables to produce output data that lines up reasonably. But you can only do so much with these facilities.
The C language's printf(3) library routine provides powerful formatting facilities for total control of output. It is so useful that many other Unix-derived programming languages, such as awk and perl, support similar or identical facilities. Primarily because the behavior of echo on different Unix systems could not be reconciled, and recognizing printf's utility, the POSIX shell standard mandates a printf shell-level command that provides the same functionality as the printf(3) library routine. This section describes how the printf command works and examines additional capabilities unique to the Korn shell's version of printf.
The printf command can output a simple string just like the print command.
printf "Hello, world\n"
The main difference that you will notice at the outset is that, unlike print, printf does not automatically supply a newline. You must specify it explicitly as \n.
The full syntax of the printf command has two parts:
printf format-string [arguments ...]
The first part is a string that describes the format specifications; this is best supplied as a string constant in quotes. The second part is an argument list, such as a list of strings or variable values, that correspond to the format specifications. (If there are more arguments than format specifications, ksh cycles through the format specifications in the format string, reusing them in order, until done.) A format specification is preceded by a percent sign (%), and the specifier is one of the characters described shortly. Two of the main format specifiers are %s for strings and %d for decimal integers.
The format string combines text to be output literally with specifications describing how to format subsequent arguments on the printf command line. For example:
    $ printf "Hello, %s\n" World
    Hello, World
Because the printf command is built into the Korn shell, you are not limited to constant values; numeric arguments may be arithmetic expressions, which ksh evaluates:
    $ printf "The answer is %d.\n" 12+10+20
    The answer is 42.
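The cycling behavior mentioned above is easy to demonstrate: when there are more arguments than format specifications, the format string is reused from the beginning until the arguments run out. A small sketch (this works in any POSIX printf, not just ksh's):

```shell
# Two specifiers, four arguments: the format is applied twice.
printf '%s is %d\n' width 10 height 20
```

This prints "width is 10" and "height is 20" on separate lines, one pass through the format per pair of arguments.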
The allowed specifiers are shown in Table 7-4.
Specifier | Description |
---|---|
%c | ASCII character (prints first character of corresponding argument) |
%d | Decimal integer |
%i | Decimal integer |
%e | Floating-point format ([-]d.precisione[+-]dd) (see following text for meaning of precision) |
%E | Floating-point format ([-]d.precisionE[+-]dd) |
%f | Floating-point format ([-]ddd.precision) |
%g | %e or %f conversion, whichever is shorter, with trailing zeros removed |
%G | %E or %f conversion, whichever is shorter, with trailing zeros removed |
%o | Unsigned octal value |
%s | String |
%u | Unsigned decimal value |
%x | Unsigned hexadecimal number. Uses a-f for 10 to 15 |
%X | Unsigned hexadecimal number. Uses A-F for 10 to 15 |
%% | Literal % |
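A quick tour of several of these specifiers applied to the same values (a sketch; these particular conversions behave the same in any POSIX printf):

```shell
# The same integer as decimal, octal, and hexadecimal (both cases):
printf '%d %o %x %X\n' 255 255 255 255
# %c prints only the first character of its argument; %s prints it all:
printf '%c/%s\n' hello hello
```

The first command prints "255 377 ff FF"; the second prints "h/hello".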
The printf command can be used to specify the width and alignment of output fields. A format expression can take three optional modifiers following % and preceding the format specifier:
%flags width.precision format-specifier
The width of the output field is a numeric value. When you specify a field width, the contents of the field are right-justified by default. You must specify a flag of "-" to get left-justification. (The rest of the flags are discussed shortly.) Thus, "%-20s" outputs a left-justified string in a field 20 characters wide. If the string is less than 20 characters, the field is padded with whitespace to fill. In the following examples, a | is output to indicate the actual width of the field. The first example right-justifies the text:
printf "|%10s|\n" hello
It produces:
| hello|
The next example left-justifies the text:
printf "|%-10s|\n" hello
It produces:
|hello |
The precision modifier, used for decimal or floating-point values, controls the number of digits that appear in the result. For string values, it controls the maximum number of characters from the string that will be printed.
You can specify both the width and precision dynamically, via values in the printf argument list. You do this by specifying asterisks, instead of literal values.
    $ myvar=42.123456
    $ printf "|%*.*G|\n" 5 6 $myvar
    |42.1235|
In this example, the width is 5, the precision is 6, and the value to print comes from the value of myvar.
The precision is optional. Its exact meaning varies by control letter, as shown in Table 7-5:
Conversion | Precision means |
---|---|
%d, %i, %o, %u, %x, %X | The minimum number of digits to print. When the value has fewer digits, it is padded with leading zeros. The default precision is 1. |
%e, %E | The number of digits to print after the decimal point. The default precision is 6. A precision of 0 inhibits printing of the decimal point. |
%f | The number of digits to the right of the decimal point. |
%g, %G | The maximum number of significant digits. |
%s | The maximum number of characters to print. |
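The three main precision behaviors from Table 7-5 side by side (a sketch using a POSIX-compatible printf):

```shell
# %.5d zero-pads the integer to 5 digits, %.2f rounds to 2 decimal
# places, and %.3s truncates the string to 3 characters:
printf '%.5d|%.2f|%.3s\n' 42 3.14159 hello
```

This prints "00042|3.14|hel".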
Finally, one or more flags may precede the field width and the precision. We've already seen the "-" flag for left-justification. The rest of the flags are shown in Table 7-6.
Character | Description |
---|---|
- | Left-justify the formatted value within the field. |
space | Prefix positive values with a space and negative values with a minus. |
+ | Always prefix numeric values with a sign, even if the value is positive. |
# | Use an alternate form: %o has a preceding 0; %x and %X are prefixed with 0x and 0X, respectively; %e, %E and %f always have a decimal point in the result; and %g and %G do not have trailing zeros removed. |
0 | Pad output with zeros, not spaces. This only happens when the field width is wider than the converted result. In the C language, this flag applies to all output formats, even non-numeric ones. For ksh, it only applies to the numeric formats. |
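The flags from Table 7-6 in action, one per output field (a sketch; these flags are standard printf behavior, not ksh extensions):

```shell
# +: force a sign; space: blank for positive; #: alternate octal/hex
# forms; 0 with a width: zero padding; -: left-justify in the field.
printf '%+d|% d|%#o|%#x|%05d|%-6s|\n' 42 42 8 255 42 hi
```

This prints "+42| 42|010|0xff|00042|hi    |".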
If printf cannot perform a format conversion, it returns a non-zero exit status.
Similar to print, the built-in printf command interprets escape sequences within the format string. However, printf accepts a larger range of escape sequences; they are the same as for the $'...' string. These sequences are listed later in Table 7-9.
Besides the standard specifiers just described, the Korn shell accepts a number of additional specifiers. These provide useful features at the expense of nonportability to other versions of the printf command.
The %b specifier processes escape sequences in the corresponding argument, just as print does:

    $ printf "%s\n" 'hello\nworld'
    hello\nworld
    $ printf "%b\n" 'hello\nworld'
    hello
    world
The %H specifier escapes characters that are special in HTML and XML:

    $ printf "%s\n" "Here are real < and > characters"
    Here are real < and > characters
    $ printf "%H\n" "Here are real < and > characters"
    Here are real &lt; and &gt; characters
Interestingly enough, spaces are turned into &nbsp;, the unbreakable literal HTML and XML space character.
The %n specifier stores the number of characters output so far into the shell variable named by the corresponding argument:

    $ printf "hello, world\n%n" msglen
    hello, world
    $ print $msglen
    13
The %P specifier translates an extended regular expression into the equivalent shell pattern:

    $ printf "%P\n" '(.*\.o|.*\.obj|core)+'
    *+(*\.o|*\.obj|core)*
The %q specifier quotes a string so that it can safely be reread by the shell:

    $ printf "print %q\n" "a string with ' and \" in it"
    print $'a string with \' and " in it'
(The $'...' notation is explained in Section 7.3.3.1, later in this chapter.)
The %R specifier performs the opposite translation, turning a shell pattern into an extended regular expression:

    $ printf "%R\n" '+(*.o|*.c)'
    ^(.*\.o|.*\.c)+$
The %T specifier formats a date value; a date-style format string goes in parentheses between the % and the T:

    $ date
    Wed Jan 30 15:46:01 IST 2002
    $ printf "%(It is now %m/%d/%Y %H:%M:%S)T\n" "$(date)"
    It is now 01/30/2002 15:46:07
Unix systems keep time in "seconds since the Epoch." The Epoch is midnight, January 1, 1970, UTC. If you have a time value in this format, you can use it with the %T conversion specifier by preceding it with a # character, like so:
    $ printf "%(It is now %m/%d/%Y %H:%M:%S)T\n" '#1012398411'
    It is now 01/30/2002 15:46:51
Finally, for the %d format, after the precision you may supply an additional period and a number indicating the output base:
    $ printf '42 is %.3.5d in base 5\n' 42
    42 is 132 in base 5
The other side of the shell's string I/O facilities is the read command, which allows you to read values into shell variables. The basic syntax is:
read var1 var2 ...
There are a few options, which we cover in Section 7.2.3.5, later in this chapter. This statement takes a line from the standard input and breaks it down into words delimited by any of the characters in the value of the variable IFS (see Chapter 4; these are usually a space, a TAB, and newline). The words are assigned to variables var1, var2, etc. For example:
    $ read fred bob
    dave pete
    $ print "$fred"
    dave
    $ print "$bob"
    pete
If there are more words than variables, excess words are assigned to the last variable. If you omit the variables altogether, the entire line of input is assigned to the variable REPLY.
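The splitting and excess-word behavior is easy to see by feeding read a here-document instead of typed input (a sketch; ksh's read behaves the same way in any POSIX shell):

```shell
# Words are split on IFS characters; the extra words all land in
# the last variable.
read first second rest <<'EOF'
alpha beta gamma delta
EOF
echo "first=$first second=$second rest=$rest"
```

This prints "first=alpha second=beta rest=gamma delta".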
You may have identified this as the missing ingredient in the shell programming capabilities we've seen so far. It resembles input statements in conventional languages, like its namesake in Pascal. So why did we wait this long to introduce it?
Actually, read is sort of an escape hatch from traditional shell programming philosophy, which dictates that the most important unit of data to process is a text file, and that Unix utilities such as cut, grep, sort, etc., should be used as building blocks for writing programs.
read, on the other hand, implies line-by-line processing. You could use it to write a shell script that does what a pipeline of utilities would normally do, but such a script would inevitably look like:
    while (read a line) do
        process the line
        print the processed line
    end
This type of script is usually much slower than a pipeline; furthermore, it has the same form as a program someone might write in C (or some similar language) that does the same thing much, much faster. In other words, if you are going to write it in this line-by-line way, there is no point in writing a shell script. (The authors have gone for years without writing a script with read in it.)
Nevertheless, shell scripts with read are useful for certain kinds of tasks. One is when you are reading data from a file small enough so that efficiency isn't a concern (say a few hundred lines or less), and it's really necessary to get bits of input into shell variables.
One task that we have already seen fits this description: Task 5-4, the script that a system administrator could use to set a user's TERM environment variable according to which terminal line he or she is using. The code in Chapter 5 used a case statement to select the correct value for TERM.
This code would presumably reside in /etc/profile, the system-wide initialization file that the Korn shell runs before running a user's .profile. If the terminals on the system change over time -- as surely they must -- then the code would have to be changed. It would be better to store the information in a file and change just the file instead.
Assume we put the information in a file whose format is typical of such Unix "system configuration" files: each line contains a device name, a TAB, and a TERM value. If the file, which we'll call /etc/terms, contained the same data as the case statement in Chapter 5, it would look like this:
    console s531
    tty01   gl35a
    tty03   gl35a
    tty04   gl35a
    tty07   t2000
    tty08   s531
We can use read to get the data from this file, but first we need to know how to test for the end-of-file condition. Simple: read's exit status is 1 (i.e., nonzero) when there is nothing to read. This leads to a clean while loop:
    TERM=vt99       # assume this as a default
    line=$(tty)
    while read dev termtype; do
        if [[ $dev == $line ]]; then
            TERM=$termtype
            export TERM
            print "TERM set to $TERM."
            break
        fi
    done
The while loop reads each line of the input into the variables dev and termtype. In each pass through the loop, the if looks for a match between $dev and the user's tty ($line, obtained by command substitution from the tty command). If a match is found, TERM is set and exported, a message is printed, and the loop exits; otherwise TERM remains at the default setting of vt99.
We're not quite done, though: this code reads from the standard input, not from /etc/terms! We need to know how to redirect input to multiple commands. There are a few ways of doing this.
One way to solve the problem is with a subshell, as we'll see in the next chapter. This involves creating a separate process to do the reading. However, it is usually more efficient to do it in the same process; the Korn shell gives us three ways of doing this.
The first, which we have seen already, is with a function:
    function findterm {
        TERM=vt99       # assume this as a default
        line=$(tty)
        while read dev termtype; do
            if [[ $dev == $line ]]; then
                TERM=$termtype
                export TERM
                print "TERM set to $TERM."
                break
            fi
        done
    }

    findterm < /etc/terms
A function acts like a script in that it has its own set of standard I/O descriptors, which can be redirected in the line of code that calls the function. In other words, you can think of this code as if findterm were a script and you typed findterm < /etc/terms on the command line. The read statement takes input from /etc/terms a line at a time, and the function runs correctly.
The second way is by putting the I/O redirector at the end of the loop, like this:
    TERM=vt99       # assume this as a default
    line=$(tty)
    while read dev termtype; do
        if [[ $dev == $line ]]; then
            TERM=$termtype
            export TERM
            print "TERM set to $TERM."
            break
        fi
    done < /etc/terms
You can use this technique with any flow-control construct, including if...fi, case...esac, for...done, select...done, and until...done. This makes sense because these are all compound statements that the shell treats as single commands for these purposes. This technique works fine -- the read command reads a line at a time -- as long as all of the input is done within the compound statement.
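The redirect-at-the-end technique is easy to verify: the file is opened once and read a line at a time across all iterations. A self-contained sketch (the temporary file and count variable are invented for the demo):

```shell
# One open of the file serves every pass through the loop.
tmpfile=$(mktemp)
printf 'one\ntwo\nthree\n' > "$tmpfile"
count=0
while read -r line; do
    count=$((count + 1))
done < "$tmpfile"
rm -f "$tmpfile"
echo "read $count lines"
```

This prints "read 3 lines"; had the loop reopened the file each time, it would never get past the first line.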
Putting the I/O redirector at the end is particularly important for making loops work correctly. Suppose you place the redirector after the read command, like so:
    while read dev termtype < /etc/terms
    do
        ...
    done
In this case, the shell reopens /etc/terms each time around the loop, reading the first line over and over again. This effectively creates an infinite loop, something you probably don't want.
Occasionally, you may want to redirect I/O to or from an arbitrary group of commands without creating a separate process. To do that, you need to use a construct that we haven't seen yet. If you surround some code with { and },[97] the code will behave like a function that has no name. This is another type of compound statement. In accordance with the equivalent concept in the C language, we'll call this a block of code.[98]
[97] For obscure, historical syntactic reasons, the braces are shell keywords. In practice, this means that the closing } must be preceded by either a newline or a semicolon. Caveat emptor!
[98] LISP programmers may prefer to think of this as an anonymous function or lambda-function.
What good is a block? In this case, it means that the code within the curly braces ({ }) will take standard I/O descriptors just as we described for functions. This construct is also appropriate for the current example because the code needs to be called only once, and the entire script is not really large enough to merit breaking down into functions. Here is how we use a block in the example:
    {
        TERM=vt99       # assume this as a default
        line=$(tty)
        while read dev termtype; do
            if [[ $dev == $line ]]; then
                TERM=$termtype
                export TERM
                print "TERM set to $TERM."
                break
            fi
        done
    } < /etc/terms
To help you understand how this works, think of the curly braces and the code inside them as if they were one command, i.e.:
{ TERM=vt99; line=$(tty); while ... ; } < /etc/terms
Configuration files for system administration tasks like this one are actually fairly common; a prominent example is /etc/hosts, which lists machines that are accessible in a TCP/IP network. We can make /etc/terms more like these standard files by allowing comment lines in the file that start with #, just as in shell scripts. This way /etc/terms can look like this:
    #
    # System Console is a Shande 531s
    console s531
    #
    # Prof. Subramaniam's line has a Givalt GL35a
    tty01   gl35a
    ...
We can handle comment lines in two ways. First, we could modify the while loop so that it ignores lines beginning with #. We would take advantage of the fact that the equality and inequality operators (== and !=) under [[...]] do pattern matching, not just equality testing:
if [[ $dev != \#* && $dev == $line ]]; then ...
The pattern is #*, which matches any string beginning with #. We must precede # with a backslash so that the shell doesn't treat the rest of the line as a comment. Also, remember from Chapter 5 that the && combines the two conditions so that both must be true for the entire condition to be true.
This would certainly work, but the usual way to filter out comment lines is to use a pipeline with grep. We give grep the regular expression ^[^#], which matches anything except lines beginning with #. Then we change the call to the block so that it reads from the output of the pipeline instead of directly from the file.[99]
[99] Unfortunately, using read with input from a pipe is often very inefficient, because of issues in the design of the shell that aren't relevant here.
    grep "^[^#]" /etc/terms | {
        TERM=vt99
        ...
    }
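Here is a self-contained version of this pipeline-into-block pattern; the sample file and variable names are invented for the demo. Note that in most shells the block on the right of a pipe runs in a subshell, so the sketch captures the block's result with command substitution rather than relying on its variables afterward:

```shell
# Build a tiny config file with comment lines mixed in.
tmp=$(mktemp)
printf '%s\n' '# comment' 'console s531' '# another' 'tty01 gl35a' > "$tmp"

# grep drops the comment lines; the block counts what is left.
datalines=$(grep "^[^#]" "$tmp" | {
    n=0
    while read -r dev termtype; do
        n=$((n + 1))
    done
    echo "$n"
})
rm -f "$tmp"
echo "$datalines data lines"
```

This prints "2 data lines", since only the two device lines survive the grep.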
We can also use read to improve our solution to Task 6-3, in which we emulate the multicolumn output of ls. In the solution in the previous chapter, we assumed for simplicity that filenames are limited to 14 characters, and we used 14 as a fixed column width. We'll improve the solution so that it allows any filename length (as in modern Unix versions) and uses the length of the longest filename (plus 2) as the column width.
In order to display the list of files in multicolumn format, we need to read through the output of ls twice. In the first pass, we find the longest filename and use that to set the number of columns as well as their width; the second pass does the actual output. Here is a block of code for the first pass:
    ls "$@" | {
        let width=0
        while read fname; do
            if (( ${#fname} > $width )); then
                let width=${#fname}
            fi
        done
        let "width += 2"
        let numcols="int(${COLUMNS:-80} / $width)"
    }
This code looks a bit like an exercise from a first-semester programming class. The while loop goes through the input looking for files with names that are longer than the longest found so far; if a longer one is found, its length is saved as the new longest length.
After the loop finishes, we add 2 to the width to allow for space between columns. Then we divide the width of the terminal by the column width to get the number of columns. As the shell does division in floating-point, the result is passed to the int function to produce an integer final result. Recall from Chapter 3 that the built-in variable COLUMNS often contains the display width; the construct ${COLUMNS:-80} gives a default of 80 if this variable is not set.
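The same longest-name scan can be sketched with POSIX arithmetic, where integer division stands in for ksh's floating-point division plus int(); the filenames here are invented examples:

```shell
# Find the longest name, then derive the column width and count.
width=0
for fname in chapter1.txt notes.md a.out; do
    if [ "${#fname}" -gt "$width" ]; then
        width=${#fname}     # new longest length found
    fi
done
width=$((width + 2))                    # space between columns
numcols=$(( ${COLUMNS:-80} / width ))   # integer division truncates
echo "width=$width"
```

With these names, the longest is chapter1.txt (12 characters), so width becomes 14.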
The results of the block are the variables width and numcols. These are global variables, so they are accessible by the rest of the code inside our (eventual) script. In particular, we need them in our second pass through the filenames. The code for this resembles the code to our original solution; all we need to do is replace the fixed column width and number of columns with the variables:
    set -A filenames $(ls "$@")
    typeset -L$width fname
    let count=0
    while (( $count < ${#filenames[*]} )); do
        fname=${filenames[$count]}
        print "$fname \c"
        let count++
        if [[ $((count % numcols)) == 0 ]]; then
            print    # output a newline
        fi
    done
    if (( count % numcols != 0 )); then
        print
    fi
The entire script consists of both pieces of code. As yet another "exercise for the reader," consider how you might rearrange the code to only invoke the ls command once. (Hint: use at least one arithmetic for loop.)
The other type of task to which read is suited is prompting a user for input. Think about it: we have hardly seen any such scripts so far in this book. In fact, the only ones were the modified solutions to Task 5-4, which involved select.
As you've probably figured out, read can be used to get user input into shell variables. We can use print to prompt the user, like this:
    print -n 'terminal? '
    read TERM
    print "TERM is $TERM"
Here is what this looks like when it runs:
    terminal? vt99
    TERM is vt99
However, in order that prompts don't get lost down a pipeline, shell convention dictates that prompts should go to standard error, not standard output. (Recall that select prompts to standard error.) We could just use file descriptor 2 with the output redirector we saw earlier in this chapter:
    print -n 'terminal? ' >&2
    read TERM
    print "TERM is $TERM"
The shell provides a better way of doing the same thing: if you follow the first variable name in a read statement with a question mark (?) and a string, the shell uses that string as a prompt to standard error. In other words:
    read TERM?'terminal? '
    print "TERM is $TERM"
does the same as the above. The shell's way is better for the following reasons. First, this looks a bit nicer; second, the shell knows not to generate the prompt if the input is redirected to come from a file; and finally, this scheme allows you to use vi- or emacs-mode on your input line.
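For shells without the TERM?'prompt ' form, the printf-to-standard-error version is the portable fallback. A self-contained sketch, with a here-document standing in for the user's typed input:

```shell
printf 'terminal? ' >&2    # prompt goes to standard error
read -r termtype <<'EOF'
vt99
EOF
echo "TERM would be set to $termtype"
```

This prints "TERM would be set to vt99"; the prompt itself goes to standard error, so it is not captured if the output is piped.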
We'll flesh out this simple example by showing how Task 5-4 would be done if select didn't exist. Compare this with the code in Chapter 6:
    set -A termnames gl35a t2000 s531 vt99
    print 'Select your terminal type:'
    while true; do
        {
            print '1) gl35a'
            print '2) t2000'
            print '3) s531'
            print '4) vt99'
        } >&2
        read REPLY?'terminal? '
        if (( REPLY >= 1 && REPLY <= 4 )); then
            TERM=${termnames[REPLY-1]}
            print "TERM is $TERM"
            export TERM
            break
        fi
    done
The while loop is necessary so that the code repeats if the user makes an invalid choice.
This is roughly twice as many lines of code as the first solution in Chapter 5 -- but exactly as many as the later, more user-friendly version! This shows that select saves you code only if you don't mind using the same strings to display your menu choices as you use inside your script.
However, select has other advantages, including the ability to construct multicolumn menus if there are many choices, and better handling of empty user input.
read takes a set of options that are similar to those for print. Table 7-7 lists them.
Option | Function |
---|---|
-A | Read words into indexed array, starting at index 0. Unsets all elements of the array first. |
-d delimiter | Read up to character delimiter, instead of the default character, which is a newline. |
-n count | Read at most count bytes.[100] |
-p | Read from pipe to coroutine; see Chapter 8. |
-r | Raw; do not use \ as line continuation character. |
-s | Save input in command history file; see Chapter 1. |
-t nseconds | Wait up to nseconds seconds for the input to come in. If nseconds elapses, return a failure exit status. |
-un | Read from file descriptor n. |
[100] This option was added in ksh93e.
Having to type read word[0] word[1] word[2] ... to read words into an array is painful. It is also error-prone; if the user types more words than you've provided array variables, the remaining words are all assigned to the last array variable. The -A option gets around this, reading each word one at a time into the corresponding entries in the named array.
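For comparison, bash spells this option -a rather than ksh's -A; a sketch assuming a bash-compatible shell:

```shell
# Each whitespace-separated word lands in its own array element,
# starting at index 0.
read -r -a words <<'EOF'
alpha beta gamma
EOF
echo "${#words[@]} words; first is ${words[0]}"
```

This prints "3 words; first is alpha".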
The -d option lets you read input up to a delimiter character other than newline. In practical terms, you will probably never need to do this, but the shell makes it possible in case you ever do.
Similarly, the -n option frees you from the default line-oriented way that read consumes input; it allows you to read a fixed number of bytes. This is very useful if you're processing legacy fixed-width data, although this is not very common on Unix systems.
read lets you input lines that are longer than the width of your display device by providing backslash (\) as a continuation character, just as in shell scripts. The -r option to read overrides this, in case your script reads from a file that may contain lines that happen to end in backslashes.
read -r also preserves any other escape sequences the input might contain. For example, if the file fred contains this line:
A line with a\n escape sequence
read -r fredline will include the backslash in the variable fredline, whereas without the -r, read will "eat" the backslash. As a result:
    $ read -r fredline < fred
    $ print "$fredline"
    A line with a
     escape sequence
    $
(Here, print interpreted the \n escape sequence and turned it into a newline.) However:
    $ read fredline < fred
    $ print "$fredline"
    A line with an escape sequence
    $
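The fred example can be reproduced as a self-contained sketch with a temporary file (read's backslash handling works this way in any POSIX shell, not just ksh):

```shell
tmp=$(mktemp)
printf '%s\n' 'A line with a\n escape sequence' > "$tmp"
read -r rawline < "$tmp"    # -r keeps the backslash intact
read    cooked  < "$tmp"    # without -r, read eats the backslash
rm -f "$tmp"
echo "raw:    $rawline"
echo "cooked: $cooked"
```

With -r the variable still contains the literal \n; without it, the backslash disappears and "a\n" collapses into "an".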
The -s option helps you if you are writing a highly interactive script and you want to provide the same command-history capability as the shell itself has. For example, say you are writing a new version of mail as a shell script. Your basic command loop might look like this:
    while read -s cmd; do
        # process the command
    done
Using read -s allows the user to retrieve previous commands to your program with the emacs-mode CTRL-P command or the vi-mode ESC k command. The kshdb debugger in Chapter 9 uses this feature.
The -t option is quite useful. It allows you to recover in case your user has "gone out to lunch," but your script has better things to do than just wait around for input. You tell it how many seconds you're willing to wait before deciding that the user just doesn't care anymore:
    print -n "OK, Mr. $prisoner, enter your name, rank and serial number: "

    # wait two hours, no more
    if read -t $((60 * 60 * 2)) name rank serial
    then
        # process information
        ...
    else
        # prisoner is being silent
        print 'The silent treatment, eh? Just you wait.'
        call_evil_colonel -p $prisoner
        ...
    fi
If the user enters data before the timeout expires, read returns 0 (success), and the then part of the if is processed. On the other hand, when the user enters nothing, the timeout expires and read returns 1 (failure), executing the else part of the statement.
Although not an option to the read command, the TMOUT variable can affect it. Just as for select, if TMOUT is set to a number representing some number of seconds, the read command times out if nothing is entered within that time, and returns a failure exit status. The -t option overrides the setting of TMOUT.
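The failure exit status is the key to both the timeout and end-of-file cases, and the branch logic can be exercised without actually waiting: /dev/null delivers immediate end-of-file, which drives read down the same failure path as an expired timeout (a sketch; the variable names are invented):

```shell
# read returns nonzero on timeout or EOF; /dev/null gives instant EOF.
if read -t 2 answer < /dev/null; then
    result="got: $answer"
else
    result="no input"
fi
echo "$result"
```

This prints "no input", taking the else branch just as a real two-second timeout would.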
Finally, the -un option is useful in scripts that read from more than one file at the same time.
Task 7-4 is an example of this that also uses the n< I/O redirector that we saw earlier in this chapter.
Task 7-4
Write a script that prints the contents of two files side by side.
We'll format the output so the two output columns are fixed at 30 characters wide. Here is the code:
    typeset -L30 f1 f2
    while read -u3 f1 && read -u4 f2; do
        print "$f1$f2"
    done 3<$1 4<$2
read -u3 reads from file descriptor 3, and 3<$1 directs the file given as first argument to be input on that file descriptor; the same is true for the second argument and file descriptor 4. Remember that file descriptors 0, 1, and 2 are already used for standard I/O. We use file descriptors 3 and 4 for our two input files; it's best to start from 3 and work upwards to the shell's limit, which is 9.
The typeset command and the quotes around the argument to print ensure that the output columns are 30 characters wide and that trailing whitespace in the lines from the file is preserved. The while loop reads one line from each file until at least one of them runs out of input.
Assume the file dave contains the following:
    DAVE
    Height: 177.8 cm.
    Weight: 79.5 kg.
    Hair: brown
    Eyes: brown
And the file shirley contains this:
    SHIRLEY
    Height: 167.6 cm.
    Weight: 65.5 kg.
    Hair: blonde
    Eyes: blue
If the script is called twocols, then twocols dave shirley produces this output:
    DAVE                          SHIRLEY
    Height: 177.8 cm.             Height: 167.6 cm.
    Weight: 79.5 kg.              Weight: 65.5 kg.
    Hair: brown                   Hair: blonde
    Eyes: brown                   Eyes: blue
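The same technique can be sketched in a bash-flavored form, where printf '%-30s' stands in for ksh's typeset -L30 and read -u (which ksh also supports) reads from the numbered descriptors; the function name and sample data are invented for the demo:

```shell
# twocols: print two files side by side in 30-character columns.
twocols() {
    while read -r -u 3 f1 && read -r -u 4 f2; do
        printf '%-30s%s\n' "$f1" "$f2"   # left column padded to 30
    done 3< "$1" 4< "$2"                 # open one file per descriptor
}

left=$(mktemp); right=$(mktemp)
printf 'DAVE\nHair: brown\n'     > "$left"
printf 'SHIRLEY\nHair: blonde\n' > "$right"
twocols "$left" "$right"
rm -f "$left" "$right"
```

As in the ksh version, the loop stops as soon as either file runs out of lines.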
Copyright © 2003 O'Reilly & Associates. All rights reserved.