Setting up a cron job is a simpler job than you might think.
This month, I thought I'd take another sidetrack. (You knew that entrepreneurs all have ADD, right?) So, it should be no surprise that to me, the fastest way from point A to point B is, um, what were we talking about?
Reader Peter Anderson sent in a code snippet that offers up a considerably shorter way to convert a really big byte count into kilobytes, megabytes and gigabytes than the one I shared in my December 2006 column.
His question: “Why so much extra code?”
His snippet of code to do this takes advantage of the built-in math capabilities of the Bash shell:
value=$1
((kilo=value/1024))
((mega=kilo/1024))
((giga=mega/1024))
echo $value bytes = $kilo Kb, $mega Mb and $giga Gb
Peter, you're right. This is a succinct way of solving this problem, and it's clear that a shell function to convert, say, bytes into megabytes easily can be produced as a one-liner. Thanks!
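To make that concrete, here's what such a one-liner might look like. The function name bytes2mb is my invention, not Peter's; it's just a minimal sketch using the same $(( )) integer math as his snippet:

```shell
# bytes2mb: hypothetical one-line function converting a byte count
# to (whole) megabytes with the shell's built-in integer arithmetic.
bytes2mb() { echo "$(( $1 / 1048576 ))"; }

bytes2mb 3145728   # prints 3
```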
As I've said in the past, I don't always write the most concise code in the world, but my goal with this column is to write maintainable code and to get that prototype out the door and be ready to go to the next thing as fast as possible. That practice isn't always compatible with the quest for elegance and perfection in the coding world, to say the least!
On an admin mailing list, I bumped into an interesting question that makes for a perfect second part to this column—a simple script that's really just a one-line invocation, but because it involves the cron facility, becomes worth our time.
The question: “I need to run a cron job that looks in a certain directory at the top of every hour and deletes any file that is more than one hour old.”
Generally, this is a job for the powerful find command, and at first glance, it can be solved simply by using an hourly cron invocation of the correct find command.
For neophyte admins, however, there are two huge steps involved that can be overwhelming: figuring out how to add a new cron job and figuring out the correct predicates for find to accomplish what they seek.
Let's start with find. A good place to learn more about find, of course, is the man page (man find), wherein you'll see there are three timestamps that find can examine: ctime is the last changed time, mtime is the last modified time and atime is the last accessed time. None of them, however, is the creation time, so if a file was created 90 minutes ago but touched or changed eight minutes ago, all three will report eight minutes, not 90. That's probably not a huge problem, but it's worth realizing as a very typical compromise required to get this admin script working properly.
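You can inspect all three timestamps on any file yourself. This sketch assumes GNU stat (the Linux version); the BSD stat found on other systems takes different format options:

```shell
# Create a scratch file and print its access, modify and change times.
tmp=$(mktemp)
stat --printf 'atime: %x\nmtime: %y\nctime: %z\n' "$tmp"
rm -f "$tmp"
```

On a brand-new file, all three times are (unsurprisingly) identical.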
For the sake of simplicity, I'll actually change this example to deleting files that haven't been changed in the last 60 minutes, not worrying about how much earlier they might have been created. For this task, I need ctime.
find has a baffling syntax of +x, x and -x for specifying time values, which read as "more than x", "exactly x" and "less than x", respectively. If we use the sequence -ctime -60, we'll get exactly the opposite of what we want; we'll get files that have been changed in the last 60 minutes.
Or is that what we are specifying? The unit for -ctime is actually days, not minutes, so -60 matches files that have been changed in the last 60 days, which is not what we want at all!
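A quick experiment in a scratch directory makes the difference visible (the directory and filename here are throwaways of my own):

```shell
# Make a file that was changed just seconds ago.
tmp=$(mktemp -d)
touch "$tmp/sample"

# Day-based test: matches, since the file changed within the last 60 *days*.
find "$tmp" -type f -ctime -60

# Minute-based test: also matches, but now the window is 60 *minutes*,
# which is the granularity an hourly cleanup actually needs.
find "$tmp" -type f -cmin -60

rm -rf "$tmp"
```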
To specify minutes, we want to use cmin rather than ctime (I told you find was confusing). Here's how that might look:
find . -cmin +60
The above also matches directories, however, so another predicate we'll want to add is one that constrains the results only to regular files:

find . -cmin +60 -type f

(-type d would match only directories, and so forth).
But, that's not exactly right either, because we probably want to ensure that we only ever go one level deep instead of spending a lot of time traversing a complex file tree. This is done with the little-used -maxdepth parameter, which is described as "True if the depth of the current file into the tree is less than or equal to n." (Note that GNU find wants this global option to appear before the other predicates.) Now, let's put this all together:

find . -maxdepth 1 -type f -cmin +60
See how that all fits together?
Now, the last part of this requirement is actually to delete the matching file or files, and I have to admit that this gives me some cause for anxiety, because if you make even the slightest mistake with the find command, you can end up deleting tons of files you didn't want removed—not good. So, rather than just use -delete, I suggest you use -print, and for a day or so, let it run and have cron automatically e-mail the resulting report to you.
Speaking of which, the way you get to the data file that defines which jobs you want run, and when, is the crontab command. Log in as the desired user (probably root in this case), then type:

crontab -e
You'll now be editing a file with comments (lines starting with #) and lines composed of five space-separated values followed by an sh command, like this:
* * * * * /home/taylor/every-minute.sh
This is rather brutal on the system. It invokes this script every single minute of every day—probably overkill for just about any process, but it illustrates the basic format of crontab entries.
The fields are, in order, minute, hour, day of month, month and day of week. To have our job run once an hour, we can simply set the minute field to a specific value. For example:
10 * * * * /home/taylor/every-hour.sh
Every hour, at ten minutes after the hour, the script is run. That works.
Now, to stitch it all together, the best bet is to drop the find command into a very short shell script and invoke the script with cron, rather than having the command itself in the crontab file. Why? Because it gives you lots of flexibility and makes it very easy to expand or modify the script at any time.
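A minimal sketch of such a wrapper follows; the script name, the watched directory and the ten-past-the-hour schedule are all assumptions of mine, not requirements:

```shell
#!/bin/sh
# cleanup-hour.sh: hypothetical wrapper script that cron invokes once
# an hour, keeping the crontab entry itself down to a single line.
WATCHDIR=${1:-/tmp/incoming}    # directory to police; an assumption

# Still using -print rather than -delete while building confidence;
# cron mails this output to the job's owner, which doubles as a report.
[ -d "$WATCHDIR" ] && find "$WATCHDIR" -maxdepth 1 -type f -cmin +60 -print
```

The matching crontab entry would then read something like 10 * * * * /home/taylor/cleanup-hour.sh, and any future changes happen in the script, not the crontab.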
Put everything in this column together and you should be able to really start exploiting some of the recurring job capabilities of your Linux box. I am a big fan of cron and have many, many jobs running on a nightly basis on my servers. It's well worth learning more about, as is the find command.
Now, what were we talking about earlier?