LJ Archive

Work the Shell

How Do People Find You on Google?

Dave Taylor

Issue #153, January 2007

Getting back to Apache log analysis, and ending with a cliffhanger.

I admit it. I got sidetracked last month talking about how you can use a simple shell script function to convert big, scary numbers into more readable values. Sidetracked, because we were in the middle of looking at how shell scripts can help you dig through your Apache Web server logs and extract useful and interesting information.

This time, I show how you can ascertain the most common search terms that people are using to find your site—with a few invocations of grep and maybe a few lines of awk for good measure.

Understanding Google

For this to work, your log has to be saving referrer information, which Apache does by default. You'll know if you peek at your access_log and see lines like this:

 - - [11/Oct/2006:04:04:19 -0600] "GET
↪/blog/images/rdf.png HTTP/1.0" 304 -
↪"http://www.askdavetaylor.com/
↪date_math_in_linux_shell_script.html"
↪"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"

It's a bit hard to read, but this is a log entry for someone requesting the file /blog/images/rdf.png, and the referrer, the page that produced the request, is also shown as being date_math_in_linux_shell_script.html from my askdavetaylor.com site.

If we look at a log file entry for an HTML hit, we see a more interesting referrer:

 - - [11/Oct/2006:07:32:32 -0600]
 ↪"GET /wicked/wicked-cool-shell-script-library.shtml
 ↪HTTP/1.1" 200 15656 "http://www.google.com/
 ↪search?q=Shell+Scripting&start=10"
 ↪"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0;
 ↪.NET CLR 1.0.3705)"

Let me unwrap that just a bit too. The request here is for wicked-cool-shell-script-library.shtml on my site (intuitive.com), based on a Google search (the referrer is google.com/search). Dig into the arguments on the Google referrer entry, and you can see that the search was “Shell+Scripting”. Recall that + represents a space in a URL, so the search was actually for “Shell Scripting”.

(Bonus tip: because we're at start=10, this means they're on the second page of results. So, we know the match that led this person to my site is somewhere between #11 and #20.)
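Referrer parameters like these are easy to pick apart at the command line. Here's a quick sketch; the URL below is a stand-in I've typed by hand, not an entry pulled from the log:

```shell
# Hypothetical Google referrer, hand-made for illustration:
ref='http://www.google.com/search?q=Shell+Scripting&start=10'

# Keep everything after the ?, then put each &-separated
# parameter on its own line:
echo "$ref" | cut -d\? -f2 | tr '&' '\n'
# prints:
# q=Shell+Scripting
# start=10
```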

Okay, so now the question is, can we extract only these searches and somehow disassemble them so we can identify the search terms quickly? Of course we can!

Extracting Google Searches

For now, let's focus only on Google's search results, but it's easy to extend this to other search engines too. They all use the same basic URL structure, fortunately:

$ grep 'google.com/search' access_log | head -1
 - - [11/Oct/2006:04:08:05 -0600]
 ↪"GET /coolweb/chap14.html HTTP/1.1" 200 31508
 ↪"http://www.google.com/search?q=%22important+Style+Sheet+
 ↪Attribute.%22&hl=en&lr=" "Mozilla/4.0 (compatible;
 ↪MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322;
 ↪.NET CLR 2.0.50727; InfoPath.1)"

Okay, that was simple. Now, extracting only the referrer field is easily done with a quick call to awk:

$ grep 'google.com/search' access_log | head -1 | awk '{print $11}'
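To see why field 11 is the right one, count the whitespace-separated fields in a typical entry. Here's a quick check against a made-up, single-line log entry (the hostname and URLs are placeholders, not values from my log):

```shell
# Made-up log entry; counting whitespace-separated fields,
# the quoted referrer lands in field 11:
line='host - - [11/Oct/2006:04:08:05 -0600] "GET /x.html HTTP/1.1" 200 31508 "http://www.google.com/search?q=test&hl=en" "Mozilla/4.0"'
echo "$line" | awk '{print $11}'
# prints "http://www.google.com/search?q=test&hl=en" (quotes included)
```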

Okay, closer. The next step is to chop off the value at the ? and then at the & afterward. There are a bunch of ways to do this, but I use only two calls to cut, because, well, it's easy:

$ grep 'google.com/search' access_log | head -1 | awk
 ↪'{print $11}' | cut -d\? -f2 | cut -d\& -f1
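If you want to watch the two cut calls work step by step, try them against a sample referrer (again, a hand-made URL for illustration):

```shell
ref='http://www.google.com/search?q=Shell+Scripting&hl=en&start=10'

# First cut: keep everything after the ? (the query string):
echo "$ref" | cut -d\? -f2
# prints q=Shell+Scripting&hl=en&start=10

# Second cut: keep everything before the first &:
echo "$ref" | cut -d\? -f2 | cut -d\& -f1
# prints q=Shell+Scripting
```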

Nice! Now, we need to strip out the q= artifact from the HTML form used on Google itself, replace all occurrences of + with a space, and (a little bonus task) convert %22 into a double quote so the search makes sense. This can be done with sed:

$ grep 'google.com/search' access_log | head -1 |
 ↪awk '{print $11}' | cut -d\? -f2 | cut
 ↪-d\& -f1 | sed 's/+/ /g;s/%22/"/g;s/q=//'
"important Style Sheet Attribute."

Let me unwrap this a bit so it's easier to see what's going on:

grep 'google.com/search' access_log | \
  head -1 | \
  awk '{print $11}' | \
  cut -d\? -f2 | cut -d\& -f1 | \
  sed 's/+/ /g;s/%22/"/g;s/q=//'
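You can also test the sed stage all by itself with a hand-made query string (illustrative, not pulled from the log):

```shell
# All three substitutions in one pass: + becomes a space,
# %22 becomes a double quote, and the leading q= is stripped:
echo 'q=%22Shell+Scripting%22' | sed 's/+/ /g;s/%22/"/g;s/q=//'
# prints "Shell Scripting" (double quotes restored)
```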

Obviously, the head -1 is only there as we debug it, so when we pour this into an actual shell script, we'll lose that line. Further, let's create a variable for the name of the access log to simplify things too:



grep 'google.com/search' $ACCESSLOG | \
  awk '{print $11}' | \
  cut -d\? -f2 | cut -d\& -f1 | \
  sed 's/+/ /g;s/%22/"/g;s/q=//'

We're getting there....

Sorting and Collating

One of my favorite sequences in Linux is sort | uniq -c | sort -rn, and it's going to come into play again here. What does it do? It sorts the input alphabetically, then collapses duplicate lines, prefixing each with a count of how many times it appeared. Finally, it sorts that result from most occurrences to least. In other words, it takes raw input and turns it into a numerically sorted summary of the most common entries.
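As a small aside, here's that same three-command sequence counting the most common words in a file. The sample file is created on the spot purely for illustration:

```shell
# Create a small sample file to play with:
printf 'The cat and the dog and the bird\n' > sample.txt

# Break the file into one word per line, lowercase it,
# then apply the sort | uniq -c | sort -rn summary:
tr -cs '[:alpha:]' '\n' < sample.txt | \
  tr '[:upper:]' '[:lower:]' | \
  sort | uniq -c | sort -rn
# "the" (3 occurrences) tops the list, followed by "and" (2)
```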

This sequence can be used for lots and lots of tasks, including figuring out the dozen most common words in a document, the least frequently used filename in a filesystem, the largest file in a directory and much more. For our task, however, we simply want to pore through the log files and figure out the most frequent searches that led people to our Web site:



grep 'google.com/search' $ACCESSLOG | \
  awk '{print $11}' | \
  cut -d\? -f2 | cut -d\& -f1 | \
  sed 's/+/ /g;s/%22/"/g;s/q=//' | \
  sort | \
  uniq -c | \
  sort -rn | \
  head -5

And the result:

$ sh google-searches.sh
 154 hl=en
  42 sourceid=navclient
  13 client=safari
   9 client=firefox-a
   3 sourceid=navclient-ff

Hmmm...looks like there's a problem with this script, doesn't it?

I'm going to wrap up here, keeping you in suspense until next month. Why don't you take a stab at trying to figure out what might be wrong and how it can be fixed, and next month we'll return to this script and figure out how to make it do what we want, not what we're saying it should do!

Dave Taylor is a 26-year veteran of UNIX, creator of The Elm Mail System, and most recently author of both the best-selling Wicked Cool Shell Scripts and Teach Yourself Unix in 24 Hours, among his 16 technical books. His main Web site is at www.intuitive.com.
