
At the Forge

Creating Mashups

Reuven M. Lerner

Issue #147, July 2006

It's a crime not to mash up two or more Web services to deliver more than they can deliver separately.

Last month, we started to look at the Google Maps API, which allows us to embed dynamic (and Ajax-enabled) maps into our Web applications. That article demonstrated how easy it is to create such maps, with markers on the screen.

This month, we try something far more ambitious: joining the ranks of those creating mashups. A mashup combines two or more Web APIs in a novel way, often (but not always) with a mapping component, making the underlying information more accessible and informative than it would be on its own.

One of the first mashups I saw was the Chicago crime map. The Chicago Police Department publishes a regular bulletin of crimes that have taken place within the city, along with their approximate locations. Using this map, you can determine how safe your block is, as well as look for patterns in other areas of the city. The mashup took the police department's public data and displayed it on a Google Maps page.

I was living in Chicago at the time it came out, and (of course) used the listing to find out just how safe my neighborhood was. The information had always been available from the police department, but it was only in the context of a mapping application that I was really able to understand and internalize the data. And indeed, this is one of the important lessons mashups have taught us: the synthesis of information and an accessible graphic display can make a great deal of difference to end users.

When mapping software was first made available, there was no official way to use the maps for unofficial purposes. A number of enterprising developers looked at the JavaScript used to create the maps and reverse-engineered APIs for their own use. Google, Yahoo and MapQuest have since released APIs that make it possible for us to create mapping applications using their systems. This has made mashups with maps more popular than ever, with a growing number of Web sites and blogs examining them.

This month, I demonstrate a simple mashup of Google Maps with Amazon's used-book service. The application will be relatively simple. A user will enter an ISBN, and a Google map of the United States will soon be displayed. Markers will be placed on the map indicating several of the locations where used copies of the book are available. Thus, if copies of a book are available in New York City, Chicago and San Francisco, we will see three markers on the map, one in each city. In this way, we'll see how two different Web APIs, from two different companies, can be brought together to create an interesting and useful display for end users.

This month's code examples assume you already have signed up for an Amazon Web services ID, as well as for a Google Maps ID. Information on where to acquire these IDs is available in the on-line Resources for this article.

A Simple Map

Our first challenge is to create a map that contains one graphic marker for each location in a list. We already saw how to do this last month using PHP. This month, we begin by converting the program to ERB, an ASP- or PHP-style templating system that embeds Ruby rather than another language. You can see the file, mashup.rhtml, in Listing 1.

One way to parse ERB files correctly on a server is by running Ruby on Rails, which uses ERB as its default templating mechanism. But for a small mashup like this, using Rails would be overkill. So, I decided to use ERB (Embedded Ruby), the engine behind these HTML-Ruby templates, by itself.

To make this work, I installed eruby in the cgi-bin directory of my server (see Resources). I then told Apache that any file with an .rhtml extension should be parsed with eruby:

AddType application/x-httpd-eruby .rhtml
Action application/x-httpd-eruby /cgi-bin/eruby

After restarting the server, I was able to create HTML-Ruby templates without any problems, so long as they had an .rhtml extension. The file in Listing 1, mashup.rhtml, was a simple attempt at using my HTML-Ruby template to create a map. As with all Google Maps applications, our final output will be a page of HTML, including some JavaScript that invokes functions downloaded from the Google Maps server. Our Ruby code will be outputting JavaScript code, which will then execute in the user's browser.
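Listing 1 isn't reproduced in full here, but the overall shape of such a template is worth keeping in mind. The following is only a sketch, with a placeholder API key and an arbitrary zoom level, built around the version 1 Google Maps API calls (GMap, GPoint and GMarker) used throughout this column:


<html>
  <head>
    <script src="http://maps.google.com/maps?file=api&v=1&key=YOUR_GOOGLE_MAPS_KEY"
            type="text/javascript"></script>
  </head>
  <body>
    <div id="map" style="width: 800px; height: 500px"></div>

    <script type="text/javascript">
      // Create the map and center it near Skokie, Illinois
      var map = new GMap(document.getElementById("map"));
      map.centerAndZoom(new GPoint(-87.740070, 42.037030), 4);

      // The markers generated by the Ruby loop described below would go here
    </script>
  </body>
</html>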

To demonstrate that we can indeed do this for two fixed points, the ERB file defines an array of two longitudes, both within a short distance of my home in Skokie, Illinois:


<% array = [-87.740070, -87.730000] %>

Next, we iterate over the elements of this array, using the each_with_index method to get both the array element and the index within the array that we are currently on:


<% array.each_with_index do |item, index| %>

Now that we have both the longitude and a unique index for it, we can output some JavaScript:


var myMarker<%= index %> = new GMarker(new GPoint(<%= item %>, 42.037030));
map.addOverlay(myMarker<%= index %>);

What happens in the above code isn't hard to understand, but it can be confusing on a first reading. Basically, each iteration of our loop declares a new JavaScript variable. The first iteration creates myMarker0, and the second creates myMarker1. This is possible because we have the index of the current Ruby array element, and because we have made sure not to insert any spaces between myMarker and the Ruby output <%= index %>.

The myMarkerX variable is then defined to be a new instance of GMarker (that is, a marker on the Google map), located at a point defined by the longitude (the item variable) and the latitude (a fixed value, 42.037030).
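After eruby processes the template, the browser therefore receives plain JavaScript along these lines for our two array elements:


var myMarker0 = new GMarker(new GPoint(-87.740070, 42.037030));
map.addOverlay(myMarker0);
var myMarker1 = new GMarker(new GPoint(-87.730000, 42.037030));
map.addOverlay(myMarker1);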

Finally, so that the user can see exactly where all of the points are, we print some text at the bottom of the page. The result is a map with two markers on it, and the location of each marker is listed in text.
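Listing 1's exact markup for that text isn't shown here, but a minimal version of it could be another small loop over the same array; a sketch:


<ul>
<% array.each do |item| %>
    <li>Marker at longitude <%= item %>, latitude 42.037030</li>
<% end %>
</ul>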

Working with Addresses and Cities

This map is a nice start, but far from what we want to accomplish. One of the biggest impediments is that Google Maps expects to get longitude/latitude pairs, while Amazon's Web service, which does return information about third-party vendors, provides us only with city and state information. So, we need a way to translate city and state names into longitude and latitude.

The easiest way to do this is to rely on someone else to translate an address into a longitude/latitude pair for us. Such geocoder services exist on the Internet; some are freely available, and others charge money. One of the best-known free geocoders is at geocoder.us. To use it, we simply request a REST-style URL of the form http://geocoder.us/service/rest?address=ADDRESS, replacing ADDRESS with the address we want to locate. For example, to find my house, we would request http://geocoder.us/service/rest?address=9120+Niles+Center+Road+Skokie+IL.

The geocoder service returns an XML document that looks like this:


<rdf:RDF>
<geo:Point rdf:nodeID="aid77952462">
    <dc:description>9120 Niles Center Rd, Skokie IL 60076</dc:description>
    <geo:long>-87.743874</geo:long>
    <geo:lat>42.046517</geo:lat>
</geo:Point>
</rdf:RDF>
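Pulling those two values out in Ruby takes only a few lines. Here is a minimal sketch, using the net/http and rexml/document libraries and the same XPath style as the listings later in this column; the host and path simply follow the REST URL shown above:


require 'net/http'
require 'rexml/document'

# Ask geocoder.us for the coordinates of a street address
response = Net::HTTP.get_response('geocoder.us',
    '/service/rest?address=9120+Niles+Center+Road+Skokie+IL')

# Parse the RDF/XML response and grab the geo:long and geo:lat elements
xml = REXML::Document.new(response.body)
longitude = xml.root.elements["/rdf:RDF/geo:Point/geo:long"].text
latitude  = xml.root.elements["/rdf:RDF/geo:Point/geo:lat"].text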

Because the longitude and latitude are nicely compartmentalized inside the XML, it's easy to extract them in our program, as the sketch above shows, and then insert them into the JavaScript we generate. However, judging from the geocoder.us documentation, it doesn't appear to handle bare city names (that is, city and state without a street address).

Luckily, at least one free geocoder service handles city names, returning a similarly styled XML document. We submit the name of a city as follows, once again using a REST-style request: http://brainoff.com/geocoder/rest?city=Skokie,IL,US.

We get the following result:


<rdf:RDF>
<geo:Point>
    <geo:long>-87.762660</geo:long>
    <geo:lat>42.034680</geo:lat>
</geo:Point>
</rdf:RDF>

As you can see, the longitude and latitude points we got back from this query are slightly different. If we were looking to create a map for driving directions, this would be of greater importance. But, we already know that we'll be looking at the entire map of the United States for this application, and that being blocks away, or even two miles away, won't make any difference.

We can now update our ERB file, such that it has an array of cities, rather than longitude/latitude pairs, as you can see in Listing 2. We begin the file by importing two Ruby classes that will be needed to handle this additional functionality:


<% require 'net/http' %>
<% require 'rexml/document' %>

Although the map is still centered on the same longitude/latitude location, we now begin at zoom level 13, which is zoomed out far enough to show all of the cities.

We then define an array called cities, containing four of the US cities in which I have lived. Notice that each element of this array is a string containing a city name, a state abbreviation and US (for United States). Also note that when the city name contains a space, we must replace it with a + sign (or %20) so that the Web service request works properly:


<% cities = ["Skokie,IL,US", "Longmeadow,MA,US",
     "Somerville,MA,US", "Old+Westbury,NY,US"] %>

We then iterate through these cities, using each as the argument to our Web service geocoder:


<% geocoder_response =
    Net::HTTP.get_response('brainoff.com', "/geocoder/rest/?city=#{city}") %>

The geocoder Web service returns its results in XML, as we saw earlier. To extract the longitude and latitude from that XML, we use the REXML library that comes with Ruby, retrieving the geo:long and geo:lat elements and then grabbing their textual contents:


<% longitude = xml.root.elements["/rdf:RDF/geo:Point/geo:long"].text %>
<% latitude = xml.root.elements["/rdf:RDF/geo:Point/geo:lat"].text %>
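These lines assume the geocoder's response body has already been parsed into a REXML document named xml, presumably with something along these lines (mirroring the vendor lookup in Listing 3):


<% xml = REXML::Document.new(geocoder_response.body) %>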

Having done the hard work, we now insert the appropriate JavaScript:


    var myMarker<%= index %> = new GMarker(
        new GPoint(<%= longitude %>, <%= latitude %>));
    map.addOverlay(myMarker<%= index %>);

Along the way, we collect city names and locations into an array named final_list. We can then use this to produce a list at the end of the document:


<% final_list.each do |city| %>
<tr>
    <td><%= city['city'] %></td>
    <td><%= city['longitude'] %></td>
    <td><%= city['latitude'] %></td>
</tr>
<% end %>

Sure enough, this produces a page with a Google map showing all of those locations, and with a list at the bottom.
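To see how the excerpts fit together, here is a rough sketch of the per-city loop at the heart of Listing 2; the variable names and exact markup are assumptions based on the fragments above, and the JavaScript lines are emitted inside the page's script block:


<% final_list = [] %>
<% cities.each_with_index do |city, index| %>
    <% geocoder_response =
        Net::HTTP.get_response('brainoff.com', "/geocoder/rest/?city=#{city}") %>
    <% xml = REXML::Document.new(geocoder_response.body) %>
    <% longitude = xml.root.elements["/rdf:RDF/geo:Point/geo:long"].text %>
    <% latitude = xml.root.elements["/rdf:RDF/geo:Point/geo:lat"].text %>

    var myMarker<%= index %> = new GMarker(
        new GPoint(<%= longitude %>, <%= latitude %>));
    map.addOverlay(myMarker<%= index %>);

    <% final_list << {'city' => city,
                      'longitude' => longitude,
                      'latitude' => latitude} %>
<% end %>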

Adding Amazon Information

Although the above is nice to have, the city information is still hard-coded. What we want is to be able to retrieve information about third-party sellers of a particular book. This means we must get an ISBN from the user, ask Amazon for third-party sellers of that book, and then get the city and state in which each of those sellers resides. Our code will remain largely the same, except for the way we define the cities array, which will be far more complicated. You can see the resulting code in Listing 3.

Getting an ISBN from the end user is fairly straightforward. At the top of the file, we import the CGI class:


<% require 'cgi' %>

Now we can retrieve an ISBN that the user entered:


<% cgi = CGI.new %>
<% isbn = cgi['isbn'] %>

We use this ISBN to find all of the third-party sellers with a copy of this book. (Actually, we're going to look at only up to ten of the third-party vendors; Amazon returns only ten items at a time, and we won't complicate our code by looking for additional pages of results.) We take each returned vendor and put it into our vendors array.

So, let's start by getting information about vendors of used copies of our book. We do this by sending Amazon a REST request for our ISBN:


amazon_params = {'Service' => 'AWSECommerceService',
 'Operation' => 'ItemLookup',
 'AWSAccessKeyId' => 'XXX',
 'ItemId' => isbn,
 'ResponseGroup' => 'Medium,OfferFull',
 'MerchantId' => 'All'}.map {|key,value|
 "#{key}=#{value}"}.join("&")

amazon_response = Net::HTTP.get_response('webservices.amazon.com',
                                        '/onca/xml?' <<
                                        amazon_params)

The above is my preferred technique for keeping track of names and values, especially when I'm passing a lot of them—I create a hash, joining the keys and values with = signs, and then the pairs themselves with ampersands (& signs). This gives me a string that I can hand to Amazon.
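One caveat about that technique: it assumes the values are already URL-safe. If a value might contain spaces or other special characters, each one could be run through CGI.escape before joining; a sketch of the same hash with that change:


amazon_params = {'Service' => 'AWSECommerceService',
 'Operation' => 'ItemLookup',
 'AWSAccessKeyId' => 'XXX',
 'ItemId' => isbn,
 'ResponseGroup' => 'Medium,OfferFull',
 'MerchantId' => 'All'}.map {|key,value|
 "#{key}=#{CGI.escape(value.to_s)}"}.join("&")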

The XML response that I get back then contains a lot of information, including details about each offer. That's actually all I care about here; I'm not keeping track of the price of the book (which would be useful, of course), but rather the location of each used copy we can grab. But we can't get that right away; the ItemLookup request gets us only the seller IDs and some basic information about each one. We'll need to grab the seller ID from each offer node, then use that to perform a second Amazon request, obtaining information about the vendor:


xml.root.elements.each("Items/Item/Offers/Offer/Seller/SellerId") do |seller|
  # Now get information about each vendor
  amazon_vendor_params = {'Service' => 'AWSECommerceService',
      'Operation' => 'SellerLookup',
      'AWSAccessKeyId' => 'XXX',
      'SellerId' => seller.text}.map {|key,value|
      "#{key}=#{value}"}.join("&")

  vendor_response = Net::HTTP.get_response('webservices.amazon.com',
                                           '/onca/xml?' <<
                                           amazon_vendor_params)
  vendor_xml = REXML::Document.new(vendor_response.body)
This code sends a request to Amazon, gets an XML body back, and then looks for the City and State elements that a vendor will produce. Unfortunately, there's no fast and easy way to deal with countries outside of the United States, both with geocoding and with Amazon. Amazon's assumption seems to be that Canada is sort of like the United States, which is false. So, we'll always get the city and state and assume that it is in the United States. If our assumption turns out to be wrong, we'll allow ourselves to be corrected by the geocoder.
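The extraction itself isn't shown above, but inside that seller loop it might look something like the following sketch. The //City and //State searches are assumptions; the exact element path depends on the structure of Amazon's SellerLookup response:


  # Pull the vendor's city and state out of the SellerLookup response
  city_element  = vendor_xml.root.elements["//City"]
  state_element = vendor_xml.root.elements["//State"]

  # Assume a US location, as described above, and queue it for geocoding
  if city_element and state_element
    cities << "#{city_element.text},#{state_element.text},US"
  end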

As we grab information about each vendor, we stick the city and state information into the cities array. Now we're going to use that same array, just as we did in mashup2.rhtml, except that the source is no longer a hard-coded list, but one we put together from Amazon's information. We had to make only two changes for things to work: a check that we didn't get nil from the geocoder (indicating there was an error, often because the vendor is in Canada), and a use of gsub to change space characters into + signs in the city name.
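Concretely, those two changes might look roughly like this inside the geocoding loop (again a sketch, not Listing 3 verbatim):


  # Spaces in city names would break the REST URL, so turn them into + signs
  city = city.gsub(' ', '+')

  geocoder_response =
      Net::HTTP.get_response('brainoff.com', "/geocoder/rest/?city=#{city}")
  geocoder_xml = REXML::Document.new(geocoder_response.body)

  # Skip this vendor if the geocoder did not return a point
  point = geocoder_xml.root.elements["/rdf:RDF/geo:Point"]
  next if point.nil?

  longitude = point.elements["geo:long"].text
  latitude  = point.elements["geo:lat"].text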

The results are quite nice to see, even if they're incomplete and a bit on the crude side. By going to a URL such as http://maps.lerner.co.il/mashup3.rhtml?isbn=0812931432, we can see where a number of used copies are located in the United States. This doesn't necessarily reflect the cost of the book, its condition or the shipping charges, but it can be fun and interesting to see where different books have ended up, and which cities tend to have more (or fewer) used books.

Conclusion

Creating mashups, combinations of existing Web services, can be a great deal of fun, and it can make patterns in data easier to see by putting them on a map. It requires a good understanding of the underlying technologies and their quirks, but with a bit of work, you'll find that building such mashups can be rewarding and even entertaining. Moreover, as the Web becomes increasingly interconnected, and as applications continue to blur the distinction between the desktop and the Web, we should expect to see more such mashups, not fewer.

Resources for this article: /article/9013.

Reuven M. Lerner, a longtime Web/database consultant, is currently a PhD student in Learning Sciences at Northwestern University in Evanston, Illinois. He and his wife recently celebrated the birth of their son Amotz David.
