
At the Forge

20 Years of Web Development

Reuven M. Lerner

Issue #233, September 2013

Reuven waxes nostalgic and reflects on the first two decades of Web development.

A few months ago, I was walking with my seven-year-old son, and I mentioned that when I was young, we had computers in the house. I pointed out that while I was growing up, we didn't have an Internet connection, and that the Web hadn't even been invented yet. “Really?” he asked, confused. “So what did you do with the computer?”

The Web has permeated our lives to such a large degree that it's easy to understand why my son is unable to comprehend what it would mean to live in a world without it. The notion of a computer without Internet access today is quite strange; I have given a number of lectures in military and otherwise-restricted areas where Internet access is strictly forbidden, and let me tell you, there's something apt about the way Internet access is described as an addictive drug. If I can't get my fix of e-mail, blogs, newspapers or Hacker News, I feel cut off from the rest of the world.

This is, according to my count, my 201st At the Forge column. It is also, somewhat coincidentally, a bit more than 20 years since I started writing Web applications. So, I'd like to take the opportunity to look at where Web technologies have come from, and where they're going—and to wax a bit nostalgic for the days when the Web was still the great unknown, something that we knew would one day be a part of everyone's lives, but that we couldn't express beyond those basic thoughts.

In the Beginning

In the beginning, of course, the Web was static. It took a while for people to realize that just because a Web browser was requesting a document didn't mean the server actually needed to send the contents of an existing document. Instead, the server could lie to the browser, creating a document on the fly by executing a program. This sort of thing, which Web developers do every day, remains one of the funniest parts of the Web from my perspective—that browsers ask for the contents of a URL, as if they're asking for something static, with no way of knowing whether the server is creating something new for them or sending the same data everyone else received.

Back in those early days, it wasn't entirely obvious just what technologies people were going to use to implement their Web applications. Those of you who have seen, or used, programs in a “cgi-bin” directory might remember CGI, the API that standardized the way Web servers could invoke external programs. You might not remember that the cgi-bin directory was where the compiled C programs would go. Yes, the demonstration CGI programs that came with the NCSA HTTP server were written in C and shell. It took a little while before people discovered that higher-level languages—mostly Perl and Tcl in those days—could do the job just as easily, while allowing programmers to be far more productive.
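To give a flavor of what this looked like, here is a minimal sketch of a CGI program, written in Python rather than the C or shell of those early demos. The interface really is this simple: the server hands the program its inputs in environment variables, and the program writes HTTP headers, a blank line and then the document itself to standard output.

    #!/usr/bin/env python3
    # A minimal CGI program in the spirit of the early cgi-bin days.
    # The Web server passes request data in environment variables and
    # relays whatever we print (headers, a blank line, then the body)
    # back to the browser.
    import datetime
    import os

    query = os.environ.get("QUERY_STRING", "")  # e.g., "name=Reuven"

    print("Content-Type: text/html")
    print()  # a blank line separates the headers from the body
    print("<html><body>")
    print(f"<p>This page was generated at {datetime.datetime.now()}.</p>")
    print(f"<p>Query string: {query or '(none)'}</p>")
    print("</body></html>")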

And, although Tim Berners-Lee called it the World Wide Web, usage was far from universal in those early days. When we put MIT's student newspaper on the Web in 1993, almost no one, even among our readership, knew what a Web browser was. We had to take out advertisements in our own newspaper, telling people how to install the Mosaic browser, so they could read the newspaper on-line. And of course, the look and feel of the newspaper in those days was extremely primitive. A paper we submitted to the first-ever World Wide Web conference described how we had made it possible for people to search through our newspaper's archives—an amazing breakthrough in those days!

Although it quickly became clear that the Web was becoming dynamic, no one really thought about it as serious software development. I remember expressing surprise, in 1995, when I was given a business card that said “Web application developer” on it. I snickered a bit to myself, because it seemed clear to me that serious applications were desktop software—not the toys that we were creating.

But it was in that year, 1995, that I began to understand where the Web was headed. It was then that I learned about relational databases and began to understand just how powerful Web applications could be when connected to a powerful, flexible system for storing and retrieving data. It also was in that year that the Java and JavaScript languages were unveiled, tantalizing Web developers with the idea that we could have things happen inside the browser beyond simple text and images.

It was then that we began to imagine that the browser could be the beginning of a new platform for applications. Of course, when Marc Andreessen, then best known as a coauthor of Mosaic and a cofounder of Netscape, said things like that, we all laughed, wondering how the Web could turn into a platform for application development. But of course, that vision has turned into a reality. Desktop software hasn't disappeared entirely, but increasingly large categories of software are being pushed onto the Web, accessed through our browsers.

Libraries and Frameworks

It didn't take long before Web development really began to take off. Suddenly, the big thing was to have “a Web page” or even an entire site. The growth of the Web happened at about the same time that the term “open source” was coined, giving publicity and ammunition to the techies who wanted to rely on software such as Linux and Perl to run their systems. (It took a long time before some of my clients were convinced that running a Web server on Linux was a safe choice, or that it could handle the dozens of users who would visit their site in a given day.)

I don't think that it's a coincidence that open source and the Web grew together. Open-source operating systems and languages—in those days, Perl, Python and PHP—grew in popularity, both because they were free of charge and because they offered enormous numbers of standardized, debugged and highly useful libraries. CGI.pm became an enormously popular Perl module, because it made so much of Web development easy.

On the database side, things also were starting to improve: MySQL, which had been free of charge (but commercially licensed), was released under the GPL, and it was soon ubiquitous on the Web servers that you could rent by the month or year. PostgreSQL also emerged on the scene, overcoming the 8-kilobyte row limit that had hobbled it for so long. Suddenly, you could get a full-fledged relational database sitting behind your application, using libraries that were fast, efficient and full of features.

Web technologies themselves were becoming more sophisticated as well. The standards for HTTP, CSS and HTML were formalized and then grew to handle such things as caching. The W3C, formed to create and encourage standards for Web-related technologies, produced XML, a document format that remains pervasive in the computer world.

It was, on the one hand, easier to create sophisticated Web applications than ever before. But on the other hand, it was no longer possible to learn everything you needed to know about Web development in a few days or weeks. Web development was becoming a specialized set of skills, with people focusing on particular subsets. No longer did “Webmaster” refer to someone who knew everything about all parts of Web technologies. You didn't yet hear about “front-end” or even “full-stack” Web developers, but you did have people who were specializing in particular languages, servers, databases and even (to some degree) in Web design, helping ensure that sites were both aesthetically pleasing and user-friendly.

All of this meant that Web development was becoming serious, and the individual libraries, such as CGI.pm, were no longer enough. Web developers started to use frameworks instead of libraries. I once heard someone distinguish libraries from frameworks by saying that a library is something you add to your code, whereas a framework is something to which you add your code. Given the number of things a Web developer increasingly was having to worry about—user sessions, database connections, cookie persistence and caching—it shouldn't come as a surprise that the frameworks started to become increasingly serious, complex and sophisticated. Many (but not all) of these frameworks were released as open-source software, allowing developers to create complete, stem-to-stern Web applications using software that was inexpensive, robust and scalable, as well as written by the same people who used it.
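A tiny sketch, using Python's standard library purely for illustration, makes that distinction concrete: you call a library, whereas a framework calls you.

    # Library: your code is in charge and calls into the library.
    import json

    payload = json.dumps({"user": "alice"})

    # Framework: the framework is in charge and calls into your code.
    # Here, the standard library's wsgiref stands in for a full Web
    # framework; it invokes app() once for every incoming request.
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"Hello from inside the framework's request loop\n"]

    # make_server("", 8000, app).serve_forever()  # uncomment to run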

Even Java got into the act. Although it took years for Java to be released under an open-source license, server-side Java has been around for some time and has gone through many incarnations. From the initial rollouts of servlets and JSP, to JavaBeans and Enterprise JavaBeans, to the dozens of standards and open-source projects that have grown up in the Java world, the language that was originally designed to run applets in our browsers suddenly was a powerhouse to contend with.

And yet, Java, which was introduced as a simpler alternative to C++, without all the overhead and problems associated with that language, frustrated many with its large numbers of configuration files, protocols and Java-centric jargon and libraries. Out of this frustration (to some degree, at least) came the first of a new generation of Web frameworks, Ruby on Rails (written in Ruby) and Django (written in Python). Both Rails and Django promised developers something that hadn't yet existed, namely the ability to write sophisticated Web applications without the enormous overhead and bloat that Java-based systems required. Not surprisingly, both were instant hits, and they managed to hook a new generation of Web developers, as well as pull many (myself included) away from other, older, less-sophisticated frameworks that we had been using until then.
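To see why these frameworks felt so refreshing, consider what a complete, if trivial, Django view looks like. (This is only a sketch; it assumes the scaffolding of a surrounding Django project, but note the absence of XML deployment descriptors.)

    # views.py: a complete Django view. A single pattern in the
    # project's urls.py maps a URL to this function; no further
    # configuration is required.
    from django.http import HttpResponse

    def hello(request):
        return HttpResponse("Hello, world")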

Now I do most of my work in Ruby on Rails, and I continue to enjoy working with this framework every day. But even the biggest enthusiasts among us need to remember that Rails is nearly ten years old at this point. It hasn't reached the level of bloat and bureaucracy that we see in the Java world, but at the same time, we see that many other language communities have produced their own versions of Rails, using many of the ideas that were pioneered there. Moreover, as Rails has matured, it has needed to consider legacy software and offer developers ways to integrate Rails apps into an existing infrastructure. As a result, we've seen that the small, scrappy framework is now getting larger, a bit more bloated and certainly harder to learn than was the case when I first learned about it back in 2005.

Components, APIs and Browsers

Where does that leave us today? First, the news is all good, in that there never has been a better time to be a Web developer. The opportunities to learn, explore, extend, implement and use Web-based technologies already are overwhelming and seemingly limitless, and every day appears to bring new frameworks, libraries and techniques that make Web applications even more interactive and compelling than already was the case.

And yet, for all of the opportunities, Web development never has been more difficult to get into, simply because of the many technologies you need to master in order to create a Web application. Web development, as Philip Greenspun said so many years ago, requires that you be a generalist, understanding a little about a large number of different technologies. A modern Web developer needs to understand HTTP, HTML and CSS, at least one server-side language, SQL and JavaScript. It also helps a lot to know how to work with Linux or other server operating systems.

The reality also is that even the most experienced Web developers are starting to specialize. True, I think of myself as what's increasingly called a “full-stack Web developer”, understanding everything from server configuration to database optimization to JavaScript programming, but the march toward specialization continues, and I've seen that a growing number of jobs are aimed at people with expertise on the back end or the front end, without much overlap between the two. Configuration and deployment also are becoming their own specialties, as we've seen with the rise of devops during the last few years.

That said, if you're willing to learn new technologies and improve yourself constantly, there are limitless opportunities to do amazing things on the Web today.

Let's start with languages. It's clear that a huge proportion of Web development is being done with dynamic languages. Ruby, Python and PHP continue to dominate (outside of large corporations, at least). But we also see a number of open-source languages based on the JVM, such as Clojure and Scala, each of which has its own Web development paradigms and frameworks. The rise of Go, a compiled, open-source language from Google, also is intriguing (and is something I hope to cover in an upcoming installment of this column). True, some languages are more popular than others, but you can work in just about any language you want today, trying it out and learning how its paradigms fit into the tapestry of languages and frameworks.

Perhaps the most important language today is JavaScript. JavaScript, which never has been one of my favorite languages, now is required knowledge for anyone who works on the Web. It sits inside every browser and increasingly is used for server-side programs as well. JavaScript's ubiquity, and the fact that it is necessary for high-powered Web applications that live inside of the browser, means that such organizations as Google and Mozilla are spending enormous amounts of time, money and effort trying to get JavaScript to execute as efficiently as possible. The sophistication of browser-based programs never ceases to amaze me, and that's all due to JavaScript.

There's also a growing trend of languages that compile into JavaScript, such as CoffeeScript and ClojureScript, allowing us to write in one language but execute something else (that is, JavaScript) in the browser. And then there are the JavaScript frameworks, including Backbone.js and Angular.js, which are making it possible to create browser-based applications that not only do amazing things, but also are structured like the serious applications that they are. I've often said that today, developing software isn't that difficult, but maintaining software is. Server-side Web developers eventually realized they needed to rely on libraries and frameworks in order not to lose their minds, and front-end developers are now reaching the same conclusion.

As the browser side of applications becomes more sophisticated, we see that server-side Web applications increasingly are becoming API servers, offering outsiders a chance to retrieve and modify data stored in a database of some sort. True, there always will be a need for server-side Web development, but it seems to me that as people learn to create better and better experiences within the browser, we're going to see the server as a remote storage mechanism, offering one of the many APIs that our browser contacts in order to bring up a fancy-looking application for our users.
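Here is a minimal sketch of such an API server, using nothing but Python's standard library. (A real application would sit behind a framework and a real database; the /articles endpoint and its data are invented for illustration.)

    # A tiny JSON API server: the "remote storage mechanism" that a
    # browser-based application might talk to.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ARTICLES = [{"id": 1, "title": "20 Years of Web Development"}]

    class APIHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/articles":
                body = json.dumps(ARTICLES).encode("utf-8")
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    # HTTPServer(("", 8000), APIHandler).serve_forever()  # uncomment to run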

Indeed, we see a trend of breaking Web applications into many little parts, each of which specializes in a different thing. You want to have real-time updates? Use Pusher or PubNub. You want to have nice-looking fonts? Use Google Fonts. You need more servers? Fire up some additional machines from Amazon or Rackspace. Or, if you want to think about your servers from an even higher level of abstraction, use Heroku, which costs more but reduces your IT staff to nearly zero. The same is true for login systems, e-commerce transactions and even commenting systems, each of which now can be outsourced, via an API and a few minutes of work, and then plugged into your Web application. The use of APIs is particularly relevant today, when we have so many people accessing the Web not via traditional browsers, but rather via mobile devices and platform-specific apps. That app you installed might not look like it's contacting a Web site, but it's undoubtedly making a JSON API call over HTTP in the background.
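The client side of such a call is equally small. When an app refreshes its display, it often is doing little more than this (the URL here points at the hypothetical server sketched above):

    # The "invisible" Web request behind a mobile app: an ordinary
    # HTTP GET that returns JSON rather than HTML.
    import json
    from urllib.request import urlopen

    with urlopen("http://localhost:8000/articles") as response:
        articles = json.loads(response.read().decode("utf-8"))

    for article in articles:
        print(article["title"])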

In other words, we're finally seeing the start of Tim Berners-Lee's original vision, in which people all over the world are both retrieving and contributing information, no matter where they happen to be. I'm writing this from Chicago, where I'm spending the summer, but thanks to modern Web technologies, I'm able to see my family in Israel (including playing chess with my aforementioned son), order books and other merchandise to my apartment, read the news, check the weather, pay for airline and movie tickets and listen to music. And no, none of this should come as a surprise to anyone reading this magazine, or even anyone who has been using the Web at any time in the last decade. But that's just the point. If we can step back a bit from the technologies we're using, we can see how much easier our lives have become as a result of the Web.

Web development was exciting when I started to do it 20 years ago, and the amazing thing is that it remains just as exciting today. If you've always wanted to be a Web developer, choose a language, learn how to use it to write Web applications and then see what you can do. I never cease to be amazed by the magic of programming, and the Web makes it possible to spread that magic to the entire world. Here's to the next 20 years of advances in Web technologies—and to the next 200 columns I'll write for Linux Journal!

Web developer, trainer and consultant Reuven M. Lerner is finishing his PhD in Learning Sciences at Northwestern University. He lives in Modi'in, Israel, with his wife and three children. You can read more about him at lerner.co.il, or contact him at reuven@lerner.co.il.
