How about a SETI project to build knowledge about how the Net is actually working?
If Net Neutrality is a good thing, shouldn't we be able to test for it? Shouldn't everybody on the Net be in a position to see how things are going for them? And, wouldn't it be useful to slice and dice data coming in from all those nodes looking at Net performance from the edges?
Those were some of the questions raised by Tom Evslin with “Net Neutrality at Home: Distributed Citizen Journalism against Net Discrimination”—a recent luncheon talk at Harvard's Berkman Center for Internet and Society (see the on-line Resources). “The goal of what I'm proposing is to preserve in the United States an Internet that is equally open to all applications regardless of who owns the network and regardless of who the application owner is.” He adds, “I'm being US-centric here because we suck as far as the rest of the world is concerned....The problem is here.” He also cautions, “Note that this doesn't mean that all applications will function equally well on every network.”
As an example, he gives his own Internet connection from rural Vermont, which bounces off a satellite 25,000 miles over the equator and involves unearthly latencies that make it nearly unsuitable for VoIP. (Ironically, Tom is a VoIP pioneer. At the time he sold his wholesale VoIP company several years ago, it was the #7 carrier of voice traffic minutes in the world.) A neutral network would make a best effort to deliver packets without discrimination for or against their source, destination or content. As Tom puts it, “What we want to see is each network equally open to applications, and not be more open to the application of the network owner, particularly if the network owner happens to be a monopoly.”
This is where the line between technology and politics blurs. Carriers and other neutrality opponents say the Net itself has never been neutral and has always allowed many kinds of discrimination. They argue that some applications—live teleconferencing, VoIP, streaming audio and video, fault-tolerant grid computing and live remote surgery, for example—would all benefit from QoS (Quality of Service) efforts that are anything but “neutral”. And they point out that discrimination of all sorts—provisioning asymmetries, multiple service levels, selective port blockages and specific usage restrictions, to name a few—has been common practice for nearly as long as ISPs have been in business. They'd like to retain the right to discriminate, or to improve service any way they please, and to charge customers willing to pay for the benefits. They say they'd like to do that without government interference (even though carriers inhabit what they call “the regulatory environment”).
Meanwhile, neutrality advocates, such as Web inventor Tim Berners-Lee, want laws to preserve the neutrality they say has always been there and is threatened by carriers who loathe the concept. Plus, it's obvious (except to those employed by the carriers—a population that sadly includes many lawmakers) that the carriers have little if any interest in building open infrastructure that enlarges business opportunity for everybody who builds on it. They would, in every case, rather capture markets than liberate them—even if they would clearly have privileged first-mover and incumbent positions in those liberated markets. To them, “free market” means “your choice of silo”.
Tom Evslin wants us to step back from the Net Neutrality fray and resolve the issues through widespread knowledge that currently does not exist. Specifically, he'd like as many users as possible to test their network connections for upload and download speeds, DNS speed, latency, jitter, blocking, consistency and uptime, to name a few of many possible variables.
Yes, techies can run some of these tests at the command line (with ping, traceroute and so on). And today, any user can visit a site such as Speakeasy.net or BroadbandReports.com to test upload and download speeds in a browser. (BroadbandReports even lets users compare results with those of other customers of the same provider.) But Tom wants to go much further than that. He wants everybody to know what they're getting and to pool data that will paint clear pictures of how individual networks and network connections are performing over time. He believes this will not only provide useful information to both sides of the current debate, but will allow everybody to observe and speak about the Internet with far more understanding than individuals have today.
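Some of those variables are easy to approximate with a short script. Here is a minimal sketch of the idea—mine, not one of Tom's proposed tools—using nothing but the Python standard library. It times TCP handshakes to a test host (example.com is only a placeholder; a real project would use volunteer-run servers) and reports loss, latency and jitter, without needing the root privileges that raw ICMP pings require:

#!/usr/bin/env python
"""Minimal latency/jitter probe: a sketch, not one of Tom's proposed tools.

Measures TCP connect times to a test host, then reports loss, latency and
jitter (mean variation between successive samples). Plain TCP is used so
no root privileges are needed, unlike ICMP ping. The host below is only a
placeholder; a real tool would hit volunteer-run test servers.
"""
import socket
import statistics
import time

TEST_HOST = "example.com"   # placeholder; substitute a volunteer test server
TEST_PORT = 80
SAMPLES = 10

def connect_time(host, port, timeout=3.0):
    """Return seconds to complete a TCP handshake, or None on failure."""
    start = time.time()
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
        return time.time() - start
    except OSError:
        return None

def main():
    samples = [t for t in (connect_time(TEST_HOST, TEST_PORT)
                           for _ in range(SAMPLES)) if t is not None]
    if len(samples) < 2:
        print("Too few successful probes; link may be down or port blocked.")
        return
    jitter = statistics.mean(abs(a - b) for a, b in zip(samples, samples[1:]))
    print("loss:    %d of %d probes" % (SAMPLES - len(samples), SAMPLES))
    print("min rtt: %.1f ms" % (min(samples) * 1000))
    print("median:  %.1f ms" % (statistics.median(samples) * 1000))
    print("jitter:  %.1f ms" % (jitter * 1000))

if __name__ == "__main__":
    main()

Add DNS timing, throughput and port checks, schedule it to run quietly in the background, and you have the skeleton of the kind of probe Tom is describing.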
“We don't want to look just for discrimination”, Tom says. “We want the result of running the tools to be sort of a consumers' report map of Internet quality in general....The tools can measure both quality, and then discrimination as an aspect of quality—if the discrimination exists. But even if there's no discrimination, we'll get useful data over what kind of quality to expect where.” He sees much to gain and little to lose for everybody. That is, if everybody—or at least a very large number of users—participates.
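What would the pooled data look like? Purely as an illustration, suppose each volunteer's probe uploaded one JSON record per test run (the field names here are invented, not part of any real schema). Turning a pile of those records into the beginnings of a consumers' report is only a few lines of work:

#!/usr/bin/env python
"""Sketch of the reporting side: summarize pooled probe results per ISP.

Assumes a stream of JSON records, one per line, in a made-up format like
{"isp": "ExampleNet", "down_kbps": 1400, "up_kbps": 360, "latency_ms": 95}
uploaded by volunteers. A real project would define its own schema and
collection service; this only shows how pooled data becomes a report.
"""
import json
import statistics
import sys
from collections import defaultdict

def summarize(lines):
    by_isp = defaultdict(list)
    for line in lines:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        by_isp[record["isp"]].append(record)
    for isp, records in sorted(by_isp.items()):
        print("%-20s  reports: %4d  median down: %6.0f kbps  "
              "median latency: %4.0f ms" % (
                  isp,
                  len(records),
                  statistics.median(r["down_kbps"] for r in records),
                  statistics.median(r["latency_ms"] for r in records)))

if __name__ == "__main__":
    summarize(sys.stdin)

The hard parts, as Tom makes clear, are verification and scale, not the arithmetic.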
A number of questions then follow:
Exactly what kind of tests are we talking about?
How do we get users to participate on a large or massive scale?
If millions of users are running millions of tests or probes, how do we prevent what we might call an “insistence on service attack”?
How do we compile, edit and publish results?
One answer to the first question came from a report that Dan Kaminsky had released details of a traceroute-like, TCP-based fault probe at the Black Hat security conference in August 2006. The report says:
But unlike Traceroute, Kaminsky's software will be able to make traffic appear as if it is coming from a particular carrier or is being used for a certain type of application, like VoIP. It will also be able to identify where the traffic is being dropped and could ultimately be used to finger service providers that are treating some network traffic as second-class.
Look for this capability amidst a free suite of tools called Paketto Keiretsu Version 3. Now, what else?
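Whether or not you wait for Paketto Keiretsu, the general approach is easy to sketch. The rough example below (mine, not Kaminsky's code) uses the third-party Scapy library and raw sockets, so it needs root privileges. It sends TCP SYN packets with increasing TTL toward a chosen port; comparing a run against port 5060 (SIP) with a run against port 80 shows where along the path traffic for a given application dies:

#!/usr/bin/env python
"""Rough TCP traceroute sketch (not Kaminsky's Paketto Keiretsu).

Sends TCP SYN packets with increasing TTL toward a chosen port and prints
which hop answered, so runs against different ports (say 80 versus 5060)
can be compared to see where traffic for a given application is dropped.
Requires the third-party Scapy library and root privileges for raw sockets.
"""
from scapy.all import IP, TCP, sr1   # pip install scapy

TARGET = "example.com"   # placeholder destination
PORT = 5060              # e.g. SIP; compare with a run using PORT = 80
MAX_HOPS = 20

def tcp_trace(target, dport):
    for ttl in range(1, MAX_HOPS + 1):
        probe = IP(dst=target, ttl=ttl) / TCP(dport=dport, flags="S")
        reply = sr1(probe, timeout=2, verbose=0)
        if reply is None:
            print("%2d  * (no reply)" % ttl)
        elif reply.haslayer(TCP):
            # SYN/ACK or RST from the destination: we reached the end host.
            print("%2d  %s  reached port %d" % (ttl, reply.src, dport))
            break
        else:
            # Usually an ICMP time-exceeded from an intermediate router.
            print("%2d  %s" % (ttl, reply.src))

if __name__ == "__main__":
    tcp_trace(TARGET, PORT)

Kaminsky's tool goes further—disguising traffic as particular applications or carriers—but even this much makes selective packet dropping visible.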
For guidance, Tom says the tools must:
Be verified and calibrated.
Be open source.
Be perceived as safe.
Do non-destructive testing.
Return value to each user.
One model he brought up was SETI@home, in which thousands of individuals contribute otherwise idle compute cycles to the Search for Extraterrestrial Intelligence (SETI) Project. That's a familiar model to many of us, but it's not likely to attract users who aren't turned on by the challenge of helping find ET. So Tom is looking for something that is SETI-like in distribution but pays off with practical information for the users. The following are some questions from my notes at the luncheon:
What if users actually knew how well the Net and its providers worked for them, on both absolute and relative scales?
What if users could look at their connection speeds the same way they look at speedometers in their cars? (Speakeasy.net does something like this with its speed tests, but how about making the test independent of any company?)
What if users could monitor packet loss or link quality with the same ease as they check signal strength on a cell phone?
What if users could see by a simple indicator that the Wi-Fi connection at the conference they're attending won't allow outbound e-mail? (How about a list of port blockages and what they mean? A rough version of that check appears just after this list.)
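That last question is the easiest to answer in code. The following sketch—my illustration, not one of Tom's proposed tools—tries TCP connections to a handful of well-known ports on a cooperating server and reports which ones appear blocked from the local network. The server name is a placeholder for a volunteer-run host that actually listens on all of those ports:

#!/usr/bin/env python
"""Quick outbound-port check: a sketch, not a finished tool.

Tries TCP connections to a few well-known ports on a cooperating test
server and reports which appear blocked from the current network. The
host below is a placeholder; it would need to be a volunteer-run server
actually listening on all of these ports for the results to mean much.
"""
import socket

TEST_SERVER = "portcheck.example.org"   # hypothetical volunteer server
PORTS = {
    25:   "SMTP (outbound mail)",
    587:  "SMTP submission",
    110:  "POP3",
    5060: "SIP (VoIP signaling)",
    6881: "BitTorrent",
}

def port_open(host, port, timeout=3.0):
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

if __name__ == "__main__":
    for port, label in sorted(PORTS.items()):
        status = "open" if port_open(TEST_SERVER, port) else "blocked or filtered"
        print("port %5d  %-25s %s" % (port, label, status))

Wrap output like that in a plain-language explanation of what each blockage means, and you have the conference-Wi-Fi indicator described above.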
The program would have to be widely distributed. Tom says:
We want volunteers to run servers, to make sure various ports are open and to test the geography—like for DNS propagation. We need people who are willing to have their servers be the proxy for testing the intentional degrading of file sharing, SIP, P2P protocols and geography. Because geography is an issue. Countries now have firewalls. There might be legitimate peering problems, or routing issues. But we need to know when actual blocking is going on.
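Tom's point about DNS and geography can also be probed simply. This sketch uses the third-party dnspython library to time the same lookup against several resolvers; the name and resolver addresses are only examples, and a fuller tool would also compare the answers themselves to spot stale or doctored records:

#!/usr/bin/env python
"""Time the same DNS lookup against several resolvers: a small sketch.

Uses the third-party dnspython library (pip install dnspython). The name
and resolver addresses below are just examples; a fuller tool would also
compare answers across resolvers to spot stale or doctored records.
"""
import time
import dns.resolver   # third-party: dnspython

NAME = "example.com"
RESOLVERS = {
    "system default": None,
    "Google (8.8.8.8)": "8.8.8.8",
    "my ISP's resolver": "192.0.2.53",   # placeholder address; substitute yours
}

def timed_lookup(name, nameserver):
    resolver = dns.resolver.Resolver(configure=(nameserver is None))
    if nameserver is not None:
        resolver.nameservers = [nameserver]
    resolver.lifetime = 3.0
    start = time.time()
    try:
        answer = resolver.resolve(name, "A")
        return time.time() - start, [r.to_text() for r in answer]
    except Exception:
        return None, []

if __name__ == "__main__":
    for label, server in RESOLVERS.items():
        elapsed, addresses = timed_lookup(NAME, server)
        if elapsed is None:
            print("%-20s  FAILED" % label)
        else:
            print("%-20s  %5.0f ms  %s" % (label, elapsed * 1000,
                                           ", ".join(addresses)))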
Where would these tools come from? The obvious answer is the Free Software and Open Source communities. “It is absolutely essential that the tools we get be open source”, Tom says. “The tools themselves might be prejudiced. So you need to be able to see inside them to know that they're not. Second, we want to be able to bring to bear as much of the technical community as cares to participate in the development and elaboration of these tools.” Tom thinks the application vendors should contribute to the effort as well, because they could only benefit from knowledge about the network. Same goes for the carriers, who would presumably like to gain bragging rights about how well they perform.
There need to be organizations, perhaps on the SETI model, “so the task of information collection and analysis is distributed, as well as just the initial probing”, Tom says. Also:
We need people responsible for verification....I'm very sensitive to that, because I've been wondering whether my satellite ISP is blocking Skype. I go on Skype and the BlueSky forums and see one person saying, “I ran this test that shows absolutely that there's been blocking”, and another person saying, “The application is working for me but the test is failing”....So it's not a simple thing to know a test is actually working. One particular problem with Skype, and why Skype might not benefit from this as well as other providers, is that Skype uses a proprietary protocol....It's hard to imagine Skype contributing the particular open-source tool that is necessary to debug the things that might happen to the protocol that they're keeping secret.
Then again, having these kinds of tools looking at the network would help expose to users the deficiencies, in an open world, of closed protocols, codecs and other techniques for maintaining silos and keeping customers captive.
David Isenberg points out, “Unless you have tools for each application, that are app-specific, you always run the risk that the test works fine in a generic sense and then they've got this deep packet inspection that finds the signature of the given application and blocks it.” Tom answers, “So you'd like to have a tool where you could feed in the signature of the application and test that generically, and at the same time you'd like to test the protocols that they use. SIP makes sense as an example.”
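David's concern is testable, at least crudely. The sketch below sends two batches of same-sized UDP datagrams to a cooperating reflector—one carrying a SIP-looking OPTIONS message, one carrying random bytes—and compares loss and round-trip times. Everything in it is illustrative: the reflector host is hypothetical, and the hand-rolled SIP text is only a plausible signature for deep packet inspection to notice, not a conformant SIP implementation:

#!/usr/bin/env python
"""Compare treatment of an app-signature payload vs. a generic one (sketch).

Sends UDP datagrams to a hypothetical volunteer-run echo/reflector server:
one batch carrying a SIP-looking OPTIONS message, one batch of random bytes
of the same size, both to the same port. If the SIP-flavored batch shows
markedly worse loss or delay, something along the path is inspecting
payloads. The reflector host is a placeholder; the SIP text is merely a
plausible-looking signature, not a validated SIP implementation.
"""
import os
import socket
import time

REFLECTOR = ("reflector.example.org", 5060)   # hypothetical echo server
SAMPLES = 20

SIP_PAYLOAD = (b"OPTIONS sip:probe@example.org SIP/2.0\r\n"
               b"Via: SIP/2.0/UDP 192.0.2.1:5060\r\n"
               b"From: <sip:test@example.org>\r\n"
               b"To: <sip:probe@example.org>\r\n"
               b"Call-ID: netprobe-0001\r\n"
               b"CSeq: 1 OPTIONS\r\n\r\n")
GENERIC_PAYLOAD = os.urandom(len(SIP_PAYLOAD))

def probe(payload):
    """Return (replies received, mean round trip in seconds or None)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    times = []
    for _ in range(SAMPLES):
        start = time.time()
        sock.sendto(payload, REFLECTOR)
        try:
            sock.recvfrom(4096)
            times.append(time.time() - start)
        except socket.timeout:
            pass
    sock.close()
    mean = sum(times) / len(times) if times else None
    return len(times), mean

if __name__ == "__main__":
    for label, payload in (("SIP-like", SIP_PAYLOAD), ("generic", GENERIC_PAYLOAD)):
        got, mean = probe(payload)
        rtt = "%.1f ms" % (mean * 1000) if mean is not None else "n/a"
        print("%-8s  %2d/%d replies  mean rtt %s" % (label, got, SAMPLES, rtt))

The same comparison could be repeated with any signature fed into it, which is roughly the generic test Tom describes.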
There is an editorial function too. News needs to go out through traditional media, as well as through bloggers and other Net-based writers. The end result, in addition to keeping the carriers honest, is a far better-informed public. Right now, most users know far less about how they travel the Net than they do about how they travel the road system. “Latency”, “jitter”, “packet loss” and “port blocking” are no more technical than “speed”, “acceleration”, “stopping distance” or “falling rock zone”. Network performance knowledge should be common, not professionally specialized.
The US has been falling behind the rest of the civilized world in broadband speed and penetration. Japan and Korea are committed to making fiber-grade service available to their entire populations, and other countries are similarly motivated to do what the US still cannot, because most of its Internet service is provided by a duopoly that cares far less about providing Net infrastructure than about delivering high-definition TV. Clearly, no help will come from lawmakers who still think a highly regulated phone/cable duopoly is actually a “free market” for anything, much less the Internet.
David Isenberg wrote his landmark paper, “The Rise of the Stupid Network” (see Resources), when he was still working for (the original) AT&T. The paper observed that most of a network's value is at its edges, rather than in its middle. At the time, AT&T was busy engineering intelligence into its switches and other mediating technologies. Meanwhile, Dr Isenberg said that the network should be stupid (say, in the same way that the core and mantle of the Earth are stupid). It should be there to support the intelligence that resides on it and takes advantage of it, but is not reducible to it. In 1997, when he wrote that essay, the Net was already well established. Yet the thinking of the carriers was still deeply mired in the past. Here's the gist of the piece:
A new network “philosophy and architecture”, is replacing the vision of an Intelligent Network. The vision is one in which the public communications network would be engineered for “always-on” use, not intermittence and scarcity. It would be engineered for intelligence at the end user's device, not in the network. And the network would be engineered simply to “Deliver the Bits, Stupid”, not for fancy network routing or “smart” number translation.
Fundamentally, it would be a Stupid Network.
In the Stupid Network, the data would tell the network where it needs to go. (In contrast, in a circuit network, the network tells the data where to go.) In a Stupid Network, the data on it would be the boss.
According to Craig Burton, the best geometric expression of the Net's “end-to-end” design is a hollow sphere: a big three-dimensional zero. Across it, every device is zero distance from every other device. Yes, there are real-world latency issues. No path across the void is perfect. But the ideal is clear: the connection between any two computers should be as fast and straightforward as the connection between your keyboard and your screen. Value comes from getting stuff out of the way, not from putting stuff in the way—especially if that stuff is designed to improve performance selectively. The middle is ideally a vacuum. You can improve on it only by making it more of a vacuum, not less. And, like gravity, it should work the same for everybody.
So I see the challenge here as a Search for Terrestrial Stupidity. And I think it's a challenge that goes directly to Linux Journal readers and their friends. We are the kinds of people (and perhaps some of the actual people) who imagined and built the open Internet that the whole world is coming to enjoy. And we're the ones who are in the best position to save it from those who want to make it gravy for television.
In other words, we need smart people to save the Stupid Network. I look forward to seeing how we do it.
Resources for this article: /article/9261.