
Linux for Suits

Use and Usefulness

Doc Searls

Issue #146, June 2006

Which comes first, the kernel chicken or the user-space egg?

I've always been intrigued by the distinctions between kernel space and user space. At the technical level, the distinction is largely between memory spaces: one where the kernel executes and provides services, and one where user processes run. As a rule, it's safer to run something in user space when it is possible, because user-space processes can't mess with the critical parts of the operating system. At a conceptual level, however, there also seems to be a distinction between usefulness and use.
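
To make the technical side of that distinction concrete, here is a minimal C sketch (an illustration of my own, not code from the kernel or from this article's sources): a user-space program obtains a kernel service through a system call such as uname(2), while any direct grab at kernel memory is blocked, with a segmentation fault, by hardware acting under the kernel's control.

    /*
     * A minimal sketch, not from the article: the uname() system call is a
     * request for a kernel service; the commented-out pointer dereference is
     * the kind of direct access to kernel memory that user space cannot make.
     */
    #include <stdio.h>
    #include <sys/utsname.h>

    int main(void)
    {
            struct utsname u;

            /* Crossing into kernel space the sanctioned way: the kernel
             * fills in the structure and copies it back to user space. */
            if (uname(&u) == 0)
                    printf("Running kernel %s %s\n", u.sysname, u.release);

            /* Crossing the boundary the forbidden way: dereferencing a
             * kernel-space address (0xffffffff81000000 is a typical x86-64
             * kernel text address, used here only as an example) would be
             * stopped by the MMU with a segmentation fault, so it stays
             * commented out.
             *
             *      volatile char c = *(char *)0xffffffff81000000UL;
             */
            return 0;
    }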

I didn't start to see that distinction until I spent a week in October 2005 on a Linux Lunacy Geek Cruise with Andrew Morton, Ted Ts'o and a bunch of other kernel hackers. Andrew and Ted gave a number of talks, and I got a chance to spend additional time interviewing Andrew at length. As a result, I came to the conclusion that Linux is a species, and unpacked that metaphor in a Cruise Report on the Linux Journal Web site in November. Here's the gist of it:

Kernel development is not about Moore's Law. It's about natural selection, which is reactive, not proactive. Every patch to the kernel is adaptive, responding to changes in the environment as well as to internal imperatives toward general improvements on what the species is and does.

We might look at each patch, each new kernel version, even the smallest incremental ones, as a generation slightly better equipped for the world than its predecessors. Look at each patch submission—or each demand from a vendor that the kernel adapt to suit its needs in some way—as input from the environment to which the kernel might adapt.

We might look at the growth of Linux as that of a successful species that does a good job of adapting, thanks to a reproductive cycle that shames fruit flies. Operating systems, like other digital life forms, reproduce exuberantly. One cp command or Ctrl-D, and you've got a copy, ready to go—often into an environment where the species might be improved some more, patch by patch. As the population of the species grows and more patches come in, the kernel adapts and improves.

These adaptations are reactive more often than proactive. This is even, or perhaps especially, true for changes that large companies want. Companies such as IBM and HP, for example, might like to see proactive changes made to the kernel to better support their commercial applications.

Several years ago, I had a conversation with a Microsoft executive who told me that Linux had become a project of large commercial vendors, because so many kernel maintainers and contributors were employed by those vendors. Yet Andrew went out of his way to make clear, without irony, that the symbiosis between large vendors and the Linux kernel puts no commercial pressure on the kernel whatsoever. Each symbiote has its own responsibilities. To illustrate, he gave the case of one large company application: “The [application] team doesn't want to implement [something] until it's available in the kernel. One of the reasons I'd be reluctant to implement it in the kernel is that they haven't demonstrated that it's a significant benefit to serious applications. They haven't done the work to demonstrate that it will benefit applications. They're saying, 'We're not going to do the work if it's not in the kernel.' And I'm saying, 'I want to see that it will benefit the kernel if we put it in.'”

He added, “On the kernel team, we are concerned about the long-term viability and integrity of the code base. We're reluctant to put stuff in for specific reasons where a commercial company might do that.” He says there is an “organic process” involved in vendor participation in the kernel.

It made my year when Greg Kroah-Hartman (a top-rank kernel maintainer) called this “one of the most insightful descriptions about what the Linux kernel really is, and how it is being changed over time”.

A few weeks ago, I was talking with Don Marti about how all open-source projects seem to have the same kind of division between kernel space and user space—between code and dependencies on that code. It was in that conversation that I realized the main distinction was between usefulness and use. Roles as well as purposes were involved. Only developers contribute code. The influence of users, or even “usability experts”, is minimized by the meritocracy that comprises the development team. “Show me the code” is a powerful filter.

Most imperatives of commercial development originate and live in user space. These include selling products, making profits and adding product features that drive future sales. None of these motivations are of much (if any) interest to kernel development. Again, kernel development is reactive, not proactive. For companies building on Linux, the job is putting Linux to use, not telling it how to be useful. Unless, of course, you have useful code to contribute. (Greg Kroah-Hartman has put together an excellent set of recommendations. See the on-line Resources for links.)

A few tradeshows ago, Dan Frye of IBM told me it took years for IBM to discover that the company needed to adapt to its kernel developers, rather than vice versa. I am sure other employers of kernel developers have been making the same adjustments. How long before the rest of the world follows? And what will the world learn from that adjustment that it doesn't know now?

I began to see an answer take shape at O'Reilly's Emerging Technology conference in March 2006. I was sitting in the audience, writing and rewriting this very essay, when George Dyson took the stage and blew my mind. George grew up in Princeton, hanging around the Institute for Advanced Study where his father, Freeman Dyson, worked with Gödel, Einstein, Turing, von Neumann and other legends in mathematics, physics and computing. Today, George is a historian studying the work of those same great minds, plus antecedents running back hundreds of years.

George's lecture, titled “Turing's Cathedral”, reviews the deep history of computing, its supportive mathematics and the staging of a shift in computing from the mechanical to the biological—one that von Neumann had begun to expect when he died tragically in 1957 at the age of 53. Here's how George approaches questions similar to the one that had been on my mind:

“The whole human memory can be, and probably in a short time will be, made accessible to every individual”, wrote H. G. Wells in his 1938 prophecy World Brain. “This new all-human cerebrum need not be concentrated in any one single place. It can be reproduced exactly and fully, in Peru, China, Iceland, Central Africa, or wherever else seems to afford an insurance against danger and interruption. It can have at once, the concentration of a craniate animal and the diffused vitality of an amoeba.” Wells foresaw not only the distributed intelligence of the World Wide Web, but the inevitability that this intelligence would coalesce, and that power, as well as knowledge, would fall under its domain. “In a universal organization and clarification of knowledge and ideas...in the evocation, that is, of what I have here called a World Brain...in that and in that alone, it is maintained, is there any clear hope of a really Competent Receiver for world affairs....We do not want dictators, we do not want oligarchic parties or class rule, we want a widespread world intelligence conscious of itself.”

Then:

In the early 1950s, when mean time between memory failure was measured in minutes, no one imagined that a system depending on every bit being in exactly the right place at exactly the right time could be scaled up by a factor of 10¹³ in size, and down by a factor of 10⁶ in time. Von Neumann, who died prematurely in 1957, became increasingly interested in understanding how biology has managed (and how technology might manage) to construct reliable organisms out of unreliable parts. He believed the von Neumann architecture would soon be replaced by something else. Even if codes could be completely debugged, million-cell memories could never be counted upon, digitally, to behave consistently from one kilocycle to the next.

Fifty years later, thanks to solid state micro-electronics, the von Neumann matrix is going strong. The problem has shifted from how to achieve reliable results using sloppy hardware, to how to achieve reliable results using sloppy code. The von Neumann architecture is here to stay. But new forms of architecture, built upon the underlying layers of Turing-von Neumann machines, are starting to grow. What's next? Where was von Neumann heading when his program came to a halt?

This is all excerpted from an earlier lecture by George, with the same title as the one he gave at eTech. In that earlier lecture, George was focused on AI:

I found myself recollecting the words of Alan Turing, in his seminal paper “Computing Machinery and Intelligence”, a founding document in the quest for true AI. “In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children”, Turing had advised. “Rather we are, in either case, instruments of His will providing mansions for the souls that He creates.”

Then he added, “Google is Turing's cathedral, awaiting its soul.”

I think, however, the cathedral is bigger than Google. In fact, I think it's bigger than a cathedral. I think it's a new world, built on materials no less natural, yet man-made, than the rocks and wood shaped and assembled into nave and transept, buttress and spire.

I reached that conclusion watching George flash quote after quote up on the screen in the front of the ballroom, each drawing another line in the shape we came to call computing. I photographed as many as I could, and transcribed a number of them. I've arranged them in chronological order, starting 350 years ago, with several more added in from my own quote collection. Follow the threads:

  • “Why may we not say that all Automata (Engines that move themselves by springs and wheeles as doth a watch) have artificiall life?”—Thomas Hobbes, 1651

  • “By Ratiocination, I mean computation. Now to compute, is either to collect the sum of many things that are added together, and to know what remains when one thing is taken out of another...and if any man adde Multiplication and Division, I will not be against it, seeing Multiplication is nothing but Addition of equals one to another, and Division is nothing but a Subtraction of equals one from another, as often as is possible. So that all Ratiocination is comprehended in these two operations of the minde Addition and Subtraction.”—Thomas Hobbes, 1656

  • “This [binary] calculus could be implemented by a machine (without wheels)...provided with holes in such a way that they can be opened and closed. They are to be open at those places that correspond to a 1 and remain closed at those that correspond to a 0. Through the opened gates small cubes or marbles are to fall into tracks, through the others nothing. It [the gate array] is to be shifted from column to column as required.”—G.W. von Leibniz, March 16, 1679

  • “Is it a fact—or have I dreamed it—that, by means of electricity, the world of matter has become a great nerve, vibrating thousands of miles in a breathless point of time? Rather the round globe is a vast head, a brain, instinct with intelligence! Or shall I say, it is itself a thought, nothing but a thought, and no longer the substance which we deemed it?”—Nathaniel Hawthorne, 1851

  • “I see the Net as a world we might see as a bubble. A sphere. It's growing larger and larger, and yet inside, every point in that sphere is visible to every other one. That's the architecture of a sphere. Nothing stands between any two points. That's its virtue: it's empty in the middle. The distance between any two points is functionally zero, and not just because they can see each other, but because nothing interferes with operation between any two points. There's a word I like for what's going on here: terraform. It's the verb for creating a world. That's what we're making here: a new world. Now the question is, what are we going to do to cause planetary existence? How can we terraform this new world in a way that works for the world and not just ourselves?”—Craig Burton, in Linux Journal, 1999

  • “Here are three basic rules of behavior that are tied directly to the factual nature of the Internet: 1) No one owns it. 2) Everyone can use it. 3) Anyone can improve it.”—“World of Ends”, by Doc Searls and David Weinberger, 2003

  • “There are a couple of reasons why we have national parks and access to the seashore. Some things are so much the gifts of nature that they should be reserved for everyone. And some things (like the sea, and like the Internet) are so important to each of us that keeping them freely available makes us a group of citizens rather than slaves....Now—the Internet wasn't created by nature; it's an agreement between machines made possible by the designers of that agreement (or protocol). But it is a great gift, and it is very important to being a citizen, and for these reasons it is owned by all for common use. It's a commons, like the Boston Common. And no sovereign ever showed up to which the people who 'own' the Internet (that is, everyone) surrendered their ownership.”—Susan Crawford, January 2003

  • “We had this idea back in the 70s that one day we would make computers that would somehow be intelligent on their own. And it's not quite working that way. What we're doing is making computers intelligent because we're part of them.”—Tim O'Reilly at eTech 2006

As creatures, human beings are gifted with something perhaps even more significant than the powers of intelligence and speech. We also have the capacity to extend the boundaries of our bodies beyond our skin, hair and nails. Through a process of indwelling, we are enlarged and empowered by our clothes, tools and vehicles. When we grab a hammer and drive a nail, the hammer becomes an extension of our arm. Our senses extend through the wood of the handle and the metal of the head, as we pound a nail through a board. Oddly, the hammer does not make us superhuman, but more human. Because nothing could be more human than to use a tool.

Likewise, when we drive a car, ride a bike or pilot a plane, our senses extend to mechanical perimeters. We don't just think “my tires”, “my wings”, “my fender”, “my engine”. We know these things are ours. They are parts of our selves, enlarged by the merging of sense and skill and material.

A robin is born knowing how to build a nest. A human is born knowing how to do little beyond suckling. Yet because we are gifted with an endless capacity for learning, and for enlarging our selves, and for doing these things together in groups of all sizes, we have built something larger than ourselves called civilization.

Open-source infrastructural building materials and methods have enabled us to build a new framework, a new environment, for civilization. Call it a giant brain, a World of Ends, or a network of networks. In every case, it is a product of the form of nature we call human.

The purpose of this new world—this natural environment for business, study, games and countless other human activities—is to be useful. In the same way that our senses extend from our bodies to our tools and vehicles, the usefulness of kernel-space code extends into the Net that's built on that code.

As a result, user space has become almost unimaginably large. And sure to become larger.

Resources for this article: /article/8942.

Doc Searls is Senior Editor of Linux Journal.
