Linux Journal issue #1 featured Linus Torvalds' announcement of a code freeze for Linux 1.0, alongside an article comparing Linux to its 32-bit competitors of the day: Windows NT and OS/2. Bernie Thompson, the original author of that article, stirs the pot again and tastes what the operating system world has been cooking up.
20 years ago in the world of operating systems, IBM was a dying king and Microsoft an ascendant prince. Apple was in exile. And a young wizard named Linus Torvalds rebuilt venerable UNIX, which already had 20 years of its own history, and re-imagined it in the form of Linux. The world had a powerful new open-source platform on which to build.
By the end of the 1990s, IBM was nearly irrelevant to the personal computer market it had built. Microsoft had risen to nearly 90% market share (windowsitpro.com/systems-management/study-shows-windows-owns-desktop-market). Linux had a disconcerting combination of enormous investor hype and low market share. But Linux quietly was beginning to appear in set-top boxes and networking equipment. It was dominating the Internet server market. And a little company called Google was deploying thousands of Linux boxes to revolutionize search (linuxgazette.net/issue59/correa.html).
By the end of the 2000s, at least ten “year of the Linux desktop” milestones had come and gone without success. Windows still was dominant, but the PC era was about to end. Apple had returned from the dead to win the hearts and wallets of consumers and developers by building OS X and iOS on an open-source Mach+BSD kernel with different lineage from Linux. Linux was everywhere and nowhere: the Internet ran on Linux, yet few consumers' browsers did.
Here in 2014, the world seems to be shifting in Linux's direction, as long as a Linux-derived kernel is what counts. Windows 8 has alienated Microsoft's installed base. Mobile is ascendant. Android has reached 80% smartphone share (www.idc.com/getdoc.jsp?containerId=prUS24442013). And last year, Chromebooks rose to take 10% of the US computer market and 21% of laptop sales, outselling Apple MacBooks (https://www.npd.com/wps/portal/npd/us/news/press-releases/u-s-commercial-channel-computing-device-sales-set-to-end-2013-with-double-digit-growth-according-to-npd). Yet the Linux desktop, in the pure form of Ubuntu or one of the other distros, still shuffles along with less than 2% share by most measures.
In the first issue of Linux Journal, I wrote “Linux needs 2MB RAM to try out, OS/2 needs 4MB, and NT needs 12MB.”
Those Linux and OS/2 numbers were for command-line configurations only. The NT number included its GUI since it couldn't operate without it. But the march of Moore's Law over the long stretch is stunning. In less than 20 years, we've seen nearly a 1,000x increase in typical memory sizes.
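As a rough sanity check on that figure (assuming, per Moore's Law, that typical RAM doubles about every two years), the arithmetic works out:

```python
# Rough sanity check; the 2-year doubling period is an assumption.
years = 20
doubling_period = 2                        # years per doubling
growth = 2 ** (years / doubling_period)    # total growth factor over 20 years
print(growth)  # -> 1024.0, in line with the 2MB-to-2GB jump
```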
Today, Windows 8 can run in 512MB for just a single task or two, but 2GB is a normal minimum. Linux is still the most miserly. 256MB is normal for platforms like the Raspberry Pi, and running Linux on a wireless router with 8MB of RAM is not unheard of using DD-WRT micro version.
The original article focused on the issues of “Will Linux, OS/2, or NT work on my PC?” These issues have receded as even sub-$200 PCs today can run anything. The important question has become “What's the best fit for what I need my computer to do?”
A good operating system mediates between users, applications and hardware with a focus on clarity, compatibility and security. It enables applications written today to work on the latest OS version, and on a significant portion of the existing installed base of older versions. It enables those applications to keep working even as things underneath change over time. It allows hardware created today to support existing applications and ones not yet written. All it takes is for one essential application or piece of hardware to be unavailable on a new platform, and the user can't move. This creates powerful lock-in to a user's current system.
The OS also provides a UI and set of standard ways for users to get things done. Humans are flexible, but they suffer compatibility problems too. When a new UI moves things around, there's a human cost in time and frustration. When there's a shift in the way computers are used—from command line, to mouse and keyboard GUI, to touch GUI—not every use scenario or user shifts. We all know the command line is still critical for any IT pro. Operating systems must either pick a paradigm or find a way to expose similar functionality in each world.
This isn't easy, which is part of why the OS world has fractured and diversified in the shift to touch GUIs during the past few years. As platforms and UIs have been churned, at times it seems we've needed a Hippocratic Oath for programmers and UI designers: first, do no harm.
And in the realm of security, protecting users and their data has become even harder post-NSA. We've entered a Wild West era where we have ceded moral authority to deter nations, corporations and rogue groups from exploiting others on-line. It's everyone for themselves, and a trusted operating system is at the center of that.
How are the operating systems of today doing by these standards?
Windows has had the largest financial investment in it, by far, during the past 20 years. For the last decade, Microsoft has had thousands of employees working on Windows at any given moment. Executives are obsessed to distraction with avoiding the innovator's dilemma. Many investments are “big bets” that aim to anticipate customer and partner demand, rather than smaller steps to respond to it. Often priorities are focused on internal goals and “better together” initiatives that attempt to extend lock-in or push the user base to the latest version. When those initiatives are aligned with the interests of users and partners, they tend to succeed. But many have not.
Among the unheralded successes has been the Windows Update mechanism. From the user's perspective, you plug any new or old device into a Windows PC, and it just works—no digging for disks or downloads. In fact, it's a smart cloud-based system that was implemented before the cloud was a buzzword. When a new device is plugged in, Windows automatically reads its plug-and-play IDs, checks Microsoft's servers on the Web, and downloads and installs the best available driver automatically. More than ten years on, Linux and Mac OS X have nothing equivalent.
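The lookup step can be sketched as a most-specific-first match against a server-side driver catalog. The hardware IDs and catalog below are hypothetical (real Windows hardware IDs merely have this rough shape), but the matching idea is the same:

```python
# Hypothetical sketch of plug-and-play driver matching: a device reports
# its hardware IDs ordered from most to least specific, and the server
# returns the driver for the first (best) ID it knows about.
CATALOG = {  # made-up server-side driver catalog
    "PCI\\VEN_8086&DEV_10D3&SUBSYS_00008086": "e1000e-oem.sys",
    "PCI\\VEN_8086&DEV_10D3": "e1000e-generic.sys",
}

def best_driver(hardware_ids):
    """Return the driver for the most specific ID present in the catalog."""
    for hw_id in hardware_ids:  # IDs assumed ordered most-specific first
        if hw_id in CATALOG:
            return CATALOG[hw_id]
    return None                 # no match: fall back to asking the user

ids = ["PCI\\VEN_8086&DEV_10D3&SUBSYS_00008086",
       "PCI\\VEN_8086&DEV_10D3",
       "PCI\\VEN_8086"]
print(best_driver(ids))  # -> e1000e-oem.sys
```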
On the downside, the Windows 8 transition has been particularly jarring. Microsoft faced a tough choice: lose market share in the new tablet space or create a version of Windows and an ecosystem of applications that supports tablets well. It chose to sacrifice usability as a desktop system to win new tablet users. The result is a confusing collision of two UI worlds. Some users have gone scrambling to find an alternative, driving up sales of Chromebooks and perhaps hastening the tablet shift.
Security-wise, it's important to understand that Windows 7 and 8 have perhaps the best line-of-code level security of any existing OS. Microsoft invests enormous efforts to identify and fix potential exploits before shipment. But the weekly “patch Tuesday” flood highlights how difficult it is to protect a big, juicy target like Windows. It's the potato blight of our era. We'll talk about Linux security later, but what protects Linux is less the quality of its code, and more the diversity of it.
After one and a half years, Windows 8 has crossed only 10% market share (arstechnica.com/information-technology/2014/01/windows-8-x-breaks-10-percent-internet-explorer-11-makes-a-splash). But Microsoft still has a chance to regain its footing, given the combined market shares of XP, 7 and 8.
Steve Jobs warned Microsoft that trying to merge the desktop and tablet worlds wouldn't work. And so far Apple has largely stuck to that line, with success. Millions of eager buyers are willing to pay a premium for Apple products. iOS has been passed by Android in pure unit sales, but still gets more use day to day.
Mac OS X was my main platform for much of the 2000s. It puts a nice UI on a deeply functional UNIX foundation. The excellent MacPorts system gives entry to the full catalog of open-source applications. Apple's XQuartz project enables even GUI X apps to be ported. Users who care about particular scenarios like working with photos, music and video have a platform with great support for those scenarios, thanks to Jobs' attention to end-to-end quality.
Apple has delivered many innovations, but among the most powerful was the successful iOS and later OS X App Stores. Small developers had withered under Microsoft. By enabling software developers to make money, these app stores quickly allowed Apple's application ecosystem to rival Microsoft's.
Apple has historically been more willing to sacrifice the compatibility of older applications and hardware. That might have become a problem as Apple's installed base grew. But the application problem has been mitigated with the strategy of offering free OS upgrades combined with free application upgrades from the App Store. Your current software binaries won't work several versions from now, but you won't care because you'll have downloaded a free update.
Hardware compatibility, however, often has been sacrificed. One gets the sense that Apple sees a robust hardware ecosystem as competing with it, rather than a source of strength for the platform as a whole.
In security, Apple has been both smart and lucky. Smart to build on UNIX. Smart to introduce strict app store criteria and the Gatekeeper feature to steer users away from untrusted apps. But it also has been lucky to stay under the radar. If Windows and OS X's market shares were reversed, Apple would be forced to have a much higher level of focus on individual exploits, being a monoculture like Windows.
Google also has kept the tablet and desktop worlds apart somewhat, in the form of Android for tablets and Chrome OS for laptops, both built on the Linux kernel.
Linus' strategy of benevolent dictatorship for the Linux kernel has delivered stable progress through the years and kept the worst decisions out. However, above the kernel, progress on Linux has been unstable and constantly disruptive...mostly to users, if not to competitors.
With Android, Google's role has been to provide that stability above the kernel, along with the opportunity for a for-profit ecosystem of software and hardware to build around the platform. The result has been an amazing explosion of Linux-based devices.
Because Android and Chrome OS are open source, whether you're Amazon building the Kindle or the community behind CyanogenMod, you have the freedom to take them in new directions.
Google has succeeded in hastening the end of the Windows monopoly by grabbing a significant chunk of market share with an open-source operating system based on Linux. All this makes it likely that the next generation of applications will be developed with Web compatibility and platform portability as a primary concern. Even if Android and Chrome OS do not continue their meteoric rise, for the moment they have mitigated much of the lock-in that would have prevented users from moving to the next innovative platform.
The ever-churning cauldron of Linux distros and spins is the hotbed of innovation from which Android, Chrome OS, Kindle, TiVo and countless other innovations past and future are born. On the downside, competing groups innovating in different directions have obvious consequences for some key OS characteristics: clarity and compatibility.
In practice, for hardware that is well documented, the results at the kernel level are generally excellent. If you can get your driver into the kernel, it tends to get carried forward intact. On the other hand, trying to keep a binary driver in sync with the Linux kernel is difficult, expensive and usually futile. Linux's strategy of sacrificing binary compatibility for the freedom to innovate and keep the kernel clean has proven powerful over time.
There are some exceptions, such as graphics, which touches many layers of the stack. The transition to compositing desktops has hurt performance in Linux scenarios where a beefy GPU isn't available. And the more functionally complex KMS/DRM driver model has caused its own share of user-visible problems. OS features like multiple-monitor support, which require participation at many layers, have been difficult to achieve.
At the application level, the story is more mixed than the kernel. Competing libraries, versions and package managers make it difficult to port even open-source applications to every distribution. There are many potential libraries to become dependent on, and each dependency may evolve in ways that will demand that your application change or bifurcate to keep up.
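The porting problem boils down to shared dependencies: two applications on the same system may demand incompatible version ranges of one library, and no single installed version can satisfy both. A minimal sketch (the library names and version ranges here are invented for illustration):

```python
# Sketch of the shared-dependency problem. Each app declares a half-open
# version range for a common library; the packager must find one installed
# version acceptable to every app.
requirements = {
    "app-one": ("libfoo", (1, 0), (2, 0)),  # needs 1.0 <= libfoo < 2.0
    "app-two": ("libfoo", (2, 1), (3, 0)),  # needs 2.1 <= libfoo < 3.0
}
available = [(1, 4), (2, 2), (2, 5)]        # versions in the repository

def satisfiable(reqs, versions):
    """Return a version acceptable to every app, or None if ranges conflict."""
    for v in versions:
        if all(lo <= v < hi for _, lo, hi in reqs.values()):
            return v
    return None

print(satisfiable(requirements, available))  # -> None: the ranges conflict
```

When the ranges conflict like this, each distro resolves the tension differently (holding packages back, patching, or shipping parallel versions), which is exactly why a single binary rarely ports cleanly everywhere.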
Some distros focus on minimizing churn, and some make compromises by providing facilities such as a commercial app store. Ubuntu and Red Hat Enterprise Linux are best known for playing this role on the desktop. When that plot has been lost, back-to-basics distros like Linux Mint have won converts.
In the cloud, Amazon has a growing list of applications in its marketplace built on Linux (often Red Hat- or Debian-based). You can see Linux's customizability, maintainability and lack of licensing cost at work in how Amazon prices Linux-based hosting vs. Windows hosting. Linux is a more cost-effective choice.
Can the Linux distributions do a better job for end users? Yes, definitely. Forks like Ubuntu's effort to replace X with Mir should not be undertaken so lightly. It would be a boon to Linux adoption if there were more efforts to consolidate or combine projects and present a unified front to applications and end users. Is the universe forever expanding until we're all on isolated islands, or will it consolidate? It's a delicate balance. We could start with a joint development conference between the Canonical and Red Hat teams to find common ground.
On the plus side, Linux's software and hardware diversity make it difficult for mass exploits. And more fundamentally, open source is essential for trust through transparency. Any individual, nation or corporation can do its own code review to weed out potential exploits. While not a panacea for sophisticated, targeted attacks, these characteristics allow Linux to deliver the best security of any platform in practice.
Despite the challenges, the future for Linux is unstoppably bright.
Beyond the desktop, Linux leads in servers and has completely conquered the embedded market. It's impossible to conceive of any one company being able to port a commercial OS to as many CPU and device architectures as Linux has. As the embedded world transitions from 8/16-bit processors to 32-bit, billions of new devices will be running Linux inside.
Even today, average Microsoft or Apple executives are probably (knowingly or unknowingly) running several copies of Linux in their homes: perhaps their TV, router, thermostat or the coming generation of appliances and vehicles.
Linux is currently winning the mobile race with Android and grabbing significant “desktop” share with Chrome OS. Efforts such as CyanogenMod, Firefox OS and Ubuntu for phones will keep creating innovative options. And as applications and users increasingly live within the browser, the barriers to move between platforms will continue to fall.
The revolution that Linus started 20 years ago is accelerating. “World domination”, which was previously a sly joke, now appears inevitable. Happy birthday, Linux Journal.