If silo builders control the edge, the distributed future can't happen.
In December 2016, Peter Levine (peter.a16z.com) of the venture firm Andreessen Horowitz published a post with a video titled “Return to the Edge and the End of Cloud Computing” (a16z.com/2016/12/16/the-end-of-cloud-computing). In it, he outlines a pendulum swing between centralized and distributed computing that goes like this:
Mainframe / Centralized / 1960–1970 ↔ Client-Server / Distributed / 1980–2000
Mobile-Cloud / Centralized / 2005–2020 ↔ Edge Intelligence / Distributed / 2020–
He says the “total addressable market” in that next pendulum swing will include the Internet of Things, with trillions of devices, starting with today's cars and drones. He also says machine learning will be required to “decipher the nuances of the real world”.
Thanks to bandwidth and latency issues, most of this will have to happen at endpoints, on the edge, and not in central clouds. Important information still will flow to those clouds and get processed there for various purposes, but decisions will happen where the latency is lowest and proximity highest: at the edges. Machines will make most of those decisions with the data they gather (and the data gathered for them). That's because, he says, humans are bad at many decisions that machines make better, such as in driving a car. Peter has a Tesla and says “My car is a much better driver than I am.” In driving for us, machine-learning systems in our things will “optimize for agility over power”. Systems in today's fighter aircraft already do this for pilots in dogfights. They are symbiotes of a kind, operating as a kind of external nervous system for pilots, gathering data, learning and reacting in real time, yet leaving the actual piloting up to the human in the cockpit. (In dogfights, pilots also do not depend on remote and centralized clouds, but they do fly through and around the non-metaphorical kind.)
The learning curve for these systems consists of three verbs operating in recursive loops. The verbs are sense, infer and act. Here's how those sort out.
Sense
Data will enter these loops from sensors all over the place—cameras, depth sensors, radar, accelerometers. Already, he says, “a self-driving car generates about ten gigabytes of data per mile”, and “a Lytro (https://www.lytro.com) camera—a data center in a camera—generates 300 gigabytes of data per second.” Soon running shoes will have sensors with machine-learning algorithms, he adds, and they will be truly smart, so they can tell you, for example, how well you are doing or how well you ought to be doing.
Infer
The data from our smart things will be mostly unstructured, requiring more machine learning to extract relevance, perform task-specific recognition, train for “deep learning”, increase accuracy and automate what needs to be automated. (This will leave the human stuff up to the human—again like the fighter pilot doing what only he or she can do.)
Act
As IoT devices become more sophisticated, we'll have more data accumulation and more processing decisions, and in many (or most) cases, machines will make ever-more-educated choices about what to do. Again, people will get to do what people do best. And they'll do it based on better input from the educated things that also do what machines do best on their own.
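To make that loop concrete, here is a minimal Python sketch of the sense-infer-act cycle running at the edge. Every name in it (read_sensors, Model.infer, apply_controls) is a hypothetical placeholder for illustration, not any real device API; the point is only the shape of the loop: read local sensors, run inference locally, act, repeat.

import time

class Model:
    """Stand-in for an on-device machine-learning model."""
    def infer(self, readings):
        # A real system would run a trained network locally, at the edge,
        # so decision latency stays low.
        return {"steer": 0.0, "brake": readings.get("obstacle", 0.0) > 0.5}

def read_sensors():
    # Cameras, depth sensors, radar, accelerometers and so on.
    return {"obstacle": 0.1, "speed": 12.3}

def apply_controls(decision):
    # Act locally; only curated summaries would ever go back to a cloud.
    print("acting on", decision)

def sense_infer_act(model, steps=3, hz=10):
    period = 1.0 / hz
    for _ in range(steps):
        readings = read_sensors()          # sense
        decision = model.infer(readings)   # infer
        apply_controls(decision)           # act
        time.sleep(period)

if __name__ == "__main__":
    sense_infer_act(Model())

Nothing in the loop waits on a distant data center, which is the whole argument for pushing intelligence to the edge.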
Meanwhile, the old centralized cloud will become what he calls a “training center”. Since machine learning needs lots of data in order to learn, and the most relevant data comes from many places, it only makes sense for the cloud to store the important stuff, learn from everything everywhere, and push relevant learnings back out to the edge. Think of what happens (or ought to happen) when millions of cars, shoes, skis, toasters and sunglasses send edge-curated data back to clouds for deep learning, and the best and most relevant of that learning gets pushed back out to the machines and humans at the edge. Everything gets smarter—presumably.
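Here is a similarly hedged sketch of that training-center pattern, with hypothetical EdgeDevice and TrainingCenter classes standing in for real fleets and real clouds: devices curate data locally, push only the important stuff upstream, and pull the retrained model back down.

class TrainingCenter:
    """Stand-in for the centralized cloud acting as a training center."""
    def __init__(self):
        self.samples = []
        self.version = 0

    def ingest(self, curated_batch):
        self.samples.extend(curated_batch)

    def retrain(self):
        # Placeholder for deep learning over everything, everywhere.
        self.version += 1
        return {"version": self.version, "trained_on": len(self.samples)}

class EdgeDevice:
    """Stand-in for a car, shoe, ski, toaster or pair of sunglasses."""
    def __init__(self, name):
        self.name = name
        self.model = {"version": 0, "trained_on": 0}

    def curate(self):
        # Only the important stuff leaves the device.
        return [{"device": self.name, "summary": "interesting event"}]

    def update(self, model):
        self.model = model

center = TrainingCenter()
fleet = [EdgeDevice("car-1"), EdgeDevice("shoe-7"), EdgeDevice("toaster-3")]

for device in fleet:
    center.ingest(device.curate())   # edge-curated data goes up

new_model = center.retrain()         # the cloud learns from everything

for device in fleet:
    device.update(new_model)         # learnings get pushed back out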
His predictions:
Sensors will proliferate and produce huge volumes of geo-spatial data.
Existing infrastructure will back-haul relevant data while most computing happens at edges, with on-site machine learning as well.
We will return to peer-to-peer networks, where edge devices lessen the load on core networks and share data locally.
We will have less code and more math, or “data-centric computing”—not just the logical kind.
The next generation of programmers won't be doing just logic: IF, THEN, ELSE and the rest.
We'll have more mathematicians, at least in terms of talent required.
Also expect new programming languages addressing edge use cases.
The processing power of the edge will increase while prices decrease, which happens with every generation of technology.
Trillions of devices in the supply chain will commoditize processing power and sensors. The first LIDAR for a Google car was $7,000. The new ones are $500. They'll come down to 50 cents.
The entire world becomes the domain of IT. “Who will run the fleet of drones to inspect houses?” When we have remote surgery using robots, we also will need allied forms of human expertise, just to keep the whole thing running.
We'll have “consumer-oriented applications with enterprise manageability.”
His conclusion: “There's a big disruption on the horizon. It's going to impact networking, storage, compute, programming languages and, of course, management.”
All that is good as far as it goes, which is toward what companies will do. But what about the human beings who own and use this self-educating and self-actualizing machinery? Experts on that machinery will have new work, sure. And all of us will, to some degree, become experts on our own, just as most of us are already experts with our laptops and mobile devices. But the IoT domain knowledge we already have is confined to silos. Worse, the silo-ization of smart things is accepted as the status quo.
Take for example “Google Home vs. Amazon Echo—a Face-Off of Smart Speakers” by Brian X. Chen in The New York Times (https://www.nytimes.com/2016/11/04/technology/personaltech/google-home-vs-amazon-echo-a-face-off-of-smart-speakers.html?_r=1). Both Google Home and Amazon Echo are competitors in the “virtual assistant” space that also includes Apple's Siri and Microsoft's Cortana. All are powered by artificial intelligence and brained in the server farms of the companies that sell them. None are compatible with one another, which means none are substitutable for another. And all are examples of what Phil Windley called The Compuserve of Things in a blog post by that title exactly three years ago (www.windley.com/archives/2014/04/the_compuserve_of_things.shtml). His summary:
On the Net today we face a choice between freedom and captivity, independence and dependence. How we build the Internet of Things has far-reaching consequences for the humans who will use—or be used by—it. Will we push forward, connecting things using forests of silos that are reminiscent of the online services of the 1980s, or will we learn the lessons of the internet and build a true Internet of Things?
If progress continues on its current course, the distributed future Peter Levine projects will be built on the forest-of-silos Compuserve-of-things model we already have. Amazon, Apple, Google and Microsoft are all silo-building Compuserves that have clearly not learned the first lesson of the internet—that it was designed to work for everybody and everything, and not just so controlling giants can fence off territories where supply chains and customers can be held captive. For all their expertise in using the internet, these companies are blindered to the negative externalities of operating exclusively in their self-interest, oblivious to how the internet is a tide that lifts all their economic and technological boats. In that respect, they are like coal and oil companies: expert at geology, extraction and bringing goods to market, while paying the least respect to the absolute finitude of the goods they extract from the Earth and to the harms that burning those goods causes in the world.
But none of that will matter, because the true Internet of Things is the only choice we have. If all the decisions that matter most, in real time (or close enough), need to be made at the edge, and people there need to be able to use those things expertly and casually, just like today's fighter pilots, they'll need to work for us and not just their makers. They'll be like today's cars, toasters, refrigerators and other appliances in two fundamental ways: they'll work in roughly the same ways for everybody, so the learning curve isn't steep; and they'll be substitutable. If the Apple one fails, you can get a Google one and move on.
For an example, consider rental cars. They're all a bit different, but you know how to drive all of them. Sure, there are glitches. Every Toyota I rent plays Edith Piaf (from somewhere in my music collection) as soon as I plug my phone into the car's USB jack. Other car brands have their own dashboard quirks. (Last month, my visiting sister rented a Chrysler 200, which had the stupidest and least useful climate control system I've ever seen, but it was easy for both of us to drive.)
Also, as the ever-more distributed world gets saturated by smart things on Peter Levine's model, we will have more need to solve existing problems that get worse every day. Some examples:
Too many login and password combinations, plus the fact that we still need logins and passwords at all. Geez, it's 2017. We can do better than that.
Too many ways to message each other. Last I counted, Apple's App Store had something like 170 different messaging apps, and Google Play had more than a hundred. The only standard we ever had for bridging them all was XMPP, originally called Jabber, which I advocated mightily in Linux Journal, back around the turn of the millennium. (See “The Message” at www.linuxjournal.com/article/4112, “Talking Jabber” at www.linuxjournal.com/article/4113 and “Jabber Asks the Tough Question” at www.linuxjournal.com/article/5631.) For whatever reason, XMPP stalled. (Never mind why. Make another standard protocol everyone can adopt.)
Too many contacts and too few ways of connecting them to login/password management, to-do lists, calendars or other ways of keeping records.
Calendar and contact apps siloed in the bowels of Apple, Microsoft, Google and others, with too few compatibilities.
To solve all these problems, you need to start with the individual: the individual device, the individual file, the individual human being.
If you start with central authority and central systems, you make people and things subordinate dependents, and you see only the problems that silos can solve. All your things and people will be captive, by design. No way around it.
Why do I pose this challenge here? Two reasons: 1) because Linux answered the same challenge in the first place, and it can again; and 2) because Linux geeks have the best chance of both grokking the challenge and doing something about it.