The ever-expanding substrate

“Prediction is very difficult, particularly about the future.”
Niels Bohr, Danish physicist
Yogi Berra, baseball player
Robert Storm Petersen, cartoonist
Et al.1

Prediction, the art of foretelling what will happen, is inevitably fraught with uncertainty — to the extent that a wide variety of people are credited with the quotation above. In technology circles especially, predictions often sit at the bottom of the heap, a few tiers down from lies, damn lies, statistics and marketing statements from computer companies. In scientific circles the concept does have a head start on other domains, given that a prediction needs a certain level of pre-considered, and potentially peer-reviewed, proof behind it. “I predict that the ball will land at point X” is a very different proposition to “I predict that the world will arrive at point X, given the way technology is going”. So, should we even try to consider where technology is taking us? Fortunately, we have a number of quite solid, proven premises upon which to construct our views of the future.

First, as observed by Gordon Moore all those years ago, computers and other devices are getting smaller and faster; we now carry a mainframe’s worth of processing in our pockets. As computers shrink, they need less energy to function, so what was once unsuitable because of insufficient power or space becomes achievable. Meanwhile, harnessing the properties of light has enabled networks to reach many times around the globe. And, for the simple reason that technology brings so much value to so many domains, continued investment in it has driven innovation and supply/demand economics, leading to costs falling at a phenomenal rate and making what was once impractical or unaffordable now commonplace. As computer equipment salespeople know, it is becoming less and less viable to hold any stock, such is the rate at which it goes out of date.
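To put some rough numbers on that compounding, the sketch below is purely illustrative: it assumes, in the loose spirit of Moore’s observation rather than as an exact law, that capability per unit cost doubles every couple of years, and the doubling period is an assumption rather than a measurement.

```python
# Illustrative only: compound growth in capability per unit cost,
# assuming a doubling roughly every two years (an assumption, not a law).
def capability_multiple(years: float, doubling_period_years: float = 2.0) -> float:
    """Return how many times more capable (per unit cost) hardware becomes."""
    return 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    for years in (2, 10, 20, 40):
        print(f"After {years:>2} years: ~{capability_multiple(years):,.0f}x")
    # At this rate, forty years yields a multiple of over a million,
    # which is why a pocket device can outpace a decades-old mainframe.
```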

The overall impact is that the threshold of technological viability is falling. What was once impossible, for whatever reason, becomes not only probable but (sometimes very quickly) essential. For example, it may still not be viable for the whole population to wear health-monitoring equipment; as costs fall, however, the idea of a low-cost device that signals a loved one in case of disaster becomes highly attractive. Not everything is yet possible, due to such constraints: when we create information from data, for example, we often face a best-before time limit, beyond which it no longer makes sense to be informed. This is as true for the screen taps that make up a WhatsApp message as for a complex medical diagnosis. And, as so neatly illustrated by a jittery YouTube stream, we also have a threshold of tolerance for poor quality.

Such gradually reducing constraints — time, space, power and cost — have guided the rate of progress of the information revolution, and continue to set the scene for what is practical. Compromises still have to be made, inevitably: we cannot “boil the ocean”, nor can we attach sensors to every molecule in the universe (not yet, anyway). While the sky is the theoretical limit, in practice we cannot reach so high. All the same, in layman’s terms, the more electronics we use, the cheaper and better electronics become, making the wheel of innovation spin faster. The result is that the exception becomes the norm, driving a kind of ‘technological commoditisation’: capabilities that used to be very expensive are becoming available anywhere and, quite literally, as cheap as chips. Tech companies have a window of opportunity to make hay from the new capabilities they create (such as Intel), integrate (Facebook) or use (Uber) before their ‘unique selling points’ are rolled into the substrate.

This inevitable process has seen the demise of many seemingly unassailable mega-corporations over recent years, particularly in the tech space. No innovation ever really goes away, however; rather, we just get a bigger, cheaper toolbox with which to construct our technological future. All the same, technology still has a way to go before its tendrils reach a point beyond which it makes no sense to continue. Does this mean we will all be living in smart cities in the immediate future? Are we to become cyborgs, or brains in vats, experiencing reality through some computer-generated set of sensory inputs? Realistically, no — at least not in the short term. Even so, the commoditised substrate of technology — the cloud — will continue to grow in power, performance and capability, at the same time as sensors, monitors and control devices appear in a widening array of places, extending technology’s reach deep into our business and personal lives.

So to predictions, the domain of scientists, baseball players and cartoonists; in other words, all of us. The thought experiment is relatively simple, as it involves answering the straightforward question, ‘what if?’. The longer version is as follows: even if we are not going to see our entire existence transformed by technology (the brains-in-vats scenario), we are nonetheless going to see it augment our lives in every aspect: how we communicate, how we live, how we stay well, how we do business. The stage is set for acting, living and being smarter. What is harder to predict, however, is the order in which things will happen, and the impact they will have. Technology may be instrumental in helping us all live much longer, for example, but the very dynamics of society will have to change — indeed, they are already changing — as a result.

Niels Bohr died in 1962, but not before he was instrumental in the creation of CERN, the august institution that later curated the ‘invention’ of the World Wide Web. While it is worth remaining sanguine about technology’s potential, keep in mind that the simplest ideas, the ones that go on to take the world by storm, are often the hardest to predict. Taking full advantage of the potential offered by this miraculous technological substrate needs some equally clever thinking, in the shape of software, algorithms and mathematics applied to the vast data sets we now have available, which we shall look at next.