Smart Shift

Introduction

Recent generations of technology have made us more connected than ever. We can (and do) broadcast our every movement, every purchase and every interaction with our mobile devices and on social networks, in the process adding to what was already a mountain of information. But what happens when it becomes possible to mine the seams of gold held within?

Drawing on Snowden and the Panama Papers, Babbage and Lovelace, Ancient Greece and the Wild West, Smart Shift takes us on a technological journey towards a thrilling and, sometimes, downright scary future. As we share every facet of our lives we move inexorably towards the transparent society, in which nothing can be hidden. Part history, part explanation, part manifesto, Smart Shift reflects on the sheer quantity of information being generated, the clever ways in which people are using it, and the inadequacy of current legislation. It looks at topical examples, such as algorithmic trading and the global banking disaster, or the use of drones in both war and peace.

It concludes with two insights: first, concerning the nature of the contract we need between each other; and second, the importance of maintaining our own sense of identity. Are we ready for the shift? It is time to find out.


Preface

Smart Shift was written between 2014 and 2016, and has both very quickly gone out of date, and remained grounded in history. Writing a book about the turbulent times in which we live was always going to be a challenge. While some parts will already be out of date, it is hoped that the overall themes will remain constant.

Note that I have stuck with Unix rather than the more historically accurate UNIX, simply because it is more readable. I have used a capital B to differentiate the Blockchain used by Bitcoin from more generally available blockchain mechanisms.

Thank you to Chris Westbury at the University of Alberta, Canada for his paper[1] “Bayes for Beginners”. Population ecologist Brian Dennis’ (1996) polemic on why ecologists shouldn’t be Bayesians is also essential reading, and a lot of fun.
 
Additional references are scattered throughout the book. If you have any feedback or comments, please let me know — this journey is a long way from being over.

Jon Collins, 2018


Acknowledgements

Smart Shift has been written and pulled together over four years, based on innumerable conversations, calls and briefings, articles written and read. As such, there are simply too many people to thank but I will try.

First off, thank you to Ed Faulkner, previously at Virgin Books (now at Random House) for starting me on this adventure, perhaps inadvertently, over a pizza next to the Thames. Thank you also to Martin Veitch at IDG Connect, for giving me the opportunity to write what I wanted to write and therefore making this book possible.

Thank you to my son Ben, for introducing me to the works of Temple Grandin and making me think; and to my daughter Sophie for her encouragement and making me act.

To my friend and colleague Simon Fowler for his insights, feedback and general stoicism in the face of my scatterbrained sensibilities.

To Aleks Krotoski for alerting me to Kranzberg’s first law of technology, that “Technology is neither good nor bad; nor is it neutral.”

To my very good friend and occasional collaborator Rob Bamforth, for the expression, “It’s not convergence, but collision.”

To Brian Robbins for the reference to the Mosquito drone, and for keeping me grounded.

To Tim Bolton at Philips Components for introducing me to the concept of simulated annealing, all those years ago, and for helping me realise that writing was just something humans could do.

To my Alcatel colleague Brigitte Leglouennec for the “keep it for five years” storage model for wine.

To Robin Bloor, for starting me on this journey back in 1999. And to Dale Vile, friend and serial colleague, for keeping me facing in the right direction.

To Roger Davies, for introducing me to the concept of value, which has touched all parts of my business and personal life.

To Judy Curtis, for all her help and support.

To my partners in crime at Leeds University, for showing me there was fun to be had with technology.

To Martin Haigh at the UK Land Registry.

To Mike Lynch, and David Vindel at Ketchum for the introduction.

But most of all to my long-suffering wife, best friend and soul mate Liz, for all the joy we have shared over the past 30 years.

I said I would write about owl noises, so I have.

1. Foreword

“Where it began, I can't begin to knowin’…” Neil Diamond, Sweet Caroline

While fossil evidence of the dawn of humanity was first discovered in the Kibish Formation in the Lower Omo Valley in 1967, it wasn’t until 2005 that the remains of a skull were satisfactorily dated at 195,000 BC, plus or minus 5,000 years. What came before this point in our collective history is unknown, and indeed, scant evidence exists as to what happened in the millennia afterward.

All we do know is that, in the 150,000 years or so that followed, we experienced some kind of cultural epiphany, a waking up of our individual and collective consciousnesses. By 40,000 BC we were creating music and art, we had begun to breed animals and harvest crops, and we had spread across all five continents, leaving the beginnings of a historical trail behind us.

While 150,000 years may sound like an inordinate amount of time, it isn’t that great. Imagine if, as a child, you were given a note by a very old man — let’s say he was 90 years your senior. If you lived to a similarly ripe old age, you might pass the note to a child — it isn’t impossible to think that it could become some kind of tradition. Passing such a note from elder to junior would only need to happen 1,700 times for 150,000 years to be reached. To put this in context, if each five-second handover happened back to back, stripped of the intervening years, the whole exercise would take under six hours.

But we digress. Conflicting theories exist as to what happened in the intervening period, many of which have been based on the quite recent and sudden availability of mitochondrial DNA (mtDNA) evidence, leading to an explosion of studies. One of the more romantic, and therefore attractive, theories is that, 74,000 years ago, humanity was at its lowest ebb. Faced with environmental extremes for the preceding 50,000 years or so, humanity then suffered the onslaught of a volcanic eruption from Mount Toba in Sumatra. Geological records suggest this lasted six years, and was sufficiently impactful that it caused an ice age lasting a further thousand years.

At our lowest point, so the story goes, only the 2,000 hardiest human souls, the most creative, clever and strong, were able to survive, somehow, living on their wits and their ability to harness the flora and fauna of the world around them. From this pool of hard, smart individuals we came, in all our diverse, multicultural glory. Humanity triumphed over adversity and, as a result, it inherited the earth.

It’s a lovely story but probably not true, according to anthropologist John Hawks. Not only did Hawks dispute the theory (setting a more likely figure at around 120,000 people, itself not enormous), he put the subsequent surge down to cultural connectivity and the way we have connected and bred. To illustrate this point, when Homo sapiens arrived in Europe, noted examples such as Oase Boy suggest humans were not against procreating with the indigenous Neanderthal population.

But what drove such cultural connectivity? One view, from Lund University, is that the creation of tools has itself driven culture. "When the technology was passed from one generation to the next, from adults to children, it became part of a cultural learning process which created a socially more advanced society than before. This affected the development of the human brain and cognitive ability," says Anders Högberg, PhD. In other words, as we started to collaborate on creating tools, we created habits that we have been honing ever since and which are still a deep part of our collective psyche. Innovation drove collaboration and vice versa, catalysing the explosion of creativity which has transformed humanity.

And we haven’t really looked back. It does raise the question — what would our forebears of 40,000 years ago have made of the environment they would see before them today? The wars and famines we still experience would have been all too familiar; the mannerisms, behaviours and even the music potentially very similar. But what would they have thought of the automobile and telephone, of iPads and websites?

In this book we try to answer exactly this question. We start by looking at the technological threshold upon which we now stand — the processors and networking, the software and protocols that are creating a smarter bedrock upon which our collective futures are being built. We also look at the resulting foundation of technologies — ‘cloud’-based processing using massively powerful computers to analyse (but also generate - yes, that’s a dilemma) vast quantities of data and connect it to a widening network of increasingly small devices. Is this foundation finished? Absolutely not — but its near-future shape is now clear to see.

We then look at what innovators have been doing with this set of capabilities. We consider the role of software and the sometimes-maverick characters that have created it, together with the resulting data — variously compared to mountains or fire hydrants. Bringing all these pieces together we look at the ‘open’ movement, both a consequence and a cause of technological advancement.

With an understanding of our technological foundation in place, we consider the impact the shift is having on our corporations, on our culture and on ourselves. We look at how technology is changing the nature of democracy and public intervention, how youngsters are being born into an information-fuelled world, even as older generations still struggle with the basics. And we review whether customers are clear on the ramifications of corporate actions on their privacy and wellbeing.

From present to near future, we look at where technology is taking us as machine learning and mobile, drones, 3D printers and data farms conspire to give us what are tantamount to superpowers. What are we going to make of it all, and indeed, where do we go from here? Are we heading towards a utopia or armageddon, and what might influence our direction one way or another? What do we need to keep in mind as people and as entities, to ensure we maximise the good and minimise the bad? How should we consider such tools from a media, business and government perspective? We identify issues which need to be kept in mind, and highlight areas that should be treated as a priority by policy makers and strategists.

The very ground beneath our feet is becoming smarter, and we can choose to move with it or be moved by it. Which leads to the conclusion — that while technology can help us in the process, becoming smarter is ultimately something we will have to do for ourselves.

The smart shift is happening. Are you ready?

2. Innovation abhors a vacuum

In a beechwood in the Cotswolds, a rural idyll1 in the South West of England, the sun streams through a canopy high above, creating dappled pools of light across the moist loam. Fully grown trees stretch thirty or forty metres into the air, barely allowing for competition. But competition there is — saplings extend as high as they dare, their inch-wide trunks bearing a minimum of leaves. Only a few will survive, but those capturing just enough sunlight will develop faster than their peers, eventually becoming the giant trees that cast shade on the others.

When we reference Darwin’s survival of the fittest, we tend to personify the nature of adaptation, as though the trees actively strive for the light. They do not: it is their nature to respond to sunlight’s life-giving energy, just as it is the nature of a grass root to follow a trail of humidity through a porous brick wall. These are not near-hopeless attempts to survive; rather, they represent chemical responses to external stimuli. They remain mind-bogglingly impressive, as are the adaptations that gave hover-flies the ability to mimic the sounds and colourings of wasps, or the physiological capability of humans to change the thickness of their blood with altitude.

Equally thrilling is the unforced, cool inevitability of innovation, made possible as human nature, in all its precociousness and ingenuity, draws energy from the pools of light beneath the canopies of our existence, its consequences sometimes bursting quite unexpectedly into life.


  1. If you haven’t walked the section of the Cotswold Way between Bisley and Cranham, you probably should. 

2.1. Just another journey on the bus

One bleak1 January morning in 20092, Susan Boyle left her council house in Blackburn, West Lothian and set out on a journey that was to change her life. As the story goes, some six buses later she arrived at Glasgow’s Clyde Auditorium just in time for her Britain’s Got Talent audition — only to be told there was no record of the appointment. After a nerve-jangling wait, the receptionist informed Susan that she could be squeezed in.3 It was hardly an auspicious beginning for the diminutive singer, even if history later showed4 that she was in exactly the right place, at the right time.

Susan Boyle wasn’t the only runaway success of 2009. At the same time as she took her six buses, the London-based press team at Hitwise UK were putting the finishing touches on a news release for one of their clients, Twitter. It was already apparent that Twitter was a phenomenon: the number of subscribers had increased5 ten-fold (947%) in the previous twelve months, making it “one of the fastest-growing websites of the year.” Less clear at the time was just how successful it would become, nor the impact it would have.

Spurring Twitter’s success in America was some careful use of online tools by presidential hopeful Barack Obama – a report on the campaign by PR agency Edelman noted how, by the time of the election three months before, Obama had over 13 million subscribers to his email list and 5 million ‘friends’ across 15 social networking sites.6 Twitter’s contribution was relatively insignificant: 115,000 followers, a figure that increased to 140,000 by January 2009.7

All the same, the still-nascent social site, which forced its users to limit themselves to 140-character broadcast messages, was intriguing people from all walks of life. Not least actor Ashton Kutcher, who joined Twitter just five days before the Hitwise announcement (and indeed, Susan Boyle’s audition).8 Five days after both events, on 26 January, his then-wife Demi Moore found herself caught in Twitter’s spell.9 The pair rapidly accumulated followers: by the start of April, Ashton already had 675,000 followers and Demi, 380,000 – over a million connections between them.10

Back in the UK the following week, press releases from Britain’s Got Talent production company Talkback Thames announced that a phenomenon was about to take place.11 Insiders and those who had attended Susan Boyle’s live audition were already in on the act. Many felt that they had seen it all before — in the first series of Britain’s Got Talent, ‘lump of coal’ Paul Potts had surprised everyone with his rendition of Nessun Dorma,12 and his album ‘One Chance’ went on to chart at number one in nine countries13. Surely that was a one-off?

The Susan Boyle episode of BGT was broadcast on Saturday 11th April. Producers and editors at Talkback Thames had done their best to maximise Susan Boyle’s potential for success: while her opening comments about wanting to be like Elaine Paige were doubtless from the heart, they were carefully spliced with footage of disdainful looks from audience and judges, maximising the impact of the then-dowdy performer.14

The standard media frippery worked. As with Paul Potts before her, TV audiences were unexpectedly wowed by Susan’s rendition of ‘I Dreamed a Dream’ – rightly so, as she gave the performance of a lifetime. But the story didn’t end there. Some viewers (200 of them15) were so entranced by the performance that they uploaded a clip of the show to YouTube — without a glance at copyright, of course!

Susan Boyle’s online video was almost immediately popular: through a fortunate trick of time zones, by the time US audiences were paying attention the clip had already seen several thousand views in Europe, drawing the attention of the Web-savvy Kutchers. At 7.35am UK time on Sunday April 12th, Ashton tweeted a Digg link to the video16 with the message, “This just made my night.” Eight minutes later Demi Moore tweeted back, “You saw it made me teary!”

Demi and Ashton were no strangers to obscure, even titillating messages such as “Race you to the bedroom”, sometimes with photos attached,17 which encouraged their followers to click18 on whatever link came their way. There was no mention of the singer in the exchange, but in the fortnight that followed, Susan Boyle’s YouTube clip was watched more than 50 million times.

Of course, this wasn’t entirely down to the popularity of Ashton Kutcher and Demi Moore – but their interest catalysed a wave of Twitter activity that snowballed across the continents. The level of interest across public and press, in both the UK and US, exceeded everybody’s expectations – not least those of Talkback Thames and Simon Cowell’s Syco. While the producers felt they had done everything they could to big up Susan Boyle (as they had the other contestants across the series), they had not planned for this. It took a full ten days to allocate Susan her own PR firm,19 despite the number of press mentions breaking the thousand mark.

A week after the show, American talk-show host Rosie O’Donnell described both Simon Cowell’s reaction and the greater impact of Susan Boyle’s audition20. “This was so rare... something authentic in a world that is usually manufactured,” she said. “It was a perfect moment which will never happen again." She may have been right. In the years since Susan Boyle became a global phenomenon,21 we have seen social tools such as Twitter and Facebook become instrumental in all kinds of world-changing events. While the exact causes of such an avalanche effect are difficult to unpick, celebrity involvement undoubtedly helped, not least because at the time Twitter’s biggest hitters were competing for followers. It’s not unrelated that under a week after Susan’s audition, Ashton had achieved his own goal of being the first Twitter user to amass over a million followers, pipping CNN22 at the post:

“In the much-publicized duel, Kutcher's Twitter account crossed the 1 million mark on Twitter about 2:13 a.m. ET Friday, narrowly beating CNN's breaking-news feed, which had 998,239 followers at the time. CNN passed the mark at 2:42 a.m. ET.”

Many such battles have already been won, then almost immediately rendered irrelevant. Oprah Winfrey, who ‘launched’ her Twitter presence on the same day, has over seven million Twitter followers at the time of writing; the most followed users are Lady Gaga, Justin Bieber and Barack Obama, which says as much as we need to know about our celebrity culture. With active Twitter users totalling more than 250 million23, there is no longer any ‘first mover advantage’ to be gained. Even Twitter itself is flagging in the bigger race for social media supremacy.

The potential for such online tools to have a profound impact is still being bottomed out. Events include the ousting of President Mubarak in Egypt,24 the public backlash against the News of the World which brought down the Sunday paper, and the incitement to riot and loot in the UK. Each time a snowball effect occurs, millions of individuals contribute one tiny part, but the overall effect is far-reaching. The darker side of such minuscule moments of glory is that our actions may not always be entirely our own. Psychologists are familiar with how people follow crowds: on the positive side, social tools enable power to the people and new opportunities for democracy; conversely however, we leave ourselves open to the potential for a hive mind, which brings out our worst character traits.

Not only might this lead us towards less savoury kinds of group-think, including cyber-bullying and other forms of online abuse; it is also highly open to exploitation. Media organisations are already learning lessons from examples such as Susan Boyle’s success, researching ways of ‘leveraging’ our online and offline characteristics to maximise the impact of any campaign. On Egypt, Ahmed Shihab-Eldin, a producer for the Al-Jazeera English news network, was quoted as saying,25 “Social media didn't cause this revolution. It amplified it; it accelerated it.” However, the boundary is diminishing between such efforts and the manipulation of audiences, customers and citizens, and history suggests that if the opportunity for exploitation exists, some will take it.

Susan Boyle found herself in the eye of a perfect storm, the specific likes of which we may never see again. However, the social media phenomenon is not the last time technology and demographics will work together to disrupt the ways we act, and indeed interact. We are already entering the next waves, of machine learning and algorithmics, of the Internet of Things and sensor networks, each of which will strongly influence our thinking and behaviours.

The news is not all bad: indeed, we stand to gain a great deal as a race and as a collection of cultures, with as much cause to celebrate as to be concerned, as we shall see in the next chapter. Anyone fancy a tipple?


  1. http://www.metoffice.gov.uk/climate/uk/2009/january.html 

  2. Wednesday 21st January 

  3. http://www.dailymail.co.uk/tvshowbiz/article-1317220/Susan-Boyle-come-said-doctors.html 

  4. http://www.dailyrecord.co.uk/news/editors-choice/2010/01/21/hard-to-believe-but-true-one-year-ago-nobody-had-heard-of-susan-boyle-86908-21983825/ 

  5. http://weblogs.hitwise.com/robin-goad/2009/01/twitter_traffic_up_10-fold.html 

  6. http://www.europeanbusinessreview.com/?p=1627 

  7. http://crave.cnet.co.uk/software/re-tweeting-obama-twitter-and-the-presidential-inauguration-online-49300681/ 

  8. http://twitter.com/#!/aplusk/statuses/1123211498 

  9. http://twitter.com/#!/mrskutcher/statuses/1150053991 and http://twitter.com/#!/kevinrose/status/1150514889 

  10. http://articles.cnn.com/2009-04-03/entertainment/moore.twitter.threat_1_tweet-twitter-message 

  11. http://news.scotsman.com/entertainment/Interview-The-prime-of-Miss.5196739.jp 

  12. http://www.contactmusic.com/news.nsf/story/paul-potts-to-release-album-in-15-countries_1036331 

  13. http://en.wikipedia.org/wiki/Paul_Potts 

  14. http://www.guardian.co.uk/commentisfree/2009/apr/16/britains-got-talent-susan-boyle 

  15. http://corp.visiblemeasures.com/news-and-events/blog/bid/9115/Susan-Boyle-s-Got-Viral-Video-Talent 

  16. http://digg.com/d1oVwl 

  17. “Bed time” December 4 2009 http://twitpic.com/s2aoh
    http://twitter.com/#!/mrskutcher/status/6328572928 -
    http://twitpic.com/s2bjm RT @aplusk: http://bit.ly/4Xh4De RT @mrskutcher: is it http://bit.ly/566UFw” 

  18. http://thebosh.com/archives/2009/12/demi_moore_and_ashton_kutcher_perfect_twitter_flirting.php 

  19. http://www.dada.co.uk/casestudies/susan-boyle 

  20. http://www.people.com/people/article/0,,20273553,00.html 

  21. Susan Boyle video – 73,027,839 views now on Youtube http://www.youtube.com/watch?v=RxPZh4AnWyk&NR=1 

  22. http://edition.cnn.com/2009/TECH/04/17/ashton.cnn.twitter.battle/ 

  23. https://investor.twitterinc.com/releasedetail.cfm?ReleaseID=862505 

  24. http://www.fastcompany.com/1720692/egypt-protests-mubarak-twitter-youtube-facebook-twitpic 

  25. http://news.cnet.com/8301-13577_3-20031600-36.html 

2.2. There is truth in wine

The Rías Baixas region of Northern Spain is marked by four estuaries — the Rías — the lush lands between which support some 20,000 independent wine growers. The majority of the wine is white because of the unusually resilient Albariño1 grapes, apparently2 brought in by Cistercian monks in the twelfth century. The grape variety proved more than a match for the cool, damp Atlantic climate for many centuries, until the 1870s when the region found itself beset by an aphid called Phylloxera, accidentally imported from the USA by way of a plant specimen sample. The hungry bug devastated3 the continent’s wine harvests and left centuries-old practices in tatters: only vines on the Greek island of Santorini managed to escape, so it is told4.

Wine growers have long memories. First they turned their attentions to hybrid varieties of vine, which were less susceptible to disease though they produced inferior wine. In the 1970s, a hundred years after the event, hybrids were replaced by the lower-yield yet superior traditional vine types, this time grafted onto Phylloxera-resistant American root stocks. While this was seen by many as a compromise, it nonetheless meant that growers in Rías Baixas and elsewhere could start producing their traditional wines once again. This time however, and aided by EU funding, no expense was to be spared on pesticides.

Today the European Union produces5 some 175 million hectolitres per year (or 141Mhl, depending on the source6), equating to about 65% of global production — another disease like Phylloxera would wipe over €30 billion from Europe’s revenues, according to 2010 figures7. Nearly a third of Europe’s production, and 15% of global production, comes from Spain, the biggest wine grower in the European Union8 with9 some 1.2 million hectares (compared to 910 thousand in Italy and 865 thousand in France).

As yields have been pushed to the max in recent years, wine growers themselves are wondering whether there is a smarter way to minimise the risk of disease without increasing the costs and other potential downsides. The spray-it-all approach is both expensive and unhealthy: according to the 2010 report, viticulture uses double the fungicides10 of other types of crop, and about the same amount of pesticides. “The higher consumption of fungicides in viticulture is due to the fact, that Vitis vinifera has no resistance to introduced fungus diseases and requires chemical protection,” it states. A consequence of such intervention is that agriculture has been suffering from the law of diminishing returns — that is, it has applied a series of blunt instruments (including grants and pan-European rulings such as the constraints on vine planting, only being removed in 2015) as far as they will go, resulting in compromises to flavour and quality which undermine the point of production in the first place.

Such considerations, as well as the changing climate and economic factors, mean increasing thought is being given to how wine-growing processes themselves can evolve. “Seasons are changing, weather patterns are different, so working practices are also changing. In addition, organic growing is rising as a trend, which goes back towards reaching the natural balance.” Indeed, pesticides are themselves a relatively modern invention. Counterintuitively for Luddites at least, technology can deliver at least part of the answer, in the form of sensors that can ‘read’ the qualities of the soil. Not only can the resulting analysis determine where and when to apply nutrients (thus saving money and avoiding over-fertilising), it can identify the onset of disease by watching for symptomatic changes to the environment. If vines are being infected, they can be sprayed, isolated or even ripped out before the damage spreads.

A pioneer in this space is Slovenian technologist Matic Šerc, whose company Elmitel is looking11 at the role of sensors12 in wine growing, to monitor temperature, soil composition and so on, and compare the data captured to changes in the weather. Underneath it all is a foundation of technology far broader than just linking the sensors to a computer. The network relies on what has been termed ‘cloud computing’, that is, making use of processing resources accessed over the Internet. And necessarily so: the sensors generate large amounts of information, processing which would require more computing power than even a collective of wine growers would want to fund.
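
To make the idea concrete, here is a minimal sketch of the kind of check such a network might run in the cloud; the thresholds, field names and risk rule are illustrative assumptions, not Elmitel’s or Libelium’s actual logic.

```python
from dataclasses import dataclass

# Illustrative (assumed) thresholds: warm, humid conditions tend to favour
# fungal diseases such as downy mildew. Real agronomic models are far richer.
RISK_TEMP_C = 20.0
RISK_HUMIDITY_PCT = 85.0

@dataclass
class Reading:
    plot_id: str
    temperature_c: float
    humidity_pct: float
    soil_moisture_pct: float

def plots_at_risk(readings):
    """Return the plots whose readings suggest a heightened disease risk."""
    flagged = {r.plot_id for r in readings
               if r.temperature_c >= RISK_TEMP_C and r.humidity_pct >= RISK_HUMIDITY_PCT}
    return sorted(flagged)

if __name__ == "__main__":
    sample = [
        Reading("north-slope", 22.5, 91.0, 34.0),
        Reading("river-plot", 18.0, 70.0, 40.0),
    ]
    print(plots_at_risk(sample))  # ['north-slope']
```

In practice the readings would stream to a cloud service and be correlated with weather data, but the principle is the same: compare measurements against conditions known to precede disease.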

Wine growers are, by their nature, traditionalists, so they do not naturally lean towards such use of technology. “In certain areas, 30% of growers don’t have mobile phones, never mind smartphones. It’s not realistic to expect this to skyrocket!" says Matic. This is truer than ever in the Rías Baixas, which is dominated by family-run wineries. Javier Lastres, a local wine grower, never expected to find himself at the forefront of innovation. “We saw that it could be useful, especially for younger people who know how to use computers,” he says13.

The results have gone well beyond the aspirations of the growers. Not only can they determine where and when to apply nutrients (saving money and avoiding damaging the land through over-fertilising), they have been able to identify the onset of disease, simply by watching for symptomatic changes to the environment. If vines are seen to be infected, they can be isolated or even ripped out before the damage spreads to other vines or even vineyards.

So, what about consumption? Let’s go across the Atlantic to the western suburbs of Chicago, Illinois, where employees of label printer company Zebra became aware of a similarly Sideways (check the film14) insight: that the information being printed on their labels was also of value. The group, which became known as Zatarists15, built a software platform16 to record the data and associate it with the item upon which the label was stuck. A simple enough principle with, as we shall see, potentially far-reaching consequences — not least with wine.

By way of background, I first learned of a rolling wine store when I was living in Paris, where even apartment blocks have individual cellars. The notion is simple: few people have the facilities to store wine over the long term, but most wine can be kept for five years without too much maintenance. So here’s the plan — simply store the wine you would have drunk now, for five years, and end up with wine that would have cost considerably more (and tastes much nicer!). Of course, this does mean buying twice as much wine for a few years, before the model starts to kick in. Overall however, given the ability to store a hundred bottles of wine, say, and faced with still-floundering interest rates, the mathematics make wine a much better investment than many other options.

As many wine buffs know, a major downside of storing wine is remembering what you have. Occasionally this results in happy accidents: wine that has managed to survive far longer than it should have done, to good effect. Equally often, however, the wine can go past its prime, losing its flavour or becoming corked. If the idea is to save money on more expensive wine, the economics of the model can quickly be lost. But what if you could know exactly what wines you have in your store, and not only this but the ideal date by which they should be drunk? Suddenly wine storage becomes less of a gamble and therefore a more financially sound idea.

Which is exactly the kind of model that Zatar was created to support. The company first demonstrated its ‘smart’ wine rack back in 2013, at a Smart Cities conference event in Barcelona. It worked as follows: physical sensors recognised the presence of a bottle in a ‘bay’; details of the wine could be logged and then referenced from a tablet computer; if a bottle was removed, the sensors would register the event. Zatar’s wine rack was very simple yet highly effective, as it created a bridge between the physical and the computer-based, ‘virtual’ world. Once the two are connected, however, the impact could be profound — a fact that didn’t go unnoticed by the Zatarists. Today’s smart wine rack also incorporates environmental sensors so cellar owners can be kept informed about changes in temperature, to ensure the environment remains within appropriate limits — not too warm or cold, for example.
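
By way of illustration only (the class and method names below are assumptions made for this sketch, not Zatar’s platform), the bridge the rack creates can be thought of as a virtual inventory kept in step with the physical bays by ‘placed’ and ‘removed’ sensor events.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Bottle:
    label: str             # e.g. "Albariño, Rías Baixas 2011"
    drink_by_year: int

@dataclass
class WineRack:
    """A virtual twin of the physical rack: one entry per sensed bay."""
    bays: Dict[int, Optional[Bottle]] = field(default_factory=dict)

    def bottle_placed(self, bay: int, bottle: Bottle) -> None:
        # Called when a bay sensor detects a bottle arriving.
        self.bays[bay] = bottle

    def bottle_removed(self, bay: int) -> Optional[Bottle]:
        # Called when a bay sensor detects the bottle has gone.
        return self.bays.pop(bay, None)

    def inventory(self) -> Dict[int, str]:
        return {bay: b.label for bay, b in self.bays.items() if b}

rack = WineRack()
rack.bottle_placed(3, Bottle("Albariño, Rías Baixas 2011", drink_by_year=2016))
print(rack.inventory())          # {3: 'Albariño, Rías Baixas 2011'}
rack.bottle_removed(3)
print(rack.inventory())          # {}
```

The richer scenarios that follow, from temperature alerts to adjusted drink-by dates, build on this simple mapping between a physical event and a record.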

This brings us to a very interesting juncture, which is being felt in all kinds of areas. The game is going from one of ‘what if’ — idle speculation about what could be possible — to ‘what about’, as people realise the potential of linking computers with the objects and environments around us. For example, temperature fluctuations are not necessarily a problem; they simply change the longevity of the wine. So, rather than sending an alarm, software could adjust the drink-by dates of each affected bottle. Potentially this information could be linked to criteria defined by the wine grower: each year, and in each region, some wines are deemed to be less susceptible to fluctuations in environmental conditions. The current method is to ask the grower, “How long can I keep this for?” but the reasons and data behind this answer are also known — which links us back to the sensors and data captured during the growing process.
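
A minimal sketch of that adjustment, using made-up thresholds rather than any grower-defined criteria: for every degree-day a cellar spends above an assumed ideal ceiling, the drink-by date is brought forward a little.

```python
from datetime import date, timedelta

IDEAL_MAX_C = 14.0                 # assumed ideal cellar ceiling, in Celsius
PENALTY_DAYS_PER_DEGREE_DAY = 2    # made-up ageing penalty

def adjusted_drink_by(original, daily_temps_c):
    """Pull the drink-by date forward for every degree-day above the ideal."""
    excess_degree_days = sum(max(0.0, t - IDEAL_MAX_C) for t in daily_temps_c)
    return original - timedelta(days=round(excess_degree_days * PENALTY_DAYS_PER_DEGREE_DAY))

# A warm spell nudges the date forward; a cool one leaves it untouched.
print(adjusted_drink_by(date(2020, 6, 1), [13.0, 16.5, 18.0]))   # 2020-05-19
print(adjusted_drink_by(date(2020, 6, 1), [12.0, 13.5, 13.0]))   # 2020-06-01
```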

Going one step further, if the wine is good, it also makes sense to provide direct feedback to the grower, as data or comment (or indeed, order another bottle!). And, if fault is found, would the buyer not appreciate a mechanism to let the grower know? A UK company linking growers with consumers is Naked Wines17, which proclaims itself as “a customer-funded wine business.” From the company’s web site you can chat directly to wine growers, including Carlos Rodriguez18, a wine producer who has interests in Albariño production. “Albarino is one of our treasures in Spain,” says19 Carlos. “I make my Albarino wine in the area of Condado, the warmer and sunnier area of Rias Baixas to get as much maturation as possible on the grapes.”

Indeed, if a piece of data can be used to identify a vine and the wine it produces, could it then be used to link the wine bottle, the seller, the buyer, and the location it ends up in? Further opportunities exist to bring together the existing pieces — add the ability to recognise a wine from its label or a unique identifier (such as an AVIN20 code) to Libelium’s sensor network21 acting on vines, Zebra’s smart wine rack and Naked Wines’ capability to engage directly with the growers, and you have a closed loop from vine to consumer and back. The wine rack would be able to determine the conditions of the cellar, while an information feed would be able to link this to a certain wine’s propensity to mature. Add a visual display, such as a “drink me now” light flashing on a bottle (or its virtual representation, on the tablet computer), and the loop is closed.
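
Sketched in code, and purely as an assumption about how such a loop might be stitched together rather than a description of any of these companies’ systems, the common thread is a shared identifier joining records from each stage:

```python
# Hypothetical records keyed by a shared wine identifier (e.g. an AVIN-style code).
vineyard_data = {"AVIN-0001": {"harvest_year": 2011, "season": "warm"}}
cellar_data   = {"AVIN-0001": {"bay": 3, "avg_temp_c": 15.2}}
consumer_data = {"AVIN-0001": {"rating": 4, "comment": "Drink sooner than expected"}}

def closed_loop(wine_id):
    """Join vineyard, cellar and consumer records into one view of a wine."""
    return {
        "wine_id": wine_id,
        "vineyard": vineyard_data.get(wine_id, {}),
        "cellar": cellar_data.get(wine_id, {}),
        "feedback": consumer_data.get(wine_id, {}),
    }

print(closed_loop("AVIN-0001"))
```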

The chain of events linking the vine to the glass is far from completing its evolution. For example, drones are already appearing in wine estates from Bordeaux to Napa as a way of checking ripeness, vine damage and indeed disease in far-flung fields. “You can teach [software] what the canopy looks like and it can see quantitative difference in vegetation over time,” explains22 US wine grower Ryan Kunde, who has been using drones23 for several years24. And just as growers all over the world are watching each other to see where technology can make a difference, wine is only one element of a wider world of retail and entertainment, both environments which could make additional connections — wine sellers could price their products more accurately or offer cellarage services, while consumers could ‘integrate’ their cellars with menu planning and, if wine really is an investment, with how they manage their finances.

We can learn much from the microcosm that is viticulture. Wine is, by its nature, relatively costly and therefore it merits the use of similarly costly technology by growers and by consumers. As the cost of technology continues to drop, so the domains in which it is used will continue to widen: as it becomes able to support a broader range of products, we might be able to engage directly with our fruit and veg suppliers, for example. One of the biggest consequences of sledgehammer-based farming approaches was the creation of mountains of meat and grain, as we became too good at producing, beyond our ability to consume. In the future we might well be able to tie supply directly25 to the ultimate creators of demand — individual consumers — protecting the land at the same time as giving people what they want.

Outside food and drink, the wider world is getting smarter as devices from electricity meters to cashpoints and petrol pumps become able to collect and generate information, provide it to customers and suppliers alike, and support more efficient services. Such services become possible only because the necessary technological pieces are in place, chugging away behind the scenes, a complex array of components operating in harmony. As we build on these capabilities, we not only create a wealth of opportunity but also a number of challenges, the most fundamental of which is, “Will we get left behind?” It’s a good question, and there is no doubt that humanity, and its relationship to the resources and other species across the planet, will change fundamentally as a result of what is happening in front of our eyes today.

As we shall see, we lost control of the many-headed beast we call technology a long time ago. But does this mean we will be handing our future off to machines? Does living in a digital world also mean facing mediocrity and blandness? Perhaps wine holds the answer here as well. “It’s a triangle of the soil, the weather and the vine,” says Matic, who grew up in an environment where the general populace is roped in to help with the grape harvest once a year (“There’s usually some kind of a ‘party’,” he says). “When you manage vineyards you can manage the soil and you can manage the canopy, to an extent. But you cannot completely switch the soil, or change weather conditions, with technology.” Ryan agrees: “Wine hasn’t become a commodity; it's still tied directly to the earth.”

So, how can we get the balance right, benefiting from technology without undermining nature? For wine, the key lies in how it can help growers make better-informed decisions, reducing both costs and risks. “Wine has a story, a personality,” says Matic. “Wine growing has practices that are very old, but the data helps you manage more efficiently, more precisely.” Ultimately wine is more than a product: it is a consequence of everything that takes place to turn the first opening leaves of spring into the dark reds and crisp whites, the infusion of flavours and textures that bring so much pleasure to so many. But within this framework technology enables growers to become better at their craft, providing the information they need to ‘sniff the air’ and make judgements about when to harvest, and what yields to expect. Clearly, neither humanity nor nature is done yet.


  1. http://www.riasbaixaswines.com/about/what_albarino.php - (despite being harder to cultivate) 

  2. From the Monastery of Armenteira - Cluny 

  3. http://books.google.co.uk/books?id=CE505JHpBHUC - About Wine by J. Henderson, Dellie Rex 

  4. http://www.travelerstales.com/press/wine.html 

  5. http://ec.europa.eu/agriculture/markets/wine/index_en.htm 

  6. http://gain.fas.usda.gov/Recent%20GAIN%20Publications/Wine%20Annual_Rome_EU-27_2-22-2013.pdf 

  7. http://ec.europa.eu/agriculture/external-studies/value-gi_en.htm 

  8. http://ec.europa.eu/agriculture/markets/wine/studies/vine_en.pdf 

  9. http://www.winesfromspain.com/icex/cda/controller/pageGen/0,3346,1549487_6763472_6778161_0,00.html 

  10. The most used fungicide is sulphur. 

  11. and who is engaged in an accelerator programme in Bordeaux to develop the eVineyard app and service 

  12. from a Madrid-based company called Libelium http://www.libelium.com/smart_agriculture_vineyard_sensors_waspmote/ 

  13. https://www.youtube.com/watch?feature=player_embedded&v=O3iXaBp2IL0 

  14. www.imdb.com/title/tt0375063/ 

  15. http://www.zatar.com/community/blog/picture-perfect-event-zatar-makes-its-debut-ciscos-internet-things-world-forum 

  16. http://www.zatar.com/community/blog/devices-get-facebook-their-own-thanks-greektown-company 

  17. http://www.nakedwines.com/full_site 

  18. http://www.nakedwines.com/winemakers/carlos-rodriguez.htm 

  19. http://www.nakedwines.com/wines/carlos-rodriguez-albarino-2011.htm 

  20. http://thenextweb.com/2010/12/07/avin-hits-30-million-labels/ 

  21. http://www.libelium.com/sensors-mag-smart-viticulture-project-in-spain-uses-sensor-devices-to-harvest-healthier-more-abundant-grapes-for-coveted-albarino-wines/ 

  22. https://web.archive.org/web/20140608064809/http://www.citeworld.com/article/2114440/development/winery-arduino-temperature-monitors-drones.html 

  23. http://eandt.theiet.org/magazine/2015/07/farming-drones.cfm 

  24. http://drnkwines.com/winemaker-ryan-kunde 

  25. And what of organisations such as the US supermarket chain Wholefoods looking to deepen their relationships with their suppliers and customers? 

2.3. Brainstorming the future

For I am troubled, I must complain, that even Eminent Writers, both Physitians and Philosophers, whom I can easily name, if it be requir’d, have of late suffer’d themselves to be so far impos’d upon, as to Publish and Build upon Chymical Experiments, which questionless they never try’d; for if they had, they would, as well as I, have found them not to be true. Robert Boyle, 1661

Generations of schoolchildren are familiar with Boyle’s Law, that is, how the pressure of a gas is inversely proportional to its volume. The less volume available, the greater the pressure will be — as also experienced by anyone who has tried to blow up1 a hot water bottle.
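
In symbols, and assuming a fixed quantity of gas held at constant temperature, the law can be written as:

$$ P \propto \frac{1}{V} \qquad \text{or, equivalently,} \qquad P_1 V_1 = P_2 V_2 $$

Halve the volume and the pressure doubles.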

In 1660, Robert Boyle, the Irish-born son of an earl, co-founded the Royal Society with the singular goal2 of furthering what was termed ‘the scientific method’, that is, improving knowledge through a process of experiment. “We encourage philosophical studies, especially those which by actual experiments attempt either to shape out a new philosophy or to perfect the old.” Less familiar to schoolchildren, though recognised as a seminal work, was his 1661 exposé3 of alchemy, 'The Sceptical Chymist’, which quite rightly positioned him as the father of modern chemistry.

As we all know, the world has been profoundly changed by the explosion in discovery triggered by Boyle and his Royal Society fellows — Robert Hooke, Isaac Newton, Charles Babbage and Darwin, all contributors to the scientific foundations of the modern world. It’s also not hard to draw parallels between how the dawn of science replaced the mysticism of the alchemists in the 17th century, and how we are seeing rationality come up against the unproven whims of many startups today, whose main desire appears to be to turn virtual lead into gold. Some lucky types do appear to have stumbled upon the magic formula — consider Instagram for example, which was launched in 2010 and sold to Facebook less than two years later, for a reputed billion dollars in cash and stock. Or the Chinese e-commerce site Alibaba, whose IPO in September 2014 raised some $25 billion. Who wouldn’t look at examples such as these and try to replicate their success?

It’s clearly not easy to achieve, and neither is it straightforward to understand why some great ideas work while others flounder. For example, consider Vodafone's M-Pesa (“Pesa” is Swahili for “money”) mobile payments system, which started out as a phone top-up card in Kenya but expanded to become a highly successful mobile payments system across countries in Africa, the Middle East and Eastern Europe. Why has it been so successful, when similar schemes in Western countries have not been widely adopted, despite years4 of trying? Why was Facebook so successful when its forerunners — Friends Reunited, MySpace and the like — were not? And what about Bitcoin5, which is reputed to be more reliable, lower cost and more secure than traditional currency: why aren’t we all using that?

What we do know is that the time between setting up a company and getting it to a billion-dollar valuation has become shorter. Such companies are called ‘unicorns’ in the investment trade — an appropriate name given their generally elusive nature. Top of the list6 at the time of writing is ‘taxi firm’ Uber7 (in quotes because it doesn’t own any taxis), as well as AirBnB, Snapchat8 and a host of companies you may never have heard of — Palantir, Xiaomi, the list goes on. Despite such paper successes, venture capital companies are the first to admit that they are none9 the wiser about where the next successes will come from. To quote10 Bill Gates, “Its hit rate is pathetic. But occasionally, you get successes, you fund a Google or something, and suddenly venture capital is vaunted as the most amazing field of all time.” And the elephants’ graveyard of startups is only a blink away. What is a startup, asks11 Closed Club, set up to analyse why startups fail, but "a series of experiments, from conception of an idea to researching competitors, running Adwords campaigns, doing A/B tests…”

The models most widely used to explain mass adoption, such as Geoffrey Moore’s Crossing the Chasm12 or Gartner’s Hype Cycle13, tend to see the journey from the point of view of the product, service or trend, and work on the basis that all good things will eventually emerge, once the bad stuff has been filtered out. Such models worked well when everyone was trying to do much the same thing, and when most technology was corporate — indeed, they still have validity in big-bucks enterprise procurements. However, they do little to explain small-ticket, big-impact phenomena such as mobile apps, social networking platforms or cloud-based services.

Right now we are in a brainstorming phase which owes more to alchemy than to science, and within which anyone can pick up a few bits of technology and ‘invent’ a whole new product category. It is impossible to keep up with all the combinations. A few years ago an article appeared about how innovations have often come from taking two disparate ideas and linking them together, the microwave oven being one example. As Gerhard Grenier of Austrian software company Infonova14 says, “Innovation is combining existing things in new ways to create additional value.” Many start-ups and large company initiatives appear to be games of “what if”: a random progression of combinatorial experiments, each testing out new selections of features and services on an unsuspecting public, to see what sticks.

The CES conference of 2014 was marked by having more than its traditional share of zany ideas. For comedy value alone the prize had to go to ‘Belty’ — a ‘smart’ belt that senses when its wearer is being a bit too sedentary, or even when it needs to loosen during a particularly hefty meal. Perhaps more useful, though less probable (in that it doesn’t yet exist), is ‘Cicret’, a bracelet-based projector which can shine your Android mobile device screen onto your wrist. “The amazing thing is that we haven't invented anything new,” remarked Cicret founder Guillaume Pommier. “We just combined two existing technologies to create something really special.” Various reports have pointed out the more obvious weaknesses in this model, not least that it needs a perfectly sculpted forearm and ideal lighting conditions.

In both cases one has to ask about the contexts within which Cicret and Belty are being created. The crucible of innovation, it would appear, is an environment warm enough to have bare forearms, where people spend a lot of their time sitting around and eating. And while sitting at that al fresco restaurant table, someone thought these ideas were sufficiently compelling to become a real product, while other, well-fed individuals saw them as viable enough to put some money in the hat. We are right in the middle of a brainstorming phase and, as everybody knows, there is no such thing as a bad idea in a brainstorm.

And when we find, in hindsight, that only one out of a hundred ideas had any legs, we claim the experiment to be a huge success. Today everything in the physical world can be equipped with sensors, interconnected and remotely controlled, making such examples legion — chances are, if you think of a combo, such as soil sensors in a plant pot, someone will already have thought of it at CES 2015. As advocates of Geoffrey Moore might add to their marketing, “Unlike other smart plant pots…"

Not all such ventures stand a chance of succeeding, but as Edison himself once noted, it’s not the 10,000 failures that matter, it’s the one success. The absence of a magic formula is frustrating, however, for anyone wanting to make it big. Experience pays off, as numerous articles on the subject express15, as it tends to deliver people with the right characteristics — relentless focus, the ability to manage resources, the right relationships and the all-important, but catch-22, challenge of a demonstrable track record. This also fits with the impression that charisma and informed guesswork are the main arbiters of what might work. “You won’t have all the answers about the space, but you should have an educated and defensible opinion about it… [which] is what you bet your company on: ‘Yes, people want to share disappearing photos.’ ‘Yes, people want to crash on other people’s couches instead of a hotel room’,” says startup founder Geoffrey Woo16.

Today’s innovation-led world could have more to do with the earliest days of science than we think. Boyle was one of the first to document synthesis, the chemical process during which a number of compounds react to form something new and, sometimes, remarkable. In the decades that followed his insights, and with the support of organisations such as the Royal Society, Boyle’s peers and progeny were to investigate every possible combination of acids and alkalis, crystals and solutes, often risking their own health (or, in the case of Carl Scheele, his life17) in the process. Indeed, the mix-and-match approach is still in place within the chemistry research departments of today’s pharmaceutical giants such as GSK or Pfizer.

While the health risks of electronics and software may be smaller to the individual, they can have a dangerous effect on society as a whole. All the same, tech startup founders are exhibiting similar behaviours to the scientists of old, trying out combinations and seeing what sticks. Indeed, it is no coincidence that technology parks are attaching themselves to universities, in much the same way as pharma companies fund campus environments18 for their own scientific research. The result, however, is a hit-and-hope approach, with only hindsight as the arbiter of whether or not an idea was a good one — as has been pointed out, for every success story, many startups with similar ideas have failed. It is understandable that, according to a recent poll of founders and as noted19 by Bill Gross, CEO of business incubator Idealab, the single most important success factor is one of timing. In some ways however, even this conclusion is a cop-out — if an idea fails ten times and then succeeds, the most obvious difference may well be temporal, but that does not explain the factors that make the difference between failure and success.

Can we see beyond the experiments and reach any deeper conclusions about the factors behind success, or indeed failure? Perhaps we can. In general the greatest tech success stories manage to identify a way of short-circuiting existing ways of doing things, joining parts of the overall ‘circuit’ and exploiting the potential difference between two points. Trade was ever thus, right from the days of importing highly prized spices from far-flung places and selling them for a hundred (or more) times the price for which they were bought. If I can work out a way to get the spices far more cheaply, for example by investing in a new form of lower-cost sea transport, then I can undercut existing suppliers and still make a healthy profit. This is exactly the model that Direct Line, a UK telephone-based insurance company, exploited: the company delivered somewhat lower prices to the customer, but with substantially reduced overheads, resulting in greater profits overall.

This phenomenon has been seen over and over again in recent decades, as we have already seen with Uber’s impact on the taxi industry and Vodafone M-Pesa’s use of mobile phone top-ups as a currency, as well as with companies like Amazon and eBay that have caused so much disruption to retail industries. The term originally used was ‘disintermediation’ — the removal of existing intermediaries — but in fact re-intermediation would be more accurate, as consumers and clients still buy indirectly. This short-circuitry also explains how companies such as voucher firm Groupon can grow so big so quickly, only to vanish at a later point. In this way it’s like the stock market: if one person spots an opportunity, it isn’t long before numerous others come sniffing around, undermining any multiplier effects that can be achieved when one startup is in a new, and therefore monopolistic, position.

Re-intermediation is essentially about the re-allocation of resources, as it requires cash to flow via new intermediaries rather than old ones, at a sufficient rate to enable the upstart companies to grow. Something needs to kick-start this process, which is where venture capital comes in. In the early days of a startup, seed cash is not always going to be that easy to come by — new companies tend to benefit from ‘funding rounds’, each of which can be hard-fought (remember there could be hundreds of other companies looking for the same pot of funds). Looking at this demand for funds in chemical synthesis terms, the model is inherently endothermic in that it draws in, rather than releases, resources (in this case in the form of cash, not energy). The term ‘burn rate’ was adopted during the dot-com boom (even spawning a card game of the same name20) to describe the uneasy relationship between sometimes cautious capital supply and hungry startup demand, with many companies floundering and even failing on the brink of success, if the money quite simply ran out.

Energy should not be linked to capital alone, but is better considered in terms of positive value, either real or perceived. Facebook’s growth, for example, tapped into a latent need — the village gossip post — and the site used this to demonstrate its worth to advertisers. Amazon’s continued reputation21 as a loss maker has done nothing to damp investor enthusiasm or quell market fears about its voracious appetite, given how people keep using it. And the adoption of Google and Skype (the latter now owned by Microsoft) as verbs22 demonstrates an old tactic, familiarised by the likes of Hoover, which has assured a stable future for both.

This links to a common tactic among larger companies: the ‘embrace, extend and extinguish’ technique, originally honed23 by Microsoft in the 1990s, is one of many ways to ensure both new entrants and established competitors are starved of energy. Other examples include the promotion of open source equivalents to draw resources away from the established competition — just as IBM did against24 Microsoft and Sun Microsystems in the early Noughties, so is Google doing with Android25, to fend off Apple. Attempted strangulation, starvation of energy-giving oxygen, is recognised as a valid business strategy when competing against newcomers who are trying to outmanoeuvre or overtake the incumbent players.

The chemistry set analogy bears a number of additional comparisons. The reaction rate26, for example, depends on temperature, pressure and so on — once again, we can thank Robert Boyle for helping us understand this. So, just as an input of positive value can raise the temperature, latent need and a good marketing campaign can catalyse the pressure. Indeed, crowdfunding models operate on both axes, driving demand while increasing available resources (and, fittingly, crowdfunding sites themselves benefit from the same models).

The ultimate goal, for any startup, is that its innovation reaches a critical point — where it moves from attempting to gain a foothold with a product or service to achieving ‘escape velocity’, in much the same way that a liquid becomes a gas, which can then spread through diffusion. Reaching such scale may require a level of industrialisation: "You're committing not just to starting a company, but to starting a fast growing one,” says27 Paul Graham, co-founder of Y Combinator. “Understanding growth is what starting a startup consists of.” The chemistry-based analogy is not perfect; no analogy is. However, it does go a long way towards explaining why some organisations and sectors struggle with technology adoption (the UK NHS, for example, being ’a late and slow adopter’ according to28 the Healthcare Industries Task Force): while the desire to make use of new tech may be there, a critical level of energy is not.

In 1737, in Leiden in Holland, Abraham Kaau gave one of the last recorded speeches29 about the dubious nature of alchemy. By this point, some 75 years after Robert Boyle’s ministrations, he was largely preaching to the converted. In today’s technology-flooded world we remain in the brainstorming phase, against a background of dubious and unscientific ways of deciding how to progress, which leave room for a great deal of risk as well as reward. This sets the context against which we can understand where we are, and what we need to have in place to progress: simply put, it will be difficult to put any structures in place as long as the world of technology remains so chaotic. But understand it better we must, not according to the general superstitions of the time but by applying more scientific methods to how we synthesise new capabilities. Indeed, even as our abilities to oversee30 such things are themselves extending, theoretically smart startups are no better off than cobblers’ children31.

The bottom line is that we have the potential for incredible power at our fingertips, but with power also comes responsibility. Enter: the smart shift.


  1. http://sploid.gizmodo.com/man-with-incredibly-huge-lungs-can-make-a-hot-water-bot-1618031822 

  2. https://royalsociety.org/~/media/Royal_Society_Content/about-us/history/Charter1_English.pdf 

  3. http://www.gutenberg.org/files/22914/22914-h/22914-h.htm 

  4. http://www.swindonadvertiser.co.uk/news/11225462.How_smart_was_that_/ 

  5. http://www.idgconnect.com/abstract/9310/of-bitcoin-charities-a-story-lifeboats 

  6. http://fortune.com/unicorns/ 

  7. https://www.uber.com 

  8. https://www.snapchat.com/ 

  9. http://www.wsj.com/articles/SB10000872396390443720204578004980476429190 

  10. http://www.rollingstone.com/culture/news/bill-gates-the-rolling-stone-interview-20140313?page=4 

  11. http://closedclub.co 

  12. http://culttt.com/2012/04/23/inside-the-tornado-review/ 

  13. http://www.gartner.com/technology/research/methodologies/hype-cycle.jsp 

  14. Now part of BearingPoint 

  15. https://gigaom.com/2015/02/15/three-tips-for-young-startup-founders-from-a-guy-whos-been-there/ 

  16. https://nootrobox.com 

  17. http://en.wikipedia.org/wiki/Carl_Wilhelm_Scheele 

  18. http://www.liftstream.com/biotech-clusters.html 

  19. http://www.b2bnn.com/2015/06/ted-talk-takeaways-successful-startups-are-always-timely/ 

  20. http://www.computinghistory.org.uk/det/17925/Burn-Rate/ 

  21. http://www.ibtimes.com/amazon-nearly-20-years-business-it-still-doesnt-make-money-investors-dont-seem-care-1513368 

  22. http://www.fastcompany.com/3004901/google-what-it-means-when-brand-becomes-verb 

  23. http://www.economist.com/node/298112 

  24. http://www.law.washington.edu/lta/swp/Institutions/bstrategy.html 

  25. http://www.macrumors.com/2014/09/24/google-chairman-eric-schmidt-on-competition/ 

  26. http://en.wikipedia.org/wiki/Reaction_rate 

  27. http://www.paulgraham.com/growth.html 

  28. http://www.kingsfund.org.uk/sites/files/kf/Technology-in-the-NHS-Transforming-patients-experience-of-care-Liddell-Adshead-and-Burgess-Kings-Fund-October-2008_0.pdf 

  29. http://www.jstor.org/stable/10.1086/660139 

  30. http://www.idgconnect.com/abstract/9396/looking-big-data-are-approaching-death-hypocrisy 

  31. http://www.bartleby.com/100/115.77.html 

2.4. What is the smart shift?

We live in genuinely exciting times. Just in the past five years, we have arrived in a place where it is seen as normal for grandmothers and infants to leaf through ‘pages’ in a virtual book on the screen of a tablet computer, even as tribesmen manage their personal finances using mobile phones. As a species we pass messages, access services and share experiences using a phenomenal variety of online tools, from social networking to in-car navigation, from travel cash cards to video on demand. We can (and we do) broadcast our every movement, every purchase and every interaction. Our individual behaviours, the actions of corporations and collectives, and even of nations and international bodies are fundamentally and dramatically changing. To paraphrase Sir Paul McCartney, you’d have to be blindfolded and riding a camel backwards through the desert not to have noticed the profound changes that are sweeping through.

All the same, and despite the stratospheric success of Susan Boyle or the sudden arrival of unicorn companies such as Uber and AirBnB, nobody has a crystal ball. While many would love to be the next big thing, neither they, nor anyone else, has a monopoly on the future. Whatever we know about the so-called digital revolution, equally sure is that nobody planned it — its twists, turns and quite sudden revelations have taken entire nations by surprise. As was once suggested1, “Prediction is hard, especially about the future.” And it continues, unplanned, like outbreaks of some quickly mutating virus, itself feeding on a series of accidental mutations. As Bob Cringely explained in his 1992 history of Silicon Valley, ‘Accidental Empires2’:

“1. It all happened more or less by accident.
2. The people who made it happen were amateurs.
3. And for the most part they still are.”

The digital revolution is not happening because a few smart people are making it so; in some cases, it is happening despite such efforts. The brainstorming phase is essentially a series of consequences which happen to have humans attached — Facebook’s Mark Zuckerberg for example, who was in the right place at the right time. For every Steve Jobs there is an example of someone who, with very similar resources, technical expertise and financial backing, failed to achieve anything worth mentioning (and indeed, even Jobs had his fair share of failures). And neither Alan Turing and his cohorts, nor Gordon Moore, nor Tim Berners-Lee had a clear idea of where their creations would take us.

One way or another, the digital revolution continues regardless — if ever we had control of the many-headed beast we call technology, we lost it some time ago. As a consequence the immediate future is anything but predictable; even creating a snapshot of these phenomena (which is, disconcertingly, the intention of this book) is nigh impossible. Indeed, it is tantamount to describing a termite mound by looking at the activities of individual termites — distressingly fast, almost immediately out of date and completely missing the bigger picture. And then, of course, the termites gain superpowers, telepathy and the ability to clone themselves at will…

So, as the transcontinental train of change hurtles on, is there any handrail that we can grab? The answer, and indeed the premise for this book, is yes. For despite the speed of change, the scope of change is reaching its limits. As profoundly affecting as they are, current events reflect the first time that the digital revolution has circumnavigated the whole globe. The geographic limits of technology have been reached, and so, therefore, have the majority of human boundaries3: while an Amazonian tribesman may choose not to communicate with a city dweller in Manila, that is through choice, not due to a technological barrier. Until we colonise Mars we can only go deeper in terms of how we use technology, not wider. 

The stage is set, but for what? We have been subjected to factors outside our control several times in our existence — from the earliest days of farming (and our first dabbling in genetic engineering), through the smelting of metals in the bronze and iron ages, and right up to the age of enlightenment, which itself spawned the industrial revolution. Just as this caused populations to move from rural areas into cities, drove fundamental advances in sanitation and catalysed the growth of the middle classes, so today’s technology-laced existence is changing the way we, as a race, live, think and feel. This book focuses on this complete shift of human behaviour and thinking, which (for want of a better term) we shall call the ‘smart shift’. And shift it is — not forward or back but sideways, stepping across to a sometimes subtle, yet altogether different way of being.

The smart shift has had a long gestation, with its roots in the discovery of electricity and hence in both electronic calculation engines and long-distance communication. These are the starting buffers of the track we career along today. But if we cannot predict, are we also doomed to continue without a plan? Map it out we must, or face the continued challenge of individuals, companies and even nations acting in their own interests without any constraints, simply because our legal and governance structures are insufficient to keep up.

The evolutionary process of mutation is both positive and negative. With every up there has been a down — as Kranzberg’s first law4 of technology states, “Technology is neither good nor bad; nor is it neutral.” It is worth recalling Richard Gatling, who was so keen5 to demonstrate the futility of war that, in 1861, he created a weapon which showed no regard for human life (if any weapon ever did). Gatling was, let us say, a complex character - at the same time as selling his weapon to the US Army, he sided6 with the Confederates. No doubt he would have fitted in well with many modern captains of the technology industry. While the Gatling Gun and its many copies may not have changed the nature of society, it most certainly did change the nature, tactics and psychology of war and indeed played a crucial role in the first, ‘Great’ War, the war to end all wars, in which so many soldiers perished due to a failure to understand the shift taking place.

The Gatling Gun changed the nature of war as profoundly as Andrew Meikle’s threshing machine changed the nature of agriculture, the latter for the better in many ways but at the same time taking away the livelihoods of thousands and driving them into factories. In this anniversary period of the First World War, let us neither fool ourselves into thinking all advances are by their nature a good thing, nor into thinking that we are smart enough to change our behaviours quickly, even when the causes of change are right in front of our noses.

On the upside, we have sufficient knowledge of the tracks upon which technology has been laid, and of human, corporate and national behaviour, to have a fair stab at what needs to be in place to help us deal with what we are experiencing. The early days of the industrial revolution were more about discovery than invention: the consequence was decades of change as, at first, the machines took over. Eventually such upheaval subsided, and humanity started to get back in control. Some of the most important figures in this period were politicians who grasped where it was all leading, and who were then able to drive the legislation required to counter it. Innovation without policy leads to chaos and vested interests winning short-term gains over the greater good.

Throughout the past few hundred years we have seen some quite profound changes in how we think and act, driven by what we can loosely call ‘innovations’. It is difficult to imagine one more profound than what is happening currently: perhaps quantum mechanics will yield another such shift, at some point in the future. Even so, historians will look back in a few hundred years and recognise just what a watershed moment the digital age is for the human race and, indeed, the planet. Meanwhile, from the perspective of living within these turbulent times, the bottom line is that those individuals and entities that shift faster — through luck or judgement — will have an advantage over those who move less fast. This is not a prediction; as innumerable examples illustrate, it is happening right now.

All the same and with some optimism, let’s consider how on earth we got into this turbulent phase of our existence in the first place. To do so, we need first to get a handle on the raw materials of the digital revolution, which is being powered by a capability unheard of in history — the ability to store, manipulate and transmit astonishingly large quantities of information, anywhere on (and indeed, off) the planet. To understand how this became possible, let’s go back to where this latest phase began — the end of the Second World War.

3. A sixty year overnight success

In these celebrity-strewn days of X-Factor and The Voice, a truism pervades the entertainment industry. Young performing arts students are told, in no uncertain terms, that instances of instant fame are infrequent and unlikely, and no substitute for the years of hard graft more likely to lie ahead. But then, seemingly out of nowhere, will emerge a Ricky Gervais or Alan Rickman, a Joseph Conrad or Raymond Chandler, an Andrea Bocelli or Al Jarreau. “It took me ten years to achieve overnight success,” goes the adage.

The reason why few, if any, fields offer an easy path to success is mathematical — to put it bluntly, if it were that easy, everyone would be doing it. Given that success involves harnessing potential differences between context and custom, there can be advantage to be gained from being first; equally, in times of plenty, those not destined to be plucked from obscurity by the hand of Lady Luck, a chance meeting or a telephone vote, recognise the role of graft. The technology sector is no different - “In the digital age of 'overnight' success stories such as Facebook, the hard slog is easily overlooked,” says inventor and entrepreneur James Dyson.

When a ‘new’ phenomenon - social networking, cloud computing, mobile telephony - comes from seemingly nowhere, industry insiders claim they have seen it all before, and maybe they have. Hopeful technologists and excited pundits flock in, hoping to extract value from what they did not create. Meanwhile, without consideration of the whys and wherefores, the world changes once again.

3.1. The incredible shrinking transistor

“There is nothing new about the Raspberry Pi. It’s just smaller,” said its creator, Welshman Eben Upton, to a room full of wide-eyed schoolchildren. He was right, of course. The hardware at the heart of the Pi, a low-cost device originally released for students to learn how to program, follows a processor architecture that is now some seventy years old. Its two key components — a central unit for arithmetic processing and a memory store — are familiar elements of many computing devices, from mobile phones to mainframes. And they have been around for over seventy years, since first being mentioned in a 1945 report1, written by John von Neumann. The Hungarian-born polymath was explaining EDVAC, the Electronic Discrete Variable Automatic Computer then being built at the University of Pennsylvania. To understand its architectural significance, we need only look at von Neumann’s own words about its “considerable” memory requirement: “It is… tempting to treat the entire memory as one organ,” he wrote of its use of computer memory for both programming instructions and for data, which has become a design constant in just about every computer built since.
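
To see why treating memory “as one organ” mattered, here is a minimal sketch of a stored-program machine: a toy Python interpreter with an invented four-instruction set, nothing like EDVAC’s real one, in which the program and the numbers it works on sit side by side in the same memory.

```python
# A toy illustration of the von Neumann idea: one memory holds both the
# program and the data it works on. The instruction set is invented for
# this sketch and bears no relation to EDVAC's actual design.

def run(memory):
    pc, acc = 0, 0                      # program counter and accumulator
    while True:
        op, arg = memory[pc], memory[pc + 1]
        pc += 2
        if op == "LOAD":                # copy a memory cell into the accumulator
            acc = memory[arg]
        elif op == "ADD":               # add a memory cell to the accumulator
            acc += memory[arg]
        elif op == "STORE":             # write the accumulator back to memory
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Cells 0-7 hold instructions; cells 8-10 hold data -- the same "organ".
program = ["LOAD", 8, "ADD", 9, "STORE", 10, "HALT", 0,
           2, 3, 0]
print(run(program)[10])   # prints 5
```

Because instructions are just memory contents, a program can in principle read or even rewrite itself, which is a large part of what makes the design so flexible.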

EDVAC was a joint initiative between the University and the United States Army Ordnance Department; it is no coincidence that von Neumann came to the field of computing through his work on the hydrogen bomb. Indeed, EDVAC’s predecessor2, the Electronic Numerical Integrator And Computer (ENIAC — you can see where computer people’s love of acronyms was born) was used to perform calculations on the H-bomb itself. Both computers were, however, reliant on valves. Not only were these a factor in its considerable weight — with 6,000 valves in total, the computer weighed almost 8 tonnes — but they also placed a constraint on what was feasible for the computer to do. Simply put, it was as big as it could affordably be, and was therefore only able to run programs that could fit in the computer’s memory and complete in an acceptable time.

For computers to become smaller, cheaper and indeed faster required semiconductors. The ability for substances other than metal to conduct electricity (albeit at a level inferior to metal, hence the term semi-conductor) had been known since Faraday’s time. It was known that substances could be treated to increase their conductivity: so, for example, selenium became more conductive if it was lit; lead sulphide responded to microwave radiation; and indeed, silicon and germanium were better able to conduct electricity if they were given an electrical charge of their own. It wasn’t long after this discovery that scientists realised that a chunk of silicon could be used as an electrical switch, with the on/off feature being controlled by electricity as well. Thus, shortly after the war, and independently in Paris and New Jersey, the transistor was born.

It would take a further seven years for Gordon Teal, MIT graduate and employee at Texas Instruments, to produce silicon crystals of sufficient quality to make the first commercially viable transistors. The scepticism of his peers was in part responsible for the slow progress. “I grew crystals at a time when they were an urgent but only dimly-perceived need,” he was to remark3 of his 1954 achievement. “Most of my associates continued to believe that single crystals were of only limited scientific importance and would never be of any major use in devices that had to be produced on a large scale.”

Clearly, Teal’s colleagues had not thought of the potential for using transistors in computing. A computer essentially consists of a massive array of switches, turning on or off based on the inputs they are given. They use mathematical logic — for example, if two inputs are on, then an output will be switched on — and they can build up such logical steps to do simple arithmetic, with which they can perform more complex maths, and so on. When we talk these days about having millions of transistors on a chip, we are talking about millions of microscopic switches, linked together in such a way as to make such operations possible. Rather than being individually manufactured, the transistors are etched onto a layer of pure silicon using stunningly advanced photographic techniques.
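
As a rough illustration of how such switches add up to arithmetic, here is a small sketch in Python. The wiring is the textbook half adder rather than any particular chip: an AND gate produces the carry, an exclusive-OR produces the sum.

```python
# A sketch of how switch logic becomes arithmetic. Each 'gate' is just a
# rule over on/off inputs; wiring an AND and an XOR together gives a
# half adder, the smallest building block of binary addition.

def AND(a, b):  return a and b          # output on only if both inputs are on
def XOR(a, b):  return a != b           # output on if exactly one input is on

def half_adder(a, b):
    return XOR(a, b), AND(a, b)         # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} = carry {int(c)}, sum {int(s)}")
```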

All of this is why, while the Raspberry Pi may be ‘nothing new’, it remains nothing short of miraculous. Weight for weight, today’s computers are thousands of times more powerful than those of twenty years ago, and are millions of times cheaper. To put this in context, the Pi can run the same software as a 1980s mainframe but while the latter cost 2 million dollars, the cheapest Pi will set you back less than a fiver[^4]. Equally, the Pi is no larger than a tortilla chip, whereas the mainframe filled a room.

The phenomenon of computer shrinkage is known as ‘Moore’s Law’ after Gordon Moore, one of the founders of chip manufacturer Intel. In 1965 Moore put forward a thesis, based on advances in photographic etching, that the number of transistors on a chip would double every year for ten years. “That meant going from 60 elements on an integrated circuit to 60,000 – a thousandfold extrapolation. I thought that was pretty wild,” he later remarked4. In fact that was not the end of it — a decade later he had to revise the figure to doubling every 24 months, and later to 18 months. Whether through sheer luck, amazing foresight or simply a knack at creating self-fulfilling prophecies (word is that it has been used to set the research and development programmes at a number of companies, including Intel), the industry has thus far managed to follow this version of Moore’s Law pretty closely.

A familiar corollary of the law, of course, is that the costs of computer processors and other silicon-based integrated circuits have also been falling. In part, this is direct maths: if the cost of a chip remains constant, then the more transistors you can fit, the less you will spend per transistor. Additional factors, such as improved manufacturing quality and ‘supply and demand’ also have an effect. Quality means you can create bigger processors without throwing too many away, and the more people want, the more companies create, pushing costs down.
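
The arithmetic is easy to check. The sketch below assumes an illustrative starting point of 60 elements and a notional fixed chip price, doubling the transistor count every 24 months; the figures are for illustration only, not Intel’s actual numbers.

```python
# Back-of-the-envelope arithmetic for the doubling described above.
# The starting count and chip cost are illustrative, not historical data.

transistors = 60          # elements on a chip at the start
chip_cost = 100.0         # assume the price of a chip stays roughly constant
months_per_doubling = 24

for year in range(0, 21, 4):
    doublings = (year * 12) / months_per_doubling
    count = transistors * 2 ** doublings
    print(f"year {year:2d}: ~{count:12,.0f} transistors, "
          f"~${chip_cost / count:.6f} per transistor")
```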

Moore’s Law isn’t just about making things smaller and cheaper — imagine what you can do if you keep things big. That’s exactly what’s happening at the other end of the scale. Today’s supercomputers do not rely on a single processor but on hundreds, or even thousands, of processors to tackle a single programming task – such as predicting the weather or running aerodynamic simulations in Formula 1 (we shall return to both of these later). Indeed, boffins at the University of Southampton in the UK have even used the lowly Raspberry Pi to create5 a supercomputer, with an aim to6 “inspire the next generation of scientists and engineers,” explains Professor Simon J. Cox, who oversaw the project. The initiative has achieved just that: “Since we published the work, I have received hundreds of mails from individuals, school groups and other Universities reproducing what we have done.”

The end result of such phenomenal innovation in electronics is, simply, that we can do more with the computers that result — we can do more complex maths or run bigger programs. To wit, let us now turn our attention to software, the stuff which runs on computers. While the separation of hardware from software may appear to be a fait accompli today, it did not come about by accident, but with some hard thinking about exactly how it would work. We can trace the furrowed-browed origins of software back to the midst of the Industrial Revolution and the founding father of computing itself, Charles Babbage. Inspired, or rather troubled, by the painstaking, error-prone efforts required to construct astronomical tables by hand7, he was heard to declare, “I wish to God these calculations had been executed by steam!” Babbage set out to build a mechanical device that would be able to conduct arithmetic with the “unerring certainty of machinery.” His designs for the Difference Engine, while complex — they required some 25,000 parts to be hand-made — were sufficient to generate both interest from the Royal Society and grants from the UK Government.

As well as being an initiator of what we know today, Babbage was also responsible for the world’s first IT project overrun. As the months of development turned to years, the powers that were became increasingly reluctant to throw good money after bad. Above all, the Difference Engine was a one-trick pony: “Can’t we have something more generally useful?” he was asked. Eventually and sadly, a dispute over a single payment to his engineering team caused production to cease, never to be restarted. “Babbage failed to build a complete machine despite independent wealth, social position, government funding, a decade of design and development, and the best of British engineering,” says historian Doron Swade at the Computer History Museum.

With no machine to show, Charles Babbage was undaunted. Turning his frustrations into creative energy, he set about solving exactly the problem that had surfaced — the need to create a more general purpose machine. “Three consequences have … resulted from my subsequent labours, to which I attach great importance,” he wrote8. Not only did he manage to specify just such a creation, which he termed the Analytical Engine. But also he devised a way of specifying mechanical calculation machines and, to cap it all, he came up with a vastly simplified version of the Difference Engine, which he reduced to only 6,000 parts. It was this latter device that he proposed to make for the government, rather than the Analytical Engine which he knew would be a step too far. As a (perhaps ill-advised) rebuke to those in the corridors of power, he suggested that the new machine might even be able to unpick the vagaries of the government’s own finances.

Babbage was not acting in isolation but was in constant contact with academic establishments across Europe. In 1840 he was invited to Turin to present to a number of eminent individuals, including the young, yet “profound analyst” M. Menabrea, who wrote a detailed review of the proposed device. As the review was written in French, Babbage turned to a mathematically trained linguist, whom he had met some years previously at a party. Her name was Ada Lovelace, daughter of Lord Byron.

Menabrea’s review of the Analytical Engine design was excellent; but the notes written by Ada Lovelace on her translation were nothing short of stunning. Taking nine months to complete the translation and accompanying notes, Lovelace “had entered fully into almost all the very difficult and abstract questions connected with the subject,” wrote Babbage. Not least9 the notion of separating data values from mathematical operations, which is a precept of modern computing. Or as she put it: “The peculiar and independent nature of the considerations which in all mathematical analysis belong to operations, as distinguished from the objects operated upon and from the results of the operations performed upon those objects, is very strikingly defined.” Dividing operations from objects was tantamount to inventing programming as we know it today. “Whether the inventor of this engine had any such views in his mind while working out the invention, or whether he may subsequently ever have regarded it under this phase, we do not know; but it is one that forcibly occurred to ourselves on becoming acquainted with the means through which analytical combinations are actually attained by the mechanism.”

It was Ada Lovelace’s explanation of how the device might be used to calculate Bernoulli numbers that has earned her the distinction of being the world’s first computer programmer. She saw such creations as things of beauty: “We may say most aptly, that the Analytical Engine weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves.” Not only this but she also discerned the potential for software bugs — “Granted that the actual mechanism is unerring in its processes, the cards may give it wrong orders.”
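
Bernoulli numbers remain a neat test of any calculating machine. The sketch below is emphatically not Lovelace’s program, which was expressed as operations for the Engine’s mill and store; it is a modern recurrence in Python that produces the same sequence her Note G set out to compute.

```python
# Bernoulli numbers via the standard recurrence: B_0 = 1 and, for m >= 1,
# B_m = -(1 / (m + 1)) * sum over j < m of C(m + 1, j) * B_j.
# Fractions keep the results exact, as a mechanical engine would.

from fractions import Fraction
from math import comb

def bernoulli(n):
    B = [Fraction(1)]
    for m in range(1, n + 1):
        total = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-total / (m + 1))
    return B

print([str(b) for b in bernoulli(8)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```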

Both Babbage and Lovelace saw the potential for such a device not just to solve the mathematical problems of the day, but as a means to extend knowledge. “As soon as an Analytical Engine exists, it will necessarily guide the future course of science,” said Babbage. And concurred Lovelace, “The relations and the nature of many subjects in that science are necessarily thrown into new lights, and more profoundly investigated.” The pair were understandably excited, but tragically it was never to be: the costs of producing such a device were simply too great, and the financial benefits of doing so still too small, for sufficient funding to become available. This relationship, between cost and perceived benefit, is of such fundamental importance to the history of computing that we should perhaps give it a name: the Law of Diminishing Thresholds, say. Babbage paid the ultimate price for his failure to deliver on the law, in that his designs remained unaffordable in his lifetime. When Babbage died much of his and Lovelace’s impetus was lost, and it would take the advent of more affordable electrical switching before their spark of innovation could be re-ignited.

One of the first to perceive the potential of electrical switches — as were already being used in telephone exchanges — was young German engineer Konrad Zuse. While he had little knowledge of history[^11] when he started his work in the early 1930s — “I hadn't even heard of Charles Babbage when I embarked,” he was to remark10 — he realised that the ability of telephone switches to maintain an ‘on’ or an ‘off’ state meant that they could also be used to undertake simple arithmetic. As a simple example, the binary number 10 represents decimal 2, and 01 represents 1; by creating a certain arrangement of switches, it’s straightforward to take both as ‘inputs’ and generate an output which adds them together, in this case to make 11, which equates to decimal 3. From such little acorns… given that complex maths is ‘merely’ lots of simple maths put together, one can create a bigger arrangement of switches to do addition, multiplication, differentiation…
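
Zuse’s observation translates almost directly into code. The sketch below chains the kind of switch logic described above into a ripple-carry adder and reproduces the example in the text, 10 + 01 = 11; it illustrates the principle rather than modelling the Z1’s actual mechanism.

```python
# Chaining switch logic into a ripple-carry adder: each column adds two bits
# plus the carry from the column to its right. The example reproduces
# 10 + 01 = 11 (2 + 1 = 3 in decimal).

def full_adder(a, b, carry_in):
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def add_binary(x, y):
    # x and y are strings of bits, most significant bit first
    width = max(len(x), len(y))
    a, b = x.zfill(width), y.zfill(width)
    carry, out = 0, []
    for i in range(width - 1, -1, -1):          # work right to left
        s, carry = full_adder(int(a[i]), int(b[i]), carry)
        out.append(str(s))
    if carry:
        out.append("1")
    return "".join(reversed(out))

print(add_binary("10", "01"))   # '11', i.e. decimal 3
```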

Like Babbage before him, Zuse made his real breakthrough by extrapolating such ideas into a general-purpose computing device, keeping software separate from hardware, and data from mathematical operations. “I recognised that computing could be seen as a general means of dealing with data and that all data could be represented through bit patterns,” he said. “I defined "computing" as: the formation of new data from input according to a given set of rules.” Zuse was an engineer, not a mathematician, and as such worked on his Z1 computer in ignorance of such esoteric principles as mathematical logic, which he worked out from scratch. Given the state of Europe at the time, he also worked unaware of progress being made in other countries — for example, the UK academic Alan Turing’s work. “It is possible to invent a single machine which can be used to compute any computable sequence,” Turing wrote11 in 1936, in his seminal paper about the “universal computing machine.”

History does seem inclined to repetition, however. As the war started, Zuse found his requests for project funding from the authorities fell on deaf ears. “Although the reaction was initially sympathetic towards the project, we were asked simply, ‘How much time do you think you need for it?’ We replied, ‘Around two years.’ The response to this was, ‘And just how long do you think it'll take us to win the war?’” Zuse pressed on regardless, developing the Z2, Z3 and eventually the Z4, for which he defined the Plankalkül programming language — but progress was understandably hampered. “1945 was a hard time for the Germans,” he remarked. “Our Z4 had been transported with incredible difficulty to the small Alpine village of Hinterstein in the Allgäu. All of us who had managed to get out of Berlin were happy just to have survived the inferno there. Work on Plankalkül now continued in wonderful countryside, undisturbed by bomber attacks, telephone calls, visitors and so on. Within about a year we were able to set up a "revamped" Z4 in full working order in what had once been a stable.”

Meanwhile of course, work in the UK and USA had continued with better funding and less danger of falling bombs. In the UK, Tommy Flowers had designed the clanking, groundbreaking Colossus computer based on Turing’s principles. But it would be across the Atlantic, with the ENIAC then the EDVAC, that von Neumann and his colleagues set the architecture and direction for computing as we know it. All the same, the Law of Diminishing Thresholds — the notion that the balance was tipping between the costs and effectiveness of computer processing, driven by advances in both electronics and programming — held true. While the Second World War was a tragedy of unprecedented scale, it is possibly no coincidence that the ideas that emerged in the preceding decade led to an explosion of innovation almost immediately afterwards. The rest, as Turing had foretold, consists of solutions waiting for problems. The overall consequence, simply put, is that as the number of transistors increases and the cost of processing falls, many previously unsolvable problems move into the domain of the solvable.

For example, when the Human Genome Project was first initiated back in 1990, its budget was set at a staggering $3 billion and the resulting analysis took over four years. In 1998, a new initiative was launched at one tenth of the cost – $300 million. Just over a decade later, a device costing just $50,000 was used, aptly, to sequence Gordon Moore’s DNA in a matter of hours. By 2015 the cost of a ‘full’ sequence had dropped to $1,000 — and services to analyse DNA for certain markers (ancestral or medical, or to establish paternity) have become commonplace, costing mere tens of dollars.

Of course all this processing would be nothing if we didn’t have the mechanisms and the routes to shift the data, both inside data centres and around the world. To see just what an important factor this is, we need to go back to Ancient Greece.

3.2. The world at your fingertips

When Darius the Great became king of the Persian empire in 522 BC, he recognised how geography limited his ability to communicate his wishes. As historian Robin Lane Fox notes, “Centralised rule is the victim of time and distance1.” The consequence, and one of Darius’ most significant achievements, was to construct what became known as the Royal Road, stretching from his capital at Susa to the distant city of Sardis, some 1,677 miles away. With regular stations about 15 miles apart along the route, a relay of couriers on horseback could cover the entire distance in seven days. “There is nothing mortal which accomplishes a journey with more speed than these messengers, so skilfully has this been invented by the Persians,” gushed2 Greek historian Herodotus. “Neither snow, nor rain, nor heat, nor darkness of night prevents them from accomplishing the task proposed to them with the utmost speed.”

The road thus became the earliest established postal service. Economic growth across the region increased as a consequence, noted3 historian Craig Lockard, helping the Persian Empire become the largest the world had seen. But just as swords can cut two ways, so the road was also a significant factor in the demise of the Empire some 200 years later, when Alexander, also known as the Great, took advantage of its well-maintained state to pursue his arch-enemy, King Darius III of Persia. To add insult to injury, Alexander used the very same courier system to co-ordinate his own generals.

Other systems of communication existed — such as so-called fryktories (signal fires) and even a complex system of semaphore, as documented by Polybius, in the second century BC:4

“We take the alphabet and divide it into five parts, each consisting of five letters. There is one letter less in the last division, but it makes no practical difference. … The man who is going to signal is in the first place to raise two torches and wait until the other replies by doing the same. These torches having been lowered, the dispatcher of the message will now raise the first set of torches on the left side indicating which tablet is to be consulted, i.e., one torch if it is the first, two if it is the second, and so on. Next he will raise the second set on the right on the same principle to indicate what letter of the tablet the receiver should write down.”
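
In modern terms, Polybius is describing a lookup table and an encoding scheme. The sketch below renders it in Python, using the 25-letter Latin alphabet (with I and J merged) purely as a stand-in for the Greek letters he actually divided across his five tablets.

```python
# A sketch of Polybius's torch scheme: split the alphabet into five tablets
# of five letters, then signal each letter as (torches on the left = tablet
# number, torches on the right = position within the tablet). The Latin
# alphabet with I and J merged is a simplification for this sketch.

ALPHABET = "ABCDEFGHIKLMNOPQRSTUVWXYZ"   # 25 letters: 5 tablets of 5

def to_torches(message):
    signals = []
    for letter in message.upper().replace("J", "I"):
        if letter not in ALPHABET:
            continue                      # skip spaces and punctuation
        index = ALPHABET.index(letter)
        tablet, position = divmod(index, 5)
        signals.append((tablet + 1, position + 1))
    return signals

print(to_torches("HELP"))   # [(2, 3), (1, 5), (3, 1), (3, 5)]
```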

However, it was ultimately via well-kept roads that empires grew and could be sustained. Whereas fires and flags served their singular purposes well, the roads provided extra versatility and, above all, reliability — particularly important messages could be sent with additional security, for example. Nobody knew this better, nor used such systems more effectively, than the Romans, whose famously straight roads came about primarily to aid the expansion of empire. Indeed, Emperor Diocletian created a new legion5 in about 300 AD specifically to protect the Royal Road at the point where it crossed the Tigris river at Amida in eastern Turkey, now the modern-day town of Diyarbakır.

The road-based system used throughout ancient times did have its limitations, however. Either big messages (such as whole manuscripts, carefully copied by hand) could be carried over long distances to smaller numbers of people, or short messages such as edicts could be spread more quickly and broadly, relying on their distribution at the other end of the line. The whole thing was a bit of a trade-off – either say a lot to people who were close, or a little to people who were further away. For all their constraints, such models of tele-communications (tele- from the Greek, meaning “at a distance”) lasted some two millennia, with even the invention of the printing press (which we shall look at later) doing little to improve matters.

The real breakthrough came at the turn of the nineteenth century, when Alessandro Volta6 stumbled upon the creation of the electric circuit. It is hard to imagine a discovery more profound, nor more strangely circuitous. At the time electricity was all the rage among the intellectual classes across Europe and America, and theories about how it worked were legion. Volta’s first breakthrough came in 1778, when he worked out that static electricity could be stored in a device he called a condenser (today we’d call it a capacitor). Then, in 1792, he set out to disprove7 another theory, by Galvani, who reckoned that animals possessed a unique property called animal electricity, demonstrated in an experiment typical of its time by the way frogs’ legs could still move after they had been separated from the frog.

Over a period of months Volta conducted a wide variety of experiments, many at random, before achieving the singular breakthrough which would mark the inception of the modern world. Having no doubt worked through a fair number of amphibians, he discovered that a frog’s leg would indeed move if strips of two different metals (copper and zinc) were applied to it. Having surmised that the metal strips were creating the electricity in some way rather than the tissue of the frog, Volta only required a short hop (sorry) before discovering that a more reliable source of electric current could be created by replacing the disconnected legs with salt solution.

The resulting explosion of discoveries included the invention of the electromagnet by Englishman William Sturgeon in 1825, thirteen years after which Samuel Morse demonstrated the first workable telegraph8. It took a further five years for the public authorities to commission a longer-distance telegraph line, at a cost of $30,000. Using a coded system of dots and dashes that would forever more be known as ‘Morse Code’, the first message to be sent across the wire, on 24th May 1844, chosen by a young woman called Annie Ellsworth, was the biblical9 (and quite prescient) “What hath God wrought?”

The still-nascent USA was an ideal starting point for the telegraph, being such a vast, sparsely populated country. It still took a goodly while for the mechanism to achieve any level of ‘mass’ adoption however — while demand was high, the distances that cables needed to cover in order to be practical were simply too great. In August 1858 a message was sent from the UK to the US (once again with a biblical theme — “Glory to God in the highest; on earth, peace and good will toward men”). In the meantime, some in the US felt it viable to establish a trans-continental Pony Express service — this came in 1860 and had a route to rival that of ancient Persia, going from St Joseph, Missouri to Sacramento, California, some 1,900 miles in total. Once again, stations were set along the way, between 5 and 30 miles apart, a distance set by the practicalities of horse riding. The first letter, addressed to a Fred Bellings Esq. and stamped April 3rd, took 10 days to reach Sacramento.

When Abraham Lincoln made his inaugural address10 as president on the eve of the American Civil War, it was transmitted via a hybrid of old and new — it was first telegraphed from New York to Nebraska, then carried by just-founded Pony Express to California where it was telegraphed on to Sacramento. While the civil war may have been involved in the demise of the Pony Express only two years after its launch, the telegraph was already being reeled out across the continent — indeed, just two days after the transcontinental telegraph was finished, on 24th October 1861, the courier service folded. “The transcontinental telegraph put the Pony Express out of business in the literal click of a telegrapher's key. That's not an exaggeration,” commented11 author Christopher Corbett.

Quite quickly the telegraph extended across Northern America, even as it did the same across continental Europe and beyond. The notion of a simple wire and an electromagnet led, as we know, to innovations such as the telephone, patented by Alexander Graham Bell in 1876, followed shortly after by the invention of radio. Alongside other inventions — the LED, the fibre-optic cable and so on — humanity now had everything it needed to transmit ever larger quantities of information, both digital data and content — text, graphics, audio and video — across the globe.

The proliferation of mechanisms of information transfer has also led to an explosion of approaches12, protocols and standards. One of the clever bits about data — it being ultimately a set of zeroes and ones with some topping and tailing — is that any transmission standard can be interfaced with, or integrated with, any other. As a consequence, in the 1990s, a familiar sight for data networking engineers was a wall chart covered with boxes and arrows, showing how different standards interfaced.

Until relatively recently, the real contention was between standards that sent a constant stream of data between two points, and standards that chopped data into packets and sent it off, joining it back up when it reached its destination. The latter model had several advantages, not least that packet-based networks operated like a mesh, so packets could be routed from any node to any other — if a node wasn’t working, the packets could find another route through. Whereas with a point-to-point network, if the route failed for whatever reason, the connection would be lost — something we have all experienced when a telephone call gets dropped. Trouble was, packet-based networks couldn’t work fast enough to send something like an audio or video stream. If packets did get lost they had to be sent again, adding to the delay. The trade-off was between a faster but less resilient connection and a slower, choppier one that was less likely to fail altogether.
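
The resilience argument is easy to demonstrate. The sketch below uses a made-up five-node mesh and a simple breadth-first search to stand in for a routing protocol: remove a node and the packets simply take another path, whereas a fixed point-to-point circuit through that node would have been lost.

```python
# A sketch of why packet switching is resilient: packets only need *some*
# path through the mesh, so losing a node just means finding another route.
# The topology below is invented for illustration.

from collections import deque

MESH = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def route(start, end, failed=()):
    """Breadth-first search for a path, ignoring any failed nodes."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == end:
            return path
        for nxt in MESH[node]:
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None   # no path at all: the connection is dropped

print(route("A", "E"))                  # ['A', 'B', 'D', 'E']
print(route("A", "E", failed={"B"}))    # ['A', 'C', 'D', 'E'] -- routed around B
```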

As networks gradually became faster, it was only a matter of time before packet-based protocols ruled the roost. Or, should we say, a pair of protocols in particular, known as Transmission Control Protocol (TCP) to handle the connection, and Internet Protocol (IP) to manage the route to the destination. TCP/IP was already some two decades old by this point — the standard was defined in 197513 by US engineers Vint Cerf and Bob Kahn, and had already become well-established as the de facto means of data transfer between computers.

What with Tim Berners-Lee’s 1989 creation of a point-and-click capability he termed the World Wide Web, which used a proven[^14] mechanism known as Hypertext Markup Language (HTML) to describe information and a protocol known as Hypertext Transfer Protocol (HTTP) to transfer text and images across the Internet, the future of TCP/IP appeared to be assured. It was still not adequate for the transmission of more complex data sets such as streamed audio and video, however — the required increases in networking speeds and volumes depended on laying huge quantities of fibre between our major cities, and from there to smaller locations.
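
The layering can be seen in a few lines of Python: IP identifies the machine, TCP provides the reliable connection, and HTTP is just structured text carried over the top. The host used here, example.com, is purely illustrative.

```python
# The layering in miniature: IP addresses the machine, TCP gives a reliable
# connection, and HTTP is structured text sent over the top.

import socket

with socket.create_connection(("example.com", 80)) as conn:   # TCP over IP
    conn.sendall(b"GET / HTTP/1.1\r\n"
                 b"Host: example.com\r\n"
                 b"Connection: close\r\n\r\n")                 # an HTTP request
    response = b""
    while chunk := conn.recv(4096):                            # read until closed
        response += chunk

print(response.split(b"\r\n")[0].decode())   # e.g. 'HTTP/1.1 200 OK'
```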

The final piece in the networking puzzle came with the creation of Asymmetric Digital Subscriber Line (ADSL) technology, better known by the name 'broadband'. Experiments to use twisted pair cable — the type used in telephone lines — for data transmission had started in the Sixties, but it wasn't until 1988 that Joseph Lechleider, a signal processing engineer at Bell Labs, had a breakthrough idea14. “What if the download speed is set to be much faster than the upload speed?” he thought to himself. Not only did the model reduce interference on the line (allowing for greater transport speeds overall), but it also fitted closely with most consumer use of the Web — in general, we want to receive far more data than we send.

ADSL broadband achieved mass adoption in the Far East, with inertia in the West largely down to incumbent telephone companies, fearful of cannibalising their own businesses. It took waves of deregulation and no small amount of infrastructure upgrades for broadband services to be viable for the mass market, a situation which is still unfolding for many.

All the same, networking capabilities have kept pace with computer processing, with Nielsen's Law showing15 that bandwidth increased by 50% per year between 1984 and 2014. Nielsen’s Law says we should all have over 50Mbps by today, which of course we don’t, for all kinds of reasons, not least that we rely on others to put the infrastructure in place to make it possible.
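
The 50Mbps figure is simple compound growth. In the sketch below the 1984 starting point of 300 bits per second, roughly a dial-up modem of the day, is an illustrative assumption rather than Nielsen’s exact data point.

```python
# Compound growth of 50% per year, as Nielsen's Law describes.
# The 300 bps starting figure is illustrative, not Nielsen's own data.

def bandwidth(start_bps, start_year, year, growth=0.5):
    return start_bps * (1 + growth) ** (year - start_year)

for year in (1984, 1994, 2004, 2014):
    print(f"{year}: {bandwidth(300, 1984, year):14,.0f} bits per second")
# 2014 comes out at roughly 57 million bits per second -- just over 50Mbps.
```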

A further piece of the puzzle fell into place when the Internet first started to be carried across radio waves. Two mechanisms are generally accepted today, namely Wireless Fidelity (WiFi) and data transmission via mobile phone technology, each of which is extending in its own way. According16 to the GSM association, mobile coverage now reaches “6 billion subscribers across 1,000 mobile networks in over 200 countries and regions.” Areas that do not have a mobile mast can connect via satellite, further extending the reach to the furthest nooks and crannies on the planet. Significant advances will come from mobile, not least because we can expect 4G LTE to make a real difference by operating at double WiFi speeds, reducing latency with minimal ‘hops’.

Mobility is what the word implies — technology can follow our needs, rather than us being fixed to its location. “Before long it will be possible for every person on this planet to connect to every other. We are only just starting to get our heads around what that might mean,” said musician and innovator Peter Gabriel, when introducing Biko, a song about a person on the wrong side of historical prejudice.

At the same time, technology offers huge potential to level the global playing field. Consider education, which is already being delivered via mobile in developing countries, according to Alex Laverty at The African File. Clearly, the more that educational materials can be delivered onto mobile devices, the better - while the current handset 'stock' might not be all that suitable for interactive applications, it's worth keeping an eye on initiatives such as India's $35 tablet programme - and note that OLPC also has a tablet device in the pipeline.

Gabriel’s views echo the views17 of Nikola Tesla in 1926. “From the inception of the wireless system, I saw that this new art of applied electricity would be of greater benefit to the human race than any other scientific discovery, for it virtually eliminates distance.” From its lowly beginnings, we now have a communications infrastructure, suitable for both computer data and more complex 'content', which spans the globe. And it hasn’t stopped growing — so-called dark fibre (that is, cable which has been laid but not yet put to use) continues to be laid between cities and nations. Over half a billion pounds was spent18 on fibre-optic cable in 2014, and the market shows no sign of slowing as nation after nation looks to extend its networking reach.

Even as a significant debate continues around net neutrality, across the last two decades we have seen an explosion of activity in consequence, from user-generated blogs, videos and other content to social networking, mobile gaming and beyond. What this really gives us, however, is a comprehensive platform of processing and communications upon which we can build a diverse and complex range of services. Enter: the cloud.


  1. http://en.wikipedia.org/wiki/Royal_Road 

  2. http://www.livius.org/articles/concept/royal-road/ 

  3. https://books.google.co.uk/books?id=ZUYFAAAAQBAJ&lpg=PA157&ots=1ueYmNrifr&dq=Royal%20Road%20stretches%20from%20Athens&pg=PA158#v=onepage&q=Royal%20Road%20stretches%20from%20Athens&f=false 

  4. http://www.mlahanas.de/Greeks/Communication.htm 

  5. http://www.livius.org/articles/legion/legio-v-parthica/ 

  6. http://science.jrank.org/pages/2299/Electric-Circuit.html 

  7. http://www.famousscientists.org/alessandro-volta/ 

  8. http://inventors.about.com/od/tstartinventions/a/telegraph.htm 

  9. http://inventors.about.com/od/mstartinventors/ig/Samuel-Morse---Patent/First-Telegraph-Message.htm 

  10. http://showcase.netins.net/web/creative/lincoln/speeches/1inaug.htm 

  11. http://www.dailymail.co.uk/news/article-2052728/Transcontinental-telegraph-150th-anniversary-worlds-1st-social-network.html 

  12. Hedy Lamarr - spread-spectrum wireless pioneer? http://www.biztechmagazine.com/article/2012/05/mothers-technology-10-women-who-invented-and-innovated-tech 

  13. http://internethalloffame.org/inductees/vint-cerf 

  [^14] “Ted Nelson had the idea of hypertext, but he couldn't implement it.” http://edge.org/digerati/markoff/markoff_chapter.html 

  14. http://www.nytimes.com/2015/05/04/technology/joseph-lechleider-a-father-of-the-dsl-internet-technology-dies-at-82.html?_r=0 

  15. http://www.nngroup.com/articles/law-of-bandwidth/ 

  16. Need a reference here - but here’s a map http://www.collinsbartholomew.com/PDF/GSMA%20A2%20lo%20res.pdf 

  17. http://www.tfcbooks.com/tesla/1926-01-30.htm 

  18. http://www.ibisworld.co.uk/market-research/fibre-optic-cable-manufacturing.html 

3.3. Every silver lining has a cloud

“What’s that, there?” I ask Philippe, pointing to one of the black boxes, its LEDs flashing in some unholy semaphore.

“That’s a bit of iTunes,” he says.

Computer rooms today have come a long way from the Stonehenge-like mainframe environments of the Sixties and Seventies. Enter a data centre today and you will see rack upon rack of black frames, measured in U's - a U is 1.75 inches, the spacing between bolt holes on the frame. False floors and rising columns contain kilometres of copper and fibre optic wiring; air conditioning ducts run in every direction, their efforts resulting in a constant, all-pervading noise level.

Heating engineers are the masters here, designers of alternating warm/cold corridor set-ups that keep the ambient temperature at 16 degrees. Power is the big bottleneck, as illustrated by the sheer amount of electricity such environments draw. Some have been looking into ways of dealing with this, not least using the heat generated by computers to warm houses, an idea being developed by German startup1 Cloud and Heat. In London and New York, data centres are situated in the basements of tower blocks, their heat routed to upstairs offices.

There’s hardly a person to be seen: such environments, once built, need little manual intervention. Access is restricted to a few deeply technical, security-vetted staff, whose entry to the facility — with hand and retina scanners, glass-walled acclimatisation chambers, lightless lobbies and serious-looking security personnel — looks like it is modelled on some evil megalomaniac’s underground headquarters. In reality the kingpins of these empires are a bit more mundane. The companies running many such environments, such as colocation company Equinix, often have no idea what is running in their facilities, while their tenants, people like Philippe, the young, smiling, bespectacled French CEO of BSO Network Solutions, look more like they have just walked out of accountancy school than like masterminds harbouring big ideas to take over the planet.

The black box containing “a bit of iTunes” could just as easily have been running the web site for the BBC, or Marks and Spencer's, or some computer running on behalf of an unnamed customer. Tomorrow, the same computer server may be allocated to a completely different set of tasks, or even a different client. What makes it all possible is one of the oldest tricks in the computer engineer’s book. It’s worth explaining this as it helps everything else make sense.

When we think about computers, we tend to consider them in terms of a single ‘stack’ of hardware and software: at the bottom are the computer chips and electronics; on top of this runs an operating system (such as Unix or Microsoft Windows). On top of that we have our software — databases and sales management software, spreadsheets, word processors and so on. So far so good — but in practice, things are done a little more cleverly. Inside the software that makes computers tick, an imaginary world is constructed which allows for far more flexibility in how the computer is used.

The reasoning behind creating this imaginary world came relatively early in the history of computing. The original computers that emerged post-war were geared up to run ‘batch’ jobs — literally, batches of punched cards were inserted into a reader, the program was run and the results generated. The problem was that only one job could be submitted at once, leaving a queue of frustrated people waiting to have their programs carried out. One can only imagine the frustration should there be a bug in a program, as a failure meant going to the back of the queue!

After a decade or so, computer technicians were starting to consider how to resolve this issue. Two solutions were proposed: the first in 1957 by IBM engineer Bob Bemer2, who suggested that time could be shared between different programs, with each being switched in and out of memory so that multiple programs could appear to be running simultaneously. A few years later, also at IBM, came a different idea, recalled3 by systems programmer Jim Rymarczyk: how about pretending a single mainframe was actually multiple computers, each running its own operating system and programs?
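
The time-sharing idea survives in every modern operating system. The toy round-robin scheduler below, with invented job names and durations, shows the principle: give each job a short slice of processor time in turn and, to the people waiting, several programs appear to run at once.

```python
# A toy round-robin scheduler to illustrate time sharing: the processor gives
# each job a short slice in turn, so several jobs appear to run at the same
# time. Job names and durations are invented for this sketch.

from collections import deque

def round_robin(jobs, slice_units=2):
    queue = deque(jobs.items())          # (name, units of work remaining)
    while queue:
        name, remaining = queue.popleft()
        worked = min(slice_units, remaining)
        print(f"running {name} for {worked} unit(s)")
        if remaining > worked:
            queue.append((name, remaining - worked))   # back of the queue
        else:
            print(f"{name} finished")

round_robin({"payroll": 5, "stock report": 3, "invoice run": 4})
```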

The two models — time sharing and virtualisation — continued in parallel for several more decades, with the former being used on smaller computers in order to make best use of limited resources, and the latter being preferred for mainframes such that their massive power could be divided across smaller jobs. As computers became more powerful across the board, by the turn of the millennium both models started to appear on all kinds of computer. Fast forward to the present day and it is possible to run a ‘virtual machine’ on a mobile phone, which will already be making best use of time sharing.

While this may not appear to be much of a big deal to anyone outside of computing, it has had a profound impact. If what we consider to be a ‘computer’ exists only in software, then it can not only be switched on and off at will, but it can also be moved from one real computer to another. If the physical computer breaks, the virtual ‘instance’ can be magically restarted on a different one. Or it can be copied tens, or hundreds of times to create an array of imaginary computers. Or it can be backed up — if anything goes wrong, for example if it gets hacked, then it becomes genuinely simple to restore. Prior to virtualisation, a constant frustration for data centre managers was ‘utilisation’ — that is, owning a mass of expensive computer kit and only using half of it. With crafty use of virtual machines, a larger number of computer jobs can run on a smaller amount of hardware, dramatically increasing utilisation.

Virtualisation removed another distinct bottleneck to how computers were being used. Back in the Sixties and Seventies, computers were far too expensive for many organisations, who tended to make use of computer bureaus to run their occasional programs — one of the reasons why time sharing was seen as such a boon, was to enable such companies to run more efficiently. As computers became more generally affordable, companies started to buy their own and run most, if not all of their software ‘in-house’ — a model which pervaded until the advent of the World Wide Web, in the mid-Nineties. Back in the day, Web pages were largely static in that they presented pre-formatted text, graphics and other ‘content’. Quite quickly however, organisations started to realise they could do more with their web sites — sell things, for example. And others worked out they could offer computer software services, such as sales tools, which could be accessed via the Internet. Some of the earliest adopters of this model were existing dial-up data and resource library providers — all they had to do was take their existing models and make them accessible via the Web. Other, startup companies followed suit — such as Salesforce.com, which quickly found it was on to something.

By 2001 a large number of so-called ‘application service providers’ (ASPs) existed. A major issue with the model was that of scale: a wannabe ASP had to either buy its own hardware, or rent it — it needed to have a pretty good idea of what the take-up was going to be, or face one of two major headaches: either it could realise it had over-estimated demand and be stuck with a huge ongoing bill for computer equipment, or it could have under-estimated and be unable to keep up with the number of sign-ups to the service. While the former would be highly discomfiting, the latter could spell disaster. E-commerce companies, such as rapidly growing online bookseller Amazon, were struggling with the same dilemma of resource management.

For a number of very sensible engineering reasons, Amazon and others were reliant on a ‘lots of smaller computers’ rather than a ‘small number of big computers’ model. Racks of identical, Intel-architecture servers known as blades were being installed as fast as they could be, with resource management software doing its best to shift processing jobs around to make good use of the hardware. Such software could only take things so far — until, that is, such servers finally became powerful enough for virtualisation to be a viable option. Virtualisation unlocked the power of computer servers, enabling them to be allocated in a far more flexible and responsive fashion than before. As a result, less hardware was needed, bringing costs down considerably.

You might think the story ends there, but in fact this was just the beginning. The real breakthrough came in 2002, when the engineers running Amazon’s South African operation realised that the company’s computers could also host virtual machines belonging to other people. With virtualisation, the model simply became that customers paid for the CPU cycles they actually used. Almost overnight, the dilemma of ‘how much computer resource should I buy?’ was removed for every organisation that ever wanted to build a web site or, indeed, run any kind of program at all. Based on the fact that the Internet is often represented as a cloud in corporate presentations, the industry called this model ‘cloud computing’.

Today we are seeing companies start from scratch with very little equipment, thanks to a pay-as-you-go model which now extends across most of what any company might need. One such business, Netflix, has sent shock waves around the media industry; remarkably, however, it does so with a workforce that is tiny by the standards of the broadcasters it competes with. How can this be? Because it runs almost entirely on Amazon’s network of computers — fascinatingly, in direct competition with Amazon’s own LoveFilm hosted film rental service. On the back of such increasingly powerful capabilities come the kinds of online services we are all using day to day — massively parallel search (Google), globally instant microblogging (Twitter), social networking (Facebook), customer databases (Salesforce) and so on. While Twitter’s interface might be simple, for example, the distributed compute models making possible the parallel, real-time search of petabytes of data by millions of people are nothing short of staggering.

One area in which the cloud shows huge promise is research. Research funding doesn’t scale very well — in academia, one of the main functions of professors and department heads is to bid for pockets of finance. Meanwhile, even for the largest companies, the days of unfettered research into whatever takes a scientist or engineer’s fancy are long since over.

Much research has a software element — from the aerodynamic experiments on Formula 1 rear wings that we look at in the next section, to protein folding and exhaustive antibody comparisons, there’s no substitute for dedicating a few dozen servers to the job. Such tasks sometimes fall into the domain of High-Performance Computing, but at other times simply having access to hardware resources is enough — as long as the price is right.

For a researcher, the very idea of asking for twenty servers, correctly configured, would have been a problem in itself: no budget, no dice. Even if the money was available, the kit would have to be correctly specified, sometimes without full knowledge of whether it would be enough. Consider the trade-off between the number and size of processors, coupled with the quantity of RAM: it would be all too easy to find out, in hindsight, that a smaller number of more powerful boxes would have been more appropriate.

Then come the logistical challenges. Lead times are always a challenge: even if (and this is a big 'if') central procurement is operating a tight ship, the job of speccing, gaining authorisation and checking the necessary contractual boxes can take weeks. At which point a purchase order is raised and passed to a supplier, who can take several more weeks to fulfil the order. It is not unknown for new versions of hardware, chipsets and so on to be released in the meantime, returning the whole thing to the drawing board.

Any alternative to this expensive, drawn-out yet unavoidable process would be attractive. The fact that a number of virtual servers can be allocated, configured and booted up in a matter of minutes (using Amazon Web Services’ Elastic Compute Cloud (EC24), say) can still excite, even though the model, and indeed the service, has existed for a few years. Even better, if the specification proves to be wrong, the whole lot can be taken down and replaced by another set of servers — one can only imagine the political and bureaucratic ramifications of doing the same in the physical world.
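What that looks like in practice can be sketched in a few lines. The snippet below uses Amazon’s boto3 Python library to request, and then dispose of, a twenty-server cluster; the region, machine image and instance type are placeholders rather than recommendations, and real use assumes AWS credentials, networking and a little patience.

    # Sketch: spin up, then tear down, a small research cluster on EC2.
    # Region, AMI and instance type below are placeholders, not recommendations.
    import boto3

    ec2 = boto3.resource("ec2", region_name="eu-west-1")

    servers = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical machine image
        InstanceType="m4.large",
        MinCount=20, MaxCount=20,          # the researcher's twenty servers
    )
    print(f"Requested {len(servers)} instances")

    # If the specification turns out to be wrong, dispose of the lot...
    for instance in servers:
        instance.terminate()
    # ...and ask for a differently shaped cluster instead.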

The absolute cherry on top is the relative cost. As one CTO of a pharmaceutical company said to me, “And here’s the punchline: the whole lot — to run the process, get the answers and move on — cost $87. Eighty-seven dollars,” he said, shaking his head as though he still couldn’t believe it. It is unsurprising that a virtuous relationship is evolving between the use of cloud resources and the increasingly collaborative nature of research, spawning shared facilities and tools such as those of the Galaxy project and Arch2POCM respectively.

Equally, it becomes harder to justify cases where the cloud is not involved. For example, the BBC’s Digital Media Initiative was to create a digital archive for raw media footage — audio and video — so that it could be accessed directly from the desktops of editors and production staff. This was planned to save up to 2.5 percent of production costs, worth millions of pounds per year. In practice, the “ambitious” project got out of control. It was originally outsourced to Siemens in 2008 but was brought back in house in 2010. Two years later, in October 2012, the BBC Trust halted the project and kicked off an internal review; subsequently, the corporation’s Director General, Tony Hall, canned the whole thing5. In the event, the project cost £98.4 million over the period 2010 – 2012.

Would it be too trite to ask whether things would have been different had the Beeb been able to benefit from cloud computing? Perhaps it is an unfair question, given that when the project began in 2008, cloud models were still in their relative infancy. But while big-budget IT projects may still have been the default course back then, by the time the project first hit the ropes in 2010 the potential of the cloud was much clearer. The core features of cloud would seem to be tailor-made for the needs of broadcast media management — as NBC found when it streamed 70 live feeds of Olympic footage for direct editing by staff based in New York. Meanwhile, at the time of Margaret Thatcher’s funeral, the Beeb was forced to transfer videotapes by hand using that well-known high-speed network — the London Underground.

The overall consequence of such experiences is that the ‘cloud’ has grown enormously, becoming a de facto way of buying computer resource. Processing has by and large become a commodity, or a utility; indeed, if the world’s computer servers were a nation, it would be the fifth largest user of electricity on the planet. Cloud computing also has huge potential implications for emerging economies, for which technology investment is a challenge. It is not a magic bullet, however, and the network is generally still seen as a blocker to progress. The concept behind Data Gravity6 — that the money is where the data happens to be — is predicated on the fact that large quantities of data are so difficult to move.

All the same, the cloud should be seen as a major factor in giving the world a platform of massively scalable technology resources which can be used for, well, anything. In 2011 Cycle Computing’s Jason Stowe announced the Big Science Challenge, offering 8 hours of CPU time on a 30,000-core cluster to whoever could come up with a big question. “We want the runts, the misfits, the crazy ideas that are normally too big or too expensive to ask, but might, just might, help humanity,” said the release. “No idea is too big or crazy.” Which is as good a motto as any upon which to build the future of computing.

3.4. A platform for the web of everything

Planning ahead for Formula 1 racing success is not for the faint-hearted. “We have a plan of where we are going for the year, and then something major comes along, like a competitor running something new, and suddenly we have to shift tack completely and head off in a different direction,” explains Williams F1’s IT Director, Graeme Hackland. “Our race culture is ‘You change things’. It happens race by race, we’re learning about our car, our competitors’ cars and that leads to changes in the course of a season.”

As a consequence, the design of the car is changed in between, and sometimes even during, race meets. The ultimate decision often lies with the aerodynamics engineers, who test potential designs using the race team’s own supercomputer before 3D-printing them and running wind tunnel scenarios. Then any changes need to work in practice, on the track. Getting feedback from the cars relies on a complex array of sensors measuring vibration, changes in temperature and so on. “Each car has over 200 sensors, 80 of which generate real-time data for performance or things that might go wrong, while the rest create data that the aerodynamicists analyse afterwards, for improvements to the car,” explains Graeme. Doing so requires an array of equipment — supercomputing and general-purpose computing facilities back at home base are extended with a small number of truck-mounted servers situated at the side of the track. “We take a feed of everything generated on the car’s sensors, then we trickle feed data back as we need it.”

While Formula 1 is just the latest industry to be swept up by connecting monitoring devices and sensors to some kind of computing facility, a number of industries have been doing so for a lot longer. Sensors themselves have been around for a goodly long time — the first thermostat appeared1 in the 1880s, patented by Warren S. Johnson in Wisconsin. Johnson was a science teacher at the State Normal School in Whitewater, Wisconsin, whose lessons were constantly being interrupted by staff responsible for adjusting the heating equipment. Over three years he worked on a device based on the principle that different metals expand at different rates when heated; a coil of two metals could make or break an electrical circuit, which in turn could be connected to a steam valve. “It is evident that the making and breaking of the electric current by the thermostat will control the supply of steam, and thus the temperature of the room,” stated2 Johnson in his patent of 1883. The company he founded, Johnson Controls, is now present3 in over 150 countries and turns over nearly $40 billion in sales — not bad for a farmer’s son turned teacher.

Monitoring such devices from a distance is a more recent phenomenon, which has largely followed the history of computing — in that, once there was a computer to which sensors could be connected, people realised a number of benefits in doing so. The earliest examples of remote control came from the utilities industries in the 1930s and 1940s, and keeping a distant eye on pumping stations and electricity substations quickly became a must-have feature. “Most of the installations were with electric utilities, some were pipelines, gas companies and even done an installation in the control tower of O’Hare airport in Chicago, to control the landing lights,” remembered4 engineer Jerry Russell, who worked on such early systems. Not only did they save considerable manual effort, but they also enabled problems to be spotted (and dealt with) before they became catastrophes.

As the Second World War took hold, another compelling issue lent itself to a different need for remote sensing. The war saw aviation used on an unprecedented scale, leading to the invention, then deployment, of a radar system across the south and eastern coastlines of the UK; the team of engineers was led5 by Robert Watson-Watt, himself descended from James Watt. As successful as the system was, it led to another challenge — how to identify whether a plane was friend or foe, ‘one of ours’ or ‘one of theirs’? The response was to fit a device on each plane with a relatively simple function: when it picked up a radio signal of a certain frequency (i.e. the radar signal), it would respond in a certain way. Together with radar itself, the resulting technology, known as the identification friend or foe (IFF) system, was instrumental in the UK’s success at fending off the German onslaught during the Battle of Britain. And thus, the transponder was born.

Once the war was over, of course, the technology revolution began in earnest, bringing with it (as we have seen) miniaturisation of devices, globalisation of reach and, indeed, standardisation of protocols — that’s a lot of ‘isations’! By the 1960s, monitoring circuits started to be replaced by solid-state supervisory circuits such as Westinghouse’s REDAC6 system, which could send data at a staggering 10 bits per second. Together, the invention of the modem (short for MOdulator/DEModulator, which converts digital signals into sounds so they can be sent down a telephone line) and the development of the teletype — the first, typewriter-based remote terminal — resulted in the creation of the Remote Terminal Unit (RTU), an altogether more sophisticated remote monitoring system which could not only present information about a distant object but also tell it what to do. For example, a flood barrier could be raised or lowered from a central control point.

The idea of remote automation very quickly followed — after all, this was what Johnson himself originally saw as the benefit, as a room thermostat could directly control a valve situated in the basement of his school. Sensors and measuring devices were just as capable of feeding information to computer programs as punched cards or, indeed, people. As a consequence, software programs started to turn their attention to how they could manage systems at a distance — in the mid-1960s, the term SCADA (Supervisory Control and Data Acquisition) was created7 to describe such systems. The advent of computers also brought the ability not only to process information, but to store it. Recording of historical events — such as changes of temperature — very quickly became the norm rather than the exception, with the only constraint being how much storage was available. Indeed, as it became easier to store a piece of data than to worry about whether it was needed, “keep it all” became a core philosophy of computing, and remains so to the present day.
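The control principle Johnson hit upon is still recognisable in today’s supervisory software. The sketch below — Python, with invented temperature readings standing in for a real sensor — shows it at its simplest: read a value, compare it with a set point (with a little hysteresis so the valve isn’t forever chattering), act, and keep a record of everything.

    # A minimal supervisory control loop: read, decide, act, record.
    SET_POINT = 20.0    # target temperature, degrees C
    HYSTERESIS = 0.5    # dead band, to avoid switching on every tiny fluctuation

    readings = [18.9, 19.4, 19.8, 20.3, 20.8, 20.4, 19.9, 19.3]  # stand-in sensor data
    valve_open = False
    history = []        # "keep it all": every event recorded for later analysis

    for temperature in readings:
        if temperature < SET_POINT - HYSTERESIS:
            valve_open = True       # too cold: open the steam valve
        elif temperature > SET_POINT + HYSTERESIS:
            valve_open = False      # too warm: close it
        history.append((temperature, valve_open))

    for temperature, valve_open in history:
        print(f"{temperature:4.1f} C  valve {'open' if valve_open else 'closed'}")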

As for how things moved on from this point, suffice to say that they progressed as might be expected, with the creation of the microprocessor by Intel in the early 1970s being a major contributor. The smaller and cheaper computers and sensors became, the easier it was to put them in place. SCADA systems are now prevalent around the world for the management of everything from nuclear power stations to flood barriers. “Machine to machine” (M2M) communication is de facto in process control, manufacturing and transportation: today, it has been estimated that the number of sensors on a 747 is in the order of millions. We are still not ‘there’, of course — many houses in the UK still rely on manually read electricity meters, for example, and in many cases water isn’t metered at all. But still, the use of sensors continues to grow.

Meanwhile, the notion of the transponder was continuing to develop on a parallel track. It didn’t take long for other uses of the idea to be identified: indeed, the patent holder for the passive, read-write Radio Frequency ID (RFID) tag in 1969, Mario Cardullo, first picked up8 the idea from a lecture about the wartime friend-or-foe systems. Before long the concept was being applied9 to cars, cows10 and people — and, indeed, to the first sightings of contactless bank cards. The main challenge wasn’t to think of where such devices could be used, but to make devices that were both affordable and of a suitable size for such generalised use.

The real breakthrough, or at least the starting point of one, came in 1999. The idea behind the Auto-ID Center set up at the Massachusetts Institute of Technology (under professors David Brock and Sanjay Sarma) was relatively simple — rather than have a transponder holding a large amount of information, why not just create a simple RFID tag and give it a number, which could then be linked to information on a computer somewhere? The result, agreed a committee of standards bodies and big companies, would be that tags could be produced more cheaply and therefore they could be used more broadly. So far so good — and it didn’t take long for Sarma and Brock to work out that the best way of accessing such information was via the Internet. The rest is relatively recent history, as the only constraint became the cost of individual tags. Today you can’t buy a pair of underpants without an RFID chip in them. In 2007 RFID chips were approved11 for use in humans, and it is now commonplace to have an animal ‘chipped’.

The broad consequences of the rise of remote sensors, coupled with the ability to tag just about anything, are only just starting to be felt. The term “The Internet of Things” was coined to describe the merger between sensor-driven M2M and the tag-based world of RFID. Broadly speaking, there is no physical object under the sun that can’t in some way be given an identifier, remotely monitored and, potentially, controlled — subject, of course, to the constraints it finds itself within. Some of the simplest examples are the most effective — such as the combination of a sensor and a mobile-phone-type device which, connected to a farm gate in Australia, lets sheep ranchers be sure the gate is shut, saving hours of driving just to check. “We still see high demand for GPRS,” remarks Kalman Tiboldi, Chief Business Innovation Officer at Belgium-headquartered TVH, a spares and service company pioneering pre-emptive monitoring of equipment failure. Even a bar code and a reader can be enough to identify an object, and indeed, recognition technology is now good enough to identify a product — such as a bottle of wine — just by looking at it.
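In software terms, such a device often does little more than the sketch below: read a switch, report a change. This example uses the paho-mqtt Python library and a public test broker; the topic and gate name are invented, and a real gate sensor would be firmware on a low-power radio module rather than Python, but the shape of the conversation is much the same.

    # Sketch: a farm-gate sensor reporting its state over MQTT.
    # The topic and gate name are invented; test.mosquitto.org is a public test broker.
    import json
    import time
    import paho.mqtt.publish as publish

    def report(gate_id: str, closed: bool) -> None:
        payload = json.dumps({"gate": gate_id, "closed": closed, "ts": time.time()})
        # A retained message means the rancher's phone sees the latest state
        # as soon as it subscribes, without anyone driving out to look.
        publish.single(f"farm/gates/{gate_id}", payload, qos=1, retain=True,
                       hostname="test.mosquitto.org")

    report("north-paddock", closed=True)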

As for how far the Internet of Things might go, the sky is the limit. If the IoT were a train, it would have sensors on every moving part — every piston, every element of rolling stock and coupling gear. Everything would be measured — levels of wear, play, judder, temperature, pressure and so on — and the data fed back to systems which analysed it, identified potential faults and arranged for advance delivery of spares. Information would be passed to partners about the state of the track and considerations for future designs. The same feed could be picked up by passengers and their families, to confirm location and arrival time.

Meanwhile, in the carriages every seat, every floor panel would report on occupancy, broadcasting to conductors and passengers alike. The toilets would quietly report blockages, the taps and soap dispensers would declare when their respective tanks were near empty. Every light fitting, every pane of glass, every plug socket, every door and side panel would self-assess for reliability and safety. The driver’s health would be constantly monitored, even as passengers' personal health devices sent reminders to stand up and walk around once every half hour.

Is this over-arching scenario yet possible? Not quite — still missing is a layer of software, the absence of which means that current Internet of Things solutions tend to be custom-built or industry-specific, not sharing common building blocks beyond connectivity12. Does this matter? Well, yes. In the trade, experts talk about the commoditisation of technology — that is, how pieces of tech that started as proprietary eventually become ‘open’, in turn becoming more accessible and cheaper — it’s this that has enabled such things as the few-quid Raspberry Pi Zero, for example. We shall look at this phenomenon in later chapters, but for now it is enough to recognise that the IoT is an order of magnitude more expensive than it needs to be. This is not through a lack of trying — numerous startup software companies such as Thingworx13 and Xively14 are positioning themselves as offering the backbone for the smart device revolution. No doubt attention will swing to a handful of such platforms15, which will then be adopted wholesale by the majority, before being acquired by the major vendors and, likely, almost immediately being open-sourced, as has happened so often over recent decades.

These are early days: over coming years, new sensor types, new software platforms and new ways of managing devices and data will emerge. In a couple of years costs will have dropped another order of magnitude, opening the floodgates to yet more innovation on the one hand, or techno-tat on the other. GPS-based pet monitors, for example, have been available for a while, albeit bulky and expensive; now that they are reaching tens of pounds, they make sense, as will a raft of other examples. It really will become quite difficult to lose your keys16, or indeed misplace your child17.

How small will it all go? Whether or not Moore’s Law succeeds in squeezing ever more transistors onto a single chip, technology shrinkage doesn’t appear to want to stop — which means that the range of uses will continue to broaden, for example to incorporating electronics into pharmaceutical drugs, which will then ‘know18’ whether they have been taken. An intriguing line of development in the world of IoT is the creation of devices which require minimal, or even no, power to run. In Oberhaching near Munich sits EnOcean, a company specialising in “energy harvesting wireless technology” or, in layman’s terms, chips that can send out a signal without requiring an external power source. Founded by Siemens employees Andreas Schneider and Frank Schmidt, EnOcean is a classic story of marketing guy meets technology guy. Harvesting methods include19 generating electricity from light, from temperature differences and from motion, leading to the possibility of a light switch which doesn’t need to be connected to the electricity ring main to function. The EnOcean technology may not be particularly profound by itself, but its potential in the broader, technology-enabled environment might well be.

Not everybody thinks miniaturisation is such a good thing. Silicon chips are already being created at a level which requires nanometre measurements, bringing them into the realms of nanotechnology, a topic which has been called out by science fiction authors (like Michael Crichton) as well as think tanks such as Canada-based ETC, which monitors the impact of technology on the rural poor. “Mass production of unique nanomaterials and self-replicating nano-machinery pose incalculable risks,” stated ETC’s 2002 report20, The Big Down. “Atomtech will allow industry to monopolize atomic-level manufacturing platforms that underpin all animate and inanimate matter.” It was Sir Jonathon Porritt, previously in charge of the environmental advocacy group Friends of the Earth, who first brought the report to the attention of Prince Charles, who famously referenced21 Crichton’s “grey goo” in a 2004 speech.

Some ten years later, such fears are yet to be realised. But miniaturisation continues to have consequences in terms of increasing the accessibility of technology, and so the process — the emergence of a platform of hardware, connectivity and control software capable of connecting everything to everything — continues on its way. What will it make possible? A difficult question to answer, but the most straightforward response is “quite a lot”. And it is this we need to be ready for; working out how is the most significant task we face today.

3.5. The ever-expanding substrate

“Prediction is very difficult, particularly about the future.”
Niels Bohr, Danish physicist
Yogi Berra, baseball player
Robert Storm Petersen, cartoonist
Et al1

Prediction, the art of defining what will happen, is inevitably fraught with uncertainty — to the extent that a wide variety of people are said to have called it out. In technology circles especially, predictions often sit at the bottom of the heap, a few tiers down from lies, damn lies, statistics and marketing statements from computer companies. In scientific circles the concept does have a head start on other domains, given that a prediction needs a certain level of pre-considered, and potentially peer-reviewed, proof. “I predict that the ball will land at point X” is a very different proposition to “I predict that the world will arrive at point X, given the way technology is going”. So, should we even try to consider where technology is taking us? Fortunately we have a number of quite solid, proven premises upon which to construct our views of the future.

First, as observed by Gordon Moore all those years ago, computers and other devices are getting smaller and faster. We now carry a mainframe’s worth of processing in our pockets. As computers shrink, so they need less energy to function, so what was once unsuitable because of insufficient power or space becomes achievable. Meanwhile, harnessing the properties of light has enabled networks to reach many times around the globe. And, for the simple reason that technology brings so much value to so many domains, continued investment has driven innovation and supply/demand economics, leading to costs falling at a phenomenal rate and making what was once impractical or unaffordable now commonplace. As computer equipment sales people know, it is becoming less and less viable to hold any stock, such is the rate at which it goes out of date.

The overall impact is that the threshold of technological viability is falling. What was once impossible for whatever reason becomes not only probable but (sometimes very quickly) essential. For example, it may still not be viable for the whole population to wear health monitoring equipment. As costs fall, however, the idea of a low-cost device that signals a loved one in case of disaster becomes highly attractive. Not everything is yet possible, due to such constraints: when we create information from data, for example, we often experience a best-before time limit, beyond which it no longer makes sense to be informed. This is as true for the screen taps that make a WhatsApp message as for a complex medical diagnosis. And, as so neatly illustrated by a jittery YouTube stream, we also have a threshold of tolerance for poor quality.

Such gradually reducing constraints — time, space, power and cost — have guided the rate of progress of the information revolution, and continue to set the scene for what is practical. Compromises still have to be made, inevitably: we cannot “boil the ocean”, and nor can we attach sensors to every molecule in the universe (not yet, anyway). While the sky is the theoretical limit, in practice we cannot reach so high. All the same, in layman’s terms, as we use more electronics, the electronics become cheaper and better, making the wheel of innovation spin faster. The result is that the exception becomes the norm, driving a kind of ‘technological commoditisation’ — that is, capabilities that used to be very expensive are becoming available anywhere and as cheap, quite literally, as chips. Tech companies have a window of opportunity to make hay from the new capabilities they create (such as Intel), integrate (Facebook) or use (Uber) before their ‘unique selling points’ are rolled into the substrate.

This inevitable process has seen the demise of many seemingly unassailable mega-corporations over recent years, particularly in the tech space. No innovation ever really goes away, however; rather, we just get a bigger, and cheaper, tool box with which to construct our technological future. All the same, technology still has a way to go before its tendrils reach a point beyond which it makes no sense to continue. Does this mean we will all be living in smart cities in the immediate future? Are we to become cyborgs, or brains in vats, experiencing reality through some computer-generated set of sensory inputs? Realistically, no — at least not in the short term. We will nonetheless see the commoditised substrate of technology — the cloud — continue to grow in power, performance and capability, even as sensors, monitors and control devices appear in a widening array of places, extending technology’s reach deep into our business and personal lives.

So to predictions — the domain of scientists, baseball players and cartoonists; in other words, all of us. The thought experiment is relatively simple, as it involves answering the straightforward question of ‘what if?’. The longer version is as follows: if we are not going to see our entire existence transformed by technology (a.k.a. brains in vats), then we are nonetheless going to see it augment our lives in every aspect — how we communicate, how we live, how we stay well, how we do business. The stage is set for acting, living, being smarter. What is harder to plan for, however, is the order in which things will happen, and the impact they will have. Technology may be instrumental in helping us all live much longer, for example, but the very dynamics of society will have to change — indeed, they are already changing — as a result.

Niels Bohr died in 1962, but not before he was instrumental in the creation of CERN, that august institution which curated the ‘invention’ of the World Wide Web. While it is worth remaining sanguine about technology’s potential, keep in mind that the simplest ideas, which go on to take the world by storm, are often the hardest to predict. Taking full advantage of the potential offered by this miraculous technological substrate needs some equally clever thinking — in the shape of software, algorithms and mathematics applied to the vast data sets we now have available, which we shall look at next.

4. The way that you do it

The pioneers of the industrial revolution saw steam powering everything. It was not so much short-sightedness as an inability to see into the future. It was the same at the dawn of electro-mechanical invention, of light bulbs and motors: Faraday and his generation looked at their experiments as an end in themselves, whereas in reality they were just scratching the surface — the creation of valves, themselves superseded by semiconductors, illustrates the point. Each generation thinks the same way. We cannot think beyond three years or so — it’s a human trait.

This trait carries over into the technology industry, as ‘waves’ of innovation come and create unexpected outcomes. As Sun Microsystems alumnus and industry expert Rob Bamforth (his claim to fame is wearing the ‘Duke1’ Java suit from time to time) once remarked, “This isn’t about convergence, it’s collision.” We sometimes look at consequences and see a destination, whereas in fact we are seeing flags tied to stakes along the road, waypoints on a journey to we know not where. Still we cannot stop ourselves from seeing the latest trends as taking us into a new age. In reality however, the information age started many decades ago and continues to surprise us.

Some organisations — such as Google — instil a spirit of experiment, while others appear out of nowhere, accidental profiteers born in the scalding chalice of innovation. Nobody knows whether another leap is still to come — perhaps from quantum mechanics, or non-linear architectures. What we can say, however, is that the journey is far from over yet.

4.1. The information fire hydrant

“Come, let us build ourselves a city, and a tower whose top is in the heavens.”
Genesis 11:4, The Tower of Babel
“There's certainly a certain degree of uncertainty about, of that we can be quite sure.”
Rowan1 Atkinson, Sir Marcus Browning MP

As well as being a mathematician, Lewis Fry Richardson was a Quaker and a pacifist. He chose to be a conscientious objector during the First World War, and while this meant that he could not work directly in academia, he nonetheless continued his studies at its fringes. As well as creating models which could actually predict weather patterns, he focused much of his attention on the mathematical principles behind conflict, on and off the battlefield. He summarised his findings in a single volume, entitled Statistics of Deadly Quarrels, published just as Europe and the world plunged itself into war once again. Perhaps it was this unfolding tragedy that pushed the pacifistic Richardson back to his studies: one area in particular intrigued him, namely the nature of border disputes, of which in Europe there were plenty. As he attempted to create models, however, he found it challenging to determine the length of a border — indeed, the closer he looked at individual borders, the longer they became. Think about it: zoomed out, the edges of a geographical feature are relatively simple, but as you zoom in, you find they are more complicated and, therefore, the measurement becomes longer. The closer you go, the longer they become, until matters become quite absurd. “Sea coasts provide an apt illustration,” he wrote2 as he watched his models collapse in a heap. “An embarrassing doubt arose as to whether actual frontiers were so intricate as to invalidate [an] otherwise promising theory.”

The discomfiting nature of the phenomenon, which became known as the coastline paradox, was picked up by fractal pioneer Benoit Mandelbrot in 1967. In his paper3 ‘How Long Is the Coast of Britain?’ he wrote, “Geographical curves can be considered as superpositions of features of widely scattered characteristic size; as ever finer features are taken account of, the measured total length increases, and there is usually no clearcut gap between the realm of geography and details with which geography need not be concerned.” In other words, it wasn’t only the measurable distance that was in question; the phenomenon also cast doubt on what the geographical features actually meant. Was a rocky outcrop part of the coastline or not? How about a large boulder? Or a grain of sand?
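Richardson’s observation can be reproduced with a few lines of arithmetic. The snippet below (Python) uses the Koch curve — a textbook stand-in for a coastline, in which each refinement replaces every segment with four segments a third as long, so the measured length grows by a factor of 4/3 every time the ruler shrinks; the 1,000 km baseline is purely illustrative.

    # The coastline paradox in miniature, using the Koch curve as a stand-in coast.
    # At refinement n the ruler is (1/3)**n of the baseline and the measured
    # length is (4/3)**n of it: shrink the ruler, and the coast gets longer.
    baseline_km = 1000.0   # notional straight-line span of the 'coast'

    for n in range(6):
        ruler = baseline_km * (1 / 3) ** n
        measured = baseline_km * (4 / 3) ** n
        print(f"ruler {ruler:8.2f} km  ->  measured length {measured:8.1f} km")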

This same phenomenon is fundamental to our understanding of what we have come to call data, in all of its complexity. Data can be created by anything that can generate computer bits, which these days means even the most lowly of computer chips. Anything can be converted to a digital representation by capturing some key information, digitising it into data points and transporting it from one place to another in a generally accepted binary format. Whenever we write a message or make use of a sensor, we are feeding the mother of all analogue-to-digital converters. Digital cameras, voice recorders, computer keyboards, home sensors, sports watches and, well, you name it — all can and do generate data in increasing quantities. Not only are the devices proliferating in volume and type, but we are then using computers to process, transform and analyse the data — which only generates more.

As a consequence, we are creating data far faster than we know what to do with it. Consider: at the turn of the millennium, 75% of all the information in the world was still in analogue format, stored as books, videotapes and images. According to a study conducted in 2007 however4, by then 94% of all information in the world was digital — the total amount of stored information was measured at 295 Exabytes (billions of Gigabytes). This enormous growth shows no sign of abating. By 2010 the figure had crossed the Zettabyte (thousand Exabyte) barrier and by 2020, it is estimated, it will have increased fifty-fold.

As so often, the simplest concepts have the broadest impact: no restriction has been placed on what data can be about, within the bounds of philosophical reason. The information pile is increasing as we can (and we do) broadcast our every movement, every purchase and every interaction with our mobile devices and on social networks, in the process adding to the information mountain. Every search, every ‘like’, every journey, photo and video is logged, stored and rendered immediately accessible using computational techniques that would have been infeasible just a few years ago. Today, YouTube users upload an hour of video every second, and watch over 3 billion hours of video a month; over 140 million tweets are sent every day, on average – or a billion per week.

It’s not just us — retailers and other industries are generating staggering amounts of data as well. Supermarket giant Wal-Mart handles over a million customer transactions every hour. Banks are little more than transaction processors, with each chip card payment we make leaving a trail of zeroes and ones, all of which are processed. Internet service providers and, indeed, governments are capturing every packet we send and receive, copying it for posterity and, rightly or wrongly, for future analysis. Companies of all shapes and sizes are accumulating unprecedented quantities of information about their customers, products and markets. And science is one of the worst culprits: the ALICE experiment at CERN’s Large Hadron Collider generates data at a rate of 1.2 Gigabytes per second. Per second!

Our ability to create data is increasing in direct relation to our ability to create increasingly sensitive digitisation mechanisms. The first commercially available digital cameras, for example, could capture images of up to a million pixels, whereas today it is not uncommon to have 20 or even 40 ‘megapixels’ as standard. In a parallel to Richardson’s coastline paradox, it seems that the better we get at collecting data, the more data we get. Marketers have the notion of a ‘customer profile’, for example: at a high level, this could consist of your name and address, your age, perhaps whether you are married, and so on. But more detail can always be added, in principle helping the understanding of who you are. Trouble is, nobody knows where to stop — is your blood type relevant, or whether you have siblings? Such questions are a challenge not only to companies who would love to know more about you, but also (as we shall see) because of the privacy concerns they raise.

Industry pundits have, in characteristic style, labelled the challenges caused by creating so much data as ‘Big Data’ (as in, “We have a data problem. And it’s big.”). It’s not just data volumes that are the problem, they say, but also the rate at which new data is created (the ‘velocity’) and the speed at which data changes (or ‘variance’). Data is also sensitive to quality issues (‘validity’) — indeed, it’s a running joke that customer data used by utilities companies is so poor, the organisations are self-regulating — and it has a sell-by date: as noted earlier, information created from data often comes with a best-before time limit, beyond which it no longer makes sense to be informed, whether it is a WhatsApp message or a complex medical diagnosis.

All of these criteria make it incredibly difficult to keep up with the data we are generating. Indeed, our ability to process data will, mathematically, always lag behind our ability to create it. And it’s not just the raw data we need to worry about. Computer processors don’t help themselves, as they have a habit of creating duplicates, or whole new versions, of data sets. Efforts have been made to reduce this duplication, but it often exists for architectural reasons — you need to create a snapshot of live data so you can analyse it. It’s a good job we have enough space to store it all — or do we? To dip back into history, data storage devices have, until recently, remained one of the most archaic parts of the computer architecture, reliant as they have been upon spinning disks of magnetic material. IBM shipped the first disk drives in 1956 — these RAMAC drives could store a then-impressive four million bytes of information across their fifty disk platters, but had to be used in clean environments so that dust didn’t mess up their function. It wasn’t until 1973 that IBM released5 a drive, codenamed Winchester, that incorporated the read/write heads in a sealed, removable enclosure.

Despite their smaller size, modern hard disks have not changed a great deal since this original, sealed design was first proposed. Hard drive capacity increased by 50 million times between 1956 and 2013, but even this is significantly behind the curve when compared to processor speeds, leading pundits such as analyst firm IDC to go to the surprising length of suggesting that the world would “run out of storage” (funnily enough, it hasn’t). In principle, the gap could close with the advent of solid state storage — the same stuff that is a familiar element of the SD cards we use in digital cameras and USB sticks. Solid State Drives (SSDs) are currently more expensive, byte for byte, than spinning disks but (thanks to Moore’s Law) the gap is closing. What has taken solid state storage so long? It’s all to do with transistor counts: processing a bit of information requires a single transistor, whereas storing that same bit for any length of time requires six. But as SSDs become more available, their prices fall, meaning that some kind of parity starts to appear with processors. SSDs may eventually replace spinning disks, but even if they do, the challenge of coping with the data we create will pervade. This issue is writ large in the Internet of Things — which, as we have seen, is another way of describing the propensity of Moore’s Law to spawn smaller, cheaper, lower-power devices that can generate even more data. Should we add sensors to our garage doors and vacuum cleaners, hospital beds and vehicles, we will inevitably increase the amount of information we create. Networking company Cisco estimates6 that the ‘Internet of Everything’ will cause a fourfold increase in data in the five years from 2013, to reach over 400 Zettabytes (a Zettabyte being 10^21 bytes).

In technology’s defence, data management long ago moved beyond simply storing data on disk, loading it into memory and accessing it via programs. It was back in the late 1950s that computer companies started to realise a knock-on effect of all their innovation — the notion of obsolescence. The IBM 4077 series of accounting machines, introduced ten years before, could do little more than read punched cards and tabulate reports on the data they contained; while the 407’s successor, the 1401, was a much more powerful computer (and based entirely on new-fangled transistors), technicians needed some way of getting the data from familiar stacks of cards to the core storage of the 1401 for processing. The answer was FARGO8 — the Fourteen-o-one Automatic Report Generation Operation program, which essentially turned the 407 into a data entry device for the 1401.

The notion of creating data stores and using them to generate reports became a mainstay of commercial computing. As the processing capabilities of computers became more powerful, the reports could in turn become more complicated. IBM’s own language for writing reports was the Report Program Generator itself, RPG. Originally launched in 1961, RPG is still in use today, making it one of the most resilient programming languages of the information age. IBM wasn’t the only game in town: while it took the lion’s share of the hardware market, it wasn’t long before a variety of technology companies, commercial businesses (notably American Airlines with its SABRE booking system) and smaller computer services companies started to write programs of their own. Notable were the efforts of Charles Bachman, who developed what he termed the Integrated Data Store when working at General Electric in 1963. IDS was the primary input to the Conference/Committee on Data Systems Languages’ efforts to standardise how data stores should be accessed; by 1968 the term database had been adopted.

And then, in 1969, dogged by the US government, IBM chose to break the link between hardware and software sales, opening the door to competition9 from a still-nascent software industry. All the same, it was another IBM luminary, this time Englishman Edgar Codd, who proposed another model for databases, based on tables and the relationships between data items. By the 1980s this relational database model, and the Structured Query Language (SQL) used to access it, had become the mechanism of choice, and remained so for several decades afterwards — for all but mainframe software, where (despite a number of competitors appearing over the years) IBM’s models still dominated.

Of course it’s more complicated than that — database types, shapes and sizes proliferated across computers of every description. But even as data management technologies evolved, technology’s propensity to generate ever more data refused to abate. As volumes of data started to get out of hand once again in the 1990s, attention turned to the idea of data warehouses — stores that could take a snapshot of data and hold it somewhere else, so that it could be interrogated, analysed and used to generate ever more complex reports. For a while it looked as though the analytical challenge had been addressed. But then, with the arrival of the Web, quickly followed by e-commerce, social networks, online video and the rest, new mechanisms were required yet again, as even SQL-based databases proved inadequate to keep up with the resulting explosion of data. Not least, the issue of how to search the ever-increasing volume of web pages was becoming more pressing. In response, in 200310 Doug Cutting and Mike Cafarella developed a web-search tool called Nutch, which came to be built around an indexing mechanism described by Google, called MapReduce — “a framework for processing embarrassingly parallel problems across huge datasets.” The pair quickly realised that the mechanism could be used to analyse the kinds of data more traditionally associated with relational databases, and created a specific tool for the job. Doug named it Hadoop11, after his son’s toy elephant.
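The kernel of the MapReduce idea is small enough to caricature in a few lines of plain Python — no Hadoop involved, and the three ‘documents’ below are invented. A map step turns input into key/value pairs, a shuffle groups them by key, and a reduce step folds each group into a result; Hadoop’s real contribution was doing this reliably across thousands of machines and very large files.

    # The MapReduce idea in miniature: map, shuffle (group by key), reduce.
    from collections import defaultdict

    documents = [
        "the quick brown fox",
        "the lazy dog",
        "the quick dog",
    ]  # stand-in for petabytes of input split across many machines

    # Map: each 'mapper' turns its slice of the input into (key, value) pairs.
    mapped = [(word, 1) for doc in documents for word in doc.split()]

    # Shuffle: pairs are grouped by key, as if routed to the right 'reducer'.
    groups = defaultdict(list)
    for word, count in mapped:
        groups[word].append(count)

    # Reduce: each group is folded into a single result.
    word_counts = {word: sum(counts) for word, counts in groups.items()}
    print(word_counts)   # {'the': 3, 'quick': 2, 'brown': 1, ...}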

Hadoop represented a complete breakthrough in how large volumes of data could be stored and queried. In 2009 the software managed12 to sort and index a petabyte of data in 16 hours, and 2015 was to be the year of ‘Hadooponomics13’ (allegedly14). The project inspired many others to create non-relational data management platforms: MongoDB, Redis, Apache Spark and Amazon Redshift are all clever and innovative variations on a general trend — the creation of vast data stores that can be interrogated and analysed at incredible speed.

Even with such breakthroughs, our ability to store and manage data remains behind the curve of our capability to create it. Indeed, the original strategists behind the ill-fated Tower of Babel might not have felt completely out of place amid present-day, large-scale attempts to deal with information. And so it will continue — it makes logical sense that we will carry on generating as much information as we can, and then insist on storing it. Medicine, business, advertising, farming, manufacturing… all of these domains and more are accumulating increasingly large quantities of data. But even if we can’t deal with it all, we can do increasingly clever things with the data we have. Each day, the Law of Diminishing Thresholds ensures that a new set of problems, both very old and very new, moves from insoluble to solvable.

Doing so requires not just data processing, storage, management and reporting, but programs that push the processing capabilities of computers to their absolute limits. Enter: the algorithm.

4.2. An algorithm without a cause

La Défense was built in a time of great optimism, its ring of tall towers and fountain-filled squares offering a concrete and glass symbol of 1970s hope. In the centre of the main esplanade, tourists mill and take pictures while sharp-suited business people move with earnest determination, traversing between buildings like sprites in a vast computer simulation. At one end of the terrace, steps rise towards La Grande Arche — an imposing structure carefully positioned on a straight line connecting to the Arc de Triomphe and the Louvre, all waypoints on a majestic, historic timeline. “Look at us,” they all say. “Look at what we can do.”

I meet Augustin Huret in a nondescript meeting room, situated on the 17th floor of a tower overlooking the esplanade. It’s a fitting place, I think to myself as I sip a small, yet eye-wateringly strong coffee, for a discussion about the future. Augustin is a slim, quiet Frenchman, wearing a shirt and tweed jacket. At first glance, seated at the table with his Filofax, he looks like any other business consultant but the sharpness in his eyes quickly puts any such feelings to rest. “So, remind me. Why are we here?” he asks, head cocked to one side. I explain the background to the interview, which he considers just long enough to make me feel uncomfortable. Then he smiles, his hawk-like eyes softening. “Well, we had better get on with it,” he says.

Augustin Huret had no ordinary childhood. Whereas many of us used to spend long car journeys playing I-Spy or spotting 2CVs, the Huret family was different. “My father used to ask us about his algorithms,” says Augustin. “He wanted to be sure that they made sense to normal people.” At the time, the older M. Huret was tackling the challenge of using mathematics to look for unusual phenomena in large, incomplete sets of data — needles in the information haystack. Augustin was just twelve years old but he listened, and he learned.

The human brain has a remarkable capacity to spot such anomalies. If you were faced with a painting of a vast, troubled ocean, after a while you might eventually spot a small difference of colour. Look more closely and you would see a small ship being tossed on the waves. You’d probably start to wonder whether it was in trouble, or what it was doing out there in the first place – what was the pilot thinking? You might think about a movie with a similar plot, and wonder whether there was any relationship between the picture and the film. But even as your mind wandered, you would still simply accept what you were seeing, completely ignoring how difficult it was for your brain to process the huge quantities of data involved.

M. Huret Senior was working in this area of data analysis, known as pattern recognition. Much of what we think of as ‘intelligence’ actually relates to our ability to spot similarities — from how we use our eyes when we’re looking for our keys in a cluttered drawer, to that leaden feeling of déjà-vu we get when we find ourselves in a familiar argument. While scientists are still getting to the bottom of how the human brain makes such leaps of understanding, methods to replicate our simpler innate capabilities do exist. For example, a computer program can compare a given pattern (say, a fingerprint) with every other known pattern. If the pattern can’t be found, it can be ‘learned’ and added to the database, so it can be spotted next time around.
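That ‘compare, and if unknown, learn’ loop is easy to sketch. The toy below — Python, with invented three-number feature vectors rather than real fingerprints, and an arbitrary tolerance — checks a new pattern against everything in its database and simply adds it if nothing comes close.

    # A toy 'recognise or learn' matcher over feature vectors (not real biometrics).
    import math

    TOLERANCE = 0.1
    known = {"alice": [0.12, 0.80, 0.33], "bob": [0.90, 0.10, 0.55]}  # invented

    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def recognise_or_learn(pattern, label_if_new):
        best = min(known, key=lambda name: distance(pattern, known[name]))
        if distance(pattern, known[best]) <= TOLERANCE:
            return best                      # close enough: seen this before
        known[label_if_new] = pattern        # otherwise, learn it for next time
        return label_if_new

    print(recognise_or_learn([0.13, 0.79, 0.35], "charlie"))   # -> 'alice'
    print(recognise_or_learn([0.50, 0.50, 0.50], "charlie"))   # -> 'charlie' (learned)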

Augustin’s father was looking into more general methods of pattern recognition, which didn’t need to know anything about the data they were working with. Throw in a set of data points – the bigger the better – and algorithms would build a picture, from which they could then spot anomalies like the ship on the waves. While the principle was sound, the main hurdle was the sheer scale of the problem — simply put, the algorithms required more data than 1970’s computers could process. Time was not on Huret Sr’s side: not only would it take several years of processing to deliver any useful results, but (with a familiar echo) funding bodies were becoming increasingly impatient. Even as Augustin’s father was testing his theories, interest in the field in which he was working — Artificial Intelligence, or AI — was waning. In the end, the money ran out altogether. M. Huret Senior’s theoretical models remained just that – theoretical.

A decade passed. Having completed his degree with flying colours, the young Augustin, by then embarking on a Masters degree, decided to take his father’s challenge onto his own shoulders. The passage of time was on Augustin’s side: the year before he started his MSc, at a cost of over 10 million dollars, the illustrious Polytechnique university had taken delivery of a Cray II supercomputer. Scientific computers are measured in the number of floating point instructions they can process per second (dubiously known as FLOPs), and the Cray was capable of running almost two billion of these. That’s two billion adds, subtracts or basic multiplications — per second! At last, the younger Huret was able to test his father’s theories on computers that were powerful enough for the job. He could even improve the theories — and, for the first time ever, use them in anger on real data sets.

He was faced with the perfect problem — systems failures in the production of nuclear energy. The French government had become a significant investor in nuclear power, as a direct response to the oil crisis of 1973. In consequence, by the mid-Eighties over 50 nuclear power stations had been built across the country, and more were being constructed. With the 1979 nuclear accident at Three Mile Island still front of mind, the occasional lapse of this enormous yet fledgling infrastructure was met with high levels of concern. Each failure was accompanied by massive quantities of data on all kinds of things, from equipment measurements to core temperatures. Without algorithms, however, the data was useless: at that point, nobody had been able to analyse it sufficiently well to identify any root cause of the different failures.

The Huret algorithms worked exhaustively: that is, they looked at every possible combination of data items and compared each with every other in the data set, to see if they were linked. To make things even more complicated, the algorithms would then randomise the data set and go through the same, exhaustive process, to be sure that the anomalies weren’t just there by chance. Given the size of the nuclear power plant data set, this processing task took inordinate amounts of processor time — 7.5 million seconds, or 15 million billion FLOPs! While this took the Cray a continuous period of three months to process, the results were worth it. The resulting rules highlighted what was causing the majority of systems failures across the network of nuclear power stations in France. Before Augustin knew it, he was presenting to heads of government and at international symposia. His father’s original principles — honed, modified and programmed — had more than proved their worth.
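Neither the data nor the Huret algorithms themselves are public, but the general shape described here — test every pair of variables for a link, then repeat the test against shuffled data to weed out flukes — can be sketched as follows, in Python, on a tiny invented data set with an arbitrary significance threshold.

    # Sketch of an exhaustive pairwise search with a permutation check.
    # Data, variable names and the 0.05 threshold are invented for illustration.
    import random
    from itertools import combinations
    from statistics import correlation   # Python 3.10 or later

    random.seed(1)
    data = {
        "core_temp":   [301, 305, 298, 310, 307, 299, 303, 312],
        "valve_wear":  [0.2, 0.4, 0.1, 0.7, 0.5, 0.2, 0.3, 0.8],
        "shift_hours": [8, 12, 8, 8, 12, 8, 12, 8],
    }

    def permutation_score(xs, ys, trials=1000):
        """How often does randomly shuffled data correlate at least as strongly?"""
        observed = abs(correlation(xs, ys))
        beaten = sum(
            abs(correlation(xs, random.sample(ys, len(ys)))) >= observed
            for _ in range(trials)
        )
        return observed, beaten / trials

    for a, b in combinations(data, 2):            # every pair of variables
        observed, p = permutation_score(data[a], data[b])
        verdict = "possible link" if p < 0.05 else "probably chance"
        print(f"{a} vs {b}: r={observed:.2f}, p~{p:.3f}  ({verdict})")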

While it would be many more years before algorithms such as the Hurets’ were viable for more general use, their ‘exhaustive’ approach was not the only way of making sense of such imposing data sets. What if, say, you started with a rough guess as to where the answer might lie, and seeded the algorithm with this knowledge? This is far from a new idea. Indeed, it was first mooted over 250 years ago by an English cleric, the non-conformist minister Thomas Bayes. Bayes’ theorem1 works on the basis of starting with an initial estimate and then improving upon it, rather than trying to calculate an absolute value from scratch. It was largely ignored in his lifetime, and has been denigrated by scientists repeatedly ever since — largely because the amount of data available to statisticians made it seem unnecessary, or even unseemly, to add guesswork. “The twentieth century was predominantly frequentist,” remarks2 Bradley Efron, professor of Statistics at Stanford University. While Bayesians link a probability to the likelihood of a hypothesis being true, Frequentists link it to the number of times a phenomenon is seen in tests. Explains3 Maarten H. P. Ambaum at the University of Reading in the UK, “The frequentist says that there is a single truth and our measurement samples noisy instances of this truth. The more data we collect, the better we can pinpoint the truth.”
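A worked example makes the distinction less abstract. The sketch below — Python, with invented rates — applies Bayes’ theorem to the classic screening-test case: start with a prior belief about how common a condition is, fold in the evidence of a positive test, and arrive at a revised, posterior probability; feed that back in as the new prior and the estimate keeps improving.

    # Bayes' theorem on a screening test; all rates are invented for illustration.
    prior = 0.01           # P(condition): 1% of the population affected
    sensitivity = 0.95     # P(test positive | condition)
    false_positive = 0.05  # P(test positive | no condition)

    def update(prior):
        """One Bayesian update: fold a positive test result into the prior."""
        p_positive = sensitivity * prior + false_positive * (1 - prior)
        return sensitivity * prior / p_positive

    posterior = update(prior)
    print(f"Before any test: {prior:.1%}")
    print(f"After one positive test: {posterior:.1%}")
    print(f"After a second positive test: {update(posterior):.1%}")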

Frequentist models are better at working with large data sets if you want accuracy, it has been suggested4, but as data volumes start to get out of hand (and particularly given the coastline paradox, in this world of incomplete knowledge), Bayesian models become increasingly interesting. “Bayesian statistics is the statistics of the real world, not of its Platonic ideal,” continues Ambaum. Says5 security data scientist Russell Cameron Thomas: “Because of Big Data and the associated problems people are trying to solve now, pragmatics matter more than philosophical correctness.” Not least to companies such as Google and Autonomy, who built their respective businesses on the back of the reverend’s theorems. “Bayesian inference is an acceptance that the world is probabilistic,” explains Mike Lynch, founder of Autonomy. "We know this in our daily lives. If you drive a car round a bend, you don’t actually know if there is going to be a brick wall around the corner and you are going to die, but you take a probabilistic estimate that there isn’t.”

Google built its search-driven advertising empire entirely on Bayesian principles, a model which continues to serve the company well — roughly 89% of the firm’s $66 billion revenues in 2014 came from its advertising products. The blessing and curse of Google’s probabilistic approach was illustrated by the momentous success, then the crashing failure, of its Flu Trends program, which used6 the company’s search term database to track the spread of influenza, connecting the fact that people were looking for information about the flu, and their locations, with the reasonable assumption that an incidence of the illness had triggered the search. All was going well until the tool missed an outbreak by a substantial margin. Explained David Lazer and Ryan Kennedy in Wired magazine, “There were bound to be searches that were strongly correlated by pure chance, and these terms were unlikely to be driven by actual flu cases or predictive of future trends.” It’s a scenario reminiscent of Umberto Eco’s novel ‘Foucault’s Pendulum’, in which a firm of self-professed vanity publishers stumbles upon a slip of paper containing what looked, to all intents and purposes, like a coded message. The hapless figures are plunged into a nether world of Templars and Rosicrucians, Mafiosi and voodoo, causing a chain of events which ends (look away now if you don’t want to know the plot) in their ultimate demise. The twist, it turns out, was that the scrap of paper was no more than a medieval laundry list.

Given that we have both exhaustive and probabilistic approaches at our disposal, what matters ultimately is results. Top down is as important as bottom up, and just as science is now accepting7 the importance of both frequentist and Bayesian models, so can the rest of us. The creators of algorithms are exploring how to merge8 the ideas of Bayesian and Frequentist logic — for example in the form of the bootstrap, a computer-intensive inference machine.
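
For the curious, the bootstrap is simple enough to sketch in a few lines of Python: resample the data with replacement many times, recompute the statistic each time, and read an interval off the results. The figures below are invented.

    import random

    def bootstrap_interval(data, statistic, n_resamples=10000, level=0.95):
        """Percentile bootstrap confidence interval for an arbitrary statistic."""
        estimates = sorted(
            statistic([random.choice(data) for _ in data])
            for _ in range(n_resamples)
        )
        lower = estimates[int((1 - level) / 2 * n_resamples)]
        upper = estimates[int((1 + level) / 2 * n_resamples)]
        return lower, upper

    measurements = [4.1, 3.8, 5.0, 4.4, 4.7, 3.9, 4.2, 4.8]  # invented readings
    mean = lambda xs: sum(xs) / len(xs)
    print(bootstrap_interval(measurements, mean))

No distributional assumptions are needed; the data stands in for its own population, which is what makes the approach both computer-intensive and general.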

The end result is that we will know a lot more about the world around us and, indeed, ourselves. For example, the Huret algorithms were used to exhaustively analyse the data sets of an ophthalmic (glasses) retailer — store locations, transaction histories, you name it. The outputs indicated a strong and direct correlation between the amount of shelf space allocated to children, and the quantity of spectacles sold to adults. The very human interpretation of this finding was, simply, that kids like to try on glasses, and the more time they spend doing so, the more likely their parents are to buy. Equally, did you know that, in the retail car market, brightly coloured cars make better second-hand deals (they tend to have had more careful owners)? Or that vegetarians9 are less likely to miss their flights?

With the Law of Diminishing Thresholds holding fast, correlations such as these will become everyday, as will applications of deeper value, notably in healthcare — for example monitoring10, and potentially postponing, the onset of Parkinson’s Disease. Some seemingly unsolvable tasks, such as weather prediction, have not changed in scale since they were first documented — might we finally be able to predict the weather? And it’s not just unsolved problems that can be tackled — we are also starting to find insights in places nobody even thought to look, as demonstrated by our ability11 to compare every single painting in history to every other.

As we have seen, in a couple of years, we will be able to analyse such complex data sets on desktop computers or even mobile phones – or more likely, using low-cost processing from the cloud. Given the coastline paradox, some problem spaces will always be just outside the reach of either exhaustive or probabilistic approaches — we will never be able to analyse the universe molecule by molecule, however much we improve12 upon Heisenberg's ability to measure. But this might not matter, if computers reach the point where they can start to define what is important for themselves. Mike Lynch believes we may already be at such a fundamental point, with profound consequences. “We’re just crossing the threshold,” he says. “Algorithms have reached a point where they have enabled something they couldn’t do before — which is the ability of machines to understand meaning. We are on the precipice of an explosive change which is going to completely change all of our institutions, our values, our views of who we are.”

Will this necessarily be a bad thing? It is difficult to say, but we can quote Kranzberg’s first law13 of technology (and the best possible illustration of the weakness in Google’s “don’t be evil” mantra) — “Technology is neither good nor bad; nor is it neutral.” Above all, machine interpretation would remove human bias from the equation. We are used to being able to take a number of information sources and derive our own interpretations from them — whether or not they are correct. We see this as much in the interpretation of unemployment and immigration figures as in consumer decision making, industry analysis, pseudoscience and science itself — just ask scientist and author Ben Goldacre, a maestro at unpicking14 poorly planned inferences. Agrees15 David Hand, Emeritus Professor of Mathematics at Imperial College, London, “Obscure selection biases may conjure up illusory phantoms in the data. The uncomfortable truth is that most unusual (and hence interesting) structures in data arise because of flaws in the data itself.”

But what if such data was already, automatically and unequivocally, interpreted on our behalf? What if immigration could be proven without doubt to be a very good, or a very bad thing? Would we be prepared to accept a computer output which told us, in no uncertain terms, that the best option out of all the choices was to go to war, or indeed not to? Such moral dilemmas are yet to come but meanwhile, the race is on: researchers and scientists, governments and corporations, media companies and lobby groups, fraudsters and terrorists are working out how to identify needles hidden in the information haystack. Consulting firm McKinsey estimates16 that Western European economies could save more than €100 billion by making use of analytics to support government decision-making; elsewhere too, big data is big business.

But right now, to succeed requires thinking outside the algorithm and using the most important asset we have: ourselves.


  1. http://www.stat.ucla.edu/history/essay.pdf 

  2. http://statweb.stanford.edu/~ckirby/brad/papers/2012250-YearArgument.pdf 

  3. http://www.met.reading.ac.uk/~sws97mha/Publications/Bayesvsfreq.pdf 

  4. http://oikosjournal.wordpress.com/2011/10/11/frequentist-vs-bayesian-statistics-resources-to-help-you-choose/ 

  5. http://exploringpossibilityspace.blogspot.co.uk/2013/07/the-bayesian-vs-frequentist-debate-and.html 

  6. http://www.google.org/flutrends/ 

  7. http://statweb.stanford.edu/~ckirby/brad/papers/2005NEWModernScience.pdf 

  8. http://www.ams.org/journals/bull/2013-50-01/S0273-0979-2012-01374-5/S0273-0979-2012-01374-5.pdf 

  9. http://blogs.informatica.com/perspectives/2013/03/26/vegetarians-miss-fewer-flights-and-other-surprising-insights-from-data-big-data/ 

  10. http://newsroom.intel.com/community/intel_newsroom/blog/2014/08/13/the-michael-j-fox-foundation-and-intel-join-forces-to-improve-parkinsons-disease-monitoring-and-treatment-through-advanced-technologies 

  11. https://medium.com/the-physics-arxiv-blog/when-a-machine-learning-algorithm-studied-fine-art-paintings-it-saw-things-art-historians-had-never-b8e4e7bf7d3e 

  12. http://www.computerworld.com/article/2863740/quantum-mystery-an-underestimate-of-uncertainty.html 

  13. http://en.wikipedia.org/wiki/Melvin_Kranzberg 

  14. http://www.theguardian.com/commentisfree/2011/nov/04/bad-science-eight-years 

  15. Focus/September 2014 

  16. http://www.mckinsey.com/insights/business_technology/big_data_the_next_frontier_for_innovation 

4.3. Thanks for the (community) memory

“There's a time when the operation of the machine becomes so odious, makes you so sick at heart, that you can't take part! You can't even passively take part! And you've got to put your bodies upon the gears and upon the wheels…upon the levers, upon all the apparatus, and you've got to make it stop!”

Thus spoke1 Mario Savio, student and hero of the nascent Free Speech Movement, as he stood on the steps of Sproul Hall at the University of California, Berkeley on December 2, 1964. The Sproul Hall Sit-In, which had been instigated in support of a racial equality event across the country in Washington DC, had already lasted for weeks. After a wave of rapturous applause for Mario’s speech, folk singer and activist Joan Baez stood up and sang “All My Trials”, “Blowin’ in the Wind” and “We Shall Overcome”.

All, it has to be said, much to the disdain of the onlooking authorities. Over seven hundred people were arrested on that seminal day, one of whom was electrical engineering student and fan2 of Heinlein’s post-apocalyptic, empire-busting science fiction, Lee Felsenstein. As it happened, Lee had just been advised to resign from his job working for NASA at Edwards Air Force Base in the Mojave desert, the semi-coercion due in large part to the discomfort caused by his previous civil rights activities, such as participating3 in the 1963 Philadelphia march on Washington. “I had to make a choice,” he wrote4 of his participation at the sit-in. “Was I a scared kid who wanted to be safe at all costs? Or was I a person who had principles and was willing to take a risk to follow them? It was like that moment in Huckleberry Finn when Huck says, ‘All right, then, I’ll go to hell.’”

Lee went on to be a computer engineer, but his experiences as a political activist continued to guide his work. He was particularly influenced by the anti-industrial writings of Austrian-born Ivan Illich, who wrote on the tendency of machines to hold ordinary people in a state of servitude5: “For a hundred years we have tried to make machines work for men and to school men for life in their service. Now it turns out that machines do not ‘work’ and that people cannot be schooled for a life at the service of machines. The hypothesis on which the experiment was built must now be discarded. The hypothesis was that machines can replace slaves. The evidence shows that, used for this purpose, machines enslave men.”

The alternative, as Lee and his commune-inspired peers saw it, was to design and use computers in ways that met the needs of the collective entity we call humanity. On this basis, in 19736 Lee and a handful of like minds founded the counterculture7 group Loving Grace Cybernetics. The group, named after a poem8 by Richard Brautigan, posited a very different relationship between man and machine, “where mammals and computers live together in mutually programming harmony.” In practical terms, the group’s vision was that computer terminals should, and could, be made publicly available, for example situated in libraries and shops, and thus rendered accessible to all. Having set themselves the task of delivering on this nirvana, Lee and his colleagues set to work. The resulting project may appear mundane by today’s standards, but at the time was nothing short of revolutionary. Known as Community Memory, it involved two ASR-33 teletype (think: typewriter) terminals, situated in a record shop. These were linked to a huge XDS-940 computer installed in a warehouse downtown, which Lee also happened to be managing on behalf of another collective (known as Resource One).

The ability for ordinary people to send messages to each other at a distance was groundbreaking. The spirit of the times is captured in a message from one of the group’s main characters, known only as Benway9. Surely there can be no better place to chew Johimbe bark than out the back of Jody’s all-night pet shop…

IEF XQPRSTQXL SYSPRINT OFFSET INTERRUPT APPLIESTO: ALL BOOGIES, BEANERS, BOLOS & BOZOS ...... DOC BENWAY HERE .......... NURSE, SLIP ME ANOTHER AMPULE OF LAUDANUM .......... RECOLLECT ONCE ME AND CLEM CLONE WAS CHEWIN JOHIMBE BARK OUT BACK OF JODY'S ALL-NIGHT PET SHOP ....... NOT A FINER MAN IN THIS WHOLE ZONE THAN OL' CLEM 'N JODY CLONE ..... ****WHERE WAS WE, YEAH ---- USE AUTHORIZED DATA BASE ACCESS PROTOCOLS ONLY ..... SENSUOUS KEYSTROKES FORBIDDEN ..... DO NOT STRUM THAT 33 LIKE A HAWAIIAN STEEL GUITAR ..... GRAND CONCLAVE OF THE PARTIES OF INTERZONE: CHECK YOUR BOX FOR DETAILS..... PERSONAL ATTENDANCE REQUIRED; SEND NO REPLICA. BENWAY OUT. TLALCLATLAN ......

When it became clear that the Community Memory teletypes were not going to be flexible enough for broader use, it was on the same, fundamental principles of post-industrial emancipation that Lee started to design computer terminals he hoped might one day replace them. Named after the then-famous fictional teenage inventor, the Tom Swift terminal was designed and built in a spirit10 of low cost, openness and expandability. “Subordination of man to machine signifies a potentially disastrous tendency of technological development,” Lee wrote directly in the design specification. Not only this, but he incorporated one of the first documented uses of the term ‘open’ in modern computing: “This device… is ‘open ended’ with expandability possible in several dimensions… its operation is ‘in the open’, with a minimum of ‘black box’ components.”

The Community Memory project lasted only a year, but it served as ample demonstration of how people could use computers not just to run computational algorithms, but to facilitate communication and collaboration at a distance. In doing so, within the context of peace and love in which it took place, the project laid the philosophical foundations for many of the tools, and in particular the spirit of idealistic openness, that we see across today’s digitally enabled world. As technology historian Patrice Flichy wrote11, the project was “a utopia embodied in fledgling techniques in the mid-1970s on the fringes of the university world. This utopia combined two projects: an individual computer for all — the project for Apple and many others — and a communication network among equals.” And once this spirit had been created, it could not — and would not — be taken away.

Lee’s story doesn’t finish there. Based on his experiences with Community Memory, he went on to become an original member of the Homebrew Computer Club. Formed in 1975, the collective of maverick technologists boasted such members as Apple co-founder Steve Wozniak12 and John Draper, a.k.a. the notorious telephone hacker, Captain Crunch. The club was founded by peace activist Fred Moore, who had himself staged a fast on the steps of Sproul Hall back in 195913, long before the notion of student protest had become popular. The notion of the techno-collective was a driving force in its creation, as Lee himself said, “In order to survive in a public-access environment, a computer must grow a computer club around itself.” In other words, for innovation to grow and serve the broader community, it needs an impetus that only those outside the establishment can provide.

This relationship between computers and community-driven, socially minded connectivity has run in parallel with the engineering innovations of the computer revolution. From using software distribution mechanisms such as Unix-to-Unix Copy (UUCP14) as a social channel back in 1978, people have often turned innocuous software tools into collaboration mechanisms. In parallel with the creation of messaging systems for military use (the first email sent over the US Defense Department’s ARPANET was back in 197115), communitarians have explored ways to use technological mechanisms as ‘social channels’: the advent of dial-up bulletin boards, email, news and ‘chat’ protocols have all been used to get the message through. This has been to the continued concern of the authorities, sometimes with reason, as such channels have harboured some of the less salubrious elements of human culture, but at other times simply as a means to suppress debate.

Similarly, each generation appears to have needed its counter-corporate collective. The 1980s saw the creation of the Whole Earth 'Lectronic Link (The WELL16) bulletin board system – its name derived from an eco-friendly mail order catalog set up in 1968 by one of Lee Felsenstein’s contemporaries, Stewart Brand. It is no coincidence that personal computing, typified by Intel’s x86 processor range, the Apple II computer and IBM PC, was at the time breaking through a threshold of both processor power and cost, putting computer equipment into the homes of many. Thanks to general advances in electronics, modems reached a point of affordability at around the same time. Connected computers were nonetheless still a home-brew affair, with instruction sheets often little more than a poor photocopy. Indeed, given that much advice was available ‘online’ — for example from bulletin boards — it made for a certain amount of bootstrapping, with people first learning the basics from friends before having enough knowledge to connect to more detailed sources of information. While communications were rudimentary, the resulting set-up enabled a much wider pool of people — largely geeks — to communicate with each other. As computers and modems became affordable, so did the Community Memory vision.

By the late 1980s the social genie was well and truly out of its bottle. The Internet had expanded way beyond its military-industrial complex beginnings and out of academia, bringing with it email lists and simple forums. For the authorities however, the consequence was a series of ham-fisted law enforcement incidents, driven by concern but coupled with a lack of understanding of what technology brought to the world. One victim was WELL member John Barlow (also a lyricist for the Grateful Dead), who coined the term ’electronic frontier’ to describe the relationship between old thinking and what he saw as a brave new world. In May 1990 Barlow was interviewed by Agent Baxter of the FBI about the alleged theft (by someone under the name of NuPrometheus) of some source code from the Apple ROM chip. “Poor Agent Baxter didn't know a ROM chip from a Vise-grip when he arrived, so much of that time was spent trying to educate him on the nature of the thing which had been stolen. Or whether ‘stolen’ was the right term for what had happened to it,” said Barlow. “You know things have rather jumped the groove when potential suspects must explain to law enforcers the nature of their alleged perpetrations.”

The following month Barlow co-founded the Electronic Frontier Foundation (EFF17) with Mitch Kapor, previously president of Lotus Development Corporation, who had contacted him with some concern about his experiences. Kapor literally dropped by on his private jet, explains Barlow: “A man who places great emphasis on face-to-face contact, he wanted to discuss this issue with me in person.” To the pair, the situation was very, very serious indeed: in what amounted to his manifesto, Barlow cited18 the words of Martin Niemöller about Nazi Germany, “Then they came for me, and by that time no one was left to speak up.”

And so the pattern of collectives representing the little guy continued to grow throughout the 1990s and into the new millennium. In recent memory we have seen the creation of the World Wide Web with its own online forums, then Web 2.0, the interactive Web with the kinds of social tools we now see as standard — blogs, then wikis, then microblogging and social networking. Some might say that Google, Facebook and Twitter have become mass market tools, themselves the corporate face of interactive technology, social shopping malls funded by brands and occupied by our consumer personas; that the Community Memory vision has been realised, albeit accepting that it is in large part controlled by corporate interests. Numerous recent events suggest that the use of technology cannot be contained by any institution, however. From the use19 of the encrypted BlackBerry Messenger tool in the UK riots, to Twitter’s role20 in the Arab Spring, the use of technology to communicate without authoritarian interference is alive and well. To echo a line from New York Times journalist John ‘Scoop’ Markoff21, “A small anarchic community of wireheads and hackers made the mistake of giving fire to the masses. Nobody is going to give it back.”

Of course the potential exists to use any communication technique to ill effect — whereas an individual might once have required a face to face meeting to commit fraud (or far, far worse), they can now do so remotely, to far greater effect. Equally, however, many believe that such incidents are used as a pretext by governments looking for control. In 2014 for example, the Ukrainian authorities sent scaremongering SMS texts22 to people near a demonstration in Kiev: "Dear subscriber, you are registered as a participant in a mass riot,” the message read, much to the consternation of both recipients and telecommunications companies. And even with the revelations coming from whistleblowers such as Wikileaks and Edward Snowden, the debate between the authorities and the people continues to rage on the rights, wrongs and downright complications of allowing governments to have back doors into encrypted communications.

What such attitudes fail to take into account is that it would be as easy to lock away the sea. Online communities continue to thrive — websites like 4chan and Reddit still flourish today, the latter no stranger to controversy23 as it looks to strike a balance between free speech and community rules. Meanwhile maverick, ‘hacktivist’ groups such as Anonymous continue to work outside24 existing legal frameworks around the use of technology for collaboration, such as they are, acting very much as the inheritors of a mantle created some fifty years ago. Taking matters even further, perhaps the most underground of all movements is the Dark Web25, an alternative, encrypted Internet occupied by drug pushers, hitmen, file sharers and, indeed, civil rights activists. The Dark Web is only accessible via anonymisation mechanisms such as The Onion Router. Not without irony, the service known as TOR was itself created in the 1990s for military use, at the United States Naval Research Laboratory. When the code was released for general use in 2004, it was Barlow and Kapor’s EFF that provided continued funding. And of course, the US and other governments continue to explore ways of monitoring26 the population of the TOR-enabled underbelly of the Web.

Just as necessity is the mother of invention, so government interventions appear to drive technology innovations. And equally, just as so often in the past, governments will continue, rightly, to look to control criminal activity, but they are in danger of suppressing free speech in the process. While this debate will run and run, the dialogue between innovation and community looks like it will continue into the future. Indeed, as we shall discuss, it becomes more than a mainstay of the future: it is the future of technology and our role in it.

We shall return to these topics. But first let us consider another consequence of the digital revolution: mountains, and mountains, and mountains of data.


  1. http://www.americanrhetoric.com/speeches/mariosaviosproulhallsitin.htm 

  2. https://books.google.co.uk/books?id=mShXzzKtpmEC 

  3. http://www.leefelsenstein.com/?page_id=50 

  4. http://alumni.berkeley.edu/california-magazine/summer-2014-apocalypse/radicals-revisited-eyewitnesses-berkeleys-free-speech 

  5. http://www.amazon.co.uk/Tools-Conviviality-Ivan-Illich/dp/1842300113 

  6. http://en.wikipedia.org/wiki/Community_Memory 

  7. http://www.amazon.co.uk/Counterculture-Cyberculture-Stewart-Network-Utopianism/dp/0226817415/ref=pd_luc_top_sim_01_01_t_lh 

  8. http://allpoetry.com/All-Watched-Over-By-Machines-Of-Loving-Grace 

  9. http://www.well.com/~szpak/cm/benway.html 

  10. http://www.leefelsenstein.com/wp-content/uploads/2013/01/TST_scan_150.pdf 

  11. https://books.google.co.uk/books?id=icwUrZavhHQC 

  12. http://www.atariarchives.org/deli/homebrew_and_how_the_apple.php 

  13. http://www.nytimes.com/2000/03/26/business/a-pioneer-unheralded-in-technology-and-activism.html 

  14. http://en.wikipedia.org/wiki/UUCP 

  15. https://www.cs.umd.edu/class/spring2002/cmsc434-0101/MUIseum/applications/emailhistory.html 

  16. http://www.computerhistory.org/timeline/?year=1985 

  17. https://www.eff.org/about/history 

  18. https://w2.eff.org/Misc/Publications/John_Perry_Barlow/HTML/crime_and_puzzlement_1.html 

  19. http://www.theguardian.com/uk/2011/dec/07/bbm-rioters-communication-method-choice 

  20. http://www.washington.edu/news/2011/09/12/new-study-quantifies-use-of-social-media-in-arab-spring/ 

  21. http://www.edge.org/digerati/markoff/markoff_chapter.html 

  22. http://www.theguardian.com/world/2014/jan/21/ukraine-unrest-text-messages-protesters-mass-riot 

  23. http://www.pcworld.com/article/2934532/reddit-bans-five-groups-on-its-site-for-harassment.html 

  24. http://www.mirror.co.uk/news/world-news/sandra-bland-anonymous-release-video-6147333 

  25. https://en.wikipedia.org/wiki/Dark_Web 

  26. https://pando.com/2014/07/16/tor-spooks/ 

4.4. Opening the barn doors - open data, commodity code

At 4.53pm on Tuesday, 12 January 2010, an earthquake measuring 7.0 on the Richter scale struck Port-au-Prince, the capital of Haiti. The event was a low blow — the country was already struggling1 with the effects of abject poverty, political unrest and a series of natural disasters. But nothing prepared the islanders for the effect of the earthquake. Thousands of buildings collapsed almost immediately2, roads were damaged beyond repair and electrical power was completely lost, leaving the city and surrounding area vulnerable to the encroaching darkness. Many thousands died and still more were buried; survivors dug at the rubble with their bare hands, in often-vain attempts to free those still trapped. Initial estimates of 50,000 dead continued to rise.

The world community watched from afar as sporadic reports started to reach them, checking phones and social media for updates from loved ones. As so often, many people, including a diaspora of Haitians, sat paralysed, in the knowledge that they could do little other than check news feeds and send money to aid agencies. One group in particular realised it didn’t have to sit on its hands in horror, however. The day after the earthquake, members of the Open Street Map online community turned their conversations3 from talk about GPS issues, cycleways and multi-polygon tags, towards how their expertise might help those dealing with the earthquake’s aftermath — the truth was, maps of Haiti had never been that good. “You have likely heard about the massive quake that has hit Haiti. OpenStreetMap can contribute map data to help the response,” wrote software developer Mikel Maron4. Replied geologist Simone Gadenz, “Dear all OSMappers interested in the Haiti_EQ_response initiative. Is there any coordination?” Mikel responded by instigating a conversation on the group’s Internet Relay Chat (IRC) channel.

Over the days that followed, the OSM community got to work, building an increasingly clear picture of the devastation and its consequences. In the first three days alone, hundreds of people made some5 800 changes to the mapping data, initially gleaning data from Yahoo! imagery and old CIA maps, and then from newly taken, higher resolution aerial photos which provided not only better detail on roads and geological features, but also the locations of emerging camps of displaced people — as requested by aid agencies, who were themselves co-ordinating efforts via another online community, the Google group CrisisMappers6.

As the resulting maps were far richer and more generally accessible than those available before the earthquake, they quickly became the main source of mapping information for groups including not only local organisations, aid agencies and search and rescue teams, but also the United Nations and the World Bank. As a consequence, the efforts of distant expertise resulted in a number of lives saved. The mapsters’ efforts did not end there: a year after the earthquake, a local group, Communauté OpenStreetMap de Haiti, was set up7 to continue the task of developing better maps by, and for, the Haitian people, with particular focus on aiding the response to the cholera outbreak that occurred following the earthquake.

It’s not just Haiti that has benefited from the life saving efforts of groups such as Open Street Map. The responses to the crises in Somalia and Gaza, the nuclear disaster in Fukushima and others have all benefited from similar initiatives. “The incredible efforts following the Haiti earthquake demonstrated a huge potential for the future of humanitarian response,” wrote8 mapper Patrick Meier at National Geographic. But what exactly needed to be in place in order for such a group as OSM to even exist? The answer lies in the notion of open data, or more broadly, open computing. This has a long heritage: when maverick engineer Lee Felsenstein, whom we met earlier, first used the term ‘open’, he was reacting to the nature of computing at the time. From his perspective, computers were monolithic and impenetrable, owned and rented out by equally monolithic and impenetrable corporations.

No wonder that he and his colleagues worried about being slaves to the machine. This philosophy of openness did of course grate with anybody who was quite happily making money out of selling software, not least Bill Gates. Microsoft was always different, as illustrated by the letter Gates sent to the Homebrew Computer Club back in 1976. “As the majority of hobbyists must be aware, most of you steal your software. Hardware must be paid for, but software is something to share. Who cares if the people who worked on it get paid?” he wrote9. But little did Lee or anyone else know at the time that the dominance of grey corporations would wane, in response to a new breed of upstarts who took it upon themselves to take it to the man. His own friend and colleague Steve Wozniak, for example, who co-founded Apple with Steve Jobs; or Scott McNealy of Sun Microsystems; or indeed Bill Gates, all of whom decided they could do a better job than the entrenched, incumbent computer companies. “They were just smart kids who came up with an angle that they have exploited to the max,” writes10 Bob Cringely.

By the mid-1980s, a key battleground was Unix, with many computer hardware companies including Sun, Hewlett Packard and IBM offering similar, yet subtly different, proprietary versions of the operating system. For some, including MIT-based programmer Richard Stallman, enough was enough. Deciding that access to software was tantamount to a human right, Stallman set about creating an organisation that could produce a fully free version of Unix. “GNU will remove operating system software from the realm of competition,” he stated in his 1985 manifesto11. “I have decided to put together a sufficient body of free software so that I will be able to get along without any software that is not free.”

Stallman and his merry men’s efforts were licensed under the GNU General Public License, which required12 any software that was created from it to be licensed in the same way. In addition, any software released to the general population had to be provided as both binaries and source code, so that others could also understand and modify the programs. And thus the foundations of what would become known as ‘open source’ were laid. While progress was slow due to lack of funding and general mainstream disinterest, GNU’s efforts resulted in a nearly complete version of Unix, with all the programs associated with it apart from one: the ‘kernel’. That is, the very heart of the operating system, without which nothing else can function.

In 1991, when Nordic student Linus Torvalds started developing a Unix-like system for the IBM PC as his university project, the potential impact of his efforts eluded both the young Finn and the proprietors of the major computer companies of the time. Neither did Linux appear as much of a competitive threat three years later, when the software was first officially released. By coincidence, 1994 was also the year that Michael Widenius and David Axmark kicked off a project to build an open source relational database package, which they called MySQL. And, by another fortuitous coincidence, it was the year in which the W3C World Wide Web standards consortium was formed. Tim Berners-Lee’s first Web server, a NeXT machine, had run a variant of Unix — and the OS was the logical choice for anyone else wanting to create a Web server. Demand for Unix-type capabilities increased rapidly as the Web itself grew, but from a pool of people who were not that willing or able to fork out for a proprietary version. Over time, more and more eyes turned towards Linux.

This would have been less of an event had it not been for another, parallel battle going on in the land of the corporates. Intel processors — traditionally those used in personal computers — were becoming more powerful, as predicted by Moore’s Law. As a consequence Sun Microsystems, Hewlett Packard and others were losing out to servers based on Intel or AMD hardware. These were cheaper simply because of the different business models of the companies involved: the former used to charge as much as they could, whereas the latter were looking to sell a greater volume. The overall consequence was that the Web had a much cheaper hardware base and a free, as in both13 speech and beer, operating system. Almost overnight, Linux (and other open source Unix versions) moved from the realm of hobbyists to having genuine mainstream appeal — as did software built on the platform, such as MySQL. The scene was set for more: in 1996 the Apache Web server followed, with the Perl scripting language soon after. And thus, the LAMP stack was born.

Still the corporations fought, but as they did so they unwittingly played into the hands of what was essentially becoming a commodity. Microsoft was also interested in becoming the operating system of choice, developing its embrace, extend and extinguish tactics14 to damage the competition. In 1998 Netscape, itself crippled by Microsoft, decided to open source its browser software, forming Mozilla and eventually creating Firefox as a result. It is no surprise that added impetus to the open source movement came from Microsoft, or at least the reaction to its competitive dominance. The company had been taken to the cleaners by the European Union and the US authorities, due to repeated abuses of its monopoly position.

But perhaps the biggest damage was self-inflicted as, try as it might, it could not slow the momentum of the growing open source movement. The challenge was exacerbated as Microsoft’s competitive targets — such as IBM, Hewlett Packard and indeed, Sun Microsystems — realised they could use open source not only as a shield, but as a sword. Even as they began to adopt Linux and other packages to remove proprietary bottlenecks on their own businesses, they used open source to undermine Microsoft’s competitive advantage. The fact that much funding of Linux came from IBM has to be scrutinised, not least because there was plenty of money to be made from selling consulting services. As Charles Leadbeater wrote in his book We Think15, “Big companies like IBM and Hewlett Packard make money from implementing the open source software programmes that they help to create.”

With every boom there is a bust. And indeed, as the company that had done best out of the dot-com boom became one of the most significant victims of the dot-bomb, it turned to open source for its salvation. Sun Microsystems spent a billion dollars acquiring a number of open source companies including MySQL, as then-executive Jonathan Schwartz threw the company’s hat fully into the ring. It was not enough to save the company, which fell by the wayside as the bottom fell out of the e-commerce market. But still, it helped cement the position of open source as software that was suitable for mainstream use. To quote Charles Leadbeater: “Google earns vast sums by milking the web’s collective intelligence: never has so much money been made by so few from the selfless, co-operative activities of so many.”

The overall consequence was that software could indeed be delivered free, but this has had commercial ramifications. Open source is no magic bullet, but rather a metric of how software (and indeed hardware architecture, as illustrated by Facebook’s efforts to open source computer designs) is commoditising. Not every open source project has been a success — indeed, ‘open-sourcing’ code has sometimes been seen as a euphemism for a company offloading a particular software package that it no longer wants to support. But many open software projects persist. As a result of both corporation and community attitudes, of competitive positioning and free thinking, we have seen a broad range of software packages created, to be subsumed into a now-familiar platform of services. The LAMP stack is now the basis for much of what we call cloud computing, aiding the explosion in processing that we see today. And when Doug Cutting first conceived Hadoop16, he released it as an open source project, even though he was working for Yahoo! at the time — had he not, it might never have achieved its global levels of success.

The main lesson we can learn from open source is that proprietary software cannot do everything by itself — and nor should it, if the model creates an unnecessary bottleneck for whatever reason. To understand ‘open’, you first have to get what is meant by ‘closed’ — that is, proprietary, locked away, restricted in some way — which brings us to another area that has gone from ‘closed’ to ‘open’ models, that of data. As we have already seen, we have been creating too much of the stuff, far more than any one organisation can do anything with. At the same time, data has been subject to the same communitarian desires as were seen in the computer hardware and software domains. In this case, responsibility lies with the scientific community: back in 1942, Robert King Merton developed a number of theories on how science could be undertaken, notably the principle that scientific discovery (and hence the data that surrounded it) should be made available to others. “The institutional goal of science is the extension of certified knowledge,” he wrote17. “The substantive findings of science are a product of social collaboration and are assigned to the community. They constitute a common heritage in which the equity of the individual producer is severely limited.”

Such an attitude continues to the present day in the scientific community — the UK’s EPSRC sees it as very important that “publicly funded research data should generally be made as widely and freely available as possible in a timely and responsible manner,” for example. However it has taken a while for broader industry to catch up. Indeed, it wasn’t until 2007 that an activist-led consortium met in Sebastopol, California to develop ideas specifically aimed at ‘freeing’ government data. Among them were publisher and open source advocate Tim O’Reilly; law professor Lawrence Lessig, who devised the Creative Commons licensing scheme for copyrighted materials; and Aaron Swartz, co-author of the Really Simple Syndication (RSS) specification. Together, they created a set of principles that ‘open’ data should be18: complete, primary, timely, accessible, machine-processable, nondiscriminatory, nonproprietary and license-free. The goal was to influence the next round of presidential candidates as they kicked off their campaigns, and it worked: two years later, during President Obama’s first year in office, the US government announced19 the Open Government Directive, and launched the data.gov web site for US government data.

As the closed doors to many data silos were knocked off their hinges, one can imagine the data itself heaving a sigh of relief as, finally, it was released into the wild. Almost immediately however, the need for a standardised way of accessing such data became apparent. The Extensible Markup Language (XML) was a logical choice; but over time interest moved to another, slightly simpler20 interchange format known as JSON, as originally used by JavaScript for Web-based data transfers. And so, it became understood that anyone wanting to open up access to their data should do so by providing a JSON-based application programming interface, or API. Such interfaces became first de facto, and then de rigueur, for anyone wanting to create an externally accessible data store.
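
As a minimal sketch of what this looks like in practice, consuming a JSON-based open data API needs little more than a URL and a decoder. The endpoint below is hypothetical; every real service defines its own paths, parameters and field names.

    import json
    import urllib.request

    def fetch_json(url):
        """Retrieve a URL and decode its JSON body into Python objects."""
        with urllib.request.urlopen(url) as response:
            return json.load(response)

    # Hypothetical open-data endpoint returning a list of records.
    stops = fetch_json("https://data.example.gov/api/bus-stops?borough=Hackney")
    for stop in stops[:5]:
        print(stop.get("name"), stop.get("latitude"), stop.get("longitude"))

The same few lines work against any endpoint that follows the pattern, which is a large part of why such interfaces spread so quickly.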

The consequence of doing so has been dramatic. When Transport for London opened up information on its bus routes, app developers were able to create low-cost mobile applications which stimulated use of the buses. Says Martin Haigh at the UK Land Registry, “We used to sell data, but now we just make it accessible.” Experienced more broadly, this positive feedback loop has led to a fundamental shift in how software creators perceive their worth. Just about every modern web site that has anything to do with data, from sports tracking sites such as Strava or MapMyRun, to social networks like Facebook or Twitter, and indeed resource sharing sites like AirBnB and Uber, offers an API to enable others to interact with their ‘information assets’. Indeed, this business shift even has a name — the API Economy describes not only the trend, but also the opportunity for new business startups that can capture part of what has become an increasingly dynamic market. The expectation has flipped, too: it would now be perceived as folly to launch any such online service without providing an ‘open’ API. And we have not seen the end of it, as the drive towards increasingly interconnected, ‘smarter’ devices generates still more data, much of which is stored and then made widely accessible to the community, through services such as Xively, which maps use of electricity and other resources.

While neither software nor data asked to be open, they each have reasons to be so. In software’s case, the dual forces of commoditisation and commercialisation required a balance to be struck between communal benefit and corporate gain; and for data, the public drive for transparency has been coupled with a business reality that third parties can exploit data better than any one individual owner. The result is an interplay between big-business, top-down approaches, and start-up, bottom-up approaches. It has become very easy to create a startup, as the barriers to entry have fallen so low. Indeed, as the Law of Diminishing Thresholds recognises, it is becoming very easy to do just about everything you might want.

4.5. The return of the platform

Some elements of the history of technology read like ‘Believe It Or Not’ stories. Believe it or not, it was indeed a Transylvanian travel agent called Tivadar Puskás who devised1 the2 telephone exchange, back in 1876. He took his ideas first to Thomas Edison, at his research facility in Menlo Park, New Jersey (one doesn’t need to wonder who booked the passage) and, subsequently, deployed the first exchange for Edison in Paris. And then, believe it or not, it was a schoolteacher-turned-undertaker from Kansas City, Almon B. Strowger, who came up with the idea3 of the automated telephone switch, some fifteen years later — which of course led to the leaps of mathematical brilliance made by Konrad Zuse and Alan Turing. You really can’t make this stuff up.

The relationship between computer smartness and human smartness, as illustrated by these blips in a decades-long sequence of minor epiphanies and major breakthroughs, is too important to ignore. At each juncture, individual endeavour has frequently given way to corporate objectives, as the opportunity to monetise each breakthrough has become all too apparent. Frequently (and right to the present day) this has involved some level of disruption to whatever went before, as old intermediaries have been replaced by new, or discarded altogether — a recurring theme since the telegraph replaced the penny post, right up to e-commerce sites displacing travel agents, and indeed beyond. But even as computers become more powerful and one set of business models is replaced by another, the nature of the coastline paradox means complexity continues to win. Even today, with the best will in the world, the ‘smartest’ companies we know — Google, Amazon and the like — are harvesting data rather than really exploiting its full potential. And even capabilities such as Facebook and Twitter, clever as they are, are based on relatively straightforward ideas — of getting messages from one place to another.

This is not necessarily a bad thing. While technology has continued to advance during the 1980s and 1990s, a significant proportion of the effort has gone into making it more of a tool for the masses. It is not coincidental that the hypertext4 ideas behind Web links were conceived by self-styled5 “original visionary of the World Wide Web” Theodor Nelson in 1967 (“Arguably, the World Wide Web is derivative of my work, as is the entire computer hypertext field, which I believe I founded,” he said), nor that the computer mouse was designed by Douglas Engelbart in 1968, nor that the Internet’s default operating system, Unix, was written in 1969. Simply put, they were good enough for the job, even if they had to be patient before they achieved their full potential. This corollary of the Law of Diminishing Thresholds, which we might call the Law of Acceptable Technological Performance — in that some ideas are simple enough to be correct — could equally be applied to the Simple Mail Transfer Protocol, or indeed the Ethernet protocol, both of which were seen as less reliable than other options (X.400 and token ring respectively). They, too, were good enough. And as things become good enough, they commoditise and become generally accepted. And as they do, they become more affordable, as they are shared. A Netflix subscription costs so little, for example, because so many subscribers are sharing the cost of each film.

Technological progress has been equal parts mathematical and scientific innovation, corporate competition, and community-spirited free thinking and downright rebellion, each keeping the flywheel of progress spinning. This ‘magic triangle’ has created the quite incredible foundation of software upon which we are now building. Today there’s lots to build upon. Above all we have an infrastructure, we have a growing set of algorithms, we have an API economy and an innovation-led approach — in other words, we have our foundation for smart. The foundation itself has been getting smarter — smart orchestration algorithms provide the basis for the cloud, and as we have seen, massively scalable open source data management tools such as Redis and Hadoop have offered a more powerful replacement for traditional databases. And it will continue to do so. Despite diminishing thresholds, and whether or not one believes that we are heading towards a climate change disaster, the fact is that the future will be highly resource-constrained while involving far more powerful compute platforms than today’s, coupled with a continuing appetite for network bandwidth, CPU cycles and data storage.

The impetus is there, we are told by economists who look at changing global demographics and the pressure this puts on global resources, pointing at diminishing reserves of fossil fuels and rare metals. Note that the former power our data centres and devices, and the latter play an essential part in electronics of all kinds. Indeed, the computing platform itself could be heading towards what we could call the orchestration singularity — the moment at which computers, storage, networking and other resources can manage themselves with minimal human intervention. If this happened, it would change the nature of computing as we know it for ever. While we are a long way from this, the near future is unassailably one of a globally accessible substrate of resources, controlled by a dynamically reconfiguring stack of clever software. This is the platform upon which the future shall be built.

But what can we do with such a platform of technological resources? Therein lies the rub. The answer to the question is evolving in front of our eyes as, outside of technology infrastructure, the algorithm is growing up. Software already exists to enable us to communicate and collaborate, to buy and sell, to manage ourselves and the things we build and use. The platform is itself increasing in complexity, going beyond simple creation, manipulation and management of data. Historically, such capabilities were built or bought by companies looking to do specific tasks — payroll for example, or managing suppliers and customers. But even these packages are commoditising and becoming more widely accessible as a result.

Are we destined for another century of automation, or will computers become something more? All eyes are on the ultimate prize — to replicate, or at least run in a similar way to, the most powerful computer that we know, a.k.a. the human brain. Back in 1945 John von Neumann stated, with remarkable prescience, “It is easily seen that these simplified neurone functions can be imitated by telegraph relays or by vacuum tubes,” kicking off another theme which has repeated frequently ever since. Science fiction authors have been all over it of course, moving their attention from intelligent aliens to thinking robots or mega brains, such as Douglas Adams’ Deep Thought or Iain M Banks’ creations. In the world of mathematics and science, experts like M. Huret Senior led the vanguard of forays into Artificial Intelligence, with luminaries like Patrick Henry Winston6 setting out how rules-based systems would support, then replace, human expertise. Largely due to the inadequate computing power of the age, however, the late 1970s saw reality simply unable to deliver on the inflated expectations of the time. Artificial intelligence was the original and best “idea before its time” and the hype was not to last. Computers simply weren’t powerful, nor cost-effective enough to deliver on the level of complexity associated with what we are now calling machine learning.

Repeated attempts have been made to deliver on the dream of thinking computers. In the 1990s, neural networks — which could learn about their environments and draw increasingly complex inferences from them — were the thing. Today, computer algorithms are a core part of the tools used for financial trading on the stock markets, and as we have seen, big data analytics software can identify a fair number of needles in the gargantuan haystacks of data we continue to create. Will the platform itself become recognisably smart? Mike Lynch thinks so, and new concepts such as Precognitive Analytics7 show a great deal of potential. But this does not mean that the old, ‘analog’ ways are done with, far from it. For the foreseeable future, the world will remain more complex than the processor power available to model it. While we are only scratching the surface of technology’s potential, we are also only scratching the surface of the complexity we are dealing with. But equally, it doesn’t have to be finished to be started. While we may not be at the brink of intelligence, we currently see the results augment, rather than supplant, our own abilities.

We shall look at the longer term potential of technology soon, but right now we are back with people. Here’s a thought experiment: what if everyone suddenly had access to every computer in the world? First of all, they probably wouldn’t know what to do with it all – so people would create ways of using it, put themselves in the middle, and charge for the privilege. Which is exactly what is going on. We can see a small number of major corporations building walled gardens, in which they attempt to lock their users into certain ways of working. We can see governments panicking and trying to control everything, even while using technology. No doubt we will face standards wars and vendor isolationism, new global entities emerging from nowhere (my money’s on a subset of the cloud orchestration companies), calamitous security events and threats of government intrusion.

And we shall also see the common man struggling to make sense of it all, even as his or her very behaviours are transforming. We’re all as confused as each other: like the scene in the Matrix, when one agent says to another, “He is only…” and the other replies, “…Human.” In the next section we consider just how profound the impact of technology has been, and the ramifications — positive and negative, though rarely indifferent — on ourselves, our daily and working lives, in our communities and around the globe.

To do so we first, once again, return to ancient Greece. Enter: Demosthenes.

5. Society is dead, right?

A conductor raises his baton, and a hush pervades the space behind him. You could hear a pin drop. The first, isolated notes of a violin appear, soothing yet probing, before they are joined by a broader range of instruments. As the sound builds, the harmonies and rhythms touch some deep part of our brains to trigger emotions, memories and even physiological responses - heart rates quicken and slow, muscles tighten, hackles rise and fall.

Behind these complex stimuli lies a complex orchestration of sounds and silence, as even the spaces in between the notes drive a reaction. Without the conductor and the solitary, flicking dance of his baton, the orchestra would struggle; with nobody to keep the beat, timings would start to diverge resulting in, if not a musical train wreck, an altogether muddier sound with less definition. The differences, which might start as microseconds, would quickly grow to the point where the human ear can recognise them.

Even further back, however, exists an even more complex web — of materials, of craftsmanship, of tuning, of purposeful practice, of intuition, some of which may have taken years to perfect. All such moments come together on the night to form what the audience sees and hears, ultimately driving the experience. One weak player, one poorly maintained instrument could have profound consequences for the whole performance. But this night, for reasons nobody can explain, the result is sublime.

In the same way, today’s startups and innovators are building on a complex yet powerful platform, each looking to steal a march on the competition. The winners will be those who delight their audiences, even if for a moment, and they will be rewarded for the value they give.

5.1. Talking 'bout a revolution

The ancient city of Sravasti, nestled at the edge of the Himalayan mountains in northern India, was a favourite haunt of Nepalese-born Siddhārtha Gautama Buddha. For two decades he came to a monastery just outside the city for his summer retreat; at one side of the monastery was a grove, where he would, on occasion, preach to the other monks present. Sometimes over a thousand would gather to listen to his words.

On one particular occasion he chose to join the other monks in gathering alms, going into the city with a bowl to do so before returning to the cushioned areas of the grove. On his return one of his more revered disciples, known as the venerable Subhuti1, asked him a series of questions. “World-Honored One, if sons and daughters of good families want to give rise to the highest, most fulfilled, awakened mind, what should they rely on and what should they do to master their thinking?” asked Subhuti. As the Buddha gave his answers, they were documented in a short book, for the benefit of all those who could not be present. The title of the book compared the words of wisdom with the perfection and accuracy of a diamond. For centuries the Diamond text, or Sutra2, was painstakingly copied by hand.

Until, that is, some bright spark had the idea of carving the words into blocks of wood, which could then be coated with a layer of ink and pressed against a sheet of material. In doing so, of course, he or she unknowingly changed the world. As we saw with Darius the Great, the history of human conquest has been limited by the ability of those in power to get their point across, and then to distribute it over long distances. It is unknown who first ‘invented’ the woodcut – but the practice of making such printing blocks developed at similar times in both Egypt and China, in the first few centuries AD. The earliest books we know about date from the seventh and eighth centuries AD — such as the Diamond Sutra itself. Without it, we might have known some vague tales about a historical figure who was renowned for his wisdom. With it, we know what he had for breakfast.

Woodcuts signalled the arrival of mass production, and the beginnings of literacy. Before them, the only way to get a message to the masses was to copy it wholesale, then distribute it as widely as possible via military staff, priests and town criers, all of whom required the ability to read. While woodcuts may have become prevalent in what the West knows as the East, they took a long time to reach Europe. In 1215 for example, when the Magna Carta was agreed between the King of England and two dozen of his barons, a team of scribes painstakingly copied out the document so it could be sent to the key population centres of the nation. Even once distributed, however, messages would not always reach an able audience: the scarcity of scribes went hand in hand with limited literacy among the general populace — abilities which religious and secular authorities often wanted to keep to themselves.

In consequence the two notions — supply of written materials, and demand through general literacy — went hand in hand. For writing to reach the broader population required a significantly more efficient mechanism of mass production. The principle of arranging blocks of letters and characters in a frame was not that massive an extrapolation from carving out entire pages (indeed it fits with the first two characteristics of Perl creator Larry Wall’s remark that “The three attributes of a programmer are impatience, laziness and hubris.”) All the same, the idea of moveable type took many more years to develop. It was first identified as an idea in China and Korea, but it was not until the late middle ages, when Gutenberg developed the printing press, that it really ‘hit the mainstream’ in modern parlance. Gutenberg’s story was like that of the world’s first technology startup — he was short of money and his ideas were challenged at every turn.

But persevere he did, in doing so assuring that the world would never be the same again. Gutenberg’s press immediately solved the problem of having one painstaking process causing a bottleneck to many others. While Roman historian Tacitus’s writings could only3 reach a limited audience, diarist Samuel Pepys enjoyed the advantage of printing to spread his message. Quite suddenly, there was no limitation on spreading the word, a fact jumped upon by individuals who felt they had something to say. Names we may never otherwise have heard of, such as Witte de With4, gained notoriety as pamphleteers — they could express their views to a previously inaccessible audience at relatively low cost, their paper-based preachings becoming a catalyst for establishments set up for discussion and debate (and good coffee) such as Lloyd’s of London, spawning whole industries (in this case, insurance) in the process.

With Gutenberg, the genie of influence was out of the bottle. It is no coincidence that we see the development of mass literacy in the period that followed, as the written word found its place as a fundamental tool of communication among the masses. What with pamphlets such as Shelley’s5, “The necessity of atheism,” one can see why the established Church sought to suppress such a devilish scheme. Once the telegraph also came onto the scene, either messages could be printed in bulk and passed out locally, or a single message could be transmitted to a distant place where it could be typeset and printed (incidentally, a precursor to today’s computer-based mechanisms that balance the constraints of the network with those of local storage). And the French and American revolutions were built on a trail of paper flyers from the likes of Maximilien Robespierre, Thomas Paine6 and Paul Revere. Indeed, alongside demonstrations and battles was fought a ‘pamphlet war’ in which opposing sides presented their views on paper. Historian Thomas Carlyle called7 the period 1789–95 ‘the Age of Pamphlets’, as authoritarian censorship was washed over by a wave of considered writing.

Once the advent of the telegraph did away with the challenges of distance, it was only a matter of time before the considered thoughts of anyone in the world could have an influence on anyone else. Online bulletin boards and Usenet News sites had their audiences, but these were largely closed: even the arrival of broader-market services such as America Online tended to hide debate away in hierarchies of conversation. The arrival of the Web started to change things of course, as suddenly the notion of pamphleteering could extend online. According to Wikipedia, one of the first8 ‘socially minded’ web sites was that of the human rights organisation Amnesty International, created in 1994. All the same, however, both cost and the need for technological literacy meant the threshold remained too high for all but a handful of individual voices.

One such voice was that of student Justin Hall, who was inspired by New York Times journalist John Markoff’s article9 on the Mosaic Web browser. “Think of it as a map to the buried treasures of the Information Age,” Markoff had written. Duly inspired, Hall built a web server for himself, and created his first web page — “Howdy, this is twenty-first century computing…” he wrote10. “(Is it worth our patience?) I'm publishing this, and I guess you're readin' this, in part to figure that out, huh?” Hall was by no means alone but, still, the ability to create HTML pages containing words and hyperlinks remained in the hands of a more technically minded few. Not least Jorn Barger, a prolific Usenet poster and creator of the Robot Wisdom web site, upon which he created a long list of links to content that he found interesting. Barger coined11 the ‘fundamental’ principle of signal-to-noise in posting online: “The more interesting your life is, the less you post… and vice versa.” While this doesn’t say much for his private life, he would not have been particularly uncomfortable among the coffee drinkers of old. He also, as it happened, invented the term ‘web log’.

Over time, an increasing12 number of individuals and companies started to maintain online journals, posting in them anything that took their fancy. By 1999, when the term ‘web log’ was shortened to ‘blog’ by programmer Peter Merholz13, they were becoming a phenomenon. Finally individuals, writing about anything they liked, were connecting directly with other individuals on a global basis. Blogs may only have been text- and graphics-based, but they gave everybody everywhere the power and reach of broadcast media. From the Web’s perspective, it was the equivalent of blowing the doors off, resulting in a generation of previously unheard voices suddenly having a say. Unlikely heroes emerged, such as Microsoft engineer Robert Scoble, or journalist and writer David ‘Doc’ Searls. In a strange echo of the established church in Gutenberg’s time, the established community of influencers (including the mainstream press) was none too happy about the rise of ‘unqualified’ opinion. Still, it wasn’t going anywhere — exploiting the sudden release of potential, web sites such as TechCrunch and Gawker came out of nowhere and became multi-million-dollar media businesses almost overnight.

The blogging model has diversified. Today, there are sites for all persuasions and models — for example, the question-and-answer site Quora, or the long-form content site Medium, not to mention YouTube, Soundcloud, Periscope and the rest. There really is something for everyone who wants to get a message out, so many options that it doesn’t make sense to consider any one in isolation. And indeed, mechanisms to create and broadcast opinion have become simpler and more powerful, with Twitter’s 140-character limit proving more of a challenge than a curse.

The downside for today’s Samuel Pepyses, of course, is that while they may feel they have a global voice, they are having to compete against millions of others to be heard. In this click-and-share world we now occupy, sometimes a message will grab the interest of others in such a way that the whole world really does appear to take it on — so-called ‘going viral’. In 2014 for example, the little-known Amyotrophic Lateral Sclerosis (ALS) Association invited supporters to have a bucket of iced water poured over their heads in the name of the charity. The campaign proved unexpectedly, massively popular, raising14 some $115m for the charity and resulting15 in sales of ice cubes increasing by 63% in the UK alone. And indeed, as in previous centuries, blogs and social tools are being used to harness public opinion, through sites like Avaaz and PetitionSite. Even with examples such as UK retailer Sainsbury’s ‘tiger bread’ loaves, which were renamed following a letter from a child (the letter was reprinted millions of times online, with obvious benefit to the company’s reputation), it is easy to be sceptical about such efforts, particularly given the potential for abuse (for a recent illustration look no further than Invisible Children’s Kony 2012 video, which was watched 100 million times on YouTube and Vimeo in the first month after it was uploaded; the problem was that its claims were later widely criticised as misleading) and the fact that for every success story, a thousand more fail to grab the imagination. Today’s caring populace has too many pulls on its valuable time, which means campaign sites are looking for more certainty before they launch. For example, Avaaz adopted a process of peer review, followed by testing suggestions on a smaller poll group, before launching campaigns on a wider scale. Good ideas percolate through, hitting targets of relevance, currency and emotional engagement before making the big time. And if they do hit the big time? Then the cloud can ensure that the sky's the limit.

The nature of influence extends beyond such examples, as the virality of online social behaviour can affect whole countries, or even regions. In the UK the public backlash against the News of the World newspaper took place online, perfectly illustrating how the relationship between printed and online media has changed beyond recognition. And the harnessing of popular opinion in the Arab Spring might not have been possible without social networks. Of the ousting of President Mubarak in Egypt, Ahmed Shihab-Eldin, a producer for the Al-Jazeera English news network, was quoted as saying, “Social media didn't cause this revolution. It amplified it; it accelerated it.”

Each time that a snowball effect occurs, millions of individuals contribute one tiny part but the overall effect is far-reaching. On the positive side, online influence enables power to the people and new opportunities for democracy — a good example of a referendum-driven society is Iceland’s constitution, which was devised and tested online. At the same time, online debate is instrumental16 in broadening views on gender, race and other issues. The outcome may not always be so positive, however: the incitement to riot and loot in the UK in 2011 was partially blamed on social media and the encrypted BlackBerry Messenger tool.

The even darker side of such minuscule moments of glory is that our actions may not always be entirely our own. Psychologists are familiar with how people follow crowds, and there is no doubt that online tools are enabling group behaviour as never before. Not least, we leave ourselves open to the potential for a kind of hive mind. A million tiny voices can drive good decisions, but can also yield less savoury kinds of group-think, which is as exploitable in reality TV as in incitement to riot. The mob should not always rule, as shown in the later stages of the French revolution.

One technique in particular – the information cascade17 – can lead us to respond in ways we wouldn’t if we actually thought about it. And in 2015, Facebook ran a controversial experiment to raise the prominence of ‘good’ messages in people’s news feeds. What the company found is that people were more likely to click on topics that reflected their own opinions, demonstrating a clear echo chamber effect. Could this, even now, be used to influence our thinking on social issues?
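
To make the cascade idea concrete, here is a minimal, purely illustrative sketch in Python of the textbook sequential-decision model; it is not drawn from the study referenced above, and the parameters are invented for illustration. Each simulated agent receives a noisy private signal, sees every earlier agent's public choice, and abandons its own signal once the public evidence leans far enough one way.

```python
import random

def run_cascade(n_agents=20, p_correct=0.7, true_state=1, seed=None):
    """Simulate a simple sequential information cascade."""
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        # Private signal: right about the true state with probability p_correct.
        signal = true_state if rng.random() < p_correct else 1 - true_state
        # Public evidence: how far earlier choices lean towards option 1.
        lead = sum(1 if c == 1 else -1 for c in choices)
        if lead >= 2:
            choice = 1        # cascade: follow the crowd, ignore the signal
        elif lead <= -2:
            choice = 0        # cascade in the other direction
        else:
            choice = signal   # evidence roughly balanced: trust the signal
        choices.append(choice)
    return choices

if __name__ == "__main__":
    # Once the lead reaches two, every later agent copies the crowd,
    # however good or bad their own information happens to be.
    print(run_cascade(seed=42))
```

Run it a few times with different seeds and the pattern is the same: after a short run of similar choices, individual information stops mattering, which is precisely why a small early nudge to a news feed can have an outsized effect.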

The notion of influence is also highly open to exploitation. Propaganda is at least as old as pamphleteering, so how do we know governments are not influencing social media right now? Obama’s election was massively helped by the careful use of social media: could this be put down to manipulation, or should we simply see it as “he who uses the tools best, wins”? Obama had the loudest megaphone with the best reach, for sure, but was it right that he won on the strength of being better at technology, rather than having a better manifesto? Social media is already being treated as an important tool in conflict situations such as Gaza18.

One group that is not waiting for the final answer is the media industry, which has its own journalistic roots in pamphleteering. The media only exists by nature of the tools available; indeed, the name itself refers to the different ways we have of transmitting information: each one a medium, more than one, media. And social networking is just the latest in a series of experiments around tools of influence. As press baron Lord Northcliffe remarked, “News is what somebody somewhere wants to suppress; all the rest is advertising.” Right now, media researchers are identifying how to accelerate and amplify for themselves, applying such techniques to achieve their own goals and those of their clients. An ever-diminishing boundary exists between such efforts and the manipulation of audiences, customers and citizens, and history suggests that if the opportunity for exploitation exists, some will take it.

It would be a mistake, however, to see corporations as faceless. The changes going on within them are as profound as those outside, as technology challenges not only how we live, but how we work.

5.2. The empty headquarters

“Well, in our country,” said Alice, still panting a little, “you'd generally get to somewhere else—if you run very fast for a long time, as we've been doing.”

“A slow sort of country!” said the Queen. “Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!”

Lewis Carroll, Through the Looking-Glass, 1871

A couple of years ago, UK mobile phone company O2 undertook an intriguing initiative — to keep the doors of its Slough head office closed for the day, tell people to work from home and see what happened. At the time, sustainability was all the rage — what with rising oil prices, all attention was on how companies could prove their green credentials. The headline statistics were impressive — 2,000 saved hours of commute time, which corresponded to £9,000 of O2 employees' own money; a third of the 2,500 staff claimed to be more productive; electricity consumption was reduced by 12 per cent and water usage by 53 per cent.

All in all, a thoroughly good advert for working from home. What nobody seemed to notice, however, is that nobody outside the company batted an eye. O2’s business did not fall, or even stumble. Even with the doors closed, for O2’s customers, suppliers and partners it was as if nothing at all had happened. Business took place completely as usual — raising the question: precisely what purpose did having a head office serve at all?

While technology is not the only reason why companies across the board can operate more flexibly, it certainly helps. As we have already seen (and as most, if not all readers have experienced for themselves), our mobile devices can connect, via the Internet, to virtually (no pun intended) anywhere. Whereas once people would have to go into the office to meet or indeed, to access a computer system, they can now do so from the comfort of their own home.

Even more significant, perhaps, has been the growing ability to use the cloud, a.k.a. “other people’s computers”, to perform processing tasks that previously would have had to take place on a company’s own computer systems. Over the past decade or so, purveyors of computer services have grown and evolved, from the Application Service Providers at the turn of the millennium, to what has come to be termed Software as a Service, or SaaS. Pretty much anything a company might want to do in software — managing customers or projects, keeping track of inventory, doing the accounts and so on — can be done using a Web-based package rather than installing something locally.

SaaS is very quick to get going: the low cost of entry, coupled with the minimal installation overhead compared to deploying an in-house application, makes it ideal for specific, one-off needs. And then some: a company can build an application on top of a business-class hosted back-end (such as Amazon Elastic Compute Cloud, Google App Engine or Microsoft Azure) and start delivering services to customers much more easily than 'the old way' of buying and installing servers, routers and all the other gubbins required.
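
To illustrate just how little is involved, here is a minimal, hypothetical sketch in Python of the kind of web service a startup might stand up on such a hosted back-end rather than on its own servers. It assumes the Flask library is available; the endpoints and data are invented for illustration, and in practice the inventory would live in a managed cloud database rather than in memory.

```python
# A toy inventory service of the sort that could be handed to a cloud
# platform's runtime instead of being installed on company-owned servers.
from flask import Flask, jsonify, request

app = Flask(__name__)
_inventory = {}  # illustrative only; a real service would use a hosted database

@app.route("/items/<name>", methods=["PUT"])
def add_item(name):
    # Store whatever JSON the client sends against the item name.
    _inventory[name] = request.get_json(silent=True) or {}
    return jsonify({"stored": name}), 201

@app.route("/items/<name>", methods=["GET"])
def get_item(name):
    if name not in _inventory:
        return jsonify({"error": "not found"}), 404
    return jsonify(_inventory[name])

if __name__ == "__main__":
    # Locally this runs on a laptop; on a cloud platform the same few dozen
    # lines are scaled on demand, with no servers or routers to buy.
    app.run(port=8080)
```

The point is not the code itself but the economics: the gubbins mentioned above becomes somebody else's problem, paid for by the month.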

This minimal-overhead approach is ideal for anyone wanting to found a technology start-up. When I last went to visit my good friend Steve at his new business venture, above a high street coffee shop in Mountain View, California, the only computers visible were laptops. “Even our telephone system is in the cloud,” he told me. SaaS has extended way beyond the provision of basic services: whereas a few years ago, companies may have had to shrug off cloud-based software and start building their own data centres once they grew beyond a certain point, these days it would be unlikely in the extreme for any organisation to make such a leap. Indeed, the last major US company to do so was probably Facebook.

As a result, new companies have everything they need both to start with very few infrastructure costs (a hundred quid a month or so is not unheard of), and then to scale. The consequence is a proliferation of possible ideas, not all of which will succeed — the very brainstorming effect we saw in the first section of this book. But every now and then, an organisation will appear, then grow to a global entity in a matter of months, taking on the traditional ‘players in the market’ as it does so. The business term is disintermediation, as some unassuming startup recognises that it can offer a service or supply a product to a certain customer base, in a way previously ignored by the incumbents. Uber and AirBnB are just the latest examples of how relatively new organisations can completely change how business is done, in the former’s case much to the consternation of taxi companies around the world.

Companies new and old are responding to the technology-driven realities of modern business. The ‘platform’ upon which such startups are built is, of course, also accessible to traditional organisations, both private and public. As a result it has become increasingly easy to engage and communicate with others, so the costs of doing so decrease relative to the benefits. This gives us a business corollary to the Law of Diminishing Thresholds: in pharmaceuticals for example, it is recognised that you can get more bang per buck by funding a science park and supporting research start-ups than by trying to do all the research in-house. Unsurprisingly, big pharma companies are also big advocates of open data approaches: simply put, it’s good for business. Meanwhile doors are being opened between corporations and their clients, under the auspices of co-creation, for example working together on product designs.

What with social media and generally improved communications, any organisation large or small can get a direct line to its customers, which it can use to better understand what products to develop, to test out new services, to support marketing initiatives, even to act as service and support (as in the case of GiffGaff), or, indeed, to ask for money in advance. First to discover the potential of such models was popular beat combo Marillion. Back in 1997, unable to tour the US due to a series of convoluted circumstances, faced with broken promises and inadequate representation from the industry, the band turned to their devoted online fan base for help. The masses may have been marshalled via email lists rather than social networks back in the day, but Marillion can quite comfortably claim to be the first band to undertake a crowdfunding campaign. Even the idea itself was triggered by the fans; the resulting ‘tour fund’ not only raised $80,000, enabling the tour to take place; it also set the scene for an album pre-order campaign which, effectively, freed the band — and all bands — from the clutches of an oft-indifferent industry.

A couple of decades on, pledge-based platforms like Kickstarter and PledgeMusic have become part of the vernacular for established acts and fledgling bands alike. Indeed, when Steve Rothery, Marillion’s longest-serving member, launched a Kickstarter campaign to produce a solo album, The Ghosts of Pripyat, he achieved his £15,000 target within 48 hours of launch. “Isn't it good when a plan comes together,” he remarked. And meanwhile, Seedrs, AngelList, Indiegogo, Kiva and a wide variety of other funding platforms ensure that just about anyone in the world, with any kind of idea or need, can have access to funding. Indeed, these models are now big business: peer-to-peer (P2P) lending platforms also exist for wealthy and institutional investors, disintermediating even the banks.

This change is starkly illustrated1 by research company Ocean Tomo's presentation of the S&P 500's tangible assets over the past 40 years. In 1975, 17% of the index's value was classed as 'intangible', that is, derived not from physical plant or stock but from sources of value such as brands, know-how and intellectual property. By 2015, this figure was 84% — a complete flip. These are “sea changes in the world right now in terms of the way we are globally transforming the way we live and work,” says2 Dion Hinchcliffe, one of a group of bloggers3 called the ‘Enterprise Irregulars’. But such a mindset, or the presence of certain tools, might not have an impact on real business outcomes beyond general notions of knowledge sharing and engagement, argues4 Information Week’s Rob Preston. “The movement's evangelists employ the kumbaya language of community engagement rather than the more precise language of increasing sales, slashing costs, and reducing customer complaints,” he says. While the startup working environment might sound exciting to some, it doesn’t necessarily have the mass appeal that the pundits and commentators who embrace it might believe. “I’m an electronic engineer,” said one person to me when I asked about his company’s use of social enterprise tool Yammer. “What time do I have for filling in my profile or joining in some chat?”

As they try to balance the need for change with the need to ignore transient distraction, the main issue for older companies is their own inertia: they simply can’t change the way they have always done things, for reasons to do as much with internal politics and structures as with any technical or process challenges. In some cases, whole businesses have crumbled because they failed to keep up with innovations that undermined their business models — Kodak5 is a reminder of the cautionary words documented by Lewis Carroll, now known as the Red Queen’s Race. As organisations new and old move to a new state, ’business transformation’ — that is, helping organisations through the changes they need to make to stay competitive — is big business in itself.

Governments could also be gaining more benefit from the potential of technology, not to make a profit but to reduce the burden they put on taxpayers — but they too are undergoing difficult changes in how they operate. For example, central government has not, traditionally, had a great track record with procurement. Big departments have tended to buy from big companies, for the simple reason that the overhead of responding to the still-onerous tendering process remains too great for smaller suppliers. A number of initiatives are underway to make things simpler — both for smaller companies to tender for business in general, and (in the shape of G-Cloud and its associated Application Store) for smaller SaaS companies to get a foot in the door. “The government sees SMEs as key players in the new Government ICT market economy,” says John Glover, director of SaaS-based collaboration software company Inovem.

The Law of Diminishing Thresholds has no national boundaries, of course. Over the past 10 years for example, developing nations have moved to a majority position, delivering nearly three quarters of the 5 billion global mobile phone subscriptions. For various reasons, from copper theft to straightforward building expense, it has been more efficient for such countries to jump straight to mobile, without bothering with a ‘fixed’ telephone service. This ’leapfrog effect’ describes how countries without existing infrastructure are in a position to get ahead of those with decades of embedded kit to cope with. In consequence, what developing countries lack in internal compute infrastructure, they make up for in terms of mobile comms and device prevalence. The massive adoption rate offers a salutary tale for programmes such as Nicholas Negroponte's "One Laptop Per Child" (OLPC) project, which has been dogged by controversy.

As business globalises, the door opens to companies from a much broader palette of countries. Indeed, given the potential of the cloud/mobile combo, the opportunity for developing countries is not so much to leapfrog traditional infrastructure, but to get a head start in terms of innovation — sparking success stories like Zoho in India and, indeed, Alibaba in China: the latter deserves a chapter to itself.

The net effect for any organisation is that the once-valid model of creating a centralised corporation, which sought to control the world from its monolithic headquarters, has largely been eroded. We are moving from a business landscape of castles to a dynamic marketplace in which big and small can co-operate, and compete, as peers. And as power moves from the entity to the ecosystem, there is one additional factor which ensures that the door is slammed on traditional models: that is, the people.

Not an awfully long time ago, the simple reason why most people didn’t own a computer was that they were too expensive. The main option for most people (a PC running Microsoft Windows) would cost hundreds, if not thousands of pounds new; and the number of things that anyone could do with it was limited — one could write one’s memoirs, or perhaps send emails over a dial-up modem connection. Even a decade ago, computers were a luxury item. But then came the smartphone (essentially, a miniaturised computer with a built-in wireless modem) and everything changed. Today, it is possible to buy such a device for tens of pounds, enabling access to all kinds of online services. Most people don’t, however — they’d rather spend hundreds of pounds on a more expensive model, or even a tablet computer such as an iPad. Which they do, sometimes as a luxury item but equally frequently as a necessity.

As people think about technology differently, such views are spilling into the workplace. We get used to being able to send a message instantly, to see whether it has been read and to react to the messages of others. We expect to be able to send photos, often taking a picture of a document as a way of transmitting the information. And then we go to work and see the kit we have been given to do the job. You can see where this is going: all too frequently, people find themselves held back, or even less productive in their jobs, due to inadequacies in office systems compared to what they are used to.

The phenomenon of people using their own kit is known as consumerisation, or BYOD — quite literally, ‘Bring Your Own Device’. People in all roles and at all levels — from petrol station attendants to CEOs, from doctors and nurses to truckers and sales people — have access to better technology than their organisations provide. Companies can try to harness the potential of not having to provide kit, with some offering a stipend instead — but the business benefits are not that obvious: indeed the net-net productivity gain can be marginal, suggests a study6 on consumerisation from technology industry research firm Freeform Dynamics. “Distraction and time wasting can sometimes offset any potential efficiency gains,” commented CEO and lead researcher Dale Vile.

At the same time, however, the workforce is changing: in some ways, the BYOD trend is as much a symptom of changing attitudes to computer equipment as a cause of them. While a company can’t just adopt a tool and become like a startup, staff and their habits are changing due to how individuals are adopting technology for themselves — including mobile and social tools, but also online services such as Dropbox. Quite often it is the company that is dragged along by its staff’s use of, say, Facebook or Twitter to enable more direct relationships with customers: in social technology’s case, for example, getting a quick expert answer to a customer query or, as Rob Preston mentions, finding a spare part. Technology-savvy youngsters arrive with different expectations of how business could be done, and in some cases they will be right. But they aren’t going to have things all their own way, given the dual pressures of corporate inertia and the need to maintain governance.

All the same, we can see two major changes — the first being a reduced dependency on corporate infrastructure inside the company, and the second (as a direct consequence) being an increased flattening of the business ecosystem. That is, the corporate world is becoming a global village, with companies and their employees increasingly connected to their suppliers and consumers.

How can businesses ever get the balance right? Some might say existing companies are still in the grips of the industrial revolution, with Henry Ford’s efficiency-driven approaches driving how we operate. Others might comment that corporations have become too powerful, that money and power have gone to the heads of the few, to the detriment of the many. Both are probably right but, overall, the thresholds have fallen enough to enable the new to compete with the old.

Who knows what the future holds — we are already seeing the upstarts becoming as big as the incumbents, suffering the challenges that only very large companies know about — such as managing an internationally distributed pool of talent. For many types of company the playing field is levelling, at least for a while. Those under biggest threat seem to be the ones whose bread and butter is based on the fact that they are purveyors of content. Let’s take a look at those.

5.3. Music, books and the Man

In February 19831 two Parisian archaeologists, Iegor Reznikoff and Michel Dauvois, faced a conundrum. The caves of the Ariège, at the foot of the Pyrenees, were already renowned for their paintings, but it seemed strange how some of the famous paintings appeared in caves that were otherwise insignificant: side passages, for example. Surely our ancestors would want to paint the larger, grander chambers? As they traversed three caves in particular, they stumbled upon a fascinating discovery: that some caves were more resonant to certain musical frequencies than others. Reznikoff had a habit2 of humming to himself when entering a space, in order to ‘feel’ how it sounded: in the case of one cave in particular, Le Portel, his humming echoed noticeably, leading him to propose an experiment. The pair whistled and sang their way through the cave systems, building a resonance map. Most cave paintings happened to be very close — within a metre — to the most resonant zones of the caves. Indeed, some ‘paintings’ were little more than markers, red dots indicating resonant areas of the cave system.

More profoundly, what became clear was that some resonances only worked with singing, or playing higher or deeper instruments — in other words, our forebears were choosing particular spaces for particular types of music-led ritual. “For the first time, it has been possible to penetrate into the musical world of Palaeolithic populations, to understand the link between musical notes and their use in ritual singing and music; this research is based on direct evidence, rather than simply on analogies or suppositions…” the pair surmised. While Reznikoff and Dauvois recognised their study led to more questions than answers — “The goal was above all to open the chapter on the music of painted caves,” they said — they demonstrated just how important music was to our ancestors, some3 14,000 years ago. Writes4 Professor Steven Errede at the University of Illinois, “Perhaps these occasions were the world’s first ‘rock’ concerts – singing and playing musical instruments inside of a gigantic, complex, multiply-connected organ pipe, exciting complex resonances and echoes as they sang and played!”

The capability to create music, to sing, to dance is part of what it means to be human — it was Darwin himself who suggested that our musical abilities, shared with other animals and birds, emerged even before our use of language. Indeed, as the link between resonance, cave paintings and ritual suggests, our abilities to create and to perform are inherent to our very existence, going back into the depths of our history.

Several hundred years ago, something changed, however. Gutenberg’s printing press was followed, in the late 1600s, by the luthier Antonio Stradivari working out techniques to apply varnish to wood, enabling an instrument to hold a note better, and by Bach experimenting with the ‘well-tempered’ clavier, tuned such that the majority of notes were mathematically aligned, meaning most scales could be played without any ‘off’ notes. For the first time, the notion of distribution was introduced into the arts. Suddenly it became possible for one person to write a play, or a piece of music, which someone else could then print, and a third person could perform without the whole thing needing to be written out by hand — in music’s case, using instruments able to replicate the original author’s wishes. Not coincidentally, it was only shortly after, in 1709, that the Statute of Anne first enshrined the notion of copyright into law. A hundred years later, in the post-Napoleonic, heady musical times of Chopin, Liszt and Paganini, book and music publishing was already big business.

It was only to be a matter of decades before a series of discoveries and inventions led to the creation of sound recording devices, the first5 of which (from the fantastically named Frenchman Édouard-Léon Scott de Martinville) managed to capture a garbled version of ‘Au Clair de la Lune’. Another Frenchman, Louis Le Prince, made the world’s first film6 in October 1888, sixty-odd years after his compatriot Joseph Nicéphore Niépce took7 the world’s first photograph. Quite why the French had such a deep involvement in such world-changing technological creations is unclear; what is better known is how these inventions spawned a global empire of industries based on the business of making art and then getting paid for it, known as either the creative or the content industries. The former refers to what is being delivered, in terms of music, writing, film and TV, and indeed videogaming, which couldn’t exist without technology. ‘Content’ references the format of delivery, in that any of the above can be seen as a stream of data which can be transferred, watched, listened to or otherwise interacted with. While humanity may enjoy a range of experiences, the fact that they share the ability to be digitised is important to those who deliver the experiences from source to consumption. And, by this very token, each of these industries has been impacted quite dramatically by technology.

Fast forward (to coin a phrase) to today and, on the surface at least, all is apparently quite unwell across the creative industries. US recorded music revenues are down by almost two thirds since their height at the turn of the millennium, to8 $21.50 per capita, for example9; total album sales fell steadily from 1999’s figure of 940 million to only 360 million in 2010. It isn’t just mainstream bands such as Metallica that claim to have suffered; many smaller, independent bands have felt the same pain, without having a major label’s lawyers to defend them. The fault has been squarely placed at the door of technology, first with home taping, then CD ripping, then file sharing and torrenting, and most recently streaming10. For a while, technology (in the shape of the CD) did offer a temporary injection of cash into the system. This influx of capital, coupled with carefully marketed controls on demand, remained unchallenged until 2004, when digital formats once again enabled consumers to access the music they actually wanted to listen to. By 2008 single sales had overtaken album sales, and the industry was once more in freefall. The bogeyman at the time was piracy, as few appeared able to resist the lure of simply copying the digital version of a musician’s hard work, or using platforms like Napster, then BitTorrent, to do so. Still today, the industry claims that up to 95 per cent of all music is illegally distributed, and therefore un-monetised — that is, the artists don’t see a penny.

Add to this the alleged daylight robbery from streaming services like YouTube, Spotify and Apple Music. Since it was founded a decade ago, YouTube’s journey has not all been a bed of roses. Back in 2006, even as Google paid $1.65 billion for the site, The Economist reported11 that it was losing $500,000 per month. Two years later Eric Schmidt, then-CEO of Google, remarked12, “I don't think we've quite figured out the perfect solution of how to make money, and we're working on that.” The site was expected13 (finally) to make a profit by the end of 2010. Nobody doubts YouTube’s dominance today: four billion videos are watched daily, a third of which are music related. Meanwhile, Spotify now has some 20 million paying subscribers and many millions more who use the advertising-supported version of the site; Apple first launched iCloud with its built-in “piracy amnesty14” for music, then its fully fledged Apple Music service; and Google15 and Amazon have launched their own music offerings. Each service is seen as working with, or conspiring against, the music industry or individual artists, depending on who you ask at the time.

The Gutenberg-inspired world of book publishing has its own nemesis, Amazon, which dropped its first economic bombshell into the world of book sales — it turned out that books are sufficiently commoditised to fit the nature of e-commerce better than many other product lines. According to publishers, two markets now exist for books: high street bookshops for bestsellers, and Amazon for everything else. Even the largest publishers are still working out how to deal with the 'revelation16' that Amazon was not going to be the friend of booksellers, nor even necessarily of authors.

Amazon’s second bombshell came in the form of the Kindle device. For a while, as sales of both e-books and readers surged, it looked like the pundits were right about the ‘end of print’. Amazon reputedly17 sold five times as many e-books in Q4 2012 compared to the previous year, which is quite a hike. And meanwhile sales of printed books were eroding: overall US book sales increased by just over 6 per cent in 2012, of which 23 per cent was e-books, up from 17 per cent. But the rate of growth of e-books is levelling off, as illustrated by recent figures18 from the Association of American Publishers. This may be temporary: Tim Waterstone, bookshop founder and all-round publishing guru, gives19 it 50 years before all formats are digital — but even that gives humans another 50 years before they lose all touch with tactility.

And meanwhile, film has suffered a similar challenge to music, in that physical formats such as DVD remain a major target of piracy, as are films themselves given the availability of high-resolution recording devices — together these are said to be costing20 the industry half a billion dollars a year. Online services such as Netflix offer new forms of competition, though at least negotiations are with studios rather than individual actors, who tend to be paid a fee rather than a royalty. While it looked like TV would go down the tubes, success stories like Game of Thrones and The Sopranos demonstrate that quality, plus good use of the same online tools that created part of the threat, offers a way forward. Indeed, perhaps it is the earliest of all content industries, photography, that has suffered the most, as cameras have been put in the hands of everyone, with results instant and the quality threshold relatively low. Video games have had a strange advantage over other content: not only have producers benefited from a diversity of consoles, but also from a knock-on effect of Moore’s Law: the format has grown in size right up to the limits of what is possible at the time, in principle making ‘triple-A’ high-end games more difficult to pirate. All the same, games piracy, too, has risen fast over the past five years — indeed, one research study21 suggested nearly $75 billion was lost to piracy in 2014.

So what’s the net net for the creative, or content, industries, and indeed for the artists that make them possible? Alongside artists such as Pure Reason Revolution, culled from Sony/BMG’s rosters in 2006 in an effort to ’streamline’, the real casualty of the digital revolution, it appears, has been the industry itself: of the big six, once-proud behemoths of music, for example, three have already been merged into the others. “Technology created the music industry, and it will be technology that destroys it,” remarked Marillion’s lead singer, Steve Hogarth. And while film and TV studios work differently to music, they nonetheless are struggling. Meanwhile, while book publishers don't lack for challenges — questions around digital rights, self-publishing, discoverability, the role of social media and so on — this part of the industry appears to have withstood the digital onslaught better. “At least we haven't made the same mistakes as the music industry,” says an editor from a larger publishing house. Unit pricing has largely survived the transition from print to digital, and piracy is not the runaway stallion being experienced by other industries. Perhaps the biggest sign of optimism in books is the fact that Amazon has itself chosen to open not one, but several book stores. And meanwhile, in 2015 UK chain Waterstones turned a small, yet genuine profit22.

But what of the artists themselves? Technology is proving both a blessing and a curse, now that the constraint of intermediation has been removed both on who can record and deliver content, and who controls its flow to the general public. “Here we are at the apex of the punk-rock dream, the democratisation of art, anyone can do it, and what a double-edged sword that’s turned out to be, has it not?” remarks23 DJ and polymath Andrew Weatherall. Indeed, in this day and age it is difficult to know whether we are listening to a professional musician recording a song in an expensive studio, or some troubled kid making a song in their apartment24 — if indeed it matters.

As we saw in the last chapter, technology is changing the very nature of our organisations, with more and more control being put into the hands of a broadening number of individuals. As illustrated by Marillion, Radiohead, Nine Inch Nails and a host of others, perhaps the collapse of parts of the music industry is a reflection of the increasingly empowered relationship between ‘content creators’ and ‘content consumers’. Services such as SoundCloud and Bandcamp (offering direct sales of music and merchandise) are springing up to support artists in these goals, enabling them to interact more directly with their audiences. Indeed, for this very reason, streaming appears to be less of an issue to artists than to industry representatives. Comments25 musician Zoe Keating, “The dominant story in the press on artist earnings did not reflect my reality, nor that of musical friends I talked to. None of us were concerned about file sharing/piracy, we seemed to sell plenty of music directly to listeners via pay-what-you-want services while at the same time earn very little from streaming.”

Another positive consequence of the removal of a bottleneck is massive diversification, driving an explosion of culture. In music the big success story has been YouTube, with artists as diverse as Korea’s Psy26 and Morocco’s Hala Turk27 gaining almost-immediate international attention when their compositions went viral. Outside of music, YouTube has created a generation of video-loggers, or vloggers28, characters such as Olajide "JJ" Olatunji, better known as KSI, who has nine million subscribers to his channel. We’ve seen this phenomenon in publishing as well, in the form of 50 Shades of Grey. Of course, Spotify, YouTube and Amazon are still The Man, and working for The Man comes at a price. Musicians and authors may choose to say, “Non, merci!” to the patronage models of today, but they still need a platform upon which to perform.

The most important bottom line for creative types is financial, which is where the blessing and the curse become most apparent. Despite taking a recession-based hit, royalty payments from US rights organisation Broadcast Music, Inc (BMI, representing 600,000 members) have been increasing year on year since 2000. In 2014, the organisation distributed some $850 million, up 3.2% on the year before. Meanwhile, the American Society of Composers, Authors and Publishers (ASCAP, 460,000 members) distributed $883 million in 2014, having collected over a billion dollars in revenues. UK licensing revenues are also up year on year, and have been for five years.

On the downside, what this also illustrates is a massive increase in supply. Given that the total number of musicians in the US was 189,510, clearly the rate of growth of ‘rights holders’ far surpasses that of ‘professional’ musicians. It isn’t a coincidence that one of the biggest issues is seen as ‘discoverability’ — how, for example, does one identify a new piece of music without being told about it? We can expect to see innovation in social-data-driven search (Twitter has bought We Are Hunted, for example). A frequent complaint is that listeners are not that discerning: the bar for ‘pop’, as represented by Simon Cowell, is not set that high, so artists who invest more of themselves in their work quite rightly feel jilted. Perhaps this goes right back to the origins of our desire, and ability, to perform: all it takes is one person willing to present a poem, or a song, or a story and ask nothing for it, for any notion of a ‘monetisable business model for content’ to be undermined, and to create new opportunities for exploitation by platform providers, from the Huffington Post for the written word to Pinterest for photographic images. Given the very nature and purpose of creativity, it is highly likely that it will prevail. Equally, however, it does appear that we are unable to resist the lure of sharing what we create, whether or not that causes problems for others or, indeed, for ourselves.

Which brings us to a bigger question. As we share more and more, are we right to do so? Let’s consider the nature of the information we are all creating.


  1. http://www.persee.fr/doc/bspf_0249-7638_1988_num_85_8_9349 

  2. http://news.nationalgeographic.com/news/2008/07/080702-cave-paintings.html 

  3. http://www.tourism-midi-pyrenees.co.uk/home/things-to-see-and-do/sightseeing-and-exploring/natural-heritage/caves-and-shallow-holes/prehistoric-caves 

  4. https://courses.physics.illinois.edu/phys406/Lecture_Notes/Acoustics_of_Palaeolithic_Caves/Acoustics_of_Palaeolithic_Caves.pdf 

  5. http://www.noiseaddicts.com/2008/08/earliest-recording-human-voice/ 

  6. http://www.bbc.co.uk/news/entertainment-arts-33198686 

  7. http://petapixel.com/2013/10/02/first-photo/ 

  8. http://thetrichordist.com/2015/03/18/2014-us-recorded-music-sales-per-capita-down-2-75-from-2013/ 

  9. http://www.businessinsider.com/these-charts-explain-the-real-death-of-the-music-indust 

  10. http://www.dailydot.com/opinion/apple-music-spotify-self-destruc 

  11. http://www.bivingsreport.com/2006/youtube-show-me-the-money/ 

  12. http://www.cnbc.com/id/24387350/page/3/ 

  13. http://blogs.ft.com/fttechhub/2010/01/exclusive-youtube-profits-coming-this-year/ 

  14. http://www.pcpro.co.uk/news/cloud/367876/music-industry-split-over-icloud-piracy-amnesty 

  15. http://www.alphr.com/news/367231/google-set-to-launch-cloud-music-service 

  16. http://selfpublishingadvice.org/blog/amazon-plays-indie-authors-like-pawns/ 

  17. http://www.thebookseller.com/news/amazon-ups-uk-e-book-sales-five-fold.html 

  18. http://paidcontent.org/2013/04/11/ebooks-made-up-23-percent-of-us-publisher-sales-in-2012-says-the-aap/ 

  19. https://www.guardian.co.uk/books/2013/apr/09/tim-waterstone-reading-entirely-digital 

  20. http://www.theguardian.com/film/2014/jul/17/digital-piracy-film-online-counterfeit-dvds 

  21. http://gearnuke.com/video-game-piracy-rise-will-cost-industry-much-makes/

  22. http://www.theguardian.com/business/2015/nov/20/waterstones-profit-books-amazon 

  23. http://www.theguardian.com/music/2016/feb/25/andrew-weatherall-interview-dj-disco-maverick 

  24. http://www.soundonsound.com/sos/nov00/articles/tracks.htm 

  25. http://www.forbes.com/sites/georgehoward/2015/06/05/bitcoin-and-the-arts-and-interview-with-artist-and-composer-zoe-keating/2/ 

  26. http://blogs.wsj.com/speakeasy/2012/08/28/gangnam-style-viral-popularity-in-u-s-has-koreans-puzzled-gratified/ 

  27. http://www.albawaba.com/entertainment/hala-turk-gets-whopping-101-million-youtube-views-her-happy-happy-music-video-729600 

  28. http://www.mirror.co.uk/news/technology-science/technology/top-10-british-youtubers-rich-5610558 

5.4. (Over)sharing is caring

The Church of Jesus Christ of Latter-day Saints holds the view that we are all descended from Adam, one way or another. This belief has turned Mormons into avid genealogists, and therefore keen innovators, at the forefront of technology to help people trace their ancestors. As far back as 1996, for example, the church was already organising DNA testing of its members. At one such initiative, in the state of Mississippi, a willing donor was the father of film-maker Michael Usry.

Some years later, the church chose to sell its database of genetic markers to the genealogy web site Ancestry.com, which later opened up access to the data to the general public. And, it transpired, to law enforcement agencies on the trail of a 1998 murder inquiry. Whether or not their intent was pure, the consequence1 for Michael Usry was to spend 33 days as a potential suspect in the case. Due to this and other such situations, public access to the data was removed. “The data was used for reasons that were not intended,” said the site. All the same, the terms and conditions of many such sites still allow for subpoenaed access by law enforcement agencies.

A corollary to the Law of Diminishing Thresholds is the Law of Unintended Consequences. Michael Usry’s father had no idea how his data might be used, nor of the technological advances that would make such a DNA comparison possible, nor how law enforcement would still act upon inaccurate information. But the genie was already out of the bottle. As the policies of UK testing site BritainsDNA put it, "Once you get any part of your genetic information, it cannot be taken back.” It’s not just investigators we need to worry about; even more important are we ourselves, and how we might act given new information about our heritage or our health. Says BritainsDNA, “You may learn information about yourself that you do not anticipate. This information may evoke strong emotions and have the potential to alter your life and worldview. You may discover things about yourself that trouble you and that you may not have the ability to control or change (e.g., surprising facts related to your ancestry).”

Perhaps, like Julie M. Green2, you would rather know that you had a risk of a degenerative condition. Or perhaps, like Alasdair Palmer3, you would not. But you may not know the answer to this question in advance. Equally, you might want to think through the consequences of discovering that the person you have known for 30 years as your father turns out not to be. It’s not hard to imagine the potential for anguish, nor indeed the possibility of being cut out of a will or causing a marital break-up between the people you thought of as parents.

Despite examples such as this, we find it impossible to stop sharing our information. The majority of consumerist Westerners will have ‘form’ in giving up data to purveyors of financial services and consumer products, for example. Few people systematically erase their financial and spending trails as they go — pay by cash, withhold an address, check the ‘no marketing’ box and so on. In many cases we accept recompense for giving up elements of our privacy, such as with retail loyalty cards. We know that we and our shopping habits are being scrutinised, like “transient creatures that swarm and multiply in a drop of water.” But, to our delight, we receive points, or vouchers, without worrying whether we've got a decent return on our investment.

Social networking is also enticing, but we know it comes at the expense of personal privacy. We share our stats, personal views and habits via Facebook, Google and Twitter, deliberately blasé about how the information is being interpreted and used by advertisers. “If you’re not paying, you are the product,” goes the often-quoted, but generally ignored adage. This is despite the evidence: when Facebook launched its Graph Search algorithm on January 15th 2013, sites sprang up4 to demonstrate how you could hunt for Tesco employees who like horses, or Italian Catholic mothers who like condoms. Facebook is now embedded in the majority of sites we use: when we log in to a third-party site using a Facebook login to avoid all that rigmarole involved in remembering usernames and passwords, we have entirely given over any rights we might have had over the data, or the conclusions that could be drawn from it.

However clunky today’s algorithms appear to be, every purchase is being logged, filed and catalogued. The Internet of Things is making it worse: for example, we will confirm our sleep patterns or monitor our heart rates by checking our Fitbits or Jawbones, uploading data via mobile apps to servers somewhere in the world, in the cloud. Of course, what is there to be read from sleep patterns? It’s not as if we’re talking about drinking habits or driving skills, is it?

Every act, every click and even every hover over a picture or video results in a few more bytes of data being logged about us and our habits. It seems such a small difference between buying a paperback from Amazon and paying in cash for the same book at the local shop; but one of those purchases will remain forever, indelibly associated with your name. And even our refusals are logged: “No, thank you” responses are stored against our identities or, if not, against our machines or web browser identifiers. And what about other mechanisms that less scrupulous advertisers use to identify computer users, such as AddThis5, which draws a picture on your screen and uses this to fingerprint you? Even now, mechanisms are being developed which look to stay the right side of the increasingly weak legal frameworks we have, all the while slurping as much data as they can about us.
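
To give a flavour of how such fingerprinting works in general terms, here is a minimal, hypothetical sketch in Python; it is not AddThis's actual method. No single browser attribute identifies a user, but hashing enough quasi-stable attributes together often does, and canvas tricks like the one described above simply add a device's rendering quirks as one more attribute.

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Return a stable hash of quasi-identifying browser attributes."""
    # Sort the keys so the same attributes always produce the same hash.
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

if __name__ == "__main__":
    # Illustrative values: none is unique on its own; the combination often is.
    print(fingerprint({
        "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
        "screen": "2560x1440",
        "timezone": "Europe/London",
        "fonts": "Arial,Helvetica,Times",
    }))
```

No cookie is required and nothing needs to be stored on the user's machine, which is exactly what makes such techniques so hard to opt out of.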

Outside of the consumer world, today’s technology advances inevitably result in whole new methods of surveillance. London has become the surveillance capital of the world, according to CCTV figures. At the other end of the spectrum are we ourselves, of course: today we carry around powerful recording and sensor-laden devices, in the form of smartphones and, increasingly, smart watches.

Right now, our culture is evolving towards a state where our actions and behaviours are increasingly documented, by individuals, institutions and corporations, all of whom are thinking about how to make the most of these pools of data. They’re all at it — any organisation that has access to information is trying to get more of it, whether or not it knows what to do with it yet. Consider, for example, the announcement that both Visa and MasterCard are looking to sell customer data to advertising companies. For the time being, simple transfer of data — data brokerage — is becoming big business. Even public bodies are getting in on the act: witness the selling of data by councils, the UK vehicle licensing authority and indeed hospitals — at least in trials.

How’s it all being funded? Enter online advertising, itself subject to regulation in various forms, including data protection legislation. It is unlikely that the Web would exist in its current form without the monies derived from advertising, from click-throughs and mouse-overs to ‘remarketing’ with tracking cookies and web beacons. Advertising is the new military or porn industry, pushing the boundaries of innovation and achieving great things, despite a majority saying they would rather it wasn't there.

Corporate use of data frequently sails close to the wind, as companies are not necessarily acting in the interests of the people whose data they are collecting. As many examples demonstrate, we’re seeing instances of malpractice, even if within the letter of the law: in May 2014, data brokers such as Acxiom and Corelogic were taken to task6 by the US Federal Trade Commission for their data-gathering zeal and lack of transparency. Every now and then, an alarm bell goes off. For example, while the Livescribe pen has been around for a good few years, you've got to hand it to whoever decided to use 'spy-pen' to describe the device that led to the resignation of the chairman of a Scottish college. The term has everything: popular relevance, gadget credibility and just that frisson of edgy uncertainty. The trouble is, the device at the centre of the controversy is no such thing. Yes, it can act as a notes and audio-capture device, in conjunction with special sheets of paper. But calling it a spy-pen is tantamount to calling the average tablet device a spy-pad. "It's quite a clunky kind of thing — not the sort of thing you can use without folk knowing," said Kirk Ramsay, the chairman in question, to The Scotsman. “I have had it for three and a half to four years — you can buy it on Amazon.”

Fortunately, because of the coastline paradox, they have largely been unable to really get to the bottom of the data they hold. For now. We are accumulating so much information — none of it is being thrown away — about so many topics, that the issue becomes less and less about our own digital footprints, however carelessly left.

Looming without shape and form — yet — are the digital shadows cast by the analysis of such vast pools of data. A greater challenge is aggregation, that is, the ability to draw together data from multiple sources and reach a conclusion that none of them would support alone. Profiling and other analysis techniques are being used by marketers and governments, as well as in health, economic and demographic research fields. The point is we don’t yet know what insights these may bring, nor whether they might be fantastically good for the human race or downright scary. The kinds of data that can be processed, such as facial, location and sentiment information, may reveal more than people intended, or indeed ever expected. All might have been OK if it wasn’t for the Law of Unexpected Consequences. Purposes change, and so do businesses, and as we have seen, it isn't all that easy to map original intentions against new possibilities. For example, what if family history information could be mined to determine, and even predict, causes of death? What would your insurer do with such information? Or your housing association? Or your travel agent?
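
To make the aggregation point concrete, here is a deliberately tiny, hypothetical sketch in TypeScript: the data sets, identifiers and inference are all invented for illustration, but the join itself is exactly the kind of operation data brokers perform at vastly greater scale.

```typescript
// Two individually innocuous, entirely hypothetical data sets, joined on a
// shared identifier. Neither says much on its own; together they begin to
// profile a person.
interface CheckIn { userId: string; place: string; time: string }
interface Purchase { userId: string; item: string; time: string }

const checkIns: CheckIn[] = [
  { userId: "u42", place: "pharmacy", time: "2016-03-01T09:10" },
  { userId: "u42", place: "clinic", time: "2016-03-08T09:05" },
];

const purchases: Purchase[] = [
  { userId: "u42", item: "folic acid supplements", time: "2016-03-01T09:30" },
];

// A naive inner join on userId: the essence of profile aggregation.
function profile(userId: string) {
  return {
    visits: checkIns.filter((c) => c.userId === userId).map((c) => c.place),
    bought: purchases.filter((p) => p.userId === userId).map((p) => p.item),
  };
}

console.log(profile("u42"));
// => { visits: [ "pharmacy", "clinic" ], bought: [ "folic acid supplements" ] }
// From here an algorithm might start inferring things the person has told
// nobody, which is precisely the unintended consequence described above.
```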

And that’s just taking the ‘good guys’ into account; every now and then, however, someone sees sense in breaking down the barriers to private information, for reasons legion. Consider, for example, when the web site AshleyMadison was hacked in July 2015, despite being called “the last truly secure space on the Internet,” according to an email sent to blogger Robert Scoble. The reason: the site was seen as immoral. Which it most certainly is, to some. At a broader level, this example pitted data transparency against some of our oldest behavioural traits, which themselves rely on keeping secrets. Love affairs are possibly one of the oldest of our behaviours, their very nature filled with contradictions and a spectrum of moral judgements. Whatever the rights and wrongs, rare would be the relationship that continued, start to finish, without a moment of doubt or a glance elsewhere. But the leaky vessel that is technology leaves gaping holes open to so-called ‘hacktivists’. The consequences of the affair continued to unfold for several months — the first lawyers were instructed, the first big names knocked off their pedestals, not only husbands contacted but wives7.

And consider the fitness device being used in court as evidence, or the increasing use of facial recognition. Soon it will be impossible to deny a visit to a certain bar, or indeed to say, “I was at home all the time.” As Autonomy founder Mike Lynch remarked, “In an age of perfect information… we're going to have to deal with a fundamental human trait, hypocrisy.”

Even if we have not been up to such high jinks, our online identity may be very difficult to dispense with. And meanwhile our governance frameworks and psychological framing mechanisms are falling ever further behind. There was a certain gentle-person's agreement in place at the outset, given that nobody ever reads the T's and C's of these things: namely, that the data would only be used for the purpose for which it was originally intended. Indeed, such principles are enshrined in laws such as the UK Data Protection Act (DPA).

Ongoing initiatives are fragmented and dispersed across sectors, geographies and types of institution. The UK’s Data Protection Act, international laws around cybercrime, even areas such as intellectual property and 'digital rights’ were all created in an age when digital information was something separate to everything else. Meanwhile the UN's resolution on “The right to privacy in the digital age” overlaps with proposed amendments to the US 'Do Not Track’ laws, as well as Europe's proposed 'right to be forgotten' (which has already evolved into a 'right to erasure’) rules. All suggest that such an option is possible, even as it becomes increasingly difficult to hide. In such a fast-changing environment, framing the broader issues of data protection is hugely complicated; the complicity of data subjects, their friends and colleagues is only one aspect of our current journey into the unknown. Celebrities are already feeling this — as all-too-frequent examples of online and offline stalking show. But how precisely can you ensure that your records are wiped, and is it actually beneficial to do so?

Perhaps the biggest issue with data protection law is that it still treats data in a one-dot-zero way — it is the perfect protection against the challenges we all faced ten years ago. Even as the need for legislation is debated (and there are no clear answers), organisations such as the Information Commissioner’s Office are relaxing8 the rules for ‘anonymised’ information sharing — in fairness, they have little choice. While we can be comforted that we live in a democracy which has the power to create laws to control any significant breaches of privacy or rights, it won’t be long before the data mountains around us can be mined on an industrial scale. Over the coming decades, we will discover things about ourselves and our environments that will beggar belief, and which will have an unimaginably profound impact on our existence. The trouble is, we don’t know what they are yet, so it seems impossible to legislate for them.

What we can know is that protecting data is simply not enough — in a world where anything can be known about anyone and anything, we need to shift attention away from the data itself and towards the implications of living in an age of transparency. Consider, for example, the experience9 of people who happened to be located near to the 2014 uprisings in Kiev, the capital of Ukraine. Based on the locations of mobile phones, nearby citizens were sent text messages which simply said, “Dear subscriber, you are registered as a participant in a mass riot.” The local telephone company, MTS, denied all involvement. Such potential breaches of personal rights may already be covered under international law; if they are not, now could be a good moment to start addressing them.

Sun Microsystems founder and CEO Scott McNealy once famously said, “Privacy is dead, deal with it.” Privacy may not be dead, but it is evolving. There are upsides, downsides and dark sides of living in a society where nothing can be hidden. Before we start to look at where things are going, let’s consider some of the (still) darker aspects.

5.5. The dark arts

"No, Mr. Sullivan, we can't stop it! There's never been a worm with that tough a head or that long a tail! It's building itself, don't you understand? Already it's passed a billion bits and it's still growing. It's the exact inverse of a phage -- whatever it takes in, it adds to itself instead of wiping... Yes, sir! I'm quite aware that a worm of that type is theoretically impossible! But the fact stands, he's done it, and now it's so goddamn comprehensive that it can't be killed. Not short of demolishing the net!"
John Brunner, The Shockwave Rider (1975)

It was another ordinary Thursday1 in the office for Sergey Ulasen2, head of the antivirus research team of a small computer security company, VirusBlokAda, headquartered in Minsk, Belarus. He had been emailed by an Iranian computer dealer, whose clients were complaining of a number of computers stopping and starting. Was it a virus, he was asked. He was given remote access to one of the computers and, with the help of a colleague, Oleg Kupreyev, set about determining exactly what was the problem.

The answer, as the situation unravelled, was that he was looking at more than a virus. Indeed the “Stuxnet worm”, as it became known3, was one of the most complex security exploits the world has ever seen. It used a panoply of attack types, from viral transmission via USB sticks, through exploiting unpatched holes in the Microsoft Windows operating system, to digging deep into some quite specific electronic systems. Unbeknown to Sergey, its actual targets were the Programmable Logic Controllers of some 9,000 centrifuges in use by Iran at its Natanz uranium enrichment plant. These controllers — simple computers, made by Siemens — were used to control the speed of a number of centrifuges: run them at too high a frequency for too long, and they would fail. As the controllers could not be targeted directly (they were too simple), the malicious software, or ‘malware’, took advantage of the Windows consoles used to control them. Moreover, the worm knew every detail of the centrifuges and the speeds at which they would fail. “According to an examination4 of Stuxnet by security firm Symantec, once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights,” wrote5 Kim Zetter for Wired. It’s these frequency converters that control the speeds of the centrifuges; furthermore, the only known place in the world where they were deployed in such a configuration was Natanz.

As it happened, the centrifuges had been going wrong for at least a year. Indeed, inspectors from the International Atomic Energy Agency confessed6 to being surprised at the levels of failure. However the malware had done well to hide itself, making operators believe that the controllers were working correctly — it must just be the centrifuges that are unreliable, they thought. A previous version of the worm (known as7 Stuxnet 0.5) had been used to steal information from the nuclear facility a year before, including how the centrifuges were set up. While the malware was designed to work on the Natanz centrifuges, it didn’t stop there. By the time it reached its full extent and Microsoft had issued patches for the vulnerabilities it exploited, Stuxnet had infected computers in “Iran, Indonesia, India, Pakistan, Germany, China, and the United States,” according to reports8 from the US authorities.

But here’s the twist. While nobody came out and said so directly at the time, as the nature of the worm was established, the consensus became that the source of Stuxnet could be none other than a joint effort between US and Israeli security forces. Journalists were circumspect at the time — as the UK’s Guardian stated9, for example, it was “almost certainly the work of a national government agency… but warn that it will be near-impossible to identify the culprit.” Interestingly, it was America’s own ‘insider threat’ Edward Snowden, a contractor who happened to have access to a wide range of government secrets, who blew the cover on the two nations’ involvement. “NSA and Israel co-wrote it,” he said succinctly, in an interview10 with German magazine Der Spiegel in 2013.

This was, of course, major news. Traditionally, viruses have been associated with ‘the bad guys’, a nefarious underworld of criminals whose intentions range from the amusing to the downright corrupt. Ever since there has been technology, there have been people looking to turn it to their own ends, be it the use of a whistle given away in cereal packets to fool telephone exchanges (a technique made famous by John Draper, a.k.a. “Captain Crunch” after the cereal), or through stories such as Clifford Stoll’s The Cuckoo’s Egg, in which the author managed to trace Hannover-based hacker Markus Hess, who was selling secrets to the Soviet KGB. This spectrum of activity has made it difficult to categorise bad behaviour, with the result that innocuous breaches have sometimes been treated with the full force of the law. Indeed, even the term ‘hacker’ is in dispute, with some members of the technological fraternity maintaining it refers to very good programmers, nothing more.

Whatever their goals, computer hackers have several options — to find things out (thereby breaching data confidentiality), to change things (data integrity) or to break things (data availability). Sometimes they exploit weaknesses in software; other times they will write software of their own, given such sci-fi names as viruses and worms. Indeed, in good life-imitating-fiction style, Xerox Research’s John Shoch used John Brunner’s ‘worm’ to describe a program he had created to hunt out unused processor capacity and turn it to more useful purposes. While such worms occasionally got the better of their creators, it took until 1988 for such a program (known as the Morris Worm11) to cause broader disruption, infecting (it is reckoned) one in ten computers on the still-nascent Internet. The worm’s unintended effect was to take up so much processor time that it brought machines to a standstill, creating one of the world’s first distributed denial of service (DDOS) attacks.

Since these less complicated times, a wide variety of malicious software has been developed with the intent to exploit weaknesses (software or human) in computer protection. We have seen the emergence of botnets – networks of ‘sleeper’ software running on unsuspecting desktops and servers around the world12, which can be woken to launch DDOS attacks, targeting either computers or indeed the very infrastructure underlying the Web. Consider the attack on the Spamhaus Domain Name Service (DNS), for example: what marked the attack13 was not only its nature, but also the rhetoric used to describe it. “They are targeting every part of the internet infrastructure that they feel can be brought down,” commented Steve Linford, chief executive for Spamhaus, in a strange echo of John Brunner’s prescient words.

Despite such episodes14 (or indeed, as demonstrated by them), the Internet has shown itself to be remarkably resilient — its distributed nature is a major factor in the Internet’s favour. For any cybercriminal to bring down the whole net, they would first have to have access to a single element which tied it all together – even DNS is by its nature distributed, and thus difficult to attack.

The botnet and malware creation ‘industry’ — for it is one — is big business. The bad guys also don’t particularly care about where the holes are. The black hat hacker world is a bit like the stock market, in that its overall behaviour looks organic but is actually the consequence of a large number of small actions. Some exploits are happy accidents, flukes of innovation. Others are the result of people trying to out-do each other. And the bad guys are equally capable15 of exploiting the power of the cloud, or of offering out virtualised services of their own. And so it will continue, as good and bad attempt to outdo each other in what is an eternal conflict.

But the admission, direct or otherwise, of government involvement in Stuxnet added an altogether different dimension. The principle was not untested: in 198216, for example, the USA was already making its first forays, using a trojan horse to cause an explosion in the Trans-Siberian gas pipeline. All the same, the idea of one government actively engaging against another using malware remained the stuff of science fiction. As many pundits17 have suggested18, the game changed19 with Stuxnet, the starting point of ‘proper’, international cyberwarfare. In a somewhat ironic twist, US Deputy Defense Secretary William J. Lynn III suggested20 that once its intent and scale had been revealed, others could follow suit. “Once the province of nations, the ability to destroy via cyber means now also rests in the hands of small groups and individuals: from terrorist groups to organised crime, hackers to industrial spies to foreign intelligence services.” Which does beg the question — did the US Government really think it could be kept under wraps21, given the damage it wreaked? There is a certain irony in citing a government-led attack as a reason why governments need to be better protected against attack, but so be it!

The Law of Unexpected Consequences is writ large in the world of cybersecurity, as different sides attempt to both innovate and exploit the weaknesses of the other. In business for example, corporate security professionals no longer talk about keeping the bad guys out, but rather how to protect data, wherever it happens to turn up. Many are represented by the Jericho Forum, an organisation set up to work out how to ‘do’ security when everyone has mobile phones, when working from home is a norm rather than an exception, and when people spend so much time connected to the Internet. In the words of the song, the corporate walls “came tumbling down” a long time ago. As we have seen, the corporation has been turned inside out; in security as well, cathedrals of protection have been replaced by a bazaar, where everything can be bought or sold.

This new dynamic — an open, increasingly transparent world in which both bad guys and governments are prepared to step over a line in order to achieve their goals — sets the scene for the realities of cybersecurity today. The immediate aftermath of Snowden’s 'revelations' may have come as a disappointment to many activists, in that people didn't flee from their US-hosted online service providers in droves. Many industry insiders were neither surprised nor fazed: as one hosting company CEO commented at the time, “It would be naive to think [the NSA] were doing anything else.” However, that we are in a post-Snowden world was illustrated by the decision of Brazil's President Dilma Rousseff to cancel a US gala dinner in her honour. To add potentially 'balkanising' insult to injury, Brazil also proposed creating its own email services and even laying a new transatlantic cable, to enable Internet traffic to bypass the US. Experts including Bruce Schneier have expressed22 their fears about such suggestions. After all, isn't this controlling traffic in similar ways to that “axis of Internet rights abuse” — China, Iran and Syria, who apply traffic filters to citizen communications?

From a technical perspective, the router-and-switch infrastructure of the Internet doesn't really care which way a packet goes, as long as it gets through. Far from Brazil's proposals suggesting a reduction in the Internet's core capabilities, they actually increase them by providing additional routes and new, non-obligatory service options. Unless, that is, one believes that the US has a singular role in policing the world's web traffic — in which case it makes sense to route it all that way. Indeed, Brazil's move doesn't actually prevent surveillance; rather, it delegates such activities to within national boundaries. As commentators have suggested, the indignant rhetoric from some nations can be interpreted as, “We don't want the USA to monitor our people, that's our job.”

For those who would rather not have their data packets monitored, there is The Onion Router, or Tor. In another twist of the dance between light and dark, Tor was originally conceived by the US Navy in 2002 as a way to hide the source of Internet traffic, preventing anyone else knowing who was accessing a specific web site, for example. The point of Tor was to enable the military to cover its tracks: as a strategic decision, therefore, the software was made generally available and open source. “It had to be picked up by the public and used. This was fundamental,” explained23 its designer, Paul Syverson. “If we created an anonymous network that was only being used by the Navy, then it would be obvious that anything popping out or going in was going to and from the Navy.” Good call — but the decision may have backfired, given that Tor has become the place to hang out for anyone who doesn’t want to be monitored — which includes a wide variety of cybercriminals, of course. Tor has become a Harry-Potter-esque Diagon Alley of the Internet, where anything can be bought or sold, from drugs and weapons, to ransomware and cyber-attacks, to the stolen identities that result from such attacks. Billions of identities are now available for sale on the Tor-based black market, to such an extent that they are relatively cheap, like bags of old stamps. While we should perhaps be worried24, the rumour is that there are simply too many stolen identities to be dealt with.
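
The layering principle behind Tor is worth a moment's illustration. The sketch below is a toy in TypeScript, assuming a Node.js runtime, and stands in for real public-key cryptography with simple encoding; it shows only the 'onion' idea, not Tor's actual protocol: the sender wraps the message once per relay, and each relay can peel exactly one layer, learning the next hop but never both the sender and the destination.

```typescript
// Toy illustration of layered ("onion") routing; not Tor's real cryptography.
// encryptFor and decryptAt stand in for public-key encryption to each relay.
const encryptFor = (relay: string, data: string): string =>
  Buffer.from(`${relay}:${data}`).toString("base64");

const decryptAt = (relay: string, blob: string): string => {
  const text = Buffer.from(blob, "base64").toString();
  if (!text.startsWith(`${relay}:`)) throw new Error("not addressed to this relay");
  return text.slice(relay.length + 1);
};

// The sender builds the onion from the exit relay inwards.
function wrap(message: string, route: string[]): string {
  return route.reduceRight((inner, relay) => encryptFor(relay, inner), message);
}

const route = ["relayA", "relayB", "relayC"];
let onion = wrap("GET https://example.org", route);

// Each relay peels a single layer and forwards the rest, still opaque to it.
for (const relay of route) {
  onion = decryptAt(relay, onion);
}
console.log(onion); // only the exit relay finally sees "GET https://example.org"
```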

The dance continues, and no doubt will continue to do so, long into the future. Another of Snowden’s revelations (and again, never officially confirmed) was how encryption algorithms created by security software company RSA (now part of Dell) were given ‘back doors’, so that the US authorities could access the data they protected. The Tor network has also been infiltrated by the powers that be, as has the crypto-currency, Bitcoin (we’ll come to this). The one thing we can be certain about is that every technological innovation to come will be turned to dubious ends, and every unexpected positive consequence will be balanced by a negative. All eyes right now are on how the Internet of Things can be hacked to cause disruption — in the future the tiniest sensors will give us away, and our household appliances will hold us hostage (in some ways this is nothing new — toasters are already25 more dangerous than sharks). To repeat Kranzberg’s first law of technology, “Technology is neither good nor bad; nor is it neutral.”

As battle fronts continue to be drawn along ever more technological lines, we continue to ask ourselves exactly what purpose technology serves, and how it can be used for the greater good. As pundit Ryan Singel has suggested26, “There is no cyberwar and we are not losing it. The only war going on is one for the soul of the Internet.” But can it be won?


  1. July 17, 2010 

  2. http://eugene.kaspersky.com/2011/11/02/the-man-who-found-stuxnet-sergey-ulasen-in-the-spotlight/ 

  3. http://www.nytimes.com/2011/01/16/world/middleeast/16stuxnet.html 

  4. http://www.symantec.com/content/en/us/enterprise/media/security_response/whitepapers/w32_stuxnet_dossier.pdf 

  5. http://www.wired.com/2010/12/isis-report-on-stuxnet/ 

  6. http://arstechnica.com/tech-policy/2011/07/how-digital-detectives-deciphered-stuxnet-the-most-menacing-malware-in-history/ 

  7. http://www.symantec.com/connect/blogs/stuxnet-05-missing-link 

  8. https://cyberwar.nl/d/R41524.pdf 

  9. http://www.theguardian.com/technology/2010/sep/24/stuxnet-worm-national-agency 

  10. http://www.spiegel.de/international/world/interview-with-whistleblower-edward-snowden-on-global-spying-a-910006.html 

  11. http://en.wikipedia.org/wiki/Morris_worm 

  12. http://www.huffingtonpost.co.uk/hilary-wardle/computer-hacking_b_5607288.html 

  13. http://www.bbc.co.uk/news/technology-21954636 

  14. http://blog.cloudflare.com/the-ddos-that-knocked-spamhaus-offline-and-ho 

  15. http://www.techradar.com/news/world-of-tech/hackers-used-amazon-cloud-to-scrape-data-from-linkedin-profiles-1213888 

  16. http://www.theregister.co.uk/2004/03/16/explosive_cold_war_trojan_has/  

  17. http://www.langner.com/en/2010/09/14/ralphs-step-by-step-guide-to-get-a-crack-at-stuxnet-traffic-and-behavior/ 

  18. http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook 

  19. http://www.langner.com/en/wp-content/uploads/2013/11/To-kill-a-centrifuge.pdf 

  20. http://www.defense.gov/news/newsarticle.aspx?id=54787 

  21. http://www.nytimes.com/2011/01/16/world/middleeast/16stuxnet.html 

  22. http://bigstory.ap.org/article/brazil-looks-break-us-centric-internet 

  23. http://www.telegraph.co.uk/culture/books/11093317/Guns-drugs-and-freedom-the-great-dark-net-debate.html 

  24. https://www.riskbasedsecurity.com/2014/02/2013-data-breach-quickview/ 

  25. http://blogs.reuters.com/environment/2008/01/17/toasters-deadlier-than-sharks/ 

  26. http://www.wired.com/2010/03/cyber-war-hype/ 

5.6. The Two-Edged Sword

In 1926, electricity pioneer Nikola Tesla predicted1 that, “When wireless is perfectly applied the whole earth will be converted into a huge brain, which in fact it is, all things being particles of a real and rhythmic whole.”

While we may not have achieved Tesla’s vision just yet, what a long way we have come. Technology is the great enabler, perhaps the greatest we have seen in our history. As thresholds of cost and scale fall, and as we get better at using technology through the use of clever software and algorithms, so the world is getting noticeably smarter: already today, people can communicate and collaborate, perform and participate on a global basis, as if they are in the same room; and startups can form and grow, enabling them to compete with some of the world’s largest companies without having to create a product, or even own a building. Technology continues to transform healthcare, helping us to live longer; through mechanisms like micro-loans it helps people out of poverty and supports sustainable development; the transparency it has created has powered civil rights, has enabled the little guy to stand up against the big guy, has been instrumental in engaging the disenfranchised.

It all seems so, well, positive, but… and it’s a big but. Technology has powered revolutions, but it has also powered riots and no small amount of criminality, both from the ‘bad guys’ and indeed from the authorities and corporations that are supposed to have our interests at heart. This is whether or not they are acting within the law — or indeed, are the law — given that the law is still struggling to catch up. In many cases we only have our own morality to fall back upon, and what with the pace of change, even this can prove inadequate.

With every up, there is a down. Our abilities to influence each other have reached profound levels through social media tools, but in doing so we have created a hive mind, in which we react en masse, sometimes for better and sometimes for worse. Jon Ronson’s book2, “So You've Been Publicly Shamed,” which covers trolling on social media, is more than an illustration; it is a state of play. We are all complicit in what has become a global conversation, warts and all.

Traditional companies and start-ups are building upon increasingly powerful platforms of technology, lowering the barriers to business. But sometimes in doing so they sail close to the wind, exploiting weaknesses in national laws in terms of their tax affairs or the data protections they offer. Very often we go along with what is available, as we are not sure whether it is a blessing or a curse.

As technology improves, it enhances and challenges us in equal measure. Cameras, tracking devices and databases appear to confirm, in the words of Salvor Hardin, an erstwhile hero of Isaac Asimov’s Foundation novels, that “An atom-blaster is a good weapon, but it can point both ways.” Psychohistory3, from Asimov’s standpoint, concerned the ability to foresee the behaviours of huge numbers of people — simply put, when the numbers are large enough, behaviour is mathematical and therefore predictable. We are a long way from this, however: we are creating mountains of information, far more than we know what to do with, from our personal photo collections to the data feeds we can now get from personal heart rate monitors and home monitoring devices. That algorithms are not yet able to navigate all that data may be a good thing, as the ‘powers’ seem unable to look upon it with anything other than avarice. All that lovely personal data, they say. Now I can really start targeting people with marketing. Surely, we ask, there has to be more to it; but what it is, we do not know.

Despite these fundamental plusses and minuses, technology itself is indifferent to our opinions, progressing and evolving in its own way regardless of whether it causes good or ill. To repeat Kranzberg’s first law: “Technology is neither good nor bad; nor is it neutral.” As we have seen in ‘going viral’, crowdsourcing and crowdfunding, often it is the collective actions of individuals that have the greatest impact: individual acts can be the straws that drive the camel over its threshold, millions of tiny actions that together lead to a global swing of opinion, an explosion of demand or a trend. In this, too, we are all culpable as our individual behaviours can be for the greater or lesser good. When someone says “I bought it from Canada, it was so much cheaper without tax,” can we really expect heads of companies to do any different?

The indifference of technology is more than just a glib remark: rather, it tells us something profound about the nature of what we are creating, and our relationship to it. In politics, in life, in the evolution of our race, so often what happens one year seems to be a backlash to what has gone before; as a consequence we re-tune, we rebalance, we re-establish ourselves in the ‘new world’. By this token, neither are we necessarily going to become 100% technology-centric — consider the resurgence of vinyl records, or the fact that wine growers are looking once again at their traditional approaches, or indeed the hipster movement, with its tendency towards beards and retro kitsch. Digital has failed to kill the physical shop, meanwhile: retail analysts report that the high street is becoming a social space, a place for people to meet and communicate; and that shops offering both keen pricing and good service are more likely to thrive, whether or not technology is in the mix (a great UK example is John Lewis, with its JLAB initiative). Perish the thought that the shops of the future will be the ones that balance the power of digital with traditional mechanisms for product and service delivery!

It’s not over yet, of course. Even as we speak, we are changing, or at least technology is changing beneath our feet, ripping up the rule book once again. We are on the threshold of a whole new wave of change, so if we want to understand how we should cope, we need to know a bit more about where it is taking us next.

Welcome to the age of the super-hero.

6. The instant superhero

In ancient times, a series of beacons once topped the hills of Wessex. Kept dry against the rain, they were used only in times of need. The only message they could send was one word: “Danger.” On the other side of the world, on the broad, dusty plains of North America, indigenous peoples would light leaf fires and use a blanket to smother, then uncover them in turn to create patterns of smoke. These enabled more nuanced signals to be passed.

On the sea, ships would use complex series of flags to create detailed messages. The scale of the network was limited by what the human eye could see through a telescope, the limits further complicated by the curvature of the earth and visibility of the horizon. For a flotilla, simply keeping a ship in sight was already sending a signal of its own: “We are still here, and heading in the right direction.”

And so, over thousands of years, we have used the tools at hand to extend our reach as far as it will go, even if this means passing the minimum of data. Even today, we blow our whistles, flick our mirrors in the sun and flash our headlights to pass the simplest of signals — “Return to base”; “I am over here”; “You are free to turn”. Even as we arrive at a point where everything can be connected, from farm gates to individual tablets, somewhere above us, deep in space, the Voyager spacecraft continues to pulse its message: “I’m still here.”

6.1. The Quantified Identity

I’ve been under surveillance for a while now. I don’t know quite when I became sure; I’d had a nagging suspicion for some time. But I do know it really started a month ago when I put on that so-called ‘fitness watch’. On its underside is a heart rate monitor, shining dual LEDs into my wrist and reading my pulse. Within it are a temperature gauge, a GPS-based location sensor and tiny accelerometers which detect the smallest of movements. Every now and then it links to my phone to upload data; an app informs me of various aspects of my progress. “Well done,” it says, “You’ve been wearing the watch for 12 hours, three times a week! You’ve walked more than 2,000 paces a day! You’ve burnt enough calories, got enough sleep!” Before long, the emails started. “Here’s your weekly sleep summary,” they told me. It was at that point I realised that I was being monitored — first and foremost by myself. I was wandering alone in a wilderness of mirrors, each reflecting back some aspect of myself. My existence, writ large on a management console for my own perusal.

In 2015, according to1 industry research company Parks Associates, 33% of US households with a broadband Internet connection (that’s just shy of 100 million households2) also had some kind of ‘connected fitness tracker’ — the definition included everything from treadmills with app support, to fitness watches such as Fitbit and Jawbone devices. Another research house, CCS Insight, predicted3 in 2014 that over 250 million ‘smart wearables’ would be in use globally by 2018. Perhaps even these figures underestimate the actual demand for this expanding range of gadgets, coming not only from big names such as Microsoft and Garmin, Fitbit and Jawbone but also, and far more cheaply, from the Far East. As a consequence, such devices are now appearing in their thousands. It’s big business.

We can capture our every movement, whether we are awake or asleep. Thanks to apps such as Under Armour’s MyFitnessPal, we can log our meals and alcohol intake and link these daily factoids to our heart rates, body temperatures and levels of activity. And meanwhile of course, what we collect, we share. At the turn of the millennium, technology industry analyst James Governor coined the term ‘declarative living’ to describe our innate drive to tell the world what we are up to, where we are, what we eat, how fast we can run. Others refer to this as ‘oversharing’ — what works for some is anathema to others. And as we have also seen, we do so without really thinking about what the cleverly-named service providers — Strava, MapMyFitness and the like — might do with such lovely data. Sharing might have been fine for our book choices and opinions, but even though we should be more circumspect with our vital signs, we show no sign of being so. It is as if we have been softened up by technology to such an extent that we now see it as acceptable, without any T’s and C’s, to give away data about our core functions.

Overall, our identity is becoming something which can be defined in terms of the data that surrounds it. To what end? The most immediate effect is the one with which amateur athletes are all too familiar: we already know more about ourselves, and our health, than we did. Our ability to measure is aiding our ability to manage — we can see the progression of a lowering heart rate or a weight loss programme, for example; and indeed, those suffering from some conditions, such as Type 1 diabetes, now have far more accurate tools at their disposal for the management of their situation. In more general health and fitness, providers have made certain features into more of a game: for example, receiving points for non-drinking days, or gaining ‘prizes’ through having walked more than a thousand steps a day. Such mechanisms have themselves been awarded a name — ‘gamification’ has become an essential element of any fitness device or mobile app.
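
The rules themselves are rarely sophisticated. Below is a minimal, hypothetical sketch in TypeScript of the sort of logic a fitness app might apply; the thresholds and badge names are invented for illustration, with real services layering streaks, points and leaderboards on top.

```typescript
// Hypothetical gamification rules: thresholds and badge names are invented.
interface DayLog {
  steps: number;
  alcoholFreeDay: boolean;
}

function awardBadges(day: DayLog): string[] {
  const badges: string[] = [];
  if (day.steps >= 1000) badges.push("Walker");         // a thousand steps a day
  if (day.steps >= 10000) badges.push("Road Warrior");  // a stretch goal to keep us hooked
  if (day.alcoholFreeDay) badges.push("Dry Day");       // points for non-drinking days
  return badges;
}

console.log(awardBadges({ steps: 1200, alcoholFreeDay: true }));
// => [ "Walker", "Dry Day" ]
```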

The second benefit is that we can make better judgements, not just about ourselves but about others. If we understand how many calories are in a croissant compared to a piece of fruit bread, for example, we might be more likely to choose the latter. Tracking devices can be used to keep tabs on animals, children and, indeed, ageing relatives who might have a tendency to ‘go walkabout’. It wouldn’t be a surprise to find, in five years, that a ‘safety’ wristband becomes standard issue for the older and more doddery, to detect an unexpected fall, or even a heart attack or stroke. Indeed, the ability to monitor heart rate will likely become a default rather than an exception, and not only for the older population. And hospitals are already starting to use technology for patient monitoring — with advances as they are, the main question may well become, what took them so long? Remote monitoring is helping people stay at home longer, as well as ensuring that help is at hand when they need it, in cases like emphysema which involve long periods of relative calm between high-alert needs for treatment. As Ray Kurzweil notes4, “Some people such as Parkinson’s patients already have computers connected into their brains that have wireless communication allowing new software to be downloaded from outside the patient.” In healthcare, this is seen as shifting the balance from doctor to patient. "It's really the democratisation of medicine," says5 Dr. Eric Topol, director of the Scripps Translational Science Institute.

The positive use of technology for personal safety also extends to people working in risky environments — not least, of course, the military. In 20136, Lance Corporal Edward Maher, Lance Corporal Craig Roberts and Corporal James Dunsby died as a result of ‘neglect’ on the 16-mile march during selection for the Special Air Service. The official autopsy ruled hyperthermia on what was a very hot day. While candidates were given GPS trackers, it was understood that the equipment was not ‘fit for purpose’ — in other words, the tragedy could potentially have been avoided with better monitoring devices, working in parallel with the staff supposed to be monitoring the trainees. Outside of the armed forces, devices are being tested to health-monitor drivers of trains and long-distance transport7, with the obvious benefit of keeping tabs on whether drivers are likely to fall asleep at the wheel. Another area is sport. In American Football for example, players have the option8 of wearing head sensors in case of concussion.

Yet to be seen is how such technologies will affect play on the pitch. Considering American Football again, it has long been speculated that helmets make players hit each other harder; equally, truck drivers are dubious about whether technology is being installed entirely for their benefit — if it feels like they are being spied upon, they probably are. And might it mean that we push people closer to their physical limits, if their vital signs are still below ‘acceptable thresholds’? In sport, we are already seeing a change in attitudes to rehabilitation from injuries, and athletes could potentially take ‘informed risks’. The broader potential for misuse is not hard to point out — for example, the technology to determine whether an aged parent has had a fall could be misused in the accelerometer-based version of happy slapping9.

Meanwhile our brand-obsessed consumer culture could create, for our benefit, the cyber equivalent of the Jack Russell, snapping and yapping at our heels with product suggestions. Fitness information takes such possibilities to a new level — for example, could a higher than normal heart rate translate into an additional insurance risk? We have already seen insurers penalise people who like motorbikes, whether or not they have ever ridden one. What, indeed, about pre-determination – what do our histories reveal about our future actions, particularly if based on incomplete data sets? Given the coastline paradox, the picture is impossible to complete — so will it ever be good enough? Marketers seem content that such representations are still perceived to be better than the alternative. Knowledge is power, but in reality a partial picture could prove to be worse than none at all.

And who will be picking through the information? “Where, precisely, am I going to have time to look at the ongoing health statuses of my patients?” was the gist of an interview with one US doctor, shortly after Apple released its HealthKit framework (on the upside, another US doctor, Jae Won Joh, MD, noted that its correct use could “potentially solve one of the single worst problems in healthcare today: the inability to easily transfer patient records from one care location to another.”). Our hard-pressed medical services are already bowing under the weight of the so-called ‘walking well’. We are creating a potential haven for hypochondriacs, as people know more about their health and depend on the powers of Google — in which more extreme cases bubble to the top — for initial diagnosis. It’s a dilemma, undoubtedly, as we are encouraged to think more pre-emptively about our health. Simple hypochondria is one potential consequence, but there could well be more deeply felt psychological impacts. “An obsession with self‐quantification and self‐monitoring has the potential for individuals to become hypochondriacs or, more likely, hyper anxious,” write10 C. D. Combs, John A. Sokolowski and Catherine M. Banks in their book The Digital Patient. From current fears of internet addiction11, we shall move on to the psychological ramifications of sensory overload, even as our offspring learn to live in a world where everything is measurable. Our online personas could be the loneliest on the planet.

This will take time, however. At the moment we still have only a weak ability to analyse data at the level that we can with our own brains. Consider something as simple as a smile: not a big, beaming smile but… let’s say, a slightly embarrassed one, like you remembered doing something you shouldn’t. Or an eyes-widening, still-can’t-quite-believe it smile, about a remembered piece of unexpected good news. Or a wistful shrug of a smile, as if you knew it was too good to be true. The human face has some forty-three muscles, with which we can convey a plethora of emotions to others, and indeed to ourselves. According to a 2014 computer-based study12 from Ohio State University, however, only 20 distinct facial states exist — for example, happy surprise was different from angry surprise. Which, it has to be said, will come as no surprise to any human being. A kind of technological hubris exists around all such studies, which suggest that only compute-proven features of any complex system are worth thinking about. As humans, however, we instinctively know there is ‘more to it than that’.

And indeed, there is more — sometimes it seems like the world is moving so fast that, as we live in the moment, it becomes easy to forget how short our existence on this planet is. But what happens to our online identity when we die? While not unexpected, given his decade-long battle with cancer, the death of Roger Ebert, long-time film critic for the Chicago Sun-Times, was13 nonetheless sudden. Just two days before he died, the writer published a blog post, titled “A Leave of Presence,” which announced his intent to work a bit less hard. “I’ll see you at the movies,” he wrote, little expecting those to be his last words. Following his passing, Roger’s Twitter feed has been taken over14 by his wife Chaz and colleague Jim Emerson, both of whom he had asked to contribute. “Last week R asked me to tweet new stuff on re.com. Never dreamed it would be memorials,” Jim tweeted. Both have also been contributing to his blog, and his wife updated his Facebook page, as is right and fitting.

Some of us may be lucky enough to have partners, family members and colleagues that adapt our better-known digital presences – our blogs, Twitter and Facebook pages – and turn them into memorials15. But this requires time, care and in some cases, technical ability. And indeed, who knows what such sites will have become in even a few years’ time, when our fickle race turns its attention to the next big thing? And what happens to all the data accumulated about our living selves? At the moment, with data quality as it is, it festers in databases and on email lists. Perhaps our online identities are merely the vapour trails of human existence, data points destined to languish as mere background noise in the ever increasing information mountains we create. So what if, in the future, we can still catch a glimpse of the day-to-day doings of friends and heroes alike, snapped in that moment just before we logged off for the last time, our unfinished email conversations and online scrabble games held in eternal limbo.

Or maybe, in this barely started business we call the technology revolution, some bright spark will come up with a way to bring together the different strands that we created in life, enabling16 e-commerce vendors to mark accounts as ‘deceased’ and take them offline, even as they archive and clear away the detritus of our digital lives. The challenge, in the fast-moving world of technology, is ensuring that such a service will exist when it is needed – four of the seven services listed five years ago for this purpose no longer exist. Perhaps (as some have speculated17 and now tried18) we will reach a stage where our personalities live on, beyond our physical deaths. This is still the stuff of science fiction – potentially a good thing. Until then, the only real answer is to create a presence in life which one would be happy to have in death. Because, for now, the information we create online is not going anywhere, fast or slow.

While we live, at least, over the next decade we will be able to know more about ourselves and others than we ever could. Our every action is, to the information mountains we are creating, the digital equivalent of firing a bullet into a grapefruit. Every move we make, every interaction, every purchase, every login, every click and scroll sends off alerts in all directions, defining us not only in terms of login/password combinations but also casting behavioural shadows that we are only just learning how to tap. As in sport, increasing knowledge about ourselves will inevitably lead to changes in our behaviour, akin to a very human version of Heisenberg’s Uncertainty Principle. As time passes, we shall have to come to terms with the fact that the data-rich world we are creating is akin to stumbling upon a new dimension, driving the requirement for, and acceptance of, our virtual identities. We are, to all intents and purposes, becoming our online representations. Are we losing or augmenting ourselves? As we shall see, the answer, potentially, is both.

6.2. Augmentation in progress

In 19911, German art-cinema director Wim Wenders released his future-facing masterpiece of prescience, ‘Until the End of the World’. Within the convoluted, country-hopping plot, a sub-story involved a device that could record, then play back, images ‘recorded’ from someone’s head. While initially planned as a mechanism to enable blind people to ‘see’, it became apparent that it could be used to record and play back dreams. At first people saw it as smart, but then they realised just how addictive it could be. One hapless character was lost forever to the screen, watching her deepest thoughts on repeated loop. Around her the world carried on, regardless.

As we create such inordinate quantities of information about ourselves and our environments, it becomes a relatively small step to use it to enhance what we see and experience. In much the same way as the car dashboard provides direct feedback on our driving, so technology can offer a dashboard for our lives. Even smarter, potentially, is linking such feedback to what we see, hear and otherwise sense, as we go about our daily existence. Just as computer gamers are used to seeing words and stats hovering around the characters they control on the screen, so we can add data to the people and objects in our field of view.

This ‘augmented’ vision of vision was first documented in 1945, by a US engineer called Vannevar Bush. Just as the war was coming to an end, he wanted to extend some of his wartime learnings in a more peaceable direction. “It is the physicists who have been thrown most violently off stride, who have left academic pursuits for the making of strange destructive gadgets,” he wrote2 at the time. “Now, as peace approaches, one asks where they will find objectives worthy of their best.” One of Bush’s ideas was to create a head-mounted camera, “a little larger than a walnut,” that would enable scientists to record images as they worked. He also suggested a way of converting speech to text, and using a stenotype-like device to make other notes. Bush recognised that his advanced ideas would not immediately become reality. “Two centuries ago Leibniz invented a calculating machine which embodied most of the essential features of recent keyboard devices, but it could not then come into use,” he remarked.

Indeed, it would be some 50 years before Dr Mark Spitzer3 founded MicroOptical, a company built on US Defense research (DARPA) funding to create a spectacles-mounted display. Designed (inevitably) for military use, the MicroOptical concept was to use a clever combination of lenses and prisms to provide a large amount of information through a very small screen, positioned close to the eye. In reality, it was also an idea before its time — as well as being very expensive, the display resolution was too low and could result in headaches and feelings of nausea. While the company found itself a victim of the post-millennial financial meltdown, it was not to vanish completely: in 2013 Google acquired the MicroOptical patents4, shortly after which its Google X division, with Mark Spitzer as a director, announced the Google Glass project. With Glass, the company was looking to pick up on augmentation and bring it to the mainstream — it was announced with much aplomb, only to be withdrawn a year later, before coming back again as a corporate platform in 2015. The bottom line, it appeared, was that the concept still wasn’t good enough to be of sufficient general use.

The phrase ‘augmented reality’ (AR) has been used to describe a combination of technologies working in tandem. First we need the data feed, of course. Then we need a digital video stream of physical objects or scenes, adding information to the stream in real time and displaying it on a suitable screen. We may have had a wait, but today all such pieces are in place. Right now AR can be achieved with a smart phone with a camera and internet access — a video is captured, uploaded and recognised, then additional information is downloaded to add to what is seen on the display.
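
Schematically, the loop looks something like the TypeScript sketch below. Every function here is a trivial stub invented for illustration rather than a real AR library; the point is the shape of the capture, recognise, fetch and overlay pipeline, not any particular implementation.

```typescript
// Schematic AR pipeline: capture a frame, recognise what is in it, fetch
// additional information, and draw it over what the user sees. All stubs.
interface Frame { pixels: Uint8Array }
interface Recognition { label: string; x: number; y: number }

const captureFrame = (): Frame => ({ pixels: new Uint8Array(640 * 480 * 4) });

const recogniseObjects = (_frame: Frame): Recognition[] =>
  [{ label: "pizza outlet", x: 120, y: 80 }];            // stand-in for image recognition

const lookUpInfo = async (label: string): Promise<string> =>
  `${label}: open until 23:00`;                          // stand-in for the downloaded data

const drawOverlay = (_frame: Frame, text: string, x: number, y: number): void => {
  console.log(`overlay "${text}" at (${x}, ${y})`);      // stand-in for rendering on screen
};

async function augmentedRealityStep(): Promise<void> {
  const frame = captureFrame();                          // 1. the video stream
  for (const found of recogniseObjects(frame)) {         // 2. recognise objects in view
    const info = await lookUpInfo(found.label);          // 3. fetch the data feed
    drawOverlay(frame, info, found.x, found.y);          // 4. add it to the display
  }
}

void augmentedRealityStep();                             // in practice, once per video frame
```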

Examples of AR at work fall into three groups. The first, which we might call ‘symbol-based’, is the simplest and perhaps the most fun. The 'symbol' is a pre-defined, fixed image that can be recognised by a program installed on a smart device, which can then add information in real time — for example adding a 3D avatar, or replacing somebody's head with a cartoon image5. This model is not dissimilar to using QR codes6 — squares of pixels which can be photographed and interpreted to link to online information. Indeed, examples exist7 of using QR codes as the basis for symbol-sensitive AR. ‘Object-based’ AR takes things one step further, in that image recognition software can identify specific objects and then construct a virtual world around them. Examples include Metaio's digital Lego box8 and apps to show how, say, to remove toner cartridges or other products from their packaging. Finally we have ‘context-based’ AR, which captures the entire surroundings and adds information prior to displaying both. Google Sky Map9 is a simple, effective example of what can be done; other obvious applications are for travellers and direction finding, picking up specific street features and identifying the nearest pizza outlet, say. In these cases the video feed is supported by GPS information — so the software doesn't have to work out which street one is on from scratch! And other organisations are looking into visualisation10 of complex data.

While AR has yet to find its killer app, it continues to develop. All of these models are being tested out in various ways, with augmented capabilities finding their way into games, such as incorporating a camera into a model helicopter and turning it into a virtual gunship11. Gravity-related features12 or face recognition (as incorporated13 in Google Android's Ice Cream Sandwich) are being used now. Indeed, technologies don’t have to move with you — they can sense your movement. The company Leap Motion first came to market with a tiny device, the Motion controller, which uses two infra-red cameras to detect the location, and movement, of hands — while some14 saw it as an idea before its time, it is now being integrated as a controller directly into laptop computers. Meanwhile, Microsoft’s Kinect controller can detect and interpret whole body movement, both adding to the ‘augmented’ experience. Augmented reality techniques are also being used in sport, for example by German company Prozone, whose technology is used by many European football teams to analyse team movements during and after a match.

MicroOptical was among those recognising the need for a more immersive experience; indeed, it was its experiments with the MyVu heads-up display (the company renamed itself MyVu) that led to it running out of money. To all and sundry, however, the link was obvious, particularly given the growing interest in three-dimensional, virtual reality environments. 3DVR environments started to come to the fore at the turn of the millennium — Second Life was the poster child of this movement, another area that connected with people’s desires to share some aspects of their lives (including their desires) in a virtual environment. Unfortunately, Second Life also appeared to be an idea before its time, reaching a point beyond which its appeal began to wane. Of course it is typical of the technology industry to try technology combinations and see what sticks (sites like Augmented Planet speculate about integration of near-field communications), but the lure of the immersive environment refuses to go away. The Oculus Rift is the latest in a series of attempts to deliver on Bush’s original ideas — the brainchild of Palmer Luckey. In a style characteristic of many techno-entrepreneurs, Luckey left college to found OculusVR in 2012, starting a Kickstarter campaign which raised $2.5m in a remarkably short period: his lucky break came when he met gaming industry hero John Carmack, co-creator of Doom, following which (one way or another) the project has seen a meteoric rise, culminating in being bought for $2 billion by Facebook. The Rift has been some four years in development, during which time a number of other technologies have been advancing — not least display resolution, with today’s mobile phone screens working at scales almost invisible to the human eye. In addition, the Rift now comes with built-in devices that enable the display to respond to the head movements of its wearer — that is, so-called accelerometers.

Accelerometers also have a long heritage. McCollum and Peters created15 the first ‘resistance bridge’ accelerometer in the 1920s, weighing about a pound — its intended uses were in engineering, such as sensing movement in bridges. It was the advent of silicon-based (“micro-electromechanical systems”, or MEMS) accelerometer technology that changed the sensor world in the 1980s, adding to the pool of sensors that could drive data into the Internet of Things and, of course, providing a basis for a number of unexpected uses of today’s mobile phones — such as the plethora of apps now available to help you with your golf swing. Accelerometers are finding their way into gloves and all kinds of other appendages (it should come as no surprise that the porn industry is investing a great deal of time and money in a variety of ‘attachments’ — another form of ‘augmentation’).

As such ideas illustrate, this more immersive world, combining real-life and computer-generated information, can deliver a great deal more than simply providing heads-up displays. It looks probable that AR will trigger a resurgence of the virtual worlds we saw at the beginning of the millennium. One has to wonder what might have happened if Second Life had become popular at the same time as the rise of the virtual reality headset — indeed, the site is still online. But now, with computers more powerful and with interfaces more engaging, such immersive environments can have a second bite of the virtual cherry, as can massively multiplayer online role-playing games (MMORPGs) such as Lord of the Rings Online and World of Warcraft, themselves going through a period of declining demand. All that people are waiting for, potentially, is appropriate kit.

But where will we go if we get it? For better or worse, we are undoubtedly experiencing a closing of the gap between the real and the virtual world. While it’s worth thinking about the two separately, self-proclaimed16 ‘social media theorist’ Nathan Jurgenson warns against what he calls ‘digital dualism’ — that is, seeing the data and ourselves as separate. “I fundamentally think this digital dualism is a fallacy. Instead, I want to argue that the digital and physical are increasingly meshed,” he said in 2011. At the time, just a few years ago, his words may have been controversial, but now they appear prophetic. “A new heuristic for human experience now blends physical and virtual space in personal, asynchronous time and physical and virtual space in group oriented, synchronous time,” explain17 Sally A. Applin and Michael Fischer, anthropologists at the University of Kent.

The potential for such technologies is profound. Surgeons, for example, can in effect shrink to the size of the cell structures they are looking to repair, or the tumours they are trying to destroy. Even examples that aren't particularly exciting are profound: consider the use of Zebra Technologies' AR capabilities, which enable warehouse managers to keep an eye on their shelves of stock, with immediate information about inventory levels, wrongly catalogued items and so on. And meanwhile, yes, astronauts can travel to distant planets, and geologists can descend deep inside the earth, without having to be physically present. At the same time, however, they — we — will feel that we are present. As many philosophers have said, "what we perceive as real" is more correctly stated as "real is what we perceive". It appears highly likely that we shall be able to see and experience far beyond our current field of vision.

As both AR and VR find their way into the mainstream, some downsides will inevitably emerge. For example, there are clear privacy questions around using facial recognition: right now we can sit on public transport or, indeed, walk into a shop with relative anonymity, but this could easily change if, say, our facial features could be mapped onto a database of images. Cases of mistaken identity are also possible — what if you happened to look very similar to a known stalker? There's also the possibility that the AR equivalent of social networking takes hold: when Facebook acquired18 Oculus, there was much consternation that our very movements could be sold to third parties, for example. We always have the option to turn such facilities off, but we can't stop other people from using them, and it is in providers' interests to discourage us from doing so. And indeed, we may like our augmented lifestyles too much to want to return to merely existing.

And what about our children, born into this augmented world? Examples of AR books for kids already exist19, but what about when they are better able to navigate the immersive world than their parents? Panic not, we are not yet 'there' — the linkage of data and the physical world has some way to go. However, as AR and VR start to integrate other forms of information and as new applications are brought to the fore, we may yet reach a point where we drop the A or V, and it simply becomes 'reality', at which point the bar of what augmentation means has to rise.

The next couple of years are going to see some significant advances in both AR and VR, as screens become sufficiently readable, mobile data transfers sufficiently fast and local processing (that is, on the devices) sufficiently capable to tie all the necessary pieces together. The ramifications may be most visible in the consumer space but it is in industry — in healthcare, policing, education — that the impact will be most profound. How precisely will it be felt? Perhaps we don't have to look any further than the many books, games and films that have already been created on this topic, from Tron to The Matrix (and of course Wim Wenders' own magnum opus), Myst to Warcraft, John Brunner's The Shockwave Rider to Tad Williams' Otherland. What we learn from such stories, in general, is that the fight remains between good and evil, won through acts of immense inner strength in situations of utter peril. Perhaps this suggests a failure of our own imagination to fully understand how our behaviours might change; or, more likely, the fundamental facets of the human psyche, honed over millennia, will pervade whatever the backdrop, in the physical or digital world.

Whatever the ramifications, AR and VR are coming — but even then, technology isn’t stopping with simple augmentation or the creation of immersive worlds. Not only will we be able to interact with the environment in new ways, we will also be able to create it. Welcome to the world of 3D printing.

6.3. We are all makers now

While the exact origins of the lowly screw1 are lost in history2, it is hard to imagine today how we would function without such a device. The ability to create a helical thread on metal lay first in the domain of the smith. In the 14th century, metal workers who could create locks were in high demand, particularly among3 European nobility. Creating a helical thread on metal was just one technique required by craftsmen who turned their hands to locks, then clocks and indeed, guns, torture equipment4 and other contraptions.

The use of a helically threaded device, if tapered, to hold two pieces of wood together did not go unrecognised. It's a simple question of maths — a screw works in much the same way as a pulley, in that it is easier to twist something to cause its forward motion than it is to push the thing directly. Wine presses first took advantage of this capability back in Roman times (and probably earlier), as did the printing press, both of which used large wooden screw mechanisms to exert the maximum pressure with minimal effort. It was only a matter of time before someone spotted the opportunity to use the same principle to fasten one piece of wood to another: once the flexibility of the screw was recognised, it quickly became an essential element of a craftsman's toolkit. Indeed, it is no coincidence that people who work with wood are still referred to as 'joiners'.
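
To put a rough number on that intuition: the ideal mechanical advantage of a screw is the distance the effort travels in one turn divided by the distance the screw advances (its lead). The sketch below, in Python, uses illustrative figures for a hand-driven screw; the handle radius and thread lead are assumptions, not measurements of any particular screw.

```python
import math

def screw_mechanical_advantage(effort_radius_m, lead_m):
    """Ideal mechanical advantage: distance moved by the effort in one turn
    (the circumference it traces) divided by the distance the screw advances."""
    return (2 * math.pi * effort_radius_m) / lead_m

# Illustrative values: a 15 mm screwdriver handle radius and a 1 mm thread lead.
ma = screw_mechanical_advantage(0.015, 0.001)
print(f"Ideal mechanical advantage: about {ma:.0f}x (before friction losses)")
```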

The screw-based mechanism could also be used as a central element of a lathe. By twisting it back and forth, the element could be moved forwards and backwards with a certain degree of accuracy. In a stunningly simple example of a feedback loop, it wasn't long in historical terms before someone worked out that lathes could be used to speed up the production of screws. One of the first mentions5 was in Das mittelalterliche Hausbuch — literally, the Medieval Housebook — a manuscript of around 1480 associated with the house of Waldburg-Wolfegg. In the 1700s a Frenchman, Antoine Thiout, used a screw mechanism to further enhance the accuracy of the lathe, meaning better screws. This relationship between devices and their method of manufacture underpinned the industrial revolution, with screws the archetypal example — in turn being manufactured and enabling the manufacture of so many things. Aided by the standardisation of screw threads in 1841[^ Whitworth standard], the industrial revolution was, in fact, one big feedback loop, exploiting the potential differences created by human-powered automation and pushing the 'West' to the front of the global economic race, at the same time as nearly destroying itself: it is no coincidence that the two greatest conflicts in history took war to an industrial level; nor, for that matter, that the First World War immediately followed what has been termed the 'second industrial revolution'; nor that some of the biggest advances in technology appeared to take place immediately post-1945.

Over recent decades the West has looked to the East for its manufacturing needs, with changes in the balance of power once again a consequence of economic potential difference. Cheaper production started in Japan, extending to other countries including South Korea and, of course, China. As the communist regime started to open up its borders to trade, it stumbled upon a source of economic growth. As technology eased the challenges of communication, the gap closed still further, as a consequence enabling one of the poorest families in the world to become one of the richest. Today, if you are a small business looking to import some equipment or components from China, you need look no further than Alibaba. The company's online trade catalogue contains 400 million products, from over a million manufacturers and suppliers — not just electronics, but everything from mood rings to car spares. If it can be bought and is legal, you will find it listed.

In 1999 the charismatic, if diminutive, entrepreneur Jack Ma gathered 17 people in his flat in Hangzhou, spoke for two hours about his aspirations, then asked them to invest. "Everyone put their money on the table, and that got us $60,000," he said. A more fundamental example of an unlikely person stumbling upon a moment of staggering potential difference would be hard to find. Ma's parents were practitioners of Pingtan6, a traditional form of musical storytelling originating in the Suzhou region of China. His early life reads like an anthology of thinking creatively in difficult circumstances — his parents were not that well off, his father would beat him, his grandfather (a Nationalist Party member) was persecuted as an enemy of the Communist revolution, and Pingtan was banned7 during the Cultural Revolution in 1966, when Ma was two years old. According to the stories, Ma frequently8 failed exams, and was rejected even by KFC; all the same, he learned English as a tour guide from the age of 12, later taught English, and spent a formative month in Australia (where he learned that the whole world was not like China).

In 1994 Ma launched a translation service, and it was while working for a trade delegation in the US as a consequence that he discovered the Internet. Shortly after, he decided to launch a Yellow Pages service dedicated to China. "We searched the word 'beer' on Yahoo and discovered that there was no data about China," he remarked9. "I borrowed $2,000 to set up the company. I knew nothing about personal computers or e-mails. I had never touched a keyboard before that. That's why I call myself, 'Blind man riding on the back of a blind tiger.' " A few years later (and after other business ventures) Ma was working for the Chinese Ministry of Foreign Trade and Economic Cooperation when he had to return to his tour guide roots — he was assigned the task of taking Jerry Yang, co-founder of Yahoo, to see the Great Wall. All such events led Ma to realise that Internet entrepreneurs were just ordinary people, like him: if he didn't seize the opportunity, someone else would.

Alibaba was the first in a series of incredibly successful ventures for Jack Ma. In 2003 he launched an eBay-like site, Taobao, for consumers. Ma had no shortage of new ideas — as his father said10 to him, "If you were born 30 years ago, you'd probably be in a prison, because the ideas you have are so dangerous." Indeed, his Kung Fu nickname is Feng Qingyang, which means 'unpredictable and aggressive'. Ma's history has not been without controversy — a decision to hand email data over to the Chinese government led to journalists being incarcerated, for example (in his own words, Alibaba should "be in love with the government but never marry it."). But the market was mostly interested in his platforms. When Alibaba IPO'ed in 2014 for a record-breaking $150 billion, Ma instantly became worth $25 billion and has been ranked11 the 30th most powerful person in the world[^ Most recently he has become an advocate of sustainability. He has been quoted as saying, "Our water has become undrinkable, our food inedible, our milk poisonous and worst of all the air in our cities is so polluted that we often cannot see the sun. Twenty years ago, people in China were focusing on economic survival. Now, people have better living conditions and big dreams for the future. But these dreams will be hollow if we cannot see the sun."].

Ma's success story goes to the heart of the changing nature of manufacturing and supply. We have become used to a relatively small number of conglomerates controlling the products we buy and use, from car manufacturers to horribly acronymed 'fast-moving consumer goods' (FMCG) companies, which sell everything from yoghurts to toilet cleaner. In the new model, an enormous number of small companies are fronted by a common portal which facilitates the relationship, provides a shopping cart and offers some recourse in case of issues. Of course there's nothing to stop such businesses becoming larger — companies like Foxconn grew from similarly small beginnings — but size is no longer a gating factor. Neither is the model unique to China. In the West, Etsy offers a home for home-made and artisanal products so consumers can buy direct, and Fiverr connects creative types and designers to small companies. Sites also exist for collaboration on designs, such as Libelium's Cooking Hacks12. Looking even more locally, there are sites for fast food and for odd jobs; indeed (and as you might imagine), any opportunity to connect supply with demand has now been attempted in one way or another.

The result is a complex series of positive feedback loops, driving what some13 are calling the 'third industrial revolution'. The fact is that just as anyone can be a customer of such sites, anyone can also be a supplier — it is noteworthy that Alibaba's latest growth plans include the ability for western manufacturers to sell direct to China. Companies are no longer distinguished by their size — if you have a desire to bake cakes and distribute them locally in your spare time, or you want to advertise your copywriting skills, the chances are you can do so. Large-scale manufacturers that previously had a monopoly on supply and demand now have to compete and collaborate as peers with organisations a tiny fraction of their size. 'Co-creation' is becoming a common theme in the corridors of big business.

Just as people are being gathered to develop ideas and create things at a corporate level, for example through brainstorming hackathons in technical communities, so are get-togethers taking place for quilting or knitting in what is being termed the 'maker' movement. All the same, with its hippy-esque outlook, the maker movement looks awfully like the early days of computing — indeed, with the Homebrew Computer Club, beer seems to be a common theme. To understand where 'making' is going, and the relationship between corporate and individual, we would do well to look at the upsurge in craft beer production. At one point the number of independent breweries had fallen to a very low level, as the market was absorbed into consortia. But thousands have sprung up in recent years. They succeed or fail on the basis of how good they are. What has really enabled them to thrive, however, has been the lowering cost of manufacturing equipment, a factor for which we also have China to thank. Brewing equipment and supplies are available from Alibaba, of course!

Some might say that this hipster-esque reversion to craft is all nostalgia, but equally it is a return to traditional norms as diminishing thresholds bring the power of production — which created the industrial revolution in the first place — into the domain of individual people. As equipment costs fall, the consequences of this feedback loop go right to the heart of manufacturing, right to the direct creation of stuff itself. We can now produce our own stuff, through the power of what is being called 3D printing. Just as printers connect to a computer and enable images to be reproduced on a sheet of paper, so 3D printers can take computer-based designs and generate physical objects. And the impact is profound.

A wide variety of 3D printer models exists. The most commonly considered are those creating objects through the extrusion of quick-setting materials, built up layer by layer. The result isn't necessarily an object of beauty — there's only so much you can achieve by squeezing epoxy out of a tube. However, uses are already emerging that were not even considered by the original creators. For example, the epoxy resin can be replaced by chocolate or sugar paste to make confectionery, and printed parts can be milled to add more complex features. Meanwhile, 'printers' (or more accurately, computer-controlled cutting tools) can work with layered wood, fibre board or indeed metal to create shapes that can then be assembled. MIT has even 'cracked' — sorry — how to 3D print glass.
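
For a feel of how the extrusion approach works in practice, the sketch below slices a simple cylinder into horizontal layers and emits the kind of move-and-extrude commands a printer might follow. The layer height, segment count and G-code-style syntax are illustrative assumptions rather than the output of any real slicer.

```python
import math

def slice_cylinder(radius_mm, height_mm, layer_height_mm=0.2, segments=36):
    """Approximate a cylinder as a stack of circular layers, each traced as a
    polygon of move-and-extrude commands (G-code-like, purely illustrative)."""
    commands = []
    layers = int(height_mm / layer_height_mm)
    for layer in range(1, layers + 1):
        z = layer * layer_height_mm
        commands.append(f"; layer {layer}")
        commands.append(f"G1 Z{z:.2f}")                  # lift the nozzle one layer
        for step in range(segments + 1):                 # trace the circle's perimeter
            angle = 2 * math.pi * step / segments
            x = radius_mm * math.cos(angle)
            y = radius_mm * math.sin(angle)
            commands.append(f"G1 X{x:.2f} Y{y:.2f} E1")  # extrude while moving
    return commands

toolpath = slice_cylinder(radius_mm=10, height_mm=2)
print(len(toolpath), "commands, starting with", toolpath[:3])
```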

The gating factor for all such equipment is cost. Like many other types of manufacturing equipment, 3D printers started out expensive, and were therefore available for specific uses only, in places that could afford them — such as the R&D departments of well-funded Formula 1 teams, where they were used to create prototypes of aerodynamic parts to be tested in the wind tunnel (in this context the printer was the flat-bed type, using lasers to heat up specific points in a bed of resin; drain away the remaining fluid and a part would emerge, almost majestically). As 3D printing increases in popularity — itself a consequence of falling thresholds — it is also following the laws of shrinkage, on a parallel path to Moore's Law.

In parallel with cost, however, comes how 3D printing fits with everything else. It's not just the device — equally important are the designs that can then be printed. And when we look at these we see them following a similar, open path to other parts of computing. Open source 3D printing communities exist which offer, share and advise on design templates. Consider, for example, the one that has been built around the 'RepRap' open source 3D printer project. In an interesting mirror of the screw-lathe dynamic, RepRap printers can print themselves, or at least their component parts. Such a device does require some components that cannot be printed — steel bars, circuit boards and so on — so using the term "self-replicating" is a bit of a stretch. However the majority of pieces can be printed, as can spares, should anything break. Meanwhile, designs can be downloaded and used, or indeed uploaded for comment, enhancement and integration to create more complex objects. Both the capability, and the pieces that emerge, open new doors to down-to-earth, home-brew innovation.

In several years' time 3D printers may be more common in homes and businesses, for example to create coat hooks, doorknobs and other fittings. New possibilities are coming online all the time, for example printing14 'bionic' hands, or even genuine body parts. The long and the short of it is that we will be able to create new objects and devices in very short order, to help us in all sorts of ways. The ability to create products from recycled raw materials is also very attractive in so-called 'developing' countries, where transportation is an issue.

3D printing is not all a bed of roses. The 3D printing of handguns is often cited as a cause for concern, for example. Meanwhile some say that 3D printing threatens to take away the toy market, or to enable the stealing of ideas — it probably won't achieve the former, but perhaps the latter. This will certainly make uncomfortable reading for manufacturers of simpler objects, but in some ways such organisations have been using industrial-scale 3D printers for years, and they will continue to extend their industrial capabilities (or fall by the wayside). As a consequence of this complex array of feedback loops, we are most likely to see a bifurcation between the creators of small-footprint manufacturing equipment and the users of such equipment. At which point, the recipes and experience become the most important factors. Indeed, a couple of decades ago plumbers seemed to think that their businesses would be ruined by push-fit joints, but they weren't. Quite the opposite. Overall, it looks like we will continue to have a relationship between big and small, between commodity and smart, between community and corporation.

3D printing will undoubtedly push up against the Internet of Things, the world of Big Data, of Machine Learning and so on; no doubt we will be printing sensor housings and creating new tools and items. But will we all have complex equipment in our kitchens, or will we rely on local '3D print shops' to create the objects we need? Will "Hang on, I'll just print one off" become a familiar cry? No doubt big business will play a substantial role, as Amazon has intimated with its suggestion of housing 3D printers in trucks so it can create objects en route to their destination. It is no surprise that 'hackathons' garner a great deal of sponsorship from the corporate world, which continues to watch with interest.

At a deeper level, smarter 'printers' are starting to look a lot like the kinds of robots intrinsic to large-scale manufacturing, robots which are on the brink of becoming a more familiar sight in our daily lives. Indeed, we may be able to automatically build our own robots, at which point the 3D-printed gloves will be off. With this in mind, and to round off this journey into the near future, let's take a look at the robot revolution.

6.4. Is that a drone in your pocket?

“How1 bout’ it out there DX land, Big Red down here in Nevada looking for conditions…”

On a desert highway, a convoy of trucks drives from dawn to dusk. Even as the drivers are talking on shortwave CB radio, the engines are also conversing, comparing notes on the road situation. When one vehicle changes gear, they all change gear to account for the change in incline. When one brakes due to a slow-moving vehicle or an obstacle on the road, they all brake. When one starts to veer due to high winds, the others adjust their steering to compensate. This is not some futuristic scenario but reality, at least in Nevada, where semi-autonomous vehicles are now legal transportation; it is reasonably clear that automation will not stop with such 'core' features. "It's not a matter of if we'll have autonomous vehicles on the road, it's when," remarked2 Jules Moise, vice president of transportation technology for United Parcel Service.
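
A minimal sketch of the idea, assuming a very simple proportional rule: each follower listens to the lead vehicle's broadcast status and adjusts its own speed to hold a target gap, braking the moment the leader does. The message format and gain value are invented for illustration, not taken from any real platooning system.

```python
from dataclasses import dataclass

@dataclass
class LeadStatus:
    speed_mps: float   # lead vehicle's current speed, shared over the radio link
    braking: bool      # whether the lead vehicle is braking

def follower_speed(gap_m, target_gap_m, lead, current_speed_mps, gain=0.1):
    """Proportional gap-keeping: mirror the leader's braking immediately,
    otherwise nudge speed up or down to close the gap error gradually."""
    if lead.braking:
        return min(current_speed_mps, lead.speed_mps)      # brake with the leader
    return lead.speed_mps + gain * (gap_m - target_gap_m)  # anchor to the leader's speed

# The lead truck eases off on an incline; a follower 18 m behind (target 20 m) slows too.
print(follower_speed(gap_m=18.0, target_gap_m=20.0,
                     lead=LeadStatus(speed_mps=24.0, braking=False),
                     current_speed_mps=25.0))              # prints 23.8 (m/s)
```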

Co-ordination between our forms of transport has a long heritage: in 300BC use of the heliograph — a mirror mounted on a stand — extended to the sea as well as land, continuing well into medieval times, supported by "The manipulation of sails, the location of national ensigns or other flags or shapes upon the masts, lanterns and gunfire," according3 to Captain Linwood S. Howeth. The invention of the telescope in the 1600s triggered the creation of systems of maritime flags, as described4 in the British Fighting Instructions of 1665, for example:

“If the admiral put up a jack4-flag on the flagstaff on the mizen topmast-head and fire a gun, then the outwardmost ship on the starboard side is to clap upon a wind with his starboard tacks aboard, and all the squadron as they lie above or as they have ranked themselves are presently to clap upon a wind and stand after him in a line. And if the admiral make a weft with his jack-flag upon the flagstaff on the mizen topmast-head and fire a gun, then the outwardmost ship on the larboard side is to clap upon a wind with his larboard tacks aboard, and all the squadrons as they have ranked themselves are presently to clap upon a wind and stand after him in a line.”

One can only surmise that the term 'flagship' came about because such a ship acted as a central point to enable this co-ordination to take place. As the versatility of such a system became understood, the actions it described became more complex, codified first5 in Captain Frederick Marryat's Code of Signals for the Merchant Service in 1817, which evolved (after several revisions) into the International Code of Signals, drafted in 1855. As all true seamen know, blue and red indicates "I am altering my course to starboard," while a black spot on a yellow background shows "I am altering my course to port."

Such visual signals are still in use today, at sea and on land (for example on the railways). As technology advances, however, the need for line-of-sight mechanisms diminishes, with the advent of more 'advanced' techniques such as radio transmission and, of course, data communications. In the main these are still primarily used to provide information to a driver or pilot; but, as we have seen, this interchange is moving to the vehicles themselves. We saw in Chapter 2 just how sensors are changing the transport systems upon which we depend. As planes, trains and automobiles gain processing capabilities, they become better able to make sense of the signals they are receiving. As a result, the notion of the smarter, 'connected' vehicle has moved very quickly from futuristic articles to the design rooms of manufacturers.

Indeed, traditional vehicle manufacturers such as BMW are all over it, even if they admit their current efforts are more to showcase the potential of the new capabilities than to deliver viable products. Most recent attention has focused on how cars, and therefore drivers, can share information — for example, cab drivers in Boston have been testing setups to record and share information about potholes, so that drivers can be given advance warning of danger areas.

Right now, there's a bit of a land grab as car manufacturers look to engender brand loyalty among drivers. BMW has its drivers' community, for example, and there are others. This goes beyond the manufacturers; indeed, we are bringing our own sensing technology into cars. Even as our GPS-enabled mobile phones and TomTom navigation devices tell us the most appropriate route (using tried and tested shortest-path routing algorithms), they are broadcasting our collective speeds, information which is read and interpreted to update potential journey times. As Mihir Parikh, digital consultant at BearingPoint, has noted, "With the way we share information about our own locations when we look at Google Maps, we are doing more than looking at a map. We are the map." This involvement of people in the context of mapping is profound. But does it actually need a person at all, to own the phone that sits in the car and provides a GPS indicator? Indeed, is the whole concept of the driver already becoming moot?
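
Under the hood, such routing typically boils down to finding the cheapest path through a graph of junctions and travel times. The sketch below shows one classic approach, Dijkstra's algorithm, over a made-up road network; the junction names and travel times are purely illustrative.

```python
import heapq

def shortest_route(graph, start, end):
    """Dijkstra's algorithm: always expand the cheapest route found so far,
    until the destination is reached. Returns (total_minutes, route)."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        minutes, node, route = heapq.heappop(queue)
        if node == end:
            return minutes, route
        if node in visited:
            continue
        visited.add(node)
        for neighbour, cost in graph.get(node, {}).items():
            if neighbour not in visited:
                heapq.heappush(queue, (minutes + cost, neighbour, route + [neighbour]))
    return float("inf"), []

# Hypothetical road network: travel times in minutes between junctions.
roads = {
    "home":        {"ring_road": 10, "high_street": 7},
    "high_street": {"ring_road": 4, "office": 15},
    "ring_road":   {"office": 6},
}
print(shortest_route(roads, "home", "office"))   # (16, ['home', 'ring_road', 'office'])
```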

A couple of years ago, Google started testing what has become known as a 'driverless' car. So far it has been quite successful, tested on regular roads and in cities. While Google's efforts require a human co-driver, they have clocked up many thousands of miles of driving time to date. Initial forays have been so successful that US legislators have recently changed the law to make the vehicle, not the passenger, liable should there be an accident. And in 2016 a pilot project will be rolled out across Milton Keynes' walkways to use driverless cars for late-night commuters.

The heritage of the driverless car has some elements in common with the restricted world of airport narrow-gauge railways. But look beyond the clunky repurposing, and we can start to see that the driverless car is much, much more than that. Consider the component parts: we have a mechanism that enables a vehicle to move; we have a series of sensors to ensure it understands its environment; we have a data interchange mechanism to enable it to co-ordinate with the wider world; we have a central processing capability to interpret the data it receives and make decisions; and we have electric servo motors that can adjust speed, steering and gears automatically. Even with its wheel-based limitations, such a platform has broad application. The 'car' concept only exists because of its use in transporting people. As soon as it extends beyond this relatively limited role, a variety of other kinds of transport can be considered — kinds which need not incorporate a driver at all. Vans of the future could be tall and thin, or short and squat, or run around like the annoying little droids in Star Wars. They could be used exclusively to transport apples — bringing a whole new meaning to 'upsetting the applecart'. Or we could imagine pizza boxes with built-in transport mechanisms. And we are already seeing minesweepers6 and roadside bomb detectors.
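
Put together, those component parts amount to a sense-decide-actuate loop. The sketch below is a deliberately simplified illustration of that loop; the sensor fields, thresholds and steering gain are invented for the purpose and bear no relation to any production vehicle.

```python
from dataclasses import dataclass

@dataclass
class SensorReadings:
    obstacle_distance_m: float   # nearest obstacle ahead (lidar/radar, illustrative)
    lane_offset_m: float         # drift from the lane centre, positive to the right
    speed_limit_mps: float       # received over the data link from map services

def decide(readings, current_speed_mps):
    """Central processing: turn sensor data into commands for the servo motors."""
    commands = {"throttle": 0.0, "brake": 0.0, "steering": 0.0}
    if readings.obstacle_distance_m < 20:                   # something too close ahead
        commands["brake"] = 1.0
    elif current_speed_mps < readings.speed_limit_mps:
        commands["throttle"] = 0.3                          # gently pick up speed
    commands["steering"] = -0.05 * readings.lane_offset_m   # nudge back towards the centre
    return commands

print(decide(SensorReadings(obstacle_distance_m=50, lane_offset_m=0.4,
                            speed_limit_mps=30), current_speed_mps=25))
```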

In other words, ladies and gentlemen, we have a wheel-based robot. Too much, you say? Surely robots are humanoid? Sensor-based robots have 'arrived' in limited forms and in a variety of shapes and sizes, such as vacuum cleaners and lawnmowers. And what are these but devices that can follow pre-defined routes while avoiding obstacles? Indeed, it could also be argued that semi-autonomous cars share a number of similarities with that incoming wave of handy, fun, highly useful and sometimes dangerous airborne devices known as drones, which, as we saw in an earlier chapter, are already being put to use in vineyards. Drones are little more than pilot-less planes, in that they still need to be remotely controlled. And, like many technologies, they are being seen both as benign — Amazon is talking about them for parcel drops, for example, and they are finding a use in film making and sports, and in going where it is unsafe for people to go, such as oil fields7 — and as downright scary, particularly when considered8 in the military context.

This is not without some controversy. One of the main uses of drones is to carry camera equipment, enabling a bird's-eye view of virtually anything, from9 fireworks displays and rock concerts to the neighbour's house, a sports match or a government building. The number of conflicts of use makes for a complex array of legal issues, so it is no wonder that organisations such as the US Federal Aviation Administration are still getting their legal heads around drone use. "If a realtor films buildings for fun using a remote controlled quadcopter that's legal. But if she takes that same quadcopter and films buildings as part of her job, that is illegal. If a farmer flies a model aircraft over his cornfield doing barrel rolls and loops, that's legal. But if he uses the same model airplane to determine how to conserve water or use less fertilizer that's illegal. This is government regulation at its worst," wrote Gregory S. McNeal in Forbes10.

As we are already seeing, the threshold of affordability is dropping to such a point that drones are becoming quite common. In addition, drones can become very small. The mosquito drone may still11 be a proof of concept, but that will not be the case forever. If you were thinking they are no different to model aeroplanes, you would be largely right. The difference is that they can incorporate satellite links and, potentially, some primitive intelligence to evade obstacles. From remote-controlled drones we shall start to see increasing smartness, until self-flying drones become practical. Companies like TOR Robotics12 (whose software does not distinguish between cars and flying objects) and products like the Lily13, a "personal cameraman in the sky", are investigating the potential — the latter follows its subject via GPS, and it's not hard to see both the fun to be had, say as a camera follows you down a ski slope, and the potential disaster if everyone starts having one.
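
A follow-me drone of this kind can be pictured as a small control loop: compare the drone's GPS fix with the subject's, head towards it, and hover once close enough. The sketch below uses a flat-earth approximation and invented stand-off and speed limits; it is illustrative only, not the Lily's actual control logic.

```python
import math

def follow_me(drone_lat, drone_lon, subject_lat, subject_lon, standoff_m=10.0):
    """Return (heading_degrees, speed_mps) steering the drone towards its subject,
    slowing to a hover once within the stand-off distance. Uses a flat-earth
    approximation, which is fine over the few metres involved."""
    metres_per_degree = 111_320                       # approximate metres per degree of latitude
    dy = (subject_lat - drone_lat) * metres_per_degree
    dx = (subject_lon - drone_lon) * metres_per_degree * math.cos(math.radians(drone_lat))
    distance = math.hypot(dx, dy)
    heading = math.degrees(math.atan2(dx, dy)) % 360  # 0 degrees = due north
    speed = 0.0 if distance <= standoff_m else min(5.0, 0.5 * (distance - standoff_m))
    return heading, speed

# Subject roughly 20 m north-east of the drone: head about 45 degrees at modest speed.
print(follow_me(51.50000, -0.12000, 51.50013, -0.11979))
```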

And why stop with wheel or wing? We are starting to see self-navigating boats of all shapes and sizes — there is a competition for these too: "The Microtransat Challenge is a transatlantic race of fully autonomous sailing boats," explains its web site14. The positives are numerous — for example, the US Navy is testing out autonomous autopilots15 that can run on any boat, enabling danger zones to be investigated; these have earned the title 'swarmboats16' and can be used as guides, as protection, or as decoys. We can also imagine shipping containers loading themselves onto crew-less ships (or simply making their own way). But equally, of all the less salubrious thoughts of where this can go, self-navigating boats in use by drugs cartels are the most obvious, potentially dressed up to look like typical maritime litter. Don't be at all surprised to see a fly-float-drive delivery drone in the future.

The long and the short of all of this, however, is that we are seeing the rise of the robot. The very simple function of moving something from one place to another will more than occupy us for the next few years; as robots do so, we shall become more comfortable with such devices extending to other tasks, from road sweeping and house sweeping to, ultimately, doing the washing up and feeding the cat. Be in no doubt that robots are already among us — if not walking, they are rolling themselves around, which may be the most appropriate form of motion. Or simply standing still, such as Jibo17, which incorporates face recognition software and can interact with a person.

The rise of the robots will extend our reach still further: they will become our eyes and ears, and, potentially, offer self-defence. And they are getting smaller, as illustrated by the use of pollination 'creatures' in agriculture. Robots will enable us to transcend our mere humanity, extending our field of vision and our capabilities way beyond what we can do currently. For good, and for ill.

Too soon? Not really, as we already see one-off examples. While it may have cost billions to build, the Mars lander represents the shape of things to come — no doubt we shall soon see similar contraptions criss-crossing our deserts, our ice floes and our oceans, going about their business in ways and places we could not otherwise. And, indeed, flexible joints and active suspension systems are enabling robots to stand on their own two feet. Google-owned company Boston Dynamics has been testing18 a variety of two- and four-legged mechanical beasts, partly in response to DARPA's robotics challenge, which19 had $2 million in prize money — this was won by a robot called DRC-Hubo, which could use wheels for efficiency and legs for climbing. Robot soccer world cups have been taking place for some years now; while these have mainly involved human-form robots falling over, stumbling and otherwise failing to get a ball into a net, they indicate the shape of things to come.

For good, and potentially for ill. It is not hard to imagine controversy surrounding front-line military use of robots (in a bunker in the Mojave desert, a research centre is already creating such capabilities). Nor, for that matter, robot muggers or indeed incendiary-carrying drones threatening the safety of city streets, or robot police identifying and disarming them. Robots will generate even more data, and remain just as hackable as any other computer-based device, so we need to think about the security of the data they generate, and about the mechanisms to prevent the robots themselves from being hacked. One has to wonder: will we see outbreaks of robot flu, if our nearest and dearest animatronics 'catch' a virus?

But perhaps an even more pressing question than how robots might change the world is how robotic capabilities could augment our own. As with so many areas of technology, this has a long heritage — the world's first automobiles were not called horseless carriages for nothing. But we are not that far away from the kinds of cyber-suit that have thus far remained largely in the realms of science fiction. In Bochum in Germany, for example, at the Centre for Neurorobotic Movement Training, tests are underway20 of what is being called the Hybrid Assistive Limb (HAL). Originally developed in Japan, this enables paraplegic patients to control robotic legs, using weak movement signals from their damaged nerve endings. Similar initiatives are underway for damaged arms. Extrapolating these models, it isn't hard to see a use for the kinds of cyber-vehicles currently a feature of films such as Avatar. This is yet another example of cyber-enhancement, as is the drone that maintains a steady position over an individual's head.

So, yes, we are seeing the end of the motor industry. And of the postal service. And of pizza deliveries. And hoovering. But beyond this, these examples of cyber-enhancement are the most concrete indicators yet of our inevitable trajectory towards becoming super-human — not only all-seeing and all-knowing, but also equipped with previously unheard-of physical strength and stamina. And, as we will see, new ways of dealing with each other. Enter: the blockchain.

6.5. The truth is out there

In April 2014, Ben Schlappig chose1 to sell his house in Seattle. To anyone else this would be a normal affair, but for Ben it was different: rather than move to another property, he had decided to dispense with such earthly considerations. Not that he was financially rich, as such; rather, he had managed to amass so many frequent flyer points, using a variety of schemes and tricks (from affiliated credit cards to deliberately getting bumped from a low-cost flight), that he could now travel wherever he liked, whenever he liked.

Ben Schlappig is neither the first, nor the best at the game. This mantle has to go to Randy Petersen, who founded his frequent flyer consulting service back in 1985, embracing the Internet with his WebFlyer.com2 site a few years later — indeed, Ben’s own blog is hosted by one of Randy’s sites, BoardingArea.com3. Across hotels, flights, retailers and other services, the market for loyalty points in the UK alone is suggested to be running at about £6 billion’s worth of (potentially untapped4) value.

The fact that frequent flyer miles and other loyalty points can be bought and exchanged5, for products and even money, should leave nobody in any doubt: these are, to all intents and purposes, forms of currency. It has been estimated6 that an 'airmile' is worth 10-12 cents. Not a great deal perhaps, but as we all know, look after the pennies… And it's not just loyalty points. M-Pesa, Kenya's mobile money success, started out as credit points for mobile phone calls. The story is that when the banks complained that its provider, Vodafone, wasn't a bank, the company was immediately granted a banking license. Meanwhile, in gaming we have Xbox and Steam7 points, and in-game auction houses with the ability to top up with cash.

So perhaps it should have come as no surprise when Bitcoin arrived on the scene to almost immediate acceptance. Bitcoin isn't even the only 'crypto-currency' in play8, though it is the de facto currency on the dark net. This is down not only to its virtual nature: it comes with an additional twist, in that it was designed to be distributed. Whereas other currencies — from traditional money to loyalty schemes — each required their own central authority, Bitcoin and others use a powerful technology known as the Blockchain. The Blockchain started out as a "system of record" for Bitcoin. Simply put, if I give you a Bitcoin, how can the transaction be verified as having taken place? The answer is for a third party to oversee the creation of a block — a package of data containing not only this transaction but multiple others, plus some 'other stuff' to enable the authenticity of the block to be proved. The Blockchain does not exist in any one place; rather, each node on the Bitcoin network keeps a copy. Every time a Bitcoin transaction takes place, it is written to each and every copy of the Blockchain. This, plus the (also distributed) checking process in place, makes it almost impossible to conduct a fraudulent transaction.
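
The chain-of-blocks idea can be sketched in a few lines: each block packages some transactions together with the hash of the previous block, so that tampering with any historical record breaks every hash that follows. The sketch below is a bare-bones illustration of that principle only; it omits Bitcoin's proof-of-work, its distributed checking and its real transaction format.

```python
import hashlib
import json

def block_hash(block):
    """Hash the block's contents, which include the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    previous = chain[-1]["hash"] if chain else "0" * 64
    block = {"transactions": transactions, "previous_hash": previous}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify(chain):
    """Recompute every hash; any edit to past transactions shows up immediately."""
    for i, block in enumerate(chain):
        expected_previous = chain[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["previous_hash"] != expected_previous or block_hash(body) != block["hash"]:
            return False
    return True

ledger = []
add_block(ledger, [{"from": "alice", "to": "bob", "amount": 1}])
add_block(ledger, [{"from": "bob", "to": "carol", "amount": 1}])
print(verify(ledger))                          # True
ledger[0]["transactions"][0]["amount"] = 100   # tamper with history...
print(verify(ledger))                          # ...and verification fails
```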

Note that this means Bitcoin was never designed to be anonymous. "Bitcoin is probably the most transparent payment network in the world," explains Bitcoin.org. Each Bitcoin transaction9 references a 25-byte address, itself derived from a hash of its instigator's public key and therefore traceable to that instigator. It has to be, otherwise it wouldn't be provably correct. Once independently verified, each transaction is then stored in what is, to all intents and purposes, a globally, publicly accessible database — the blockchain. While it might take some digging, if we all used Bitcoin, it would be like having access to each other's account details. (As a consequence, for those wanting to cover their virtual tracks, the general advice is to create a new address every time. While this can prevent a series of transactions being linked to a single source, it's still public.)

As it happens, transactions stored on the Blockchain don't have to involve the transfer of bitcoins; they could represent any event — say (and why not), a declaration of undying love. Indeed, any piece of information can be captured and stored as a blockchain event: once created, the record will exist for as long as the concept of the blockchain exists. Once the transaction has been added to the system of record, it is duplicated across every computer storing a copy of the Blockchain. As a result of both their indelibility and their programmability, blockchains have been seen as a way of managing a whole range of situations and transactions: money transfers (in crypto- or traditional currency), of course, but also such situations as preventing forgery and pharmaceutical fraud, since an identifier associated with an item (a painting or a drug) can be proven to be correct.

Equally, the virtual world has room for more than one blockchain. Bitcoin has its own, and other crypto-currencies have theirs. You can create a (different) blockchain for whatever purpose you like. While Bitcoin is gaining interest, therefore, it is the blockchain that has really inspired. For example, the Australian stock exchange has announced10 that it will be testing a blockchain for the next version of its trading platform. Blockchains have also been seen as a useful way of storing medical records, in a way that is both secure for the individual and difficult to corrupt.

The immediate impact on certain types of business has not gone unnoticed. A particular area of interest is the arts, as these mechanisms enable transactions to take place directly between artists, consumers and other stakeholders: for example, the use of bitcoin-based models to create a financial conduit directly from consumers to artists, enabling11 'fairness'. Imogen Heap12, who released a single with a Bitcoin pay-what-you-like mechanism, has also launched the Mycelia project to investigate this further. "I dream of a kind of Fair Trade for music environment with a simple one-stop-shop-portal to upload my freshly recorded music, verified and stamped, into the world, with the confidence I'm getting the best deal out there, without having to get lawyers involved." … "Enabling record labels and streaming services – who aren't all bad to say the least, and who still will of course have a valid and much needed place in this new landscape – to find, nurture and be a beacon for artists but in a fair environment," she says.

And the model is being extended to incorporate blockchains more generally13. Qointum14 is one example[^ http://www.blockchainsummit.io], as is Ethereum. Meanwhile, blockchains have been designed to store information about both transactions and what have been termed 'smart contracts' — that is, programmable code that defines when a transaction should take place, or links to where and when an item was created. Explains15 Zoe Keating, "I'm interested in using the blockchain to track derivative works. What if you could know the actual reach of something? … I can imagine a ledger of all that information and an ecosystem of killer apps to visualise usage and relationships. I can imagine a music exchange where the real value of a song could be calculated on the fly. I can imagine instant, frictionless micropayments and the ability to pay collaborators and investors in future earnings without it being an accounting nightmare, and without having to divert money through blackbox entities like ASCAP or the AFM."
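
The smart contract idea Keating describes can be pictured as a small piece of code attached to a song's record: when a payment arrives, it splits the money between the named collaborators automatically. The sketch below is a plain-Python illustration of that concept, not code for Ethereum or any other platform, and the parties and percentages are entirely hypothetical.

```python
def royalty_split_contract(splits):
    """Return a 'contract': a function that, given a payment, says who gets what.
    On a real smart-contract platform this logic would live on the chain itself."""
    assert abs(sum(splits.values()) - 1.0) < 1e-9, "shares must total 100%"
    def execute(payment_pence):
        payout = {party: int(payment_pence * share) for party, share in splits.items()}
        payout[next(iter(splits))] += payment_pence - sum(payout.values())  # rounding remainder
        return payout
    return execute

# Entirely hypothetical song: artist 60%, co-writer 25%, session cellist 15%.
contract = royalty_split_contract({"artist": 0.60, "co_writer": 0.25, "cellist": 0.15})
print(contract(99))   # a 99p purchase, split automatically between the collaborators
```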

In other words, if you want to listen to a song, you have a mechanism which enables you to directly and automatically pay the artist, and enables the artist to set the price and then directly and automatically pay the other people involved. When used in this way, the whole process, and the resulting transactions, can become completely transparent and verifiable. As a result, the way in which we produce, buy and consume our 'content' could be entering a deep transition. As people do more for themselves and as production mechanisms become easier, the way we pay is being looked at again, potentially spawning new intermediaries — after all, somebody has to manage the technology.

Take companies like Kobalt, for example, which are in no doubt about how to differentiate themselves from the opaque business models the music industry has traditionally employed. "If Kobalt has been able to gain market traction based on the competitive advantage of transparency, what does that say about the industry at large?" says George Howard. More transparency looks like a way forward: there's a going rate that can be defined, then mechanisms that can reinforce it, so that the ultimate goal (that both effort and enjoyment are recognised in monetary terms) is achieved. Such models will doubtless be fought against by the industry. Comments George, "Bitcoin can't save the music industry because the music industry will resist the transparency it might bring." Elaborates Ashton Motes at Dropbox, "Even indie labels – it's not clear that they'd be willing to disclose who makes what, and what people sell. The whole industry is driven on smoke and mirrors."

While Bitcoin and Blockchain offer new mechanisms, and therefore hope, to a number of sectors, numerous challenges lie ahead. Some of these are technical — for example, that current blockchain mechanisms were not designed to handle the volumes of transactions, nor the resulting sizes of records, that could result from mass adoption in such a wide variety of domains. The Blockchain used by Bitcoin now contains gigabytes of data, which needs to be stored (or at least referenced) by anyone looking to conduct a Bitcoin transaction.

The knock-on effect is that the blockchain mechanism is now pretty difficult to change. "Implementation changes to the consensus-critical parts of Bitcoin must necessarily be handled very conservatively. As a result, Bitcoin has greater difficulty than other Internet protocols in adapting to new demands and accommodating new innovation," explains16 a paper on the subject, which offers the concept of 'pegged sidechains' in response — essentially mechanisms to allow transfers between blockchains. "This gives users access to new and innovative cryptocurrency systems using the assets they already own," continues the paper. Sidechains are indeed driving a number of initiatives, as are other efforts to create alternative blockchains and, indeed, alternatives to blockchain.

These are very early days, it is clear to see. It is neither obvious what platform or tool to deploy to what end, nor are blockchain facilities yet straightforward for organisations or consumers to access. Skills in blockchain design and integration are in short supply, as is experience in writing smart contracts that make sense. Which makes for a fascinating time ahead, as a head of steam builds up around the technology and as standards start to appear, not just in terms of base mechanisms but in what becomes de facto in how people use them. For the record, nobody should expect cryptocurrencies and the broader world of distributed systems of record to usher in any brave new world of open value exchange — we're not likely to return to some technologically enhanced barter system. Such tools can just as easily be used by incumbent organisations and intermediaries as by altruism-oriented startups; just as the big banks are investing in blockchain, equally, it is by no means clear that music consumers will default to more artist-friendly models.

Finally, of course, such mechanisms will inevitably be exploited for nefarious purposes. Vast quantities of Bitcoins have already been stolen, for example (in one case by a supposed law enforcement agent); while it might be difficult to defraud the Blockchain, it is possible to use Bitcoin as the currency of choice when conducting other, fraudulent or otherwise dodgy activities — not least tax evasion, but also money laundering, paying for illegal goods and so on. It's the potential for cryptocurrencies to be misused that has led authorities around the world to propose17 they should be better controlled, bringing them within the auspices of existing legislation.

Not that, as we have seen, the Blockchain could be made much less 'anonymous': its transactions are already transparent. The issue is rather whether transactions can go under the radar, as a recent EU report on the topic puts it: "Highly versatile criminals are quick to switch to new channels if existing ones become too risky." The concerns are understandable, given how cryptocurrencies are evolving. While the untraceable transaction does not yet exist, it remains technologically possible — it would not be hard to conjure a sidechain-based mechanism which would make it nigh impossible to trace the originator of a transaction. Bitcoin and its alternatives may create challenges for law enforcement, but they should be seen as the latest stage of an eternal game of cat-and-mouse. Equally, we probably have laws already in place to deal with them. For example18, Bitcoin 'tumbling' services (essentially virtual coin-for-coin currency exchanges) could potentially be used for money laundering or fraud — both of which are already illegal19.

Just as water finds its way through rock, so does criminality seek the easiest route. The dark truth is that humans have an equal propensity for good and evil, an eternal battle which has been fought throughout our histories, across our stories and indeed, within our daily lives. We are all corruptible, should sufficient opportunity come our way, or circumstance drive our behaviour. Trying to suppress this reality plays to the dark side, inadvertently creating more problems. We can see it in the encryption debate around the UK's ongoing legislation, which pitches civil liberties and personal privacy against the institutional desire to keep bad things from happening. And in the case of Bitcoin, attempts to control a transparent currency mechanism could drive less transparency, not more. While the future may be marked by more transparency, not less (not just in Bitcoin but in other20 data-based domains), if the authorities want to protect their citizens against the sometimes downright nasty vagaries of human behaviour, they need to think way beyond what could become a mere waypoint on the road to our technologically augmented future.

Behind all such debates, perhaps currency itself is already under the cosh. While money has served a useful purpose as a mechanism to simplify trading between, say, goats and timber, since the dawn of the institution of banking it has become a slave to its own mathematical nature. Simply put (though there is no 'simply' about it), most financial mechanisms benefit in one of a number of ways: by adding a margin as an intermediary; or by controlling the flow of currency; or by betting on an outcome, potentially in such a way that changes its probability. One way or another, the idea of replacing cash with near-frictionless data transfers is going to have a significant impact on our financial systems — either by removing the advantages of controlling existing flows, or by creating new ways of doing so.

And as we have discussed, it's not just about the money. The high levels of interest across such a variety of industries suggest that blockchain-based capabilities will have a considerable role to play in our near future. For better or worse, cryptocurrencies in general, and blockchains in particular, are here to stay. But are we ready for the consequences?


  1. http://www.rollingstone.com/culture/features/ben-schlappig-airlines-fly-free-20150720?page=6 

  2. http://webflyer.com 

  3. http://boardingarea.com 

  4. http://www.telegraph.co.uk/finance/personalfinance/money-saving-tips/11912823/Shoppers-waste-6bn-of-loyalty-reward-points.html 

  5. https://www.points.com 

  6. http://www.obj.ca/Opinion/2012-05-22/article-2984695/Whats-an-Air-Mile-really-worth%3F/1 

  7. https://store.steampowered.com/steamaccount/addfunds 

  8. http://www.bankrate.com/finance/investing/cryptocurrency-alternatives-to-bitcoin-1.aspx 

  9. https://www.biteasy.com/blockchain/blocks/00000000839a8e6886ab5951d76f411475428afc90947ee320161bbf18eb6048 

  10. https://www.cryptocoinsnews.com/australian-stock-exchange-confirms-upcoming-blockchain-for-settlements/ 

  11. https://www.berklee.edu/sites/default/files/Fair%20Music%20-%20Transparency%20and%20Payment%20Flows%20in%20the%20Music%20Industry.pdf 

  12. http://imogenheap.com/home.php 

  13. http://recode.net/2015/07/05/forget-bitcoin-what-is-the-blockchain-and-why-should-you-care/ 

  14. https://qointum.com 

  15. http://www.hypebot.com/hypebot/2015/07/bitcoin-and-music-an-interview-with-artist-and-composer-zoe-keating.html 

  16. https://blockstream.com/sidechains.pdf 

  17. http://eur-lex.europa.eu/resource.html?uri=cellar:e6e0de37-ca7c-11e5-a4b5-01aa75ed71a1.0002.03/DOC_1 

  18. https://www.cryptocoinsnews.com/bitcoin-tumbling-privacy-advocates-money-launderers/ 

  19. http://www.forbes.com/sites/cameronkeng/2013/12/16/bitcoin-is-not-anonymous-is-always-taxable/ 

  20. http://techcrunch.com/2014/05/27/ftc-targets-the-dark-underbelly-of-big-data-says-data-brokers-need-to-be-more-transparent/ 

6.6. Technological Kryptonite

When Superman reached a certain point in his life story, he had to decide whether to use his powers for good or ill. He chose good, of course. It's a fundamental element of many classic stories, from Star Wars to Kung Fu Panda. And rightly so, thinks storytelling scholar1 Christopher Booker, as all such stories serve as handed-down reminders that, to become true heroes, we need to reach inside ourselves and discover our innate humanity. From the depths of time the principle that love conquers all, and that the hero and heroine live happily ever after, remains a constant.

Back in the real, yet now digitally augmented world, it is clear that the reduction of technology thresholds such as cost, space and power means we are merely scratching the surface of what will become possible over the next few years. Technology is becoming inexorably smaller and more powerful, reaching places we have not previously considered. We've seen the creation of world-crossing networks and processor clouds, of super-robots and tiny, bee-sized drones. Where will it all take us, you might well ask. And where will we take it — for technology would not exist without humanity to drive its progress, to innovate, to create, to use it in ways our predecessors did not even imagine. We are gaining cyber-powers — with technology, we really can jump tall buildings or make ourselves heard on the other side of the planet. We are, in a number of ways, becoming superheroes, and even as we learn to use our new-found skills we are becoming even more powerful.

While we can all conjure examples of what might happen, nobody has a clear crystal ball. What we do know is that technology is the great leveller, as the Law of Diminishing Thresholds makes it — and therefore what it brings — more accessible, more usable and lower cost, giving connected peasants a path out of poverty even as it creates new opportunities for exploitation and control. Technology is the ultimate democratiser, uncaring as it is about who uses it for what. All the same, what the information revolution has not yet done (and so far has not shown any potential to do) is change what it means to be human. Its indifference to the human propensity for both good and ill is its greatest strength, and its biggest weakness. The human factor is all too important, not least because we are acting as individuals and collectives in the face of global technological change. We've already seen how we can act collaboratively with technology, at the same time as becoming corporate — we have the capacity for, and are likely to do, both. Our innate characteristics will drive the future — our desire to make a better world, to strive for success and look after our own, as well as to get ahead of everyone else, to beat the competition, to take more than our fair share of the pie.

Technology-based knowledge brings power: as we have seen, the bad guys are likely to continue to do bad things, even as the good guys get new capabilities for dealing with them. Technology enables us to develop relationships which would otherwise be impossible, in business and in society as a whole, while simultaneously threatening our ability to structure and deal with the world around us. Whole new branches of therapy are now devoted to dealing with the psychological fallout caused by technology.

Should we worry about where technology is taking us? Sometimes it seems that we can only hang on, enjoy the ride and see what the innovators think of next. But as humans we are above all adaptive: most of us seem to have a propensity to run with such changes and, given what we have dealt with over the past few decades, it is unlikely that another order of magnitude will make any difference. It is as if we can think logarithmically, or indeed in terms of octaves — the song will remain the same, even if it is playing at a higher frequency.

Against this well of new capability, like Superman, we all have to make choices. Some believe we will need to embrace our cyber-powers — “The more likely scenario, however, is a glide path to extinction in which most people adopt a variety of bionic and germ-line modifications that over-time evolve them into post-human cyborgs,” says one article2. Indeed, others have even postulated that we already exist within some huge, Matrix-like simulation. While we may not be headed towards such absolute futures, we nonetheless need to be prepared. Just as Superman found to his chagrin, even as we create such huge potential, we also build in the means of its own destruction. Technology is not perfect, nor could it ever be in all its complexity. But all it takes is one flaw — a race condition in an algorithm, a loss of power or of a data connection, a mistyped variable or a software bug — for the whole, technologically enhanced house of cards to come tumbling down. As we see with blockchain mechanisms, the digital revolution also has its Kryptonite: with so much to go right, there is also a great deal that could go wrong.

This means we need to incorporate such realities into our thinking about the future. With such inordinate power comes great responsibility: we need to ensure that our superpowers are used for good, and that they are not subverted by those who would exploit them — not only cybercriminals but, sometimes, our own corporations and institutions. At the same time we need to think about the kinds of failures that might occur — technological and human — how to mitigate them, and how to protect against their potential consequences.

There is no going back to an un-globalised, non-technological world, however attractive it might sometimes appear. So, if the only way is forward, we need to build a new understanding of our responsibilities. In the next section we consider what form these responsibilities need to take.

7. The world is you and me

In a factory, a worker takes a piece of steel off a production line and puts it in an oven, where it is heated to above 1500 degrees Fahrenheit before being allowed to cool slowly. When it enters the oven, it is hard and unmalleable, but on exit it is altogether more flexible and open to being worked. Annealing is the process of heating a metal to a certain temperature and then cooling it slowly, allowing its internal structure to settle and realign. While metals cooled by plunging into water tend to be hard and stiff, annealed metals such as mild steel are more flexible. Annealed glass, similarly, is relieved of internal stresses and so less prone to shattering spontaneously.

The principle can also be applied to areas where straight logic does not apply, using the mathematical concept of simulated annealing, in which a problem space is ‘heated up’, that is, made more dynamic and unconstrained, before slowly being brought under control again. The information revolution has seen unprecedented change and has been massively dynamic, with the result that in many cases it is out of control. Cyber-bullying and trolling suggest people do not yet know how to be civil with the new tools now available. New business models undermine the old without being able to turn a profit.
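
For the technically curious, a minimal sketch of the optimisation technique itself might look something like the following (the cooling schedule and the toy cost function are illustrative assumptions rather than anything drawn from a real system):

    import math
    import random

    def simulated_annealing(cost, neighbour, start, t_start=10.0, t_end=0.01, alpha=0.95):
        # Minimise `cost`, accepting worse moves freely while 'hot' and rarely once cool.
        current, current_cost = start, cost(start)
        best, best_cost = current, current_cost
        t = t_start
        while t > t_end:
            candidate = neighbour(current)
            delta = cost(candidate) - current_cost
            # Always accept improvements; accept worse moves with a probability
            # that shrinks as the temperature falls.
            if delta < 0 or random.random() < math.exp(-delta / t):
                current, current_cost = candidate, current_cost + delta
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
            t *= alpha  # cool slowly, like the steel in the oven
        return best, best_cost

    # Toy example: find the minimum of a bumpy one-dimensional function.
    bumpy = lambda x: x * x + 10 * math.sin(x)
    step = lambda x: x + random.uniform(-1, 1)
    print(simulated_annealing(bumpy, step, start=random.uniform(-10, 10)))

The metaphor holds either way: wide, unconstrained exploration first, then a gradual settling into a stable state.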

It will, inevitably, find a level as new norms and principles are established for how people use technology and behave towards each other. No doubt we shall look back on this period as a time of great change, one which slowly became standardised and controlled even as attempts to get there faster met with failure.

7.1. The long road to dystopia

In an economy based on resources we could easily produce all of the necessities of life and provide a high standard of living for all.

A Resource Based Economy would make it possible for us to use technology to overcome some scarce resources by applying renewable sources of energy, computerizing and automating manufacturing and inventory, designing safe energy-efficient cities and advanced transportation systems, providing universal health care and more relevant education, and most of all by generating a new incentive system based on human and environmental concern.
The Venus Project, www.thevenusproject.com1

A familiar image of the city of the future involves large, tree-lined plazas and waterways, these towered over by dome-shaped buildings with points reaching into the sky. A monorail slides soundlessly into the horizon, carrying contented citizens from one leisure activity to another. We will live like well-educated tourists, going about our lives and bringing up smiling children without a care. No doubt, underground service tunnels will supply food and other supplies, ensuring our every need is met. How wonderful.

Just how realistic is this vision? Very, thinks New Yorker Jacque Fresco2, architect and instigator of utopia itself (though he denies use of the term). Fresco grew up during the Depression, the child of Jewish immigrant parents, and is no stranger to setbacks; despite this he has remained true to his vision. In 1969 Fresco released his seminal work, ‘Looking Forward’ (whose name harks back to Looking Backward, a 19th-century work by Edward Bellamy); 25 years later, he announced the culmination of everything he had worked towards — a globally ambitious vision of resource sharing he called the Venus Project.

At the heart of Fresco’s vision is the principle that, despite a growing population, the planet’s resources are more than enough to meet human needs in a sustainable manner. Indeed, the concept of ‘needs’ goes away if they can always be met. Such an idea is not without its challenges, not least that it does away with capitalism. For it to work, the few would have to hand resources over to the many. But some believe that technology will drive out such archaic concepts as market forces, however entrenched they appear. “Different modes of production are structured around different things. Feudalism was an economic system structured by customs and laws about “obligation”. Capitalism was structured by something purely economic: the market. We can predict, from this, that post-capitalism – whose precondition is abundance – will not simply be a modified form of a complex market society,” wrote commentator Paul Mason.

Interestingly, while many are looking optimistically towards the practicalities of such a future, darker predictions remain the stuff of science fiction and academic essay. Technological dystopias see resources in the hands of very few, exploitation of the general population the norm rather than the exception, happiness sacrificed on the altar of commerciality, choice non-existent, and people hungry, unclothed and lacking basic sanitation. Authors from Aldous Huxley and John Brunner to William Gibson and Margaret Atwood, and films from Blade Runner and The Matrix to, indeed, the Terminator franchise, capably set out just how wrong things might become. One way or another, either the powerful or the computers themselves deliver nothing but slavery and mass control of an ill-equipped population.

Mapping out a realistic view of the technology-enabled future is a challenge, but we can nonetheless start to chart a path based on a relatively simple premise: that while more positive outcomes cannot be guaranteed, they stand a higher chance of happening if the probability of negative outcomes can be reduced. Indeed, to many commentators, the dystopian view does not sound awfully different to some aspects of the world that we see around us today. Just as we’re getting lots of things right, there’s plenty we have already got wrong. At the risk of creating too great a generalisation, our digital future is subject to the same forces as the rest of the universe: that is, do nothing and it will slide into chaos. So, what precisely do we have to ‘do something’ about? Let’s review the potential opportunities, and dangers, based on what we have seen so far.

First, the wealth of data we now have at our fingertips enables us to know so much more than we ever could before. On the upside this is already leading to unprecedented levels of insight, giving us the potential to eradicate some diseases and generally make our lives better. Equally however, we are faced with the potential for such insights to be abused. Consider, for example, the tragic case3 of 92-year-old Olive Cooke, who was receiving 10 letters and repeated phone calls from charities every day, before jumping to her death from the Avon Gorge bridge. Or the bereaved father who received4 a letter addressed to ‘Daughter Killed in Car Crash’. We are all well aware of government desires to harvest as much data as possible, ostensibly in the name of law enforcement but, in doing so, breaking national and international law — such as the US government’s abuse of the Safe Harbor agreement in breach5 of EU privacy regulations. We are all culpable, to an extent: our ability to copy and use digital media frequently makes a mockery of principles such as copyright.

Speaking of ourselves, a second area of blessing and curse is how we can communicate with people across the globe as though they were in the next room (in the future, potentially, it will appear as though we are in the same room, given current advances in virtual reality such as Microsoft’s HoloLens6). That individuals can make their views heard is generally applauded – activist sites can shine a much brighter light onto despots, dodgy dealings and indeed, miscarriages of justice. Society’s voice has become several decibels louder – and more reactive – by virtue of the all-amplifying Internet. At the same time, however, we have invented cyberbullying and online trolling and, on occasion, replaced the wisdom of the crowd with sheer rabble-rousing.

At least, perhaps, such incidents are obvious, and therefore can be dealt with — but the response can sometimes appear heavy-handed: consider Paul Chambers, for example, whose ‘joke’ on Twitter about Robin Hood Airport led to him losing his job and gaining a criminal record. Or Jacqueline Woodhouse, jailed for 21 weeks having launched a racist tirade, all of it captured on a mobile phone. Or Liam Stacey, whose remarks about critically ill footballer Fabrice Muamba earned the student a 56-day sentence. Or indeed, Jordan Blackshaw and Perry Sutcliffe-Keenan, the likely lads jailed for four years each for incitement to riot, having created the Facebook event “Smash d*wn in Northwich Town” in August 2011. The fact that no such event took place was seen as irrelevant by the judge; according to then-Communities Secretary Eric Pickles, “exemplary sentences” were necessary.

As Mike Lynch has pointed out, transparency is a two-edged sword. When Hunter S. Thompson first wrote, “In a closed society where everybody's guilty, the only crime is getting caught,” he clearly wasn’t taking into account the impact that technology would have on both crime and its consequences. Against such a many-faceted, context-sensitive and interpretation-based background, we are able to catch a great many more folks at it.

Which brings us to a third area of concern: how sensors make it increasingly straightforward to create interactions between the physical and computer-based worlds, generating data as they go. The benefits are legion; but this, too, is open to abuse. Consider speed cameras, which in principle offer a genuine way of making roads safer, but which have been called out as a revenue generator7 for local councils. It’s not even as if speed limits are clear indicators of criminality. Each country creates its own legal frameworks according to a variety of factors, and these can change. The motorway speed limit is set at 130kph (about 80mph) across much of the continent, dropping to 110kph (roughly 70mph) in the rain, whereas in the UK 70mph is the statutory limit whatever the weather.

Even as we argue such points, the technological floodgates open wider. Today, for example, mobile phone companies hold all the information anyone needs to determine whether a citizen has been speeding: all it takes is a subpoena. But perhaps it’s not Big Brother we need to worry about, but the scurrying mass of little sisters, each equipped to the teeth with tech and with a tale to tell. One way or another, the message can get through.
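
To see just how little is needed, here is a rough sketch of the arithmetic involved (the tower locations and timestamps are invented for illustration; real records would be messier, but the principle stands):

    from math import radians, sin, cos, asin, sqrt
    from datetime import datetime

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two points on the Earth, in kilometres.
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371 * asin(sqrt(a))

    # Two (hypothetical) sightings of the same handset by different masts.
    ping_a = (51.4545, -2.5879, datetime(2016, 3, 1, 14, 0))   # near Bristol
    ping_b = (51.5074, -0.1278, datetime(2016, 3, 1, 15, 10))  # near London

    distance_km = haversine_km(ping_a[0], ping_a[1], ping_b[0], ping_b[1])
    hours = (ping_b[2] - ping_a[2]).total_seconds() / 3600
    print(f"Average speed: {distance_km / hours:.0f} km/h")  # comfortably above 70mph

Even the straight-line average tells a story; the road distance can only be longer.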

The trouble is, in many cases there are so many unknowns that we don’t know what the laws should be — and without clear laws, we are left with the moral judgements that drive them. For example, in the case of stores using facial recognition to identify people as they enter a shop, is that a good thing or a bad thing? It’s difficult to tell. To return to the case of Mr Chambers, it’s not just whether his act was ‘criminal’, but also the impact it has on anyone else who now feels a little less inclined to express humour, just in case it is misconstrued. If the goal is to make society better, general terms such as ‘better’ need to be reconsidered, as do the things that make society ‘worse’. To return to speeding, for example, the reason behind such a law is the relationship between speed and risk. Should such risks be reduced through safer, connected cars, or through the ability to vary speed limits and send the information straight to the vehicle, then perhaps the law should change.

Laws — sets of rules that govern behaviours between people and collectives — generally exist to protect the rights of others or of the collective, to prevent individual abuse of the system, or indeed to prevent collectives, public or private, from abusing the individual. Of course we all need to be responsible for our actions, online and offline; of course we need to take into account the victims; and of course we all want to live in a safe, just and open society in which the perpetrators of bad things face the consequences of their actions. Trolling, cyberbullying and other such activities are no less despicable because they happen on the Internet.

Equally, we need to be sure that the powerful can be held to account. Technology has the potential to be the great leveller but, equally, it puts much in the hands of the few. This goes to the roots of our sociological, legal and commercial models. Technology has also opened sudden chasms of potential difference, so easily exploitable by startups that it has enabled the creation of billionaires from students and peasants alike. This serves to illustrate just how powerful a tool it can be, in both the right and the wrong hands, whether or not they are on ‘our’ side. Even if we do not have megalomaniacs, we have the potential to hamstring ourselves. Even if we manage to avoid the surveillance society, we are being drawn inexorably towards a more transparent world in which the smallest actions can be logged. From Snowden to Ashley Madison, we have no shortage of cautionary tales to draw upon.

Nobody is denying that our laws are increasingly inadequate. But it is not only this: the way we make our laws also has to change. Laws exist, too, to describe how laws can be created, modified and enacted. Meanwhile, legal principles offer background to laws, and outside of all legal frameworks sit codes of practice and self-regulation — we need to reconsider the entirety of the moral and contractual frameworks we have, on the basis of what really matters and how to deal with it. But we are not working in a perfect world, and cannot act as if we were. Given that imperfection will always exist, we need laws and policy frameworks based on what we do know: that cost and size thresholds will continue to fall as capabilities increase; that sudden events will change how we interact; and that the dynamic between community and corporation, between individual and government, will remain a constant theme even in an uncertain future.

Crime and punishment go hand in hand, and punitive measures have changed over the years. Crime is not absolute and punishment, be it a fine, incarceration, community service or whatever, aims to serve consequences to the convicted, recompense the victim and discourage anybody who might be considering a similar path. As such, a punishment can increase for a similar crime if a judge, or government, deems that additional discouragement is necessary – a situation we saw clearly at the time of the 2011 riots. It is an irony that we laugh at the incongruities of older legal frameworks while feeling assured we are more civilised now. But the human animal is no more criminal, nor noble, than it ever was.

All this would remain true were technology not encroaching on the final frontier: not only is it closing the gap with the physical, but it is enabling the physical to exist. 3D printing has been used8 to create a working kidney, which indicates just how amazingly brilliant technology can be in the right hands. But if similar facilities are in the wrong hands, might it be possible to 3D print a dirty bomb, or a genetic modification plant that could create an unheard-of disease? Hopefully not, in either case, but ‘innovation’ is not a luxury available only to the pure of heart.

Technology doesn’t care about right and wrong, of course. As we have seen, since its arrival it has been indifferent to its effects. It should be democratising, but corporations, and the self-interest of both individuals and institutions, will get in the way. The use of technology to monitor citizens, to steer behaviours, to exploit very human vulnerabilities is already all too apparent; it would be foolish and complacent to suggest that its future consequences will be only positive, without negative impact. This will not only be through malice, but also through stupidity and simple, unintended consequences — some of the best examples of technology misuse could be greeted by the exhortation, “But that’s not what it was for!” But is it right to blame technology for the bad in the world? That’s like blaming the sea for being a major cause of drowning.

In fifty years’ time we will likely look back and chuckle at just how primitive we were. New developments in facial recognition9, data aggregation and analysis, the increasing use of embedded cameras: each innovation offers new ways of finding things out, of capturing and sharing the moment, of linking one piece of information with another. Or maybe we will be disguising our features, thinks10 Adam Harvey of CV Dazzle.

To end on a positive note, many can feel fortunate that they live in liberal parts of the world. But we all need to take appropriate steps now, to ensure that we continue to benefit from this freedom. Otherwise, if Hunter S. Thompson is right, we’re going to need a lot of jails, even as we struggle to work out whether new kinds of action are indeed crimes. We need to renegotiate the contracts that now exist between ourselves as individuals, as a well-governed society, and as the series of collectives also known as corporations. Against this background we have some fundamental questions to consider — accountability and responsibility, exploitation and recourse, personal and public protection. However, too many elements of existing law are based on a balance of past probabilities and, in the absence of hard data, an underlying acceptance as to what constitutes right and wrong.

With this in mind, let’s start with the thing that connects us to our inevitably techno-topian future — data.

7.2. We are the virtual world

When Roger Ebert, long-time film critic for the Chicago Sun Times, died in 2013, his1 Twitter account was quickly taken over by his wife Chaz and his colleague Jim Emerson, both of whom he had asked to contribute to the feed. “Last week R asked me to tweet new stuff on re.com. Never dreamed it would be memorials,” Jim wrote. Both Chaz and Jim contributed eulogies to his blog, and his wife updated his Facebook page, as was right and fitting. And so, Roger Ebert’s legacy lived on, his well-loved turns of phrase and thoughtful comments maintained for eternity.

We see similar stories across the social web. You don’t have to be famous to have a virtual presence maintained when you die, as partners, family members and colleagues adapt our better-known digital presences – our blogs, Twitter and Facebook pages – into memorials. Memorial sites also exist, though the challenge, in the fast-moving world of technology, is ensuring such a service will still be there when it is needed – four of the seven services listed here a few years ago no longer exist. Who knows what even the mainstream social sites will have become in a few years’ time, when our fickle race turns its attention to the next Big Thing?

Maybe, in this barely started business we call the technology revolution, some bright spark will come up with a way to bring together the different strands that we created in life, enabling e-commerce vendors to mark accounts as ‘deceased’ and take them offline, even as they archive and clear away the detritus of our digital lives. Or perhaps (as some have speculated) we will reach a stage where our personalities live on, beyond our physical deaths. This is still the stuff of science fiction. Or equally, our online identities might be mere vapour trails of human existence, our carefully composed thoughts destined to languish as background noise in the ever-increasing information mountains we create. So what if, in the future, we can still catch a glimpse of the day-to-day doings of friends and heroes, snapped in that moment just before we logged off for the last time, our unfinished email conversations and online scrabble games held in eternal limbo?

At the present moment, the only real answer is to create a presence in life that we would be happy to have in death. Because, for now, the information we create online is not going anywhere, fast or slow. As we have already seen, in this brave new world, no information ever gets destroyed. If only it were just our social presences — as we have seen, inordinate quantities of data are being accumulated about our movements, our website clicks, our purchasing habits and entertainment interests. All that information, all those “if you liked this, you might like…” algorithms, all those connected devices have thus far been disconnected, operating in their own spheres. Algorithmically, however, the tendency is to connect, to integrate, to use the output of one system to drive another. The fact that our governments have been capitalising on this tendency, collecting data on just about everything they can, is, in hindsight, as inevitable as Facebook’s or Google’s trawling of our messages for details of personal interests they could sell to advertisers.

As, indeed, is the fact that the authorities have proved themselves completely incapable of stemming the outflow of information as to what they have been gathering. Edward Snowden, Chelsea Manning and Julian Assange, Anonymous and the unnamed leakers of the Panama papers were not mega-minds holding the world's data mountains hostage, but folks who, for whatever reason, decided to flick the software equivalent of a switch and who have gained notoriety as a result. Some see them as pariahs, others as freedom fighters but the consequence is the same: increased transparency as, once information has been released into the open, it cannot be hidden again. Such characters are not alone: a broad range of people, from corporate leaders to the NSA analysts who have spied on ex-partners, have been pushing at the boundaries of both equality and common decency as they discover what they can do with such a wealth of information.

Some commentators have been shaking their virtual heads in disbelief at the lack of public response — testament to the pervading sense of passivity even as new opportunities to breach privacy continue to emerge, with ethical consequences that go far beyond the questions of legality — such as the case of direct mail targeting the recently bereaved, or the now-banned ’smart’ bins2 which track people using their Bluetooth identifiers. Beacons — the technologies behind the bins — are being trialled in just about every store, to an innocuous-seeming yet intrusive end, enabling offers to be delivered to individuals as they move through the store, a physical repeat of the “if you liked this, you might like…” algorithm. The data they collect could go so much further, however, adding as it does to the pile of drive-by data being accumulated about us.

In September 2015 GE Software launched a new initiative. GE, that grand survivor of the industrial revolution, believes that its future lies not in power plants and lightbulbs, but in the software that surrounds them. Extending the sensor-laden models espoused by the Internet of Things, GE announced that it wanted to be a market leader in what it called ‘digital twins’ — that is, virtual representations of physical objects that could be viewed, modelled and even repaired without manual intervention. “The ultimate vision for the digital twin is to create, test and build our equipment in a virtual environment,” explained3 John Vickers, manager of NASA’s National Center for Advanced Manufacturing and an early adopter of the digital twins idea. “Only when we get it to where it performs to our requirements do we physically manufacture it. We then want that physical build to tie back to its digital twin through sensors so that the digital twin contains all the information that we could have by inspecting the physical build.”
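
Stripped to its essentials, a digital twin is a virtual object kept in step with its physical counterpart by a stream of sensor readings. The sketch below is illustrative only; the class, sensor and threshold names are invented rather than taken from GE’s or NASA’s systems:

    from dataclasses import dataclass, field

    @dataclass
    class DigitalTwin:
        """A virtual stand-in for a physical asset, updated from sensor readings."""
        asset_id: str
        state: dict = field(default_factory=dict)
        history: list = field(default_factory=list)

        def ingest(self, reading: dict) -> None:
            # Apply a sensor reading to the twin and keep an audit trail.
            self.history.append(dict(reading))
            self.state.update(reading)

        def needs_maintenance(self) -> bool:
            # Illustrative rule: flag the asset if vibration or bearing
            # temperature drift beyond (assumed) tolerances.
            return (self.state.get("vibration_mm_s", 0) > 7.1
                    or self.state.get("bearing_temp_c", 0) > 95)

    turbine = DigitalTwin("turbine-042")
    turbine.ingest({"vibration_mm_s": 8.3, "bearing_temp_c": 88})
    print(turbine.needs_maintenance())  # True: inspect before the physical part fails

The interesting part is the loop back: inspect, model and even schedule repairs against the virtual object, and only then touch the physical one.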

GE’s goal is to build an open database of digital representations of objects that can be accessed and updated from anywhere, by anyone. It’s a laudable objective, but why stop with physical objects? Why not animals, or indeed people? We all have a virtual twin, in principle. Some technological circles, not least gaming, already accept the notion of a virtual representation of our person — an avatar, for example, which can then be customised to suit the in-game environment. Outside of this world, our current virtual representations are fragmented and of poor quality, but already data brokerage corporations such as Acxiom are looking to change that. Hackneyed phrases like, “If the service is free, you are the product”, bandied around as if saying them often enough makes them acceptable, reflect how corporations already ‘get’ the notion of the virtual, exploitable self.

Put the notions together and it doesn’t take much to realise we are creating a transparent, accessible virtual world which reflects, and in many cases augments, the physical world, with the former making connections and drawing conclusions that would be impossible in the latter. We are a long way from achieving such a goal, which is both a blessing and a curse as it leads to partial interpretations based on available data. It is not enough to know that I was moving at 100 miles per hour, having consumed 5 units of alcohol, if indeed I was on a train rather than in a car, for example. There’s also the question of data quality: much of the data that exists today is known to be incomplete or inaccurate, as the number of cases of mistaken identity in areas including healthcare and credit checking serves to show. Identity quality has become crucial, not only for humans but for the drugs they take, or the plants4 they handle. Failure could, and indeed already can, result in miscarriages of justice, insurance premium problems or any number of other issues. Of course, the assumption here is that poor quality is down to human error — but it could just as easily be down to deliberate data corruption. You don’t have to be a master hacker to send a fraudulent email that looks like it has come from someone else, for example.

In the virtual representation of the world, we also need to consider our digital shadows. European data protection law does provide for cases with legally negative ramifications (which makes sense: it is legislation, after all) but it doesn’t take into account situations that operate within existing laws yet nonetheless erode personal rights or potentially lead to social exclusion. A seemingly innocuous data set might be quite revelatory — we know that soil data can be used as an indicator of vine disease, for example. But what if it revealed the farmer’s smoking habits?

A little extra contextual information is vital. As the sensors around us flourish, they should be fingerprinting their own data so that it can be traced back to its source. Data, in consequence, needs to know its own provenance, and if it does not, it should potentially be discarded. This drives the need for traceability — a clear indication (for example using an encryption-based mechanism) that data was generated when and how it was stated to be. At the moment there is no easy way to know who has access to what, from where: banking details are given to credit agencies, marketing information is shared with companies such as Experian, or with the government. It might not be untoward to suggest that data without certifiable, non-repudiable provenance could be considered inadmissible as evidence. This also requires transparency — breach of privacy should not happen through obscurity. If information has been sold on, we should be able to know about it; it should be possible to monitor how we are being monitored. This needs better metadata than we have today, and could be considered the missing eighth principle of Privacy by Design, a framework which otherwise implies that designers cannot be held responsible for the subsequent use of data. Such rule sets also need to consider IoT privacy5.
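
One way to picture such fingerprinting, purely as a sketch and using a shared-secret signature rather than any particular standard, is for each sensor to sign its readings so that provenance and tampering can be checked later (the device name and key are, of course, invented):

    import hashlib
    import hmac
    import json
    import time

    SENSOR_KEY = b"per-device secret provisioned at manufacture"  # assumption for the sketch

    def sign_reading(sensor_id, value, key=SENSOR_KEY):
        # Package a reading with when and where it was generated, plus a keyed fingerprint.
        record = {"sensor": sensor_id, "value": value, "timestamp": time.time()}
        payload = json.dumps(record, sort_keys=True).encode()
        record["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
        return record

    def verify_reading(record, key=SENSOR_KEY):
        # Recompute the fingerprint; any change to value, time or origin breaks it.
        claimed = record.get("signature", "")
        payload = json.dumps({k: v for k, v in record.items() if k != "signature"},
                             sort_keys=True).encode()
        return hmac.compare_digest(claimed, hmac.new(key, payload, hashlib.sha256).hexdigest())

    reading = sign_reading("soil-probe-17", 6.4)
    print(verify_reading(reading))   # True: provenance checks out
    reading["value"] = 9.9           # tamper with the data...
    print(verify_reading(reading))   # False: discard, per the rule above

In practice a public-key signature would avoid having to share secrets at all, but the principle is the same: data that can prove when, where and how it was generated, and that can be discarded when it cannot.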

The debate on such topics is understandably broad and complex; after all, we are talking about the whole world. At the same time, however, it remains fledgling in the context of big data. In education, it can appear across other topics, from law to computer science, security to economics. Yes, it should be taught in all, but the consequence is that it is being given an incomplete treatment. “The only helpful general conclusion is that you don’t have to call a module ‘Information Ethics’ for that to be its core content, and information ethics is also present in small but significant ways across the curriculum,” says6 Paul Sturges of the Australian Library and Information Association. “The interesting question that this raises is ‘Sub to what?’ Clearly it is sub to Information Science, but since Computer Science, Media Studies and the other disciplines mentioned here do not particularly recognise themselves as partial incarnations of Information Science, that isn’t a completely helpful answer.”

In the meantime, we all know that our very real lives are being intruded upon — we can feel it in our bones. Just as we would find it unacceptable to be body-searched for no reason, or for a telephone engineer to walk into our front rooms and start flicking through our address books, so are we experiencing the discomfort of having our online lives mined for information, or being ‘followed’ by over-zealous and poorly targeted banner advertising. To dislike feelings of intrusion is the most natural thing in the physical world, and so it is also in the virtual world. Even more frustrating is that much of the problem is of our own making. We need to accept that we are all complicit – by allowing our information to be shared, collected and used, we invite the devil over our doorsteps. The inevitable consequence of having such vast quantities of data is the death, not of privacy, but of obscurity both in terms of our past actions and, more fundamentally, our future behaviours.

Information, meanwhile, is indifferent, even oblivious, to our attempts to control it, a fact that the darker elements of our governments and corporations are exploiting, even as they profess the opposite. As are we all, potentially, as we watch BlackBerry Messenger become the anti-establishment rioter's preferred mode of communication, or participate en masse in click-rallies aimed at influencing corporations and governments, or benefit from Freedom of Information requests. Indeed, ex-UK prime minister Tony Blair sees the introduction of the Freedom of Information Act as one of his biggest regrets. “You idiot. You naive, foolish, irresponsible nincompoop. There is really no description of stupidity, no matter how vivid, that is adequate. I quake at the imbecility of it,” he said7 of the decision.

But, as many have said, information wants to be free. It is therefore up to all of us – the people who will ultimately be affected by what we are creating – to consider the moral and ethical questions around it. Is there an answer? Yes there is: to stop thinking about information as a separate element of our existence and treat it instead as an immutable part of us. Like it or not, we are indeed heading towards a shadow-less virtual twin of our world, as algorithms shine a light onto every nook and cranny of our existence. This doesn’t require all computers to be ultra-smart; more simply, it becomes easier and easier to interpolate and triangulate between data sources to work out what is going on. If you were at point A, then at point C, and the only way to get there was via point B… well, you get the picture.
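
That inference is simple enough to write down. In the toy sketch below (the sightings and route map are entirely invented), point B is never observed directly, yet falls out of a trivial join between two data sets:

    # Two sightings of a person, and a map of the routes between places.
    sightings = [("09:00", "A"), ("09:40", "C")]
    routes = {("A", "C"): [["A", "B", "C"]]}   # every route from A to C runs through B

    start, end = sightings[0][1], sightings[-1][1]
    candidates = routes.get((start, end), [])
    # Places that appear on every possible route must have been visited.
    certain = set.intersection(*map(set, candidates)) if candidates else set()
    print(certain - {start, end})   # {'B'}: never logged, but safely inferred

No single data set recorded the visit to B; the conclusion only exists once the sets are joined.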

Against this background of ultra-transparency and absolute knowledge, we have not yet fully grasped that information about ourselves doesn't just belong to us — data is us, not just ‘to all intents and purposes’ but at a very deep level. We need to incorporate into our collective psyches the notion that we and our information are the same thing. Data about us should be treated in exactly the same way as we are ourselves; whatever is right for the physical should also be considered for the virtual, and our virtual, digital, quantified selves should be afforded the same rights as our physical selves. Until we get this, and national and international legislation reflects the principle across the board, we shall continue to be beaten down by the feeling that, in information terms, we are giving away more than we are getting. As we add layer upon layer of detail to everything we see and do, the level of discomfort will only increase until such time as we, as a race, reclaim our information-augmented humanity.

With such a data-enabled understanding in place, the next hurdle is to reconsider how we engage with each other.

7.3. The new intermediaries

“Time flies like an arrow… fruit flies like a banana.”
Anthony Oettinger1

In the late 1940s, geneticist Angus John Bateman was working at the John Innes Horticultural Institute in Merton Park, London, when he set out to disprove one of Darwin’s own theses, on sexual selection. Darwin believed that male mammals generally pursued females, but as he put it, “the female, on the other hand, with the rarest exceptions…is coy, and may often be seen endeavouring for a long time to escape from the male.”

Bateman was not convinced about the universality of such a model. Writes Donald A. Dewsbury at the University of Florida, “He contended that the evidence in favor of sexual selection was circumstantial and that alternative explanations were possible.” It was these latter explanations that he sought to prove. In one of the first studies of its kind, Bateman used a selection of flies of the species Drosophila melanogaster — that is, fruit flies, favoured because2 they reproduced easily in the lab and had relatively short lifespans. Because DNA markers were not available, he selected flies with a variety of easily visible mutations: curly wings or narrow eyes, for example.

Bateman’s conclusions were less to do with behaviour and more about the amount of energy expended — simply put, the harder you try, the more babies you can have, as measured right down to the cellular level. His main conclusions3, that relative fertility was a critical factor and that males were more likely to be fertile than females, set the scene for much of the evolutionary genetics field, right up to the present day when additional factors (such as disease resistance4) have been thrown into the mix.

A couple of decades after Bateman’s study was first released, the evolutionary biologist Robert Trivers picked up the findings and ran with them, focusing on what he called “parental investment” — that is, the amount of time, effort and indeed risk parents expend on the creation and upbringing of their young. He believed that energy expenditure remained a factor after conception, continuing through birth and into childhood. “What governs the operation of sexual selection is the relative parental investment of the sexes in their offspring,” explained Trivers’ paper5. Enabling this to happen requires agreement to be struck between parents as to how the parenting load is to be shared.

Amazing as it may seem, the ability we now have to decide who cooks the food or changes the nappies has been crucial to the survival of the human race. We, like most other forms of life, have needed to negotiate in order to survive as a species. Indeed, our propensity to survive has been predicated on this ability to negotiate. By extension, the same principle holds true in business, in society, in war and in peace. And it doesn’t take a geneticist to understand that a similar principle will continue to apply in this data-oriented world. What has changed, however, is less what we are negotiating for, or about, and more what we are negotiating with.

Right now, we are pitched into the middle of a vast experiment. We have all been given a new tool — digital technology — and we have not been given any rules as to how to behave. We have become lords of the data flies, given access to a vast, virtual playground. And, completely unsurprisingly, some have chosen to benefit from the opportunities this gives by positioning themselves as the new intermediaries. Smarter in the business sense, such individuals and their resulting corporations have created contractual frameworks that stack the benefits in their favour: their terms and conditions accept minimal liability while claiming maximum gain. Consider the case6 of Apple Music, for example, which deleted all the music from one person’s hard drive, including songs she had written herself. Fortunately she had a backup. Or the many, many examples we have seen of social sites claiming ownership of both data and content. As a result such organisations are becoming very, and unfairly, rich.

Think about it. If the digital economy is worth a trillion dollars, that’s a staggering amount of benefit in the hands of a tiny proportion of the world’s population. Yes, they have profited from the potential differences that have opened up; but in the absence of a clear understanding of who owns what in data terms, they have acted on the assumption that it belongs to them — unless they are told otherwise. The actions are not so much corrupt; it is more that the ignorant are exploiting the ignorant, through the simple act of saying, “Well, if nobody knows who it belongs to, I will take it and see what happens.” Thus far, the main consequence has been to make a small number of people very rich, even while the rest of the world tries to make sense of it all.

Vast quantities of data already exist on each one of us, as we have seen with Acxiom. Corporations know they are acting in a way that extends beyond their remit: witness a room full of marketers from a variety of companies wondering why consumers did not consider their own data to be of value. Or did they? Deep in our unconscious, we are all making value judgments about the nature of our online existence: those choosing not to live in a cave are deciding to compromise on absolute privacy, for the benefits gained from corporate-controlled, advertising-sponsored information sharing. This notion of value is fundamental to understanding how to progress, as it harks back to something far older. We see it today with savvy street artists who ask people to fold the money before they drop it in the hat; as we do so, we witness a most ancient form of transaction taking place. “What you did was of value, and I will therefore return some of that value to you,” says the consumer to the entertainer, who then looks to negotiate terms.

So much of our data is bought, sold and raided with impunity. Examples are rife: Yahoo’s web beacons, Facebook’s repeated changes to user settings and wide open7 privacy terms, “do-no-evil” Google and its drive-by Wi-Fi snooping. But it doesn’t have to be this way, and perhaps it won’t be. Looking about, we can see a number of new contract models that have become possible. We are already creating new forms of value exchange, with understanding or without. For example, consider my personal use of a Tesco Clubcard store card. By using it, I accept that Tesco will keep tabs on exactly what I am buying. In return I get vouchers and Airmiles, which I can then convert into holidays.

The same could hold true for social sites, which already profess their value exchange mantra: “If you are not paying, then you are the product.” In other words, the benefit you get out of the system is being paid for with your data. In all probability, we just need to recognise we are making a trade — but are we getting a fair deal? If the information is so valuable, why are its creators not rewarded? If you want to know everything about me so that you can sell me stuff, then you can pay for the privilege. Why not come to my house, see the books on my walls, the food in my fridge and watch my children dancing around in the garden — that way you can work out exactly what it is I want and offer me a highly customised set of products, goods and services. I'll make you pay… hmm. Five hundred quid for full access? A thousand?

The sharing economy world also seems to have the cards stacked in favour of the platform providers. At the end of 2014 the UK sharing economy was the subject of a review8 by Debbie Wosskow, CEO of Love Home Swap. A key recommendation was that the gov.uk Verify identity management platform should be extended to support the UK’s nascent sharing economy infrastructure, along with insurance mechanisms to protect against unforeseen circumstances. The importance (and current lack) of regulation around the sharing economy was also stressed.

Some areas are even more obscure in terms of where the value lies — not least, in the crowd. Sentiment analysis is an area being mined by organisations such as Adobe with its Omniture product, as well as a vast number of tagging companies that are together responsible for the rapid increase in the size of the average Internet page. In music, a number of companies are tackling the challenge of mining online sentiment from accessible social sites and generating useful results: Last.fm, now part of a larger organisation, for example, or We Are Hunted and The Sound Index, both of which generate “charts” based on what people say they are actually listening to.

Interestingly, we can see a great deal of innovation coming from that reputedly struggling area of the market, the creative arts. The impetus comes from the fundamental desire to get stuff out there, notes jazz musician Barry Dallmann: “The internet gives us an unprecedented opportunity to go straight to those potential fans and put the music in front of them.” As we have already seen, the possibility of crowd-based band support owes its existence to technology: the direct relationship with fans, enabled by social tools from SoundCloud to Bandcamp, facilitates and simplifies the ability to build a fan base and generate revenue, for bands that lack the marketing muscle of a major label. Even simpler models, such as the patronage model employed by artists such as My Life Story’s Jake Shillingford, depend on online channels to get the message, and the music, out there. The music industry is innovative, and rightly so, says Karl Nielson, a music manager: “Music should be challenging, it should be entrepreneurial, it should be pioneering. Musicians are already going out on a limb, we should be as brave as the artists we are representing.”

Even as Spotify and Apple Music, Google Play and Amazon Prime look to intermediate the market from a top-down perspective, a number of new innovations are being attempted, such as per-user9 content streaming which directly links artists to fans. But the most interesting area is that of smart contracts, which is seen as a great white hope by business-savvy musicians. Not only this but, as blockchain starts to be applied more broadly in the financial, healthcare and other industries, it looks inevitable that smart contracts will follow into areas such as10 property transactions, trust relationships between IoT objects, smart bonds and a range of other financial instruments. Indeed, they have been touted to replace lawyers (though they are not11 legally enforceable at this point). Adoption is relatively slow due to the computational overhead of smart contracts, believes12 Dr Gideon Greenspan, founder of Coin Sciences and MultiChain. “I certainly wouldn't rule out computational blockchains in general in the finance sector,” he said. “I just think that if it's possible to implement a certain application on a more simple blockchain which has better performance characteristics, then people will probably do that.”
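
As a conceptual sketch only (a real smart contract runs as code on a blockchain platform rather than as a script on someone's laptop, and the parties and percentages below are invented), the core idea is an agreement that enforces its own terms automatically:

    class StreamRoyaltyContract:
        """Toy 'smart contract': splits every streaming payment between the
        parties according to pre-agreed shares, with no discretionary middleman."""

        def __init__(self, shares):
            assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must total 100%"
            self.shares = shares
            self.balances = {party: 0.0 for party in shares}
            self.ledger = []  # append-only record of every payment

        def stream_payment(self, amount):
            for party, share in self.shares.items():
                self.balances[party] += amount * share
            self.ledger.append({"amount": amount, "split": dict(self.shares)})

    contract = StreamRoyaltyContract({"artist": 0.70, "producer": 0.20, "platform": 0.10})
    for _ in range(1000):                 # a thousand streams, a fraction of a penny each
        contract.stream_payment(0.004)
    print(contract.balances)              # roughly: artist 2.80, producer 0.80, platform 0.40

The overhead Greenspan describes comes from running exactly this kind of logic on every node of a blockchain, rather than once on a single trusted machine.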

Smart contracts are not ‘the answer’, but as physical and virtual collide, it becomes ever more difficult to determine where the value lies and to allocate this value appropriately. Consider music in the live arena. “There’s a huge opportunity for a much more holistic, data-driven approach based on real evidence,” says Songkick’s Dan Crow. Not only could this benefit performers and fans, but also owners of clubs and concert halls. With the right data at their fingertips, they would be better able to pitch to performers to come and play at their venues. But this means getting money in from, and out to, the right people in the appropriate way. Sonic ‘augmented reality’ applications such as SoundHound and Shazam (which, in the simplest terms, can “listen” to music and tell you what the title is) are further examples of apps that combine captured information with that available from an online database. Such technologies enable new things to happen, which in turn enable value to be fed back. Examples from The Great Global Treasure Hunt to the multimedia stage adaptation of Howl’s Moving Castle by Davy and Kristin McGuire are indicative of both the shape of things to come and the kinds of revenue models that become possible if physical and virtual are integrated.

But if everyone is participating in a greater whole, everyone needs to be rewarded for their contribution. All of which makes it even more important to bottom out the notion of value exchange in the virtual world, not just for effort but also for artistic contribution: legal concepts of IP and copyright need to move beyond current models, for the simple reason that even the smallest pieces of information can have value. For example, if a person walks to the North Pole and takes a temperature reading, should he or she not be recompensed? We see it in the arts world for a sample of music, so why not for a stream of data? Equally, who owns the information about how a house heats, or where a person is? If this information has value, then the person should be able to monetise that value. As a consequence the need for algorithmic representation of agreement, which can be enforced electronically, becomes clear.

Of course, it will still be possible to tie someone, or something, into an unfair contract. At the same time the frameworks need to look to the greater good, and not short-term gains or the exploitation of weaknesses in the model. “Like every band, we would have signed for nothing, given the opportunity,” says Pink Floyd’s Nick Mason. Which means that much of the future boils down to the intermediaries we choose, and the trust we place in them. Despite instinct suggesting the contrary — “The natural inclination is to sell direct,” says Ed Averdieck, co-founder of CueSongs, a rights-buying portal he started up with musical pioneer Peter Gabriel — intermediaries provide a vital link between supply and demand, both enabling the transactions and curating the information they hold. Says Ed, “All these rights are sitting in the basement in contractual straightjackets. We thought, if we could only give them a canvas…”

In the future we can hope to see a new generation of intermediaries that enable the creation and enforcement of genuinely fair contractual frameworks that reward both effort and intellectual contribution. For sure, there is an overhead in doing so, but the owners of transaction engines (that is, banks) and other platforms should be recognised for what they are, and treated as servants to, not controllers of each transaction. We will inevitably have to put up with tryers and hangers-on, money grabbers and incompetent buffoons – these are all parts of the rich tapestry we call humanity. But at the same time, technology itself should drive a highly necessary race to the bottom for the cost overhead incurred in any exchange of value.

Despite the possibilities that technology now affords, we have not come far from the principle of lobbing a shilling in a bucket in gratitude for being entertained, or being supported in our daily doings. For music, this offers a great deal of hope. “If you can please the artists and the fans, the rest will follow,” remarks Songkick’s Crow. And all that “the rest” implies – managers, labels, broadcasters, marketers and everyone else involved in the production, delivery and performance process. A similar principle applies across just about every other industry. To enable it, however, requires us to re-think how we see our place in relation to each other: in ourselves, in our communities and in society as a whole, even as industrialists look to grab what they can. Like fruit flies, our future depends on it.

7.4. Re-establishing boundaries

“A democratic state perishes if the rule of law is undermined by wealthy and unscrupulous men; citizens acquire power and authority in all state affairs due ‘to the strength of the laws’.”
Demosthenes

Back in the 1700s, corporations expanded into the space created by similar extensions of national sovereignty. Companies such as the East India Company, acting under royal charter from the English Crown, essentially consisted of tradespeople identifying, and exploiting, the areas of massive potential difference created by Western expansionism. In the Great British case the men involved, Samuel Pepys among them, became very rich indeed — Pepys was worth £6600 in 1660, roughly1 £450,000 in today’s money. Known as ‘mercantilism’, the model2 was based on the principle of accumulating as much gold and silver — hard currency — as possible, by a nation exporting more goods than it imported. England saw its colonies as a great market for finished goods, while permitting colonists to export only raw materials; as a result, there was always a shortage of money in the colonies. This age of naive exploitation, aptly termed the Enlightenment, also saw the origins of global finance. The first banks came about in response, to support the explosion of international trade, and the monies this created catalysed both the American, and the industrial, revolutions. In the former case, tea and slavery pushed the country over the edge.

Is the situation all that different today? Tech corporations thrive by exploiting potential differences, blithely ignorant of structures that have come before. What was once prohibitively expensive can become, quite literally, as cheap as chips — in six, twelve, eighteen months’ time a whole range of goods, services and consequences will become predictably possible and affordable. There’s no grand plan here — more that in the brainstorm we call the technological revolution, the one out of a hundred startups that succeeds will have found a highly exploitable source of such potential, which waits like flood water. It doesn’t take much for the riverbanks to break and, when they do, they are nigh impossible to replace. New corporations have emerged very quickly by exploiting relatively simple, yet profoundly lucrative changes in potential difference. Essentially they have short-circuited the past, drawing energy across their own circuitry. Mark Zuckerberg happened to be in the right place at the right time. He created a short circuit between two established circuits, and diverted the energy into his own circuitry. It’s a basic rule of entrepreneurship: find the gap and exploit it.

Of course, new companies today are so much cooler than back in the age of enlightenment. They have pizza and beer in the fridge, ping pong tables, brightly coloured bean bags, free lunches and staffers generally looking thrilled to be there. But such organisations have their dark sides — back then the dark side was slavery and drugs; today, the dark side is exploitation of the populace and unthinking abuse of privacy. Media organisations are also exploiting the differences in potential, this time of knowledge. They know that there is money to be made and power to be won by controlling the channels of communication, as they look to do — from ClearChannel to Rupert Murdoch’s Fox corporation.

Both ages share a blithe naivete: whatever the ‘do no evil’ mantras preached as corporations start their journeys towards corporate glory, an essential amorality pervades business. Many upstarts have decided the best approach is not to care about potential ramifications, and to cross that bridge when they get to it. Organisations are playing with the rules in order to grow their respective businesses: petition sites are no different in this respect to the reputed bad guys such as Facebook or Google.

This is not necessarily wrong — to try otherwise would create a set of barriers which no tech-based company could overcome. But left unchecked, corporations will gain too much power. Keep in mind that ultimately, corporations are little more than collectives of people, acting as tribes. Many have said so: it’s why the ideal number of people working in an organisation is said to be 20. Does this change when virtual worlds come in? Not really. The number is still 20, according to London Business School experts. What about public organisations? Still the body corporate. Technology is creating a global ecosystem of interlinked, technology-enabled villages, with the resulting impact on national boundaries (some relatively recent3). So, what governance needs to be in place? Perhaps we can get our first lesson from history.

What a world it was before. Incredible as it may seem, it’s only a matter of a few hundred years since roads were made safe. Even as Britannia was looking to rule the waves, it was still dangerous to go from one town to another. Trade routes were regularly patrolled, but this left numerous gaps for bandits and thieves to exploit, in Britain, across Europe and indeed the USA, where few wagon trains would consider travelling without outriders. In continental Europe it was Napoleon, despite (or perhaps because of) his despotism, who replaced the inconsistent militias with a network of organised armies and policing services, a model which was replicated elsewhere. By making it safe to travel from Genoa to Vienna, from Oslo to Dublin, not only could trade flourish but also the arts, as musicians and other performers could reach beyond their traditional stamping grounds, yielding the wide variety of services and intermediaries — booking agents, touring performances and even mail order — that we know today.

We can apply much the same principles to the information superhighways, as liable to being assaulted as were our treasure chests of old. The villages of the virtual world are massively dispersed in physical terms, making them utterly reliant on conduits, cables and other communications paraphernalia. While this needs to be protected, in the same way as roads and tracks, such channels also need to be kept as open as possible — any limitations (such as unfair costs or prioritisation mechanisms) are going to restrict data movement and, therefore, progress. This net neutrality4 principle has thus far been maintained across the Internet, despite the efforts of some to make it otherwise. And it needs to remain so.

Another lesson we can learn from history is the necessity to build in personal rights. Our worst abuses of power, left unchecked with horrific consequences, led to the creation of the international convention on human rights — a very simply phrased, succinct document which set out what was unacceptable in how we act towards one another. Interestingly, even as corporations are looking to do what they can while the jury is out, they nonetheless recognise that any abuse of fundamental rights would be bad for business. Post-Snowden, and as in the past, trading companies are investing in their own futures by wanting to put the customer first. It’s not hard to be cynical about the move by eight companies — Microsoft, Facebook, Google, Yahoo, Twitter and so on — to seek better regulation from the US government regarding whether the latter can access personal data, but the logic is clear: these guys are in the information game, where the user is the product and real customers pay for access to eyeballs. It doesn’t help that governments themselves are abusers: the authorities have shown5 themselves to be playing the same game, as examples such as the rushed-through DRIPA in the UK show. So the day that Mark Zuckerberg became a privacy advocate is not also the one on which the lion lay down with the lamb; rather, he and all such firms are worried about losing their user base.

Despite such good intentions, we also need to recognise that the playing field is global. Just as in the empire-building past, geographic boundaries are not a priority for companies looking to trade internationally, or indeed for governments looking to achieve their own goals. The Stuxnet attack wasn’t illegal — international law doesn’t currently have a line about it, as attempts by the US government to sue China have shown. And it carries on: consider Narilam, which was designed to damage databases in Iran. Or Duqu6. The EU case against the US did result in a conviction, but no particular consequences have emerged. National borders are of no consequence to data: consider, for example, how the IP address of ‘UK-based’ online campaign site Avaaz.org terminates at a New York data centre facility, or how 38signals.co.uk uses another US-based hosting company (this makes more sense when the search also turns up that the company is using the same agency as Barack Obama’s campaign). If the tools we use for exercising our democratic rights are outside our own jurisdictions, is that a good or a bad thing? Who can say?

And for sharing economy businesses such as AirBnB or Uber, the traditional concept of national jurisdiction goes away altogether. Even as individual nations seek to legislate, they are playing a gloriously convoluted game of splat-the-rat with startups of every shape and size.

Not that attempts haven’t been made to legislate. Sweden is reputed to be the first country to advocate freedom of information, for example, through its passing of laws7 on Freedom of the Press in 1766 which advocated8 access to government documents. The Universal Declaration of Human Rights, adopted in 1948, incorporated the right “to seek, receive and impart information and ideas through any media and regardless of frontiers” — though it didn’t place any requirement on governments to reveal anything. That would wait until 1966, when the US Freedom of Information Act was ratified; thirty years later, amendments were added to incorporate electronic data. Today, some hundred9 countries have freedom of information enshrined in law.

The UK Data Protection Act (DPA) was introduced, appropriately enough, in 1984 (George Orwell would be proud), a year after a similar law in Germany. No significant event led to the creation of these laws, other than a sudden realisation (in Germany’s case, following a computer-based census) of just how vulnerable the data, and hence citizens, would be if no legislation was in place. Similar laws were rolled out across Europe, and indeed globally10, in the 1980s and 1990s. New legislation continues to arrive in certain jurisdictions, such as the Genetic Information Nondiscrimination Act in the US. And in parallel with draft bills being prepared in a number of its member countries, the international Organisation for Economic Co-operation and Development (OECD) formulated its data protection guidelines11 in 1980 — the “OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data.”

But the legislation is all a bit rubbish. Consider, for example, the UK introduction of the pan-European cookie directive in 2014, devised with the intent of protecting the privacy of Internet users (aka just about everybody). Just before it came into force, however, the Information Commissioner's Office (ICO) made a small yet profound tweak: moving from a requirement for explicit consent to one of implied consent for cookies on websites.

This essentially rendered the action completely useless. The 'implied consent' statement gives website owners carte blanche to adopt an "our club, our rules" attitude. "Your use of the site is seen as acceptance of our policy" or "We use cookies. If you're OK with this, click here to hide this message" are not actually choices, merely devices that require acceptance. As things stand, most implementations make no distinction between cookies that help users navigate a site and those used for marketing. In many cases the only option is on/off, which is a bit tragic given that some cookies are essential for site use. The ultimate irony is that a user's cookie privacy preferences have to be stored somewhere. Where? A cookie, of course, without which the punter is forced to answer the same question every time they return to the site. Ultimately, the focus on the cookie is, essentially, shooting the messenger, regardless of what the message contains. (Capabilities such as "Do Not Track" offer a much better starting point for a response, as they enable consumers to deal with the question directly. But that isn't important right now.)
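
To make the irony concrete, here is a minimal sketch in Python of how a site might record a visitor's consent preference: in a cookie. It uses only the standard library's http.cookies module; the cookie names and the essential/marketing split are invented for illustration, and the all-or-nothing switch mirrors the behaviour described above rather than any particular site's implementation.

```python
from http.cookies import SimpleCookie

# Illustrative cookie categories; real sites define their own.
ESSENTIAL = {"session_id"}                 # needed for the site to work at all
MARKETING = {"ad_tracker", "retargeting"}  # the cookies consent is meant to govern

def build_consent_cookie(allow_marketing: bool) -> str:
    """Record the visitor's choice. The preference itself has to live in a
    cookie, otherwise the same question reappears on every visit."""
    cookie = SimpleCookie()
    cookie["cookie_consent"] = "all" if allow_marketing else "essential-only"
    cookie["cookie_consent"]["max-age"] = 60 * 60 * 24 * 365  # remember for a year
    cookie["cookie_consent"]["path"] = "/"
    return cookie["cookie_consent"].OutputString()

def allowed_cookies(consent_value: str) -> set:
    """An on/off switch, as many implementations are: no distinction between
    cookies that help navigation and those used for marketing."""
    return ESSENTIAL | MARKETING if consent_value == "all" else set(ESSENTIAL)

if __name__ == "__main__":
    print("Set-Cookie:", build_consent_cookie(allow_marketing=False))
    print("Cookies the site will still set:", allowed_cookies("essential-only"))
```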

Even the most recent efforts to legislate fall short. June 2015 saw the approval of the draft EC Data Protection law. With luck it may be ratified by the end of this year, to much applause from the ramparts of Brussels and relief from the general populace. Its associated handbook is well thought through and considered, based on real-world cases, tested in court, which go to the nub of issues such as balancing protection with personal privacy: the suicide attempt thwarted through the use of CCTV (a good thing), say, but with the footage then released to the media (a bad thing). Indeed, the law is pretty comprehensive, with a fair amount of recourse should things go wrong: the right to be forgotten, for example, that is, for data to be removed from particular databases.

The right to be forgotten is also an impossible sham, largely because it singles out one right above all others. And data doesn’t care. While you might be able to ask for your own data to be removed from a data set, you couldn’t ask the same about data relating to the soil in the field next to your garden. This is the real danger caused by aggregation: that it is possible to operate entirely in the shadows cast by the context of human behaviour, without treading on the toes of anyone’s ‘personal’ information. Equally, the draft law is structured on the basis of an exclusionary, “if in doubt, take it out” model — this doesn’t resolve the potential for prejudice caused by the absence of a necessary piece of data, or even an entire data set. We may need a “right to be remembered” in some cases, with an inclusive response to an inaccurate ‘insight’.

These are, clearly, early days. Governance needs legislation, which is currently a bit of a mess. Lawmaking has not caught up with progress. Legislators continue to be taxed by the dual challenge of making government data transparent while keeping personal data private.

Governance essentially needs to deliver on two counts: first, ensuring that the relationship with, and between, the world’s citizens is not abused; and second, ensuring that taxes get paid. It is currently failing on both. If I were a conspiracy theorist I would say political turmoil is playing into the hands of the corporations. Just as back in the 1700s, the dithering and collusion of our governments is enabling the few to get richer, even as the many wait for someone to work out what to do.

It is the notion of accountability that remains lacking from our technology-powered lifestyles, positive as they are in so many other ways. In the cyber-world, it would appear, authorities are acting without the blessing of their citizens, and undermining the technological innovations created within their borders. It is a very strange state of affairs indeed, one which would be unlikely to be tolerated in the physical world (in which building searches still need a warrant, for example).

So, let’s sort it out. We all need to be smarter. We need legislation for a world in which nothing can be hidden, or unsaid. We need a new contract between the people and the powerful, which works not on the basis of what happened in the past, but how things will inevitably look in the near future.

We need a bill of rights for the virtual world.

7.5. A virtual Bill of Rights

King John was a bad ’un, ruling in his brother Richard’s absence as though the entire country were his plaything. He signed the Magna Carta in 1215 together with two dozen of his barons, but what really set it rolling was that the ruling classes, as they were at the time, had had enough of his shenanigans. Simply put, absolute power cannot be absolute for ever, a theme that has repeated ever since.

Nobody (save for the sake of debate) would question the influence that the sheet of ink-coated parchment, or its many offspring, has had on the histories of many nations, English-speaking and otherwise. Whatever the scope of the original document, its joint principles — that all people are ultimately accountable, whatever their position, and that nobody should gain a status of absolute power — form the basis of the democracies of the UK, the USA and many others, its conclusions cited as a response whenever the masses have had enough. The US Constitution and Bill of Rights, the latter containing the First Amendment’s statement that "Congress shall make no law... abridging the freedom of speech", were formulated in response to the perceived threat of tyranny from both outside and within the still-fledgling country. And within the living memory of many, the Council of Europe created the European Convention on Human Rights in 1950, in response to both the horrors of the Second World War and the looming spectre of the Eastern Bloc.

From such incidents, and their consequences, we can learn not only that people will eventually say ‘enough’, but also that they will need to be pushed into a pretty tight corner before doing so. We shouldn’t be at all surprised, therefore, that humanity continues to stare dumbly into the abyss that is data abuse, and has done nothing about it. Consider some of the examples we have seen: bins that can sense Bluetooth devices, supermarkets using CCTV and facial recognition to identify people individually and track their movement through stores.

Fact: current laws do not exist in sufficient scope to cover our digital existence. But does that make our digital lives any less subject to the will of the people not to have their rights ridden over roughshod, simply because no protection exists? The information revolution has already changed the world, but in many ways we are still acting as though it hasn’t. This disconnect creates an opportunity for all, but more so for the powerful than the average citizen — it was ever thus. The fact that we need new frameworks for governance is so obvious that it hardly needs saying, and as we have seen, efforts are underway. However, current legislative approaches around data are trying to apply three-dimensional thinking to a four-dimensional space.

The legal system simply can’t keep up with technology-driven change: chances are that any attempt to legislate will be superseded as new forms of information gathering and analysis develop. One only has to look at the number of cameras being installed on next-generation cars, or the fears around utilities (or indeed device manufacturers) using smart grids to switch off energy without the home-owner's consent, to appreciate some of the difficulties which lie ahead. The issues become even more complex when metadata (data about data, such as phone call records), data aggregation and anonymisation are taken into consideration.

Stripping away the silicon-and-polymer trappings of our technologically advanced culture reveals an issue as old as the Magna Carta: that inadequate governance allows a minority to act with impunity, even as the rights of the masses are abused. Neither Snowden nor Wikileaks was enough to incite change; nor were the Panama Papers; nor, no doubt, will be any other leaks of data. For change to take place, potentially, we need an event so significant it turns humanity on its head. An event significant enough to trigger adequate data governance may not come for decades, and in the meantime the majority of the world’s citizens will be consigned to a state of digital serfdom.

Even if the collective ‘we’ are prepared to sit back and wait for the ‘data-pocalypse’ before agreeing any significant change, we can at least map out what such changes need to be. For a start, and based on the findings of this book, we can set out a number of reasonably clear assumptions about where the information revolution is taking us. These need to include the idea of complete and utter data transparency, either directly or through the increasing ability of computers to work things out. Yes, “Privacy is dead,” as Scott McNealy once said. And yes, we need to “deal with it” — but not simply through blank acceptance of humanity’s fate; rather, by putting in place the checks and balances we need as a result.

On this basis, what can we say? First, that the protections should be framed in terms of the consequences, not the features of the causes. Some of our oldest laws are also some of the simplest, and complexity only causes problems.

Second, that any principles should be completely international. The digital world has no geographic boundaries, so any differences in governance between nations can be exploited, directly and without pause for thought. In future, it’s highly likely that any loopholes will be accessed through automation, if indeed they are not already. The remit needs to consider corporate law, as well as tax boundaries: technology has enabled a global economy, so it should have a fiscal framework that works accordingly. Today, many companies are simply exploiting the potential difference between acting globally and paying Caesar locally. By doing so they are obeying the will of their shareholders, which in many cases means you and me. Such international legislation is not without its loopholes: data havens can exist, whether through physical geography or through legislation, as in the case of Iceland, which voted in such a measure[1] in 2010.

The question of liability also comes into play, therefore. If an organisation’s use of data results in an unexpected material impact on a person, it should be held liable for that consequence, even if the data was anonymised and regardless of where it was stored. We have similar principles in existing law, in terms of diminished responsibility, manslaughter and so on: some responsibility for the outcome needs to be taken.

Perhaps a bigger question is: from where should such laws come? The answer may be to adopt some of the principles made possible by the digital age — openness and collaboration. It worked in Iceland through crowdsourcing, and in Austria workable financial regulation is being co-created. This is not wrong — combative approaches are demonstrably unworkable. The way forward may be to crowdsource the rules, then create a democratic structure to support them: perhaps the UN, or any other pan-national authority that individual nations are prepared to sign up to, but with the will of the people, not the corporations, driving the remit.

So, perhaps we will be able to devise a set of moral principles. Such work has already started — the Magna Carta was cited[2] by the inventor of the World Wide Web, Sir Tim Berners-Lee, as he launched[3] the Web We Want campaign, among others. And indeed, some groups such as AIIM have drawn up[4] an Information Bill of Rights. It doesn’t stop with working groups: the UN has come out with statements such as the Right to Privacy in the Digital Age[5], and Microsoft has followed suit by suggesting[6] it is time for an international convention on government access to data. But as yet such work lacks any overall support, or even authority.

An important next point is that such work can never be considered ‘complete’. As we have so clearly seen, the Information Age is in a brainstorming stage, as businesses try to combine data and services in new and interesting ways and see what insights emerge. One flash of brilliance might create a hitherto unknown, completely legal, public, non-specific, yet damaging stereotype, such as “cat owners are dangerous drivers”. Once such an insight has been discovered, the damage may already be done. As we become better at data analysis, such micro-prejudicial examples will become the norm, rather than the exception.
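
To see how such a stereotype could emerge without touching anyone's 'personal' information, consider this minimal sketch in Python (3.10 or later, for statistics.correlation). The postcodes, ownership rates and claim rates are entirely fabricated: the point is simply that two innocuous aggregates, joined together, can yield an 'insight' that is then applied to individuals.

```python
from statistics import correlation  # available from Python 3.10

# Entirely fabricated aggregates, keyed by postcode district; no
# individual's 'personal' data appears anywhere in this analysis.
cat_owner_rate = {"GL50": 0.31, "GL51": 0.18, "GL52": 0.44, "GL53": 0.27, "GL54": 0.38}
claim_rate     = {"GL50": 0.09, "GL51": 0.05, "GL52": 0.13, "GL53": 0.08, "GL54": 0.11}

districts = sorted(cat_owner_rate)
r = correlation([cat_owner_rate[d] for d in districts],
                [claim_rate[d] for d in districts])

# A strong correlation between two aggregates becomes an 'insight',
# "cat owners are dangerous drivers", which can then be applied to
# individuals (higher premiums, say) without ever processing their data.
if r > 0.8:
    print(f"'Insight' found (r = {r:.2f}): weight premiums by cat ownership")
```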

So we also need to consider the speed of legislation, and of its enactment. If businesses are recruiting data scientists, so must our judiciaries and our lawmakers. Our legal systems need to operate in as agile a manner as our businesses and startups, quickly considering the consequences of the retrospective application of an unexpected discovery. This may also mean moving from regulatory law to case law, on an international basis. This is clearly not the case today, but in the hyper-connected world in which we now exist, there is no reason for it not to be.

Indeed, nothing at all is stopping the masses from creating an internationally applicable, virtual bill of rights that applies not only to ourselves, but to our data. As we have seen, data is not about us, it is us; and if corporations and governments are no longer able to distinguish between the two, then nor should our legal frameworks.


[1] http://www.economist.com/blogs/babbage/2010/06/icelands_media_law
[2] http://www.bbc.co.uk/news/uk-26540635
[3] https://webwewant.org
[4] http://www.aiim.org/community/Discussions/Information-Bill-of-Rights
[5] http://www.un.org/News/Press/docs/2013/gashc4094.doc.htm
[6] http://blogs.microsoft.com/on-the-issues/2014/01/20/time-for-an-international-convention-on-government-access-to-data/

8. Where from here?

The designer stood next to a rail full of t-shirts, in all colours and sizes. Wearing jeans, a checked shirt and baseball cap, he could have been the bloke who brought the t-shirts in. Or a part-time musician. Or a sign painter. He was, in fact, all of those things. He was also Head of Graphics at Cheltenham's success story, the clothing retailer Superdry. Richy had been responsible for every single logo and design that had appeared across the clothing range since its inception. “How do you draw the designs?” I asked. “With paper,” he said. “And a pencil.”

Superdry's story is about a couple of blokes and a minor epiphany that inspired a clothing brand, marrying Eastern graphics and lingo with classic fashion. Richy was one of the first on board, and saw the company grow from a back room to a global brand.

As I write this, in a cafe on Denmark Street in London, a man walks past with a notebook in his hand. Around the notebook is an elastic band, perfectly tensioned using the laws of physics to hold the pages together without being too taut to remove. In much the same way, when the digital dust settles we will all get on with being human, without thinking too much about the technologies that surround us. Who knows, if we are still around, some of us might smile, wryly, in the knowledge that computers can finally recognise the subtlety.

A bit like a piece of paper. And a pencil.

8.1. Moving our own goal posts

“As a vessel is known by the sound, whether it be cracked or not; so men are proved, by their speeches, whether they be wise or foolish”
– Demosthenes

“Just be yourself, sir. Whatever happens, they can't take that away from you.”
— Denholm Elliott as Coleman, Trading Places

Speech writing was a highly formalised affair in fourth century BC Greece, with orators spending several years studying the finer points of Rhetoric (that is, the art of debate) before they could argue their case in the public forums of Athens – the Assembly and the Council.1 According to Aristotle, rhetorical skill was devised to encourage deliberation – in other words, thinking hard – before and during the composition of an argument. Rhetorical speeches were not to be composed in a rush, but over a period of days, weeks or even months.

Assembly meetings took place at the Pnyx, an outdoor area with a stone platform running along one side, itself fashioned from a rocky outcrop. The Pnyx is recognised as the birthplace of democracy, the word itself meaning people-power, though democracy did not then exist in a shape we would recognise today. There was no party political system, no lobbyists and, indeed, the vote was not universal; rather, a semi-random selection of the male populace came together, on that one day, to make a collective decision. Speakers would lay out their painstakingly constructed arguments in front of up to 6,000 of Athens’ male citizens2, which was all the area could fit. Once the debate had taken place, all major decisions were put to a majority vote by a show of hands, the results of which (in the form of decrees) were passed to the 500-member Council for adoption. From there they were copied and recopied, then broadcast across the mainland and islands of ancient Greece. Messengers travelled by horse, boat and foot, meaning several weeks could pass before a decree could be guaranteed to have been heard in all corners of the nation.

In truth, Demosthenes was an unlikely orator – born with a speech impediment, orphaned and denied much of his inheritance. He worked hard to overcome his stammer and develop the skills he needed to sue his guardians for a share of his money, working as a logographer – a professional speech writer – before becoming famous for his own efforts, ranking with Plato and Aristotle as one of the most charismatic orators of the time. When it came, his fame was largely on the basis of hearsay, as few of the Greek population would ever hear his speeches first hand, or have the opportunity to read a transcript. The few copies that existed of his speeches were kept in libraries for use by students and were not generally accessible to the wider public, even if they had the skills to read them.

While the Greek system of decision making was not unique[^3] in ancient times, no other is known to have been quite as extensive. More typically, powers of persuasion and charisma had to be underpinned by both individual strength and the presence of an appropriately structured militia. The ‘power of the crowd’ would more usually be expressed along tribal lines, with angry hordes needing leadership before they would attempt to topple their oppressors. Indeed, Demosthenes’ own doom was spelt when he attempted to use his rhetorical skills to rally the masses against the Macedonian king, Philip, and his successor, Alexander the Great. Unable to withstand his foes by force, he took poison to prevent capture by Alexander’s representative, Antipater.

The main advantage Demosthenes had over his modern counterparts was that he had plenty of time. He and his peers had a high level of influence over a reasonable number of people – mere thousands would be swayed by his words. However the time to prepare, while a luxury, also meant that speed of creation had to be measured in weeks, and the speed of distribution, and subsequent impact, of his own writings was limited. While today’s computer networks can transmit far more than ‘just’ text, the importance of the considered opinion has not diminished. From the days of Community Memory, through bulletin boards and online forums, the Internet has offered a place for debate, a virtual Pnyx with no limit on how many people can attend.

But while the world may be very different today, the role of the individual has not gone away — indeed, perhaps the only thing to change is the scale of the debate. As we have seen, the information revolution has been marked by the lowering of thresholds. The information itself is unconcerned about the impact it has — it exists only to inform. It can inform people: we can be entertained or educated, and we can use it as the basis for making up our minds, both within and outside business. It can also inform processes and drive equipment, opening the door to automation. It is generally created by capturing, digitising and processing data, then transporting it from one place to another in a generally accepted binary format. Whether we write a message or make use of a sensor, we are adding to the mother of all analogue-to-digital converters. As a result, the whole world has become our Pnyx.

With thresholds lowered, we can all become the next Demosthenes, for better or worse. But we are, in many ways, more isolated than ever. UK journalist Johann Hari, himself on a journey out of a pit of his own creation, was researching drug addiction. His findings surprised him: more isolated people were more likely to become, and stay, addicts. “What I learned is that the opposite of addiction is not sobriety,” he commented. “The opposite of addiction is human connection.” How ironic it is, then, that in this hyper-connected world, we should still be suffering the darker consequences of being disconnected.

The future is real, and it is here, and it is happening faster than we can adapt. So we are losing our sense of meaning, as we struggle to understand who we are in the new. Following his experiences as a concentration camp prisoner in the Second World War — during which he essentially found that people, if they lost their sense of purpose, also lost their reason to live — psychotherapist Viktor Frankl devised a meaning-based therapy and speculated that a loss of meaning may be linked to the breaking down of the immune system. Ultimately, our physical health and our sense of identity may be connected to our ability to cope with where technology is taking us.

Strange as it might seem, computer security experts were some of the first technologists to stumble upon a similar truth. The Jericho Forum3 was founded by a number of well-placed Chief Information Security Officers at the turn of the Millennium, in recognition that organisations could no longer control the data they managed in their databases. “The walls have come tumbling down,” went the mantra. In response, the CISOs realised that they needed to manage data wherever it was, rather than trying to keep it in one place — which moved their thinking from protecting the data to coming up with a way of identifying who, and what, was accessing it. In November 2010 they announced4 their Identity and Access Management Commandments, describing the design principles systems needed to adopt in order to link a device or a personal identity to a given data set. To emphasise how complete this shift in thinking was, the group was disbanded in November 2013[^6], when it was deemed to have finished its work. “Ten years ago, the Jericho Forum set out on a mission to evangelise the issues, problems, solutions and provide thought-leadership around the emerging business and security issues of de-perimeterisation, with the aim of one day being able to declare ‘job-done’,” went the press release. “That day has now arrived.”
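
As a minimal illustration of that shift in thinking, the sketch below grants or refuses access based on the identity of whoever (or whatever) is asking, and on a label that travels with the data itself, wherever the data happens to be stored. It is a generic sketch of the de-perimeterised principle, not the Jericho Forum's actual commandments; the identities, labels and locations are invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Identity:
    subject: str           # a person or a device; both need identities
    clearances: frozenset  # data labels this identity is entitled to see

@dataclass(frozen=True)
class DataItem:
    name: str
    label: str             # classification travels with the data...
    location: str          # ...because the data could be stored anywhere

def may_access(who: Identity, item: DataItem) -> bool:
    """Access depends on identity and the data's own label, not on where
    the request comes from or where the data happens to sit."""
    return item.label in who.clearances

if __name__ == "__main__":
    alice = Identity("alice@example.com", frozenset({"public", "customer-records"}))
    sensor = Identity("turbine-17", frozenset({"telemetry"}))
    record = DataItem("billing-history", "customer-records", "third-party cloud")

    print(may_access(alice, record))   # True: her identity entitles her
    print(may_access(sensor, record))  # False: wrong identity, wherever the data sits
```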

The need to connect people with data, for security as well as psychological reasons, should not be seen as coincidental, given the primary purpose of security — to manage the risks associated with data about ourselves. At the same time it confirms that if data is to reflect reality, it also needs to connect to reality in a tangible way. That is, data needs to have real meaning, just as people do. Such principles are also being acknowledged today, not least in GE’s ‘digital twins’ initiative5, which involves creating what the company hopes will be the world’s largest database of physical object attributes — virtual shells, which can be filled with information about the physical objects they represent.
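
In data terms, a digital twin can be pictured as little more than that virtual shell: a named placeholder, progressively filled with attributes and readings from the physical object it mirrors. The sketch below is a generic, hypothetical illustration of the idea in Python, not GE's implementation; the turbine, its attributes and its readings are invented.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DigitalTwin:
    """A virtual shell for one physical object: static attributes plus a
    growing history of sensor readings."""
    object_id: str
    attributes: dict = field(default_factory=dict)  # e.g. make, model, location
    readings: list = field(default_factory=list)    # (timestamp, name, value)

    def record(self, name: str, value: float) -> None:
        """Append a reading from the physical object, stamped with the time."""
        self.readings.append((datetime.now(timezone.utc), name, value))

    def latest(self, name: str):
        """Return the most recent value for a named reading, if any."""
        for _, n, v in reversed(self.readings):
            if n == name:
                return v
        return None

if __name__ == "__main__":
    turbine = DigitalTwin("turbine-17", {"model": "X100", "site": "offshore-A"})
    turbine.record("bearing_temp_c", 74.2)
    turbine.record("bearing_temp_c", 78.9)
    print(turbine.latest("bearing_temp_c"))  # 78.9: the twin mirrors the machine
```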

This evolution in thinking — that identity holds the key to success — is profound. Even more fascinatingly, it is not just coming from technology, but from our own brains, which are increasingly wired to take into account our activities in the virtual world. To understand how this is possible we can consider the work of Temple Grandin, an autistic designer of agricultural equipment, who has written extensively about both her own experiences and our general understanding of the autistic spectrum. One area she has highlighted is that the human brain is in a constant state of redevelopment. Where neural pathways are under-functioning or otherwise blocked, other connections get made. These biological adjustments to the circuitry of the brain enable signals to be re-routed, directly affecting a person’s cognitive abilities.

It isn't just in autism that synaptic re-routing takes place. Those familiar with rehabilitating drug addicts know that addiction has physical consequences — essentially, new pathways are created to reflect the ‘normality’ of drug use. Once made, pathways cannot be told to cease operating, which starts to explain why addicts can’t ‘just stop’. The parallels between addiction and autism don’t stop there. Temple Grandin has highlighted the importance of learning social skills for people on the autistic spectrum, even if this means operating outside their comfort zone, as such skills enable people to interact and function in society. “In the 1950s, social skills were taught in a much more rigid way, so kids who were mildly autistic were forced to learn them. It hurts the autistic much more than it does the normal kids to not have these skills formally taught,” she remarked.

As Temple Grandin illustrates, it appears that we are genuinely wired for our existence, and our brains are likely still wired for the old. What we perceive as rationality is based on our ability to repeat a complex response to a complex situation; when that situation changes subtly, our attempted response appears ‘irrational’. It is a myth that decision making is largely rational: digital sources constitute part of that rational element, as do other non-digital sources such as word of mouth. Irrationality tends to be affected by risk, time pressure, (in)experience and cognitive biases, as well as by the incompleteness of rational inputs, digital data included. Given how the technology revolution has made us more connected, and yet more isolated than ever, Temple Grandin, Viktor Frankl and Johann Hari’s experiences may be of profound importance. External connections are vital to our well-being, and these may well be reflected in synaptic connections which, once created, cannot simply be un-knotted.

If it is true6 that our brains need to be re-wired, as the Economist7 and IDG8 suggest, then what should we do about it? The answer is perhaps nothing — we can just wait for the change to happen. Children, it is well known, have a greater ability to learn and adapt; simply put, they are better equipped to build the neurological matter that we all need. Even they9 will continue to struggle, given that the changes are not over yet. But the important thing is that people will get smarter, shifting towards this goal.

To abet this process, a good first step is education, which needs to be as broad as possible — we can help ourselves by harnessing the diverse range of skills and knowledge we have accumulated through the aeons, across the sexes, across nations, across rich and poor. With the best will in the world, the core of the technological revolution is currently being defined by the few, rather than the many; only with broader understanding can we, as a race, become more at peace with technology. Then we have to embrace the change. We need, essentially, to shift our thinking from analogue to digitally augmented ways, to understand that the data around us is as much a part of us as the blood that flows in our veins, or the money in our increasingly virtual wallets. By taking this leap of understanding, we can start to un-knot our brains from the old and build the inner synaptic pathways that reflect the way ahead. But even as we do so, we need to keep in mind that while technology can augment us, and even change the way we think, it does not ultimately affect our purpose. “A person’s primary task should not be computing, but being human,” says10 cyborg11 anthropologist Amber Case. The reason why we are here, how we have survived for so long, does not have to change.

While we may become hyper-aware, the future lies in retaining our closeness with ourselves and the world around us. From a wine perspective for example, wine technologist Matic Šerc and his team have learned that more goes on in wine growing than engineering can solve. “It’s a triangle of the soil, the weather and the vine. When you manage vineyards you can manage the soil and you can manage the canopy, to an extent. But you cannot completely switch the soil, or change weather conditions, with technology. You have to keep in mind what you want from the grapes — the grapes then go through the process of wine making but there are factors that you cannot influence.” Ultimately wine is more than a product, it is a consequence of everything that takes place to turn the first opening leaves of Spring into the dark reds and crisp whites, the infusion of flavours and textures that bring so much pleasure to so many. “Wine has a story, a personality,” says Matic. As have so many of the facets of the world around, and within us.

Wherever the digital revolution takes us, one thing is for sure: we are stuck with technology and need to make the best of it. This doesn’t mean that we should do away with the largely non-digital world we inhabit, however much some of the more evangelical voices might want us to. Like water, data will engulf everything that we do; it cannot be held back. Like fire, it will spread uncontrollably, however much we penalise those who drop the occasional lighted match. Like the air on a cold day, we will breathe it, and it will retain an image of our breath which can be captured, with or without our knowledge. But a cup of espresso, served on a veranda on a balmy day in Southern Italy, still gives more pleasure than watching the same in a YouTube video.

And, the chances are, it always will.

8.2. The hunter-gatherer take

“I have it - the greatest invention since the wheel… the axle!”
— The Wizard of Id1

“Day in the life of a mayfly: born, eat, shag, die.”
— Not the Nine O’Clock News

What would our forebears, the hunter-gatherers, have made of all this change? They might even have seen some similarities — the gossip of village life, living in each other’s pockets, dealing with the occasional disaster alongside the more usual mundane. They had to live by their wits, creating solutions to problems, often with minimal resources. As such they relied as much on their greatly developed brains as on the tools they used. Indeed, historical evidence (albeit scant) suggests that our intellect, resourcefulness and general drive to survive are what got us to this point, rather than any particular physical attributes.

The tools we have available are not done with changing. New substances such as graphene enable us to create smaller, stronger devices; quantum computing will change the way we think about both processing and networking; DNA-based structures are able to store terabytes of information. “If I was to start my career again, I would start at the cusp of biotech, nanotech and artificial intelligence,” says2 Peter Cochrane OBE, futurologist and ex-head of research at BT. “The ability to make our future devices intelligent and cognitive is key to sustainability and the advancement of the human race.” Might it be possible for computers to start to ‘think’ for themselves, as in the technology singularity cited[^3] by technology futurists Ray Kurzweil and Vernor Vinge? Once we arrive at such a point of computer consciousness, they say, there is no going back. Indeed, they have gone so far as to set a date — 2025, or under ten years’ time.

Even if it does not, change is itself speeding up, rendering tools into existence and then causing them to vanish at an accelerating rate. A few years ago the term ‘mayfly time’ was coined to describe how innovations were likely to exist within a 24-hour period, leaving scant time to profit from their presence. The expression was a nod to the Not the Nine O’Clock News annual, which described the day in the life of a mayfly: “Born, eat, shag, die.” Such principles are already present in algorithmic trading, which looks for immediate differences in stock prices and deals with them in microseconds. The only rule is that the books need to balance at the end of each day.
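
As a heavily simplified illustration of the principle, the sketch below watches the same stock quoted on two venues and ‘trades’ whenever the quoted prices diverge by more than an assumed cost threshold, ending the day flat. The prices are invented, and real systems contend with fees, latency and market impact that this toy version ignores.

```python
# A toy illustration of cross-venue arbitrage: when the same stock is quoted
# at different prices on two venues, buy on the cheaper one and sell on the
# dearer one at the same instant. All numbers are invented; real systems act
# in microseconds, on streaming quotes rather than a fixed list.

quotes = [  # (venue_a, venue_b) prices for one stock, tick by tick
    (100.00, 100.00),
    (100.02, 100.05),   # venue B briefly quotes higher: a gap appears
    (100.04, 100.04),
    (99.98, 100.03),
]

MIN_EDGE = 0.02  # only act when the gap exceeds assumed costs
trades = 0
profit = 0.0

for a, b in quotes:
    gap = b - a
    if abs(gap) > MIN_EDGE:
        # Buy one unit on the cheaper venue, sell one on the dearer:
        # the net position stays zero and the price difference is banked.
        trades += 1
        profit += abs(gap)

print(f"{trades} trades, net position 0, profit {profit:.2f} per share traded")
# The one rule the text mentions: the books balance at the end of the day.
```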

Ultimately however, innovations remain but tools, and we saw the same kinds of things being said about radio as about3 the Internet, which is just one more in a series of advances. The mid-term future should be considered against a background of diminishing resources, an ageing population and, indeed, our own very human weaknesses. We are a race of opportunists, of tryers, of exploitative types, of innovators and meanwhile of users, of people who just want to get on with their lives, to eat, sleep, read, play sport, procreate and otherwise accept their lot. And those who are smarter, or who recognise opportunity faster, or who get that it’s not that hard, build the devices and the interfaces that reveal facets of the potential to the rest. Occasionally someone will have a lucky break, but most efforts will fail, in this technological survival of the fittest.

So, we bumble along, just like those before us. For our primitive forebears, death was only a whisper away — from predators, from warring tribes, from desperate people who would kill to survive. And yet, they survived, and flourished. 

Even as technology offers such huge promise and yet poses so great a threat to our survival, we can only hope that we, and our future generations, will do the same.