A platform for the web of everything

Planning ahead for Formula 1 racing success is not for the faint-hearted. “We have a plan of where we are going for the year, and then something major comes along, like a competitor running something new, and suddenly we have to shift tack completely and head off in a different direction,” explains Williams F1’s IT Director, Graeme Hackland. “Our race culture is ‘You change things’. It happens race by race, we’re learning about our car, our competitors’ cars and that leads to changes in the course of a season.”

As a consequence, the design of the car is changed between, and sometimes even during, race meets. The ultimate decision often lies with the aerodynamics engineers, who test potential designs using the race team’s own supercomputer before 3D-printing them and running wind tunnel scenarios. Any changes then need to work in practice, on the track. Getting feedback from the cars relies on a complex array of sensors measuring vibration, changes in temperature and so on. “Each car has over 200 sensors, 80 of which generate real-time data for performance or things that might go wrong, while the rest create data that the aerodynamicists analyse afterwards, for improvements to the car,” explains Graeme. Doing so requires an array of equipment — supercomputing and general-purpose computing facilities back at home base are extended with a small number of truck-mounted servers situated at the side of the track. “We take a feed of everything generated on the car’s sensors, then we trickle feed data back as we need it.”
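For the technically minded, the split Graeme describes can be sketched in a few lines of Python. The channel names and figures below are invented for illustration, not taken from Williams’ systems: a handful of channels are streamed live, while everything is kept for the aerodynamicists to pick over later.

```python
from collections import defaultdict

# Roughly 80 channels are treated as real-time in Graeme's description;
# three invented names stand in for them here.
REAL_TIME_CHANNELS = {"gearbox_temp", "oil_pressure", "brake_vibration"}

live_feed = []                            # streamed trackside immediately
post_session_buffer = defaultdict(list)   # trickled back and analysed later

def ingest(channel, timestamp, value):
    """Route one sensor reading: real-time channels go to the live feed,
    and every reading is kept for post-session analysis."""
    reading = (timestamp, channel, value)
    if channel in REAL_TIME_CHANNELS:
        live_feed.append(reading)
    post_session_buffer[channel].append(reading)

ingest("gearbox_temp", 12.4, 131.0)       # triggers live monitoring
ingest("front_wing_strain", 12.4, 0.82)   # analysed after the session
```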

While Formula 1 is just the latest industry to be swept up by connecting monitoring devices and sensors to some kind of computing facility, a number of industries have been doing so for a lot longer. Sensors themselves have been around for a good long time — the first thermostat appeared1 in the 1880s, patented by Warren S. Johnson in Wisconsin. Johnson was a science teacher at the State Normal School in Whitewater, Wisconsin, whose lessons were constantly being interrupted by staff responsible for adjusting the heating equipment. Over three years he worked on a device based on the principle that different metals expand at different rates when heated; a coil of two metals could make or break an electrical circuit, which in turn could be connected to a steam valve. “It is evident that the making and breaking of the electric current by the thermostat will control the supply of steam, and thus the temperature of the room,” stated2 Johnson in his patent of 1883. The company he founded, Johnson Controls, is now present3 in over 150 countries and turns over nearly $40 billion in sales — not bad for a farmer’s son turned teacher.
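Johnson’s idea translates almost directly into software. The sketch below shows the same make-or-break behaviour as a simple two-point control loop; the set points are illustrative, not anything from the patent.

```python
def thermostat(temperature_c, valve_open, low=19.0, high=21.0):
    """Return the new valve state: 'make' the circuit below `low`,
    'break' it above `high`, otherwise leave it as it is."""
    if temperature_c < low:
        return True     # admit steam
    if temperature_c > high:
        return False    # shut the valve
    return valve_open   # within the band: no change

valve = False
for reading in (18.2, 19.5, 21.4, 20.0):
    valve = thermostat(reading, valve)
    print(f"{reading} degrees C -> steam {'on' if valve else 'off'}")
```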

Monitoring such devices from a distance is a more recent phenomenon, which has largely followed the history of computing — in that, once there was a computer to which sensors could be connected, people realised a number of benefits of doing so. The earliest examples of remote control came from the utilities industries in the 1930s and 1940s, and keeping a distant eye on pumping stations and electricity substations quickly became a must-have feature. “Most of the installations were with electric utilities, some were pipelines, gas companies, and we even did an installation in the control tower of O’Hare airport in Chicago, to control the landing lights,” remembered4 engineer Jerry Russell, who worked on such early systems. Not only did they save considerable manual effort, but they also enabled problems to be spotted (and dealt with) before they became catastrophes.

As the Second World War took hold, another compelling problem created a different need for remote sensing. The war saw aviation used on an unprecedented scale, leading to the invention, then deployment, of a radar system across the southern and eastern coastlines of the UK. The team of engineers was led5 by Robert Watson-Watt, himself descended from James Watt. Successful as the system was, it led to another challenge — how to identify whether a plane was friend or foe, ‘one of ours’ or ‘one of theirs’? The response was to fit a device on each plane with a relatively simple function: when it picked up a radio signal of a certain frequency (i.e. the radar signal), it would respond in a certain way. Together with radar itself, the resulting technology, known as the Identification Friend or Foe (IFF) system, was instrumental in the UK’s success at fending off the German onslaught during the Battle of Britain. And thus, the transponder was born.
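The principle is simple enough to capture in a few lines. The sketch below uses an invented frequency and reply code, purely to illustrate the respond-only-to-the-right-signal behaviour rather than the wartime specification.

```python
from typing import Optional

INTERROGATION_FREQ_MHZ = 176.0   # assumed value, purely for illustration
MY_CODE = "FRIEND-42"            # invented identifying reply

def respond(received_freq_mhz, tolerance=0.5) -> Optional[str]:
    """Reply only when the received signal matches the expected radar frequency."""
    if abs(received_freq_mhz - INTERROGATION_FREQ_MHZ) <= tolerance:
        return MY_CODE   # 'one of ours' announces itself
    return None          # any other signal is ignored

print(respond(176.2))    # FRIEND-42
print(respond(91.0))     # None: no reply, so the plane is treated as unknown
```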

Once the war was over, of course, the technology revolution began in earnest, bringing with it (as we have seen) miniaturisation of devices, globalisation of reach and, indeed, standardisation of protocols — that’s a lot of ‘isations’! By the 1960s, monitoring circuits started to be replaced by solid-state supervisory circuits such as Westinghouse’s REDAC6 system, which could send data at a staggering 10 bits per second. Together, the invention of the modem (short for MOdulator/DEModulator, a device which converts digital signals into sounds so they can be sent down a telephone line) and the development of the teletype — the first, typewriter-based remote terminal — resulted in the creation of the Remote Terminal Unit (RTU), an altogether more sophisticated remote monitoring system which could not only present information about a distant object, but also tell it what to do. For example, a flood barrier could be raised or lowered from a central control point.
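In modern terms, the step from monitoring to control is the difference between a read-only feed and a command interface. The following sketch uses an invented flood barrier object rather than any real RTU protocol, simply to illustrate that distinction.

```python
class FloodBarrierRTU:
    """Illustrative remote terminal unit for a flood barrier."""

    def __init__(self):
        self.position = "lowered"

    def read(self):
        # Telemetry reported back to the control room (the monitoring half).
        return {"position": self.position}

    def command(self, action):
        # Instruction received from the control room (the control half).
        if action not in ("raise", "lower"):
            raise ValueError(f"unknown command: {action}")
        self.position = "raised" if action == "raise" else "lowered"

rtu = FloodBarrierRTU()
print(rtu.read())      # {'position': 'lowered'}
rtu.command("raise")   # issued from the central control point
print(rtu.read())      # {'position': 'raised'}
```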

The idea of remote automation very quickly followed — after all, this was what Johnson himself originally saw as the benefit, as a room thermostat could directly control a valve situated in the basement of his school. Sensors and measuring devices were just as capable of feeding information into computer programs as punched cards or, indeed, people. As a consequence, software programs started to turn their attention to how they could manage systems at a distance — in the mid-1960s the term SCADA (Supervisory Control and Data Acquisition) was created7 to describe such systems. The advent of computers also brought the ability not only to process information, but to store it. Recording of historical events — such as changes of temperature — very quickly became the norm rather than the exception, with the only constraint being how much storage was available. Even so, as it became easier to store a piece of data than to worry about whether it was needed, “keep it all” became a core philosophy of computing, one which remains in place to the present day.
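The “keep it all” philosophy is easy to illustrate: an append-only store that records every reading with a timestamp and leaves the question of usefulness for later. The sketch below is illustrative, with made-up tag names standing in for a real time-series store.

```python
import time

historian = []   # in a real system, a time-series database

def record(tag, value):
    """Store every reading, rather than deciding up front whether it matters."""
    historian.append({"tag": tag, "value": value, "ts": time.time()})

record("pump_station_3.outlet_temp", 41.7)
record("pump_station_3.outlet_temp", 42.1)
print(len(historian), "readings kept")   # the only real limit is storage
```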

As for how things moved on from this point, suffice it to say that they progressed as might be expected, with the creation of the microprocessor by Intel in the early 1970s being a major contributor. The smaller and cheaper computers and sensors became, the easier it was to put them in place. SCADA systems are now prevalent around the world for the management of everything from nuclear power stations to flood barriers. “Machine to machine” (M2M) communication is the de facto approach in process control, manufacturing and transportation: today, it has been estimated that the number of sensors on a 747 is in the order of millions. We are still not ‘there’ of course — many houses in the UK still rely on manually read electricity meters, for example, and in many cases water isn’t metered at all. But still, the use of sensors continues to grow.

Meanwhile the notion of the transponder was continuing to develop on a parallel track. It didn’t take long for other uses of the idea to be identified: indeed, the patent holder for the passive, read-write Radio Frequency ID (RFID) tag in 1969, Mario Cardullo, first picked up8 the idea from a lecture about the wartime friend or foe systems. Before long the concept was being applied9 to cars, cows10, people and indeed, even the first sightings of contactless bank cards. The main challenge wasn’t to think of where such devices could be used, but to have devices that were both affordable and of a suitable size for such generalised use.

The real breakthrough, or at least the starting point of one, came in 1999. The idea behind the Auto-ID Center set up at the Massachusetts Institute of Technology (under professors David Brock and Sanjay Sarma) was relatively simple — rather than have a transponder holding a large amount of information, why not just create a simple RFID tag and give it a number, which could then be linked to information on a computer somewhere? The result, agreed a committee of standards bodies and big companies, would be that tags could be produced more cheaply and therefore they could be used more broadly. So far so good — and it didn’t take long for Sarma and Brock to work out that the best way of accessing such information was via the Internet. The rest is relatively recent history, as the only constraint became the cost of individual tags. Today you can’t buy a pair of underpants without an RFID chip in them. In 2007 RFID chips were approved11 for use in humans, and it is now commonplace to have an animal ‘chipped’.
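The Auto-ID idea can be sketched as follows: the tag holds nothing but a number, and the description of the object lives in a lookup reached over the network. The tag identifier, registry contents and URL below are invented for illustration.

```python
# A made-up tag identifier in the style of an EPC URN, and a dictionary
# standing in for a registry that would really be queried over the Internet.
PRODUCT_REGISTRY = {
    "urn:epc:id:sgtin:0614141.112345.400": {"item": "wine, 2019 rioja", "batch": "B-77"},
}

def resolve(tag_id):
    """The tag carries only its number; everything else is looked up remotely."""
    # In practice this would be an HTTP request to a hypothetical service,
    # e.g. https://registry.example.com/epc/<tag_id>, not a local dictionary.
    return PRODUCT_REGISTRY.get(tag_id, {"item": "unknown tag"})

print(resolve("urn:epc:id:sgtin:0614141.112345.400"))
```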

The broad consequences of the rise of remote sensors, coupled with the ability to tag just about anything, are only just starting to be felt. The term “The Internet of Things” was coined to describe the merger between sensor-driven M2M and the tag-based world of RFID. Broadly speaking, there is no physical object under the sun that can’t in some way be given an identifier, remotely monitored and, potentially, controlled, subject of course to the constraints it finds itself within. Some of the simplest examples are the most effective — such as the combination of a sensor and a mobile phone-type device, which can be attached to farm gates in Australia so that sheep ranchers can be sure a gate is shut, saving hours of driving just to check it. “We still see high demand for GPRS,” remarks Kalman Tiboldi, Chief Business Innovation Officer at Belgium-headquartered TVH, a spares and service company pioneering pre-emptive monitoring of equipment failure. Even a bar code and a reader can be enough to identify an object, and indeed, recognition technology is now good enough to identify a product — such as a bottle of wine — just by looking at it.
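The farm gate example needs very little logic, which is rather the point. The sketch below stands in for the real thing: the SMS function and phone number are placeholders, not a real GPRS or messaging API.

```python
def send_sms(number, message):
    """Stand-in for the modem call that would really go out over GPRS."""
    print(f"SMS to {number}: {message}")

def check_gate(gate_id, is_closed, rancher_number):
    # Only bother the rancher when the gate has been left open.
    if not is_closed:
        send_sms(rancher_number, f"Gate '{gate_id}' is open")

check_gate("north-paddock", is_closed=False, rancher_number="+61-000-000-000")
```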

As for how far the Internet of Things might go, the sky is the limit. If the IoT were a train, it would have sensors on every moving part: every piston, every element of rolling stock and coupling gear. Everything would be measured — levels of wear, play, judder, temperature, pressure and so on — and the data would be fed back to systems which analysed it, identified potential faults and arranged for advance delivery of spares. Information would be passed to partners about the state of the track and fed into considerations for future designs. The same feed could be picked up by passengers and their families, to confirm location and arrival time.

Meanwhile, in the carriages every seat, every floor panel would report on occupancy, broadcasting to conductors and passengers alike. The toilets would quietly report blockages, the taps and soap dispensers would declare when their respective tanks were near empty. Every light fitting, every pane of glass, every plug socket, every door and side panel would self-assess for reliability and safety. The driver’s health would be constantly monitored, even as passengers' personal health devices sent reminders to stand up and walk around once every half hour.
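Behind such a scenario sits a fairly simple loop: compare each reading against a wear limit and raise an order when it is exceeded. The sketch below, with invented components and thresholds, shows the shape of that predictive maintenance step.

```python
# Invented wear thresholds per component; readings below them are ignored.
WEAR_LIMITS = {"piston": 0.8, "coupling": 0.6, "door": 0.7}

def analyse(readings):
    """Flag components whose wear exceeds the limit and raise spare orders."""
    spare_orders = []
    for component, wear in readings.items():
        if wear >= WEAR_LIMITS.get(component, 1.0):
            spare_orders.append({"part": component, "reason": f"wear {wear:.2f}"})
    return spare_orders

feed = {"piston": 0.85, "coupling": 0.41, "door": 0.72}   # one snapshot of the feed
for order in analyse(feed):
    print("order spare:", order)   # arranged before the fault becomes a failure
```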

Is this over-arching scenario yet possible? Not quite — still missing is a layer of software, the absence of which means that current Internet of Things solutions tend to be custom-built or industry-specific, not sharing common building blocks beyond connectivity12. Does this matter? Well, yes. In the trade, experts talk about the commoditisation of technology — that is, how pieces of tech that started as proprietary eventually become ‘open’, in turn becoming more accessible and cheaper — it’s this that has enabled such things as the three quid Raspberry Pi, for example. We shall look at this phenomenon in later chapters, but for now it is enough to recognise that the IoT is an order of magnitude more expensive than it needs to be. This is not through a lack of trying — numerous startup software companies such as Thingworx13 and Xively14 are positioning themselves as offering the backbone for the smart device revolution. No doubt attention will swing to a handful of such platforms15, which will then be adopted wholesale by the majority, before being acquired by the major vendors and, likely, almost immediately becoming open-sourced, as has happened so often over recent decades.

These are early days: over the coming years, new sensor types, new software platforms and new ways of managing devices and data will emerge. In a couple of years costs will have dropped by another order of magnitude, opening the floodgates to yet more innovation on the one hand, and techno-tat on the other. GPS-based pet monitors, for example, have been available for a while, albeit bulky and expensive. Now that they are reaching tens of pounds, they make sense, as will a raft of other examples. It really will become quite difficult to lose your keys16, or indeed misplace your child17.

How small will it all go? Whether or not Moore’s Law succeeds in putting more transistors on a single chip, technology shrinkage doesn’t appear to want to stop — which means that the range of uses will continue to broaden, for example by incorporating electronics into pharmaceutical drugs, which will then ‘know18’ whether they have been taken. An intriguing line of development in the world of IoT is the creation of devices which require minimal, or even no, power to run. In Oberhaching near Munich sits EnOcean, a company specialising in “energy harvesting wireless technology,” or in layman’s terms, chips that can send out a signal without requiring an external power source. Founded by Siemens employees Andreas Schneider and Frank Schmidt, EnOcean is a classic story of marketing guy meets technology guy. Harvesting methods include19 generating electricity from light, from temperature and from motion, leading to the possibility of a light switch which doesn’t need to be connected to the electricity ring main to function. The EnOcean technology may not be particularly profound by itself, but its potential in the broader, technology-enabled environment might well be.

Not everybody thinks miniaturisation is such a good thing. Silicon chips are already being created at a level which requires nanometre measurements, bringing them into the realms of nanotechnology, a topic which has been called out by science fiction authors (like Michael Crichton) as well as think tanks such as Canada-based ETC, which monitors the impact of technology on the rural poor. “Mass production of unique nanomaterials and self-replicating nano-machinery pose incalculable risks,” stated ETC’s 2002 report20, The Big Down. “Atomtech will allow industry to monopolize atomic-level manufacturing platforms that underpin all animate and inanimate matter.” It was Sir Jonathon Porritt, previously in charge of the environmental advocacy group Friends of the Earth, who first brought the report to the attention of Prince Charles, who famously referenced21 Crichton’s “grey goo” in a 2004 speech.

Some 10 years later, such fears are yet to be realised. But miniaturisation continues to have consequences in terms of increasing the accessibility of technology. As a result, the emergence of a platform of hardware, connectivity and control software capable of connecting everything to everything continues on its way. What will it make possible? This is a difficult question to answer, but the most straightforward answer is, “quite a lot.” And it is this we need to be ready for. How we do so is the most significant task we face today.