The Internet of Things is no different from many other concepts in computing that attract a lot of hype in their early days. Everyone has heard of it and most people have an opinion on the topic. But we either have a very vague understanding of the whole or, if we do know something about it, we know only about the specific aspect we happen to have encountered in more detail.
But there comes a point at which the general excitement about this shiny new concept begins to be tempered by a growing realization that it’s not as polished as it appeared to be at first glance. It becomes apparent that it doesn’t work as well as we thought it would, or that there are significant trade-offs that have to be made to get it to work. We’re fast approaching that point with the Internet of Things.
So I thought it might be helpful at this stage to jot down a few thoughts: first, about some of the reasons why it's not working quite so well, and then about what might need to happen over the next few years to get it working better.
I’m grateful to Andy Mulholland, former Cap Gemini CTO and now VP and principal analyst with Constellation Research, for leading a roundtable discussion at a Constellation event in London on Thursday that helped me crystallize several of these thoughts.
Intranet of things
In the early days of the Internet back in the 1990s, every enterprise wanted an Internet of its own. Huge sums were invested in building private internets that were called intranets. Their purpose was to share enterprise content internally. It all sounds faintly absurd now and younger readers may not even have come across the term unless they happen to work for an enterprise with decades-old content management software still in place.
In the early days of any public network, there are worries about security and control which tend to lead enterprises to build their own private version before they become willing to trust the public one. An obvious present-day example is the preference that many enterprises have for private clouds over cheaper, higher-performing and more robust public clouds.
We shouldn’t therefore be surprised to see enterprises preferring to build their own private Internet of Things — or more properly, an intranet of things — while they get comfortable with the technology and practice of IoT. But will these private intranets of things persist? However durable they may seem at present, history suggests the public network will win out in the long term.
Internet of fogs
In meteorology, when the cloud meets the ground you get a fog. So in computing terms, some vendors have advanced the theory that when the cloud descends on a network of physical devices, you end up with a localized cloud surrounding them — what Cisco, for example, calls Fog Computing.
These localized pockets of fog computing have some processing capability on hand, which avoids having to send streams of raw data somewhere else for processing. Instead, the data can be analyzed locally and pertinent results or alerts are sent off to the parent cloud.
It makes a lot of sense to move the processing closer to the data where that's possible. This kind of edge processing and analytics has lower latency and reduces bandwidth consumption and costs. However, there is a danger that you end up with an Internet made up of lots of separate, localized fogs that cannot easily talk to each other.
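To make that concrete, here is a minimal sketch in Python of what edge processing might look like, using made-up thresholds and function names rather than any vendor's actual fog platform: the node summarizes a batch of raw readings locally and forwards only the summary, plus any out-of-range values, to its parent cloud.

```python
import statistics

# Hypothetical edge-node logic: summarize raw readings locally and
# forward only a compact summary plus any out-of-range alerts upstream.

TEMP_LIMIT_C = 85.0  # assumed alert threshold, purely for illustration

def summarize_readings(readings):
    """Reduce a batch of raw temperature readings to a small summary."""
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
    }

def process_at_edge(readings):
    """Return only what is worth sending to the parent cloud."""
    summary = summarize_readings(readings)
    alerts = [r for r in readings if r > TEMP_LIMIT_C]
    payload = {"summary": summary}
    if alerts:
        payload["alerts"] = alerts  # raw values only when they matter
    return payload

if __name__ == "__main__":
    raw = [71.2, 70.8, 86.4, 72.0, 71.5]  # e.g. one minute of samples
    print(process_at_edge(raw))  # a few bytes instead of the whole stream
```

The point of the sketch is simply that the raw stream never has to leave the local network; only the distilled result does.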
Whether it’s Cisco, Huawei, Intel, Google, Siemens, GE or any of the many others seeking to capture IoT mindshare and market, buyers should be wary. Whatever the tactical arguments in its favor, fog computing is often also a strategy for tying an enterprise’s intranets of things into a proprietary landscape belonging to the vendor. You might just as well call it FUD computing (where FUD stands for the tried-and-tested technology marketing ploy of spreading fear, uncertainty and doubt about competitors’ offerings).
While edge processing is clearly going to be a very important attribute of the Internet of Things, it would be a mistake to carve up the cloudscape into a collection of localized fog computing islands that make it difficult for individual nodes to interact across the cloud. To understand why, we should drill down into three separate layers that collectively make up the Internet of Things.
Internet of sensors
This is the defining foundation layer of the Internet of Things. It stems from our new-found ability to connect physical devices to the network and instrument them so they can communicate their condition, or to add sensors that monitor the environment around them.
We are seeing very strong adoption of this technology in manufacturing — what companies such as GE call the Industrial Internet. It is rapidly taking hold in manufacturing plants, where many of the machines in use are already computer controlled.
There’s an overlap here with an older, related technology called M2M (machine-to-machine), which is about machines interacting automatically. IoT goes much further than merely automating existing interactions. It’s worth reading my diginomica colleague Derek Du Preez on the differences between M2M and IoT and the importance of innovation on an IoT foundation.
When we look at places like a manufacturing plant or a domestic home, security of the Internet of Things becomes an important concern because of the privacy and safety implications. This is one reason why concepts like intranets and fogs come into the conversation, but it's important to remember that most intrusion episodes happen because the device maker has assumed the local network is safe. In a connected world, proper security starts from the principle that no network can be fully trusted, and therefore there is little to gain from subdividing the cloud. Every communication must be individually secured and verified.
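As a rough illustration of that principle, the sketch below signs every message a device sends and verifies it on receipt, rather than trusting any network segment. The shared key and field names are invented for the example; a real deployment would provision and rotate keys per device rather than hard-coding them.

```python
import hashlib
import hmac
import json

# Hypothetical shared device key; in practice keys would be provisioned
# and rotated per device, never embedded in code.
DEVICE_KEY = b"example-device-key"

def sign_message(payload: dict) -> dict:
    """Attach an HMAC so the receiver can verify origin and integrity."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
    return {"body": payload, "mac": tag}

def verify_message(message: dict) -> bool:
    """Verify every message individually; trust no network segment."""
    body = json.dumps(message["body"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["mac"])

if __name__ == "__main__":
    msg = sign_message({"device": "pump-7", "temp_c": 71.2})
    print(verify_message(msg))     # True: message accepted
    msg["body"]["temp_c"] = 99.9   # tampering in transit
    print(verify_message(msg))     # False: message rejected
```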
Sensors and instrumentation, along with Internet connectivity, make up this foundation layer. Thanks to this layer, we have more data than ever before about the machines we use and the environment they and we exist in.
Internet of smart devices
It is the addition of intelligence that brings the Internet of Things to life (perhaps all too literally, one day in the future, but let's not go there today).
Whereas the first layer added sensors and instrumentation, this layer brings processing power to analyze and harness the data being collected. Not every IoT device is smart. But the Internet of Things needs smart devices distributed around it to share the load of data collection, filtering and analysis.
Much of this processing work will be performed collaboratively. For example, a machine will have smarts built in so that it can process the data its instruments collect, but it will receive updates from the cloud to its algorithms and programs that allow it to change or enhance the way it examines and analyzes the data. It will report or log data with aggregation systems in the cloud that look for new patterns or exceptions across multiple devices.
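A toy sketch of that division of labour, with invented parameter names rather than any real product's update format, might look like this: the device keeps its own analysis loop, while the cloud can push new parameters that change how it interprets its instruments.

```python
# Hypothetical split of work between a smart device and its parent cloud:
# the device analyzes locally, the cloud updates the parameters it uses.

class SmartDevice:
    def __init__(self):
        # Default analysis parameters, replaceable by a cloud update.
        self.params = {"window": 10, "vibration_limit": 4.0}
        self.readings = []

    def apply_cloud_update(self, update: dict) -> None:
        """Accept new algorithm parameters pushed from the cloud."""
        self.params.update(update)

    def ingest(self, vibration: float) -> None:
        """Collect a reading and keep only the most recent window."""
        self.readings.append(vibration)
        self.readings = self.readings[-self.params["window"]:]

    def report(self) -> dict:
        """Produce the aggregate the cloud actually needs to see."""
        exceeded = [r for r in self.readings
                    if r > self.params["vibration_limit"]]
        return {"samples": len(self.readings), "exceedances": len(exceeded)}

if __name__ == "__main__":
    device = SmartDevice()
    for value in (1.2, 3.9, 4.5, 2.2):
        device.ingest(value)
    print(device.report())                               # analyzed with defaults
    device.apply_cloud_update({"vibration_limit": 3.5})  # cloud tightens the rule
    print(device.report())                               # same data, new interpretation
```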
In many cases, there will be smart devices in the IoT ecosystem whose primary role is to process and analyze data. These may be dedicated controller devices that remain in situ or they may be part of a user-facing mobile device such as a wearable, a mobile phone or tablet, or a vehicle. Smart devices will often have permissions to connect to new systems as they make themselves known and contribute different sets of data or results as required by the applications they host. This is a benign version of the ‘fog computing’ we mentioned earlier — localized IoT activity, but using open standards.
The ability to collaborate with new devices or in new ways is a prime reason for keeping the Internet of Things open, or at least for minimizing the number of different standards in play. Of course, intelligence can also play a role in negotiating the idiosyncrasies of different protocols and specifications, so diversity is not an insurmountable barrier. But one of the other lessons of technology history is that you can do a lot more than you'd think with the basic languages of the Internet such as TCP/IP and HTTP. Keeping things as simple as possible usually wins out in the end.
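For instance, an ordinary HTTP POST of a small JSON document is enough for a great many sensors. The endpoint below is a placeholder rather than a real service, but the pattern needs nothing beyond the web's basic plumbing.

```python
import json
import urllib.request

# Placeholder endpoint; any plain HTTP(S) collector would do.
COLLECTOR_URL = "https://example.com/iot/readings"

def post_reading(device_id: str, value: float) -> int:
    """Send one sensor reading as a plain HTTP POST and return the status."""
    body = json.dumps({"device": device_id, "value": value}).encode()
    request = urllib.request.Request(
        COLLECTOR_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        return response.status

if __name__ == "__main__":
    try:
        print(post_reading("thermostat-3", 21.5))
    except OSError as err:  # the placeholder URL will not accept real traffic
        print("not sent:", err)
```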
Internet of mobile apps
The final layer of this three-tier topology is the mobile applications layer. Not mobile in the sense of running on a mobile device, but mobile in the sense of being able to run on multiple devices. Instead of being physically contained within a single device, a truly mobile app can transfer its activity from one device to another, acting across all the relevant devices and sensors it has access to.
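One way to picture that portability, as a toy sketch rather than any particular handoff framework, is an application whose running state can be exported on one device and resumed on another.

```python
import json

# Toy illustration of a "mobile" app in the sense used above: the running
# state can be exported on one device and resumed on another, so the app
# follows the user rather than living inside a single piece of hardware.

class ClimateApp:
    def __init__(self, state=None):
        # State holds everything needed to carry on elsewhere.
        self.state = state or {"target_c": 21.0, "zones": []}

    def add_zone(self, zone: str) -> None:
        self.state["zones"].append(zone)

    def export_state(self) -> str:
        """Serialize the session so another device can pick it up."""
        return json.dumps(self.state)

    @classmethod
    def resume(cls, exported: str) -> "ClimateApp":
        """Recreate the app, mid-session, on a different device."""
        return cls(json.loads(exported))

if __name__ == "__main__":
    on_phone = ClimateApp()
    on_phone.add_zone("kitchen")
    handoff = on_phone.export_state()    # e.g. relayed via the cloud
    in_car = ClimateApp.resume(handoff)  # same session, different device
    print(in_car.state)
```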
This is the layer where the most innovation is destined to occur, provided it is able to operate within a horizontally integrated stack where a) individual applications can run on the widest possible range of smart devices and b) those smart devices can access the broadest possible array of connected sensors and machines.
In every wave of computing, we’ve seen a battle play out between vertical integration, which individual vendors promote because it gives them proprietary advantage, and horizontal integration, which maximizes participation and therefore yields the most rapid and extensive innovation.
The argument in favor of vertical integration is that a single vendor can deliver better performance when it controls the stack from top to bottom. We hear this argument very strongly at the moment from mobile device makers. It’s an argument that has merit if you know what you’re aiming to optimize for.
But at this early stage in the evolution of the Internet of Things, we have no idea where all of these emerging technologies and connective capabilities will lead us. We need to accelerate evolution, not hold it back, and the only way to do that is to promote a simple, open architecture that allows for rapid, iterative experimentation. Let the applications flourish, and we will ultimately discover what we can achieve in the Internet of Things.