Suppose technology could help you find what you want to buy at the lowest possible price. Suppose technology could keep you from being hit by a car while walking. Suppose technology could make autonomous vehicles totally effective and safe, could make it impossible to get lost or to fail to recognize a person or landmark. Suppose all of this could be done by one set of initiatives that harnessed what we already know and use. Interested? Impossible? Yes to the first, surely, and NO to the second…maybe. Welcome to the Internet of Everything.
The concept of an “Internet of things” officially came along in a statement by MIT’s Kevin Ashton: “…people have limited time, attention, and accuracy. All of which means they are not very good at capturing data about things in the real world. If we had computers that knew everything there was to know about things, using data they gathered without any help from us, we would be able to track and count everything and greatly reduce waste, loss, and cost. We would know when things needed replacing, repairing, or recalling and whether they were fresh, or past their best.”
What’s important about this, I think, is that it’s not about the limited stuff we call IoT today, but about a universal Internet of Everything that sees all, knows all.
We have know-a-lot, but not know-all, these days. It’s surely possible for an application to know what we want to buy and where we are at the moment. It would be possible for it to know, meaning find out, the prices of the things we want at places near us, but that would mean a lot of complexity and cost for what might be a pretty marginal benefit. If we could do the other stuff I opened with, then helping us walk into the right shop could be a low-cost addition, but we can’t really do that. We have “know” but not “see.”
This see-and-know stuff is important because it illustrates the good and bad of a human-driven Internet and applications. We see, and acquire a real-time understanding of the real world. IoT sensors…well…sense. It’s perhaps helpful to know temperature, barometric pressure, location, and relative velocity, but it’s not like looking out your car window if you’re trying to navigate. In 1999, when Ashton made his comments, we didn’t have a practical way to mimic the human visual sense to gather real-world data. With AI, we now have that in “spatial computing,” and we could realize everything I mentioned in the first paragraph, and even more.
OK, if “we now have” what we need, why don’t we have all those applications today? I said there was a kind of see-and-know gap, but isn’t that inconsistent with the notion that we have what we need? If there’s no giant technical advance we should be looking for to unlock that wonderful future, what will unlock it? Answer: Technology really isn’t the problem; the problem is social.
“Big brother is watching you” is a catchphrase for the risk of large-scale surveillance. With widespread deployment of video, we could identify criminals walking on the street, and the same technology could warn us against stepping into traffic. But the same stuff could help people stalk others, spy on people, and maybe expose some secrets we’d just as soon keep hidden. Given that the average person thinks that everything can be hacked, and that many think government is trying to spy on us already, it’s not hard to understand why companies are reluctant to promote see-all-know-all technology, even in narrow uses.
Narrow use such as what? One of my regular contacts is a fairly big-name labor lawyer. I asked her about the use of video monitoring to guard against workplace accidents, and she said “every union would be afraid it would be misused, and every employer would deny that while jumping to misuse it.” Another contact told me that having extensive video monitoring to facilitate safe use of autonomous vehicles would almost surely face lawsuits from privacy advocates, supported by legions who are often where they’re not supposed to be.
Privacy is important to all of us. So are safety, health, life. We may be reaching a stage in technology evolution that will demand we decide how to balance these things against each other. Is the fear of AI running amok an example of this sort of concern? I think it is. And I think that long before AI could rise up and threaten us with extinction, it could rise up and save us, or expose us. We’ve had pressure to create guardrails on AI, but those pressures have largely dodged the broadest, most impactful, and most immediate risk: the ability of AI combined with video to let the real world, including each of us, be watched by technology.
The obvious answer to this problem is governance: a set of rules that constrain use, and technology to enforce them. The problem, as it so often is with the “obvious,” is that setting the rules would be difficult, constraining use through technology would be harder, and getting people to believe in that enforcement would be harder still. Think about Asimov’s Three Laws of Robotics and how many of his stories focused on how people worked to get around them. Two decades ago, a research lab ran a video collaboration experiment that put a small camera in offices so people could communicate remotely. Half the workforce covered their camera when they came in. I know people who routinely cover their webcams when they’re not on a scheduled video chat or meeting, and you probably do too. So what if the light isn’t on? Somebody has probably hacked in.
Social concerns inevitably collide with attempts to integrate technology tightly with how we live. Have we reached a point where dealing with those concerns convincingly is essential to letting technology further improve our work and our lives?
We do have widespread, if not universal, video surveillance. On a walk this week, I found doorbell cameras or other cameras on about a quarter of the homes I passed, and I’d bet there are even more in commercial areas. I wonder how many people worry that their doorbells are watching them while they’re in their yard. Fewer, I’d bet, than worry about AI rising up and killing them, and yet the doorbells are real and predatory AI is not. Still, we clearly can’t just dismiss this sort of thinking and stop covering our webcams. Could we become comfortable with universal video oversight? Maybe, but it would be better if we could find a solution to the governance dilemma.
Which just might be possible, with AI, for two reasons.
The more powerful and broader-scoped AI is, the more difficult it is to constrain how it can be used. I doubt anyone would disagree with that. Given that, contained, topic-focused, expert AI agents should be easier to constrain. You can secure and govern an API, but how do you secure and govern a conversational relationship? We’re back to Asimov’s Three Laws, the second of which is obedience. Give something, including AI, autonomy and you give it the potential to intrude. In the context of our AI video-watching, we run a greater risk asking a general AI to look for anything than we do creating an AI agent that can only look for certain things.
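To make that distinction concrete, here’s a minimal sketch, in Python, of what a contained, topic-focused agent interface might look like. Everything in it, from the target names to the function, is hypothetical, invented to illustrate the point rather than to describe any real product or API.

```python
from enum import Enum

# Hypothetical illustration: a topic-focused video agent exposes an
# enumerable API surface, so a governance layer can validate every
# request before it ever reaches a vision model.
class WatchTarget(Enum):
    PEDESTRIAN_IN_ROADWAY = "pedestrian_in_roadway"
    OBSTACLE_AHEAD = "obstacle_ahead"
    TRAFFIC_SIGNAL_STATE = "traffic_signal_state"

def request_watch(target: WatchTarget, camera_id: str) -> bool:
    """Accept only the pre-approved, safety-related detection tasks."""
    if not isinstance(target, WatchTarget):
        return False  # anything outside the enumeration is rejected outright
    # ...dispatch to the (hypothetical) vision pipeline for this camera...
    print(f"watching {camera_id} for {target.value}")
    return True

# A conversational agent, by contrast, takes free-form text like
# "watch for my neighbor leaving the house"; there is no schema to
# validate, so a governance layer has nothing concrete to check.
request_watch(WatchTarget.PEDESTRIAN_IN_ROADWAY, "cam-17")
```

The point is that the enumeration is the governance: a request either fits the approved list or it never reaches the model at all, while a free-form conversational request offers nothing comparable to check.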
We could also use expert AI agents to govern AI applications. There is already an “adversarial” strategy in machine learning generally, aimed at detecting manipulation of data intended to deceive a model. The same approach could be applied to governing AI use, particularly if the AI is designed to deliver only specific results rather than to respond to general questions. Facial recognition, for example, could be limited to reporting faces actually on a government criminal registry. Could that be hacked? Sure, but not by your average stalker or suspicious spouse.
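Here’s a rough sketch of that “deliver only specific results” idea, again in Python with invented names and a stand-in for the recognizer itself: wrap whatever recognition model you have in a governance layer that discards any identification not on the authorized registry, so the application never even sees it.

```python
from typing import Callable, Optional

# Hypothetical registry of subject IDs a court or legislature has
# authorized for identification; the IDs here are invented.
AUTHORIZED_REGISTRY = frozenset({"case-1042", "case-2331"})

def make_governed_recognizer(
    recognize: Callable[[bytes], Optional[str]],
) -> Callable[[bytes], Optional[str]]:
    """Wrap a recognizer so it can only ever report registry matches.

    The underlying model may internally match anyone; the wrapper
    discards every identification that is not on the registry, so the
    calling application never sees it.
    """
    def governed(frame: bytes) -> Optional[str]:
        subject = recognize(frame)
        return subject if subject in AUTHORIZED_REGISTRY else None
    return governed

# Stand-in model that "recognizes" someone NOT on the registry:
recognizer = make_governed_recognizer(lambda frame: "case-9999")
print(recognizer(b"frame-bytes"))  # -> None: the match is suppressed
```

The design choice matters: the filter sits outside the model, so even a model that could internally recognize everyone can only ever report what the registry permits.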
Realizing the “Internet of Everything” is critical if we’re to continue advancing how technology empowers our work and improves our lives. Controlling how we create that new level of closeness with tech, giving it a closer look at our activities while avoiding the privacy and security pitfalls, is the only way to get there, and AI is almost surely the only path to achieving that control. That AI mission is what we need to be formulating policies to govern. The risk of AI coming to drive us all to extinction is far smaller than the risk of its not coming to our rescue.