Algorithms are essential for IoT.
Connected devices automatically steer our cars; control the lighting, heating and security of our homes; and shop for us. Wearables monitor our heart rate and oxygen levels, tell us when to get up and how to move, and keep detailed logs of our whereabouts. Smart cities, powered by a multitude of IoT devices and applications, shape the lives of millions of people around the world by controlling traffic, sanitation, public administration and security. The reach and impact of IoT in our daily lives would be unimaginable without algorithms, but how much do we know about algorithmic function, logic and security?
Most algorithms operate at computational speeds and complexities that stand in the way of effective human assessment. They work in a black box. In addition, most IoT application algorithms are proprietary, so they effectively operate in a double black box. This status quo might be acceptable if the outcomes were positive and the algorithms could do no wrong. Unfortunately, this is not always the case.
When black box algorithms go wrong and cause material, physical, social or economic damage, they also hurt the IoT movement. Such mistakes undermine the social and political confidence the industry needs to ensure wider adoption of smart devices, which is essential to move the field forward.
Opaque algorithms can be costly and even deadly
Black box algorithms can lead to significant problems in the real world. For example, there is an inconspicuous stretch of road in Yosemite Valley, California, that consistently confuses self-driving cars, and to this day we still don't have an answer as to why. Public roads are, of course, full of risks and dangers, but what about your own home? Smart assistants are there to listen to your voice and fulfill your wishes and commands regarding shopping, heating, security and just about any other home function that lends itself to automation. But what happens when the smart assistant starts acting stupid and listens not to you, but to the TV?
There is an anecdote circulating on the web about smart home assistants initiating unwanted online purchases after Jim Patton, host of San Diego-based CW6 News, uttered the phrase "Alexa ordered me a dollhouse" on air. Whether this actually happened at such a large scale is irrelevant. The real problem is that the dollhouse incident sounds entirely plausible, and it once again raises doubts about the inner workings of the IoT devices to which we have entrusted so much of our daily lives, comfort and security.
From the IoT perspective, the immaterial damage of such events is significant. When an autonomous vehicle fails, all autonomous vehicles suffer reputational damage. When a smart home assistant does something stupid, the perceived intelligence of all smart home assistants is called into question.
The data elephant in the room
Every time an algorithm makes a wrong decision, its suppliers promise a thorough investigation and prompt correction. However, because these algorithms are proprietary and profitable, authorities and the general public have no way of verifying which improvements have actually been made. Ultimately, we have to take companies at their word. Repeated failures make that trust hard to sustain.
A major reason companies do not disclose the inner workings of their algorithms – insofar as they can fathom them themselves – is that they don't want to reveal all the operations they perform with our data. Self-driving cars keep detailed logs of every trip. Home assistants track the activities in the house, record temperature, light and volume settings, and keep a shopping list constantly updated. All of this personally identifiable information is collected centrally to help algorithms learn, and it flows into targeted advertising, detailed consumer profiles, behavioral nudges and outright manipulation.
Think back to when Cambridge Analytica weaponized the profile information of some 87 million unsuspecting social media users to misinform voters, and may well have helped swing an entire US presidential election. If your friends list and a few online discussion groups are enough for an algorithm to determine the best ways to influence your beliefs and behavior, what deeper and stronger level of manipulation could the detailed logs of your heart rate, movement and sleep patterns enable?
It is in companies' best interest to keep algorithms opaque, as this allows them to align the algorithms with their profit motive and gradually amass huge centralized databases of sensitive user data. As more and more users wake up to this painful but necessary realization, the adoption and development of IoT is slowly stalling, and skepticism is piling up into a mountain that future algorithmic advances will have to climb. What must we do?
The transition to the ‘internet of transparency’
The most urgent focus should be on making what algorithms do more understandable and transparent. To maximize trust and eliminate the adverse effects of algorithmic opacity, IoT must become the "internet of transparency." The industry can create this transparency by decoupling AI from centralized data collection and by making as many algorithms as possible open source. Technologies such as masked federated learning and edge AI enable these positive steps; what we need is the will to follow through on them. It won't be easy, and some big tech companies won't go down without a fight, but in the end we will all be better off.
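To make the decoupling idea concrete, here is a minimal sketch of the federated learning pattern the paragraph above alludes to: each device trains on its own data and shares only model weights with a central coordinator, so the raw data never leaves the device. All names and the toy model are illustrative assumptions, not taken from any particular framework or from Xayn's products.

```python
# Toy federated averaging: devices share weight updates, never raw data.
# Model: a single-parameter linear fit y = w * x (purely illustrative).

def local_update(weights, data, lr=0.1):
    """One gradient-descent step on a device's private data.
    Only the resulting weights leave the device."""
    grad = sum(2 * (weights * x - y) * x for x, y in data) / len(data)
    return weights - lr * grad

def federated_average(global_w, device_datasets):
    """Central step: collect locally updated weights and average them.
    The coordinator never sees the underlying samples."""
    local_weights = [local_update(global_w, d) for d in device_datasets]
    return sum(local_weights) / len(local_weights)

# Two simulated devices, each holding private samples of y = 2x.
devices = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (4.0, 8.0)],
]

w = 0.0
for _ in range(50):
    w = federated_average(w, devices)

print(round(w, 2))  # converges toward the true slope 2.0
```

In a real deployment the local updates would additionally be masked or encrypted before aggregation, so even individual weight updates cannot be traced back to a single household; this sketch only shows the data-stays-local principle.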
About the author
Leif-Nissen Lundbæk, PhD, is co-founder and CEO of Xayn. His work mainly focuses on algorithms and applications for privacy-protecting AI. In 2017, he co-founded the privacy tech company with professor and chief research officer Michael Huth and COO Felix Hahmann. The Xayn mobile app is a private search and discovery browser for the web – a combination of search engine, discovery feed and mobile browser with a focus on privacy, personalization and intuitive design. Winner of the first Porsche Innovation Contest, the Berlin-based AI company has collaborated with Porsche, Daimler, Deutsche Bahn and Siemens.