
How to make sure that AI isn’t invasive and creepy

If it’s designed in a way that respects human decision-making, AI can actually be a force for privacy.


By Mahesh Saptharishi

There is a common view in popular culture of artificial intelligence as magically powerful and, often, malevolent. The classic example is HAL 9000, from 2001: A Space Odyssey—the omnipresent computer whose implacable calm masks its murderous intent. There is no escaping the unblinking eye, which sees and hears all (and it will read your lips when it can’t hear you).

This image’s simple resonance—the idea that technology will soon outstrip and overwhelm us—has created an enduring misperception: that AI is inherently privacy-invasive and uncontrollable. The notion has obvious narrative appeal, but it isn’t grounded in reality. The surprising truth is very nearly the opposite: As big data and rapidly accelerating computing power intertwine, AI can be a bulwark of privacy. Real-life AI can, in fact, be a defense against what its fictional counterparts threaten. But we can’t leverage that potential if we mistake misuses of AI for fundamental flaws.

It’s not hard to figure out where the image of AI as necessarily omniscient comes from. The information age has created a spiraling loss of privacy, some of it self-driven, as we share online both the mundane and the momentous details of our lives. And some is generated by the digital footprints we leave on- and offline through web browsing, e-commerce, and internet-enabled devices. Even as we have created this constellation of data and its sources, we have made it easier for entities—be they individuals, private companies, or government bodies such as law enforcement—to share it. Before now, people could expect privacy by obscurity: even when data did exist, it was less accessible and harder to share. But the new tide of information has eroded anonymity.

At the same time, ever-more-powerful systems have made this data flood manageable. Moore’s law—that computing power (as measured by transistors per chip) doubles every two years—has held for 50 years, with astounding effects. Just think of the power of your first computer versus whatever you’re reading this on (possibly in the palm of your hand). Similarly, Nielsen’s law, which asserts that high-end users’ internet bandwidth increases by 50% annually, has also been borne out. We have both unprecedented access to information about ourselves and each other, and the ability to gather, slice, dice, and massage it in extraordinary ways.

The development of facial recognition technology is a classic example of the confluence of these trends. While imperfect, it has eased everyday life in numerous ways: Individuals use it to unlock phones and mobile apps and to verify purchases. AI-powered computers can also scan innumerable images to find a specific individual, combing through more information faster than any human could. That makes it possible, for example, to locate a missing child or help find a Silver Alert senior. More sensitive tasks, such as locating a criminal suspect, require more careful human–AI collaboration; in those instances the algorithms can be used to generate leads and to double-check matches made by humans.

But, as with all technological advances, carelessness or misuse can lead to negative outcomes. We can track just about anyone with disturbing ease. There is seemingly no escaping the unblinking eye.

Or is there? AI is not itself the problem. It is a tool—one that can be used to safeguard against abuses. The central issue is the data and how it is being used: precisely what is being collected, who has access to it, and how that access is exercised. The “how” matters in a couple of important ways. A critical difference between AI and a traditional computer is that the former has the capacity to learn from experience: the same inputs will not necessarily produce the same outputs. Modern AI can even express uncertainty, or outright confusion. So human users must be able to understand the limits of the available data and account for such results.
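To make that concrete, here is one minimal, purely hypothetical sketch of what “accounting for uncertainty” can look like in software: an AI-generated match carries a confidence score, and anything below a high bar is routed to a human reviewer or discarded. The names, threshold, and scoring model below are illustrative assumptions, not any real product’s API.

```python
# Illustrative only: a hypothetical match score is acted on automatically only
# when confidence is high; everything else is deferred to a human or dropped.

from dataclasses import dataclass


@dataclass
class MatchResult:
    subject_id: str
    confidence: float  # 0.0 (no idea) to 1.0 (near certain)


def triage(result: MatchResult, auto_threshold: float = 0.95) -> str:
    """Decide how a single AI-generated match should be handled."""
    if result.confidence >= auto_threshold:
        return "treat as a strong lead, still subject to human confirmation"
    if result.confidence >= 0.5:
        return "flag for human review"
    return "discard; the system is effectively saying it does not know"


# Example: a middling score gets a person in the loop, not an automatic action.
print(triage(MatchResult(subject_id="lead-042", confidence=0.62)))
```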

But for all its potential, we—the humans who develop and employ AI—decide when and how it is used and can design the rules to ensure responsible use while protecting privacy and preserving civil liberties.

AI’s ability to sift through vast amounts of information and pick out discrete data points can be a privacy filter, in other words, screening out unnecessary information and returning only relevant and appropriate results. Because humans rely on AI to do this sifting, they too are constrained by the rules that govern it. To return to the example of searching for a missing child or Silver Alert senior: system developers can create clear data-access rules so that only authorized users with a verifiably legitimate reason can use AI to search specific video. Rather than giving the user access to all video, the algorithm can learn to pull only the footage that matches the specific search criteria, so that is all the user ever sees. Meanwhile, the system can filter out or obfuscate unrelated personally identifying information, such as faces and locations.
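As a purely illustrative sketch (the roles, tags, and redaction steps below are assumptions, not a description of any real system), rules like these can be expressed directly in software: a search must carry an authorized role and a documented reason before it runs, and the results expose only relevant, redacted clips rather than the raw archive.

```python
# Hypothetical access-gated video search: the user never touches raw footage.

from dataclasses import dataclass

AUTHORIZED_ROLES = {"missing_person_investigator"}


@dataclass
class SearchRequest:
    requester_role: str
    reason: str            # e.g. a case number tied to a Silver Alert
    search_criteria: str   # description of the person being sought


def search_video(request: SearchRequest, archive: list[dict]) -> list[dict]:
    """Return only relevant, redacted clips -- never the whole archive."""
    if request.requester_role not in AUTHORIZED_ROLES or not request.reason:
        raise PermissionError("No verifiable, legitimate reason for this search")

    results = []
    for clip in archive:
        # A relevance model would score each clip here; tag matching stands in for it.
        if request.search_criteria in clip["ai_tags"]:
            results.append({
                "clip_id": clip["clip_id"],
                "timestamp": clip["timestamp"],
                "unrelated_faces": "blurred",    # obfuscate bystanders
                "location_metadata": "withheld"  # strip unneeded identifiers
            })
    return results
```

The design point is that the privacy protection lives in the code path itself: an unauthorized or unjustified request never reaches the footage at all, and an authorized one sees only what the search criteria justify.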

AI-powered technology can also be used as a failsafe, providing a second point of view to offset or counter human error introduced by factors such as fatigue, inexperience, bias, inattention, or emotion.

The critical element is not the artificial intelligence but the human it is assisting, because AI should be an assistive technology, not a decision-making one. The humans who decide how AI will be used—whether programmers, corporate leaders, or lawmakers—must set clear guidelines around fundamental data issues: When is it appropriate to collect the information? How do they ensure that data sets are diverse (hedging against embedding bias in the algorithm)? Who should have access to such data, and under what circumstances may they exercise that privilege? The answers to those questions will establish the frameworks in which we can safely and appropriately leverage the power of AI.

With the right human guidance, in other words, we can short-circuit HAL before it can go from fiction to reality.


Dr. Mahesh Saptharishi is the chief technology officer and senior vice president of software enterprise & mobile video at Motorola Solutions.
