
I used to support ever-present cameras. After one shocking night, I’m not so sure

Cameras and microphones are some of our biggest tools in creating tech—including robots, which I design—that is as helpful as possible. But there are some uncomfortable side effects when they’re always on.


By Carla Diana

As a robot designer, I perpetually face a moral quandary: how to balance the benefits and dangers of embedded cameras and microphones.

Some days, I am excited about the power that camera-vision technologies afford: smart mirrors that read heart rate from your gaze, connected doorbells that recognize visitors, and mobile payment systems that treat your face as a password are just a few of the myriad applications. I’m currently working on designing robots that can roam through environments such as hospital hallways, hotel lobbies, and conference halls, providing services such as cleaning, helping people find their way, and delivering supplies.

“The camera is the everything sensor,” I announced in my 2017 keynote talk for Building IoT, a gathering of technologists focused on the internet of things. The camera has quickly become the most versatile and ubiquitous sensing system there is. The robots I’m designing rely on different types of camera vision to navigate spaces and understand objects they need to manipulate. Without it, the robots are quite dumb and possess limited abilities to complete even simple tasks. This “everything sensor” provides me as a designer with the flexibility to consider the nuances of interaction that make the difference between a good product and a really great, intuitive, sophisticated product.

But then there are days when the problems posed by the presence of cameras make me want to avoid them at all costs, like the time my friend Claire called me in a panic. “I figured out why the nanny quit,” she exclaimed. “I need to talk!” That evening, she proceeded to describe, in gory detail, the pornographic conversation between Mark, her husband of 11 years, and her 26-year-old nanny, captured on the microphone of a hidden camera that had been carelessly left recording in the living room. She had attempted to log in to the family Nest camera account, but a prepopulated username and password landed her in a camera feed she didn’t remember setting up.

“I can’t stop thinking about how hot you were that night,” she heard him say on the audio track, making Claire wince. As the conversation continued, she could hear her 2-year-old just feet away, clearly ignored by both adults in the room.

Claire’s experience was gut-wrenching, and also illuminating. How different would it have been, she wondered, if the Black Mirror-style recording had never existed? What if the affair had come to light in more traditional ways, with the discovery of a discarded hair clasp or pair of panties? Maybe an admission of guilt would have surfaced over time as her husband reflected on what he had done and sought to clear his conscience. While Mark clearly showed a lapse of judgment, the audio recording robbed him of the chance to present his own version of the story at a critical inflection point in the relationship; instead, the only version Claire got was the one the camera caught.

We all have a version of ourselves that we modify for different situations: the friend, the lover, the coworker, the parent. We control and craft these identities carefully and present the version of ourselves that is appropriate for the context. But what about the version of all of us that’s under the watchful gaze of what I call the “robot eye”—the combination of ubiquitous camera and microphone recordings that can be analyzed by AI and shared with the highest bidder?

The person present in these recordings might seem at first blush like the truest version of a person, but without the nuance of context, cameras and microphones rob us of a basic human need for privacy. Let’s think of all the “true” versions of me that may have emerged before I even walked out the door this morning. There’s the version that was asleep, drooling; the one that did a goofy dance in my underwear; the one that lost patience with my aging mother way too quickly. I can’t deny that I did all those things, but if any of these instances were shared with a first date or a prospective employer, they would cause certain embarrassment—and might ruin my chances for success in either situation.

What happened with Mark, Claire, and the accidental video footage has made me reconsider my bullish perspective on increasingly present cameras and microphones. I’m still excited about the potential for these sensors to make our products and environments more responsive and interactive than ever, and ultimately to provide great human benefit. Nonetheless, I’m troubled by the danger that cameras pose when the data falls into the wrong hands, or is accessed for purposes that were never intended. Just as Claire’s security camera accidentally revealed her husband’s private conversation, what else could be inadvertently revealed and later capitalized upon for corporate or government gain?

Making cameras private by design

When faced with the overwhelming number of privacy violations that digital technology enables, some people tend to throw their hands up, surrendering any control over their privacy. But as a product designer, I think we can do better.


To start with, we need more transparency regarding what’s being collected, and how it’s being stored. Instead of the mind-numbing legalese that people face when they install a new app, there could be clear illustrations of privacy implications that simply state how and why camera data will be used, and iconography that indicates camera viewing and recording status. Designs can also do a much better job of letting people know what’s happening behind the scenes in the product’s programming, taking advantage of nonverbal cues like indicator lights, subtle tones, and robotic movements to communicate when surveillance or recording is taking place.

From a physical design point of view, products should incorporate frames around camera lenses or highlight the parts of a product’s shell that indicate the location of a microphone array. For the robot projects I work on, for example, a visual indication showing that a camera is present and active would be helpful. For voice agents such as Siri and Alexa, a visual indicator could help remind people that the device is listening. These systems might also detect when a new person enters the room and alert them that a microphone is present.

I still vividly remember how sad I felt for everyone the day I got the call from Claire. I stood by while she assembled the clothes, toothbrush, and list of hotels that she would give Mark when she kicked him out that night. Ultimately, she was grateful that the camera exposed the truth in no uncertain terms, offering a quick rip of the Band-Aid on an already problematic relationship. On the other hand, the surreptitious recording felt so unnatural and otherworldly that she will forever wish she had never heard it.

Beyond Claire and Mark, the current state of surreptitious recording leaves anyone who uses electronic products in a suspicious, uncomfortable place. To move past this status quo, designers like me need to focus on setting precedents for revealing the purpose and presence of cameras and microphones in everyday devices, understanding that we all need a little nudge once in a while to remember that the robot eye is watching.


Carla Diana is the head of design for Diligent Robotics and the founding head of the 4D Design Program at Cranbrook Academy of Art.


