
Privacy’s not an abstraction

An experiment in privacy–and the discussion that ensued–offer unexpected lessons in who gets watched, and how.

[Photo: Paweł Czerwiński/Unsplash]

This story is part of The Privacy Divide, a series that explores the fault lines and disparities–economic, cultural, philosophical–that have developed around data privacy and its impacts on society.


“And so STRESS [Stop the Robberies, Enjoy Safe Streets] was born. With the help of computer data, [Detroit police commissioner John] Nichols planned to flood the streets with undercover cops disguised as drunks and priests, hippies and elderly women. When robbers tried to hold up these ‘decoys,’ backup officers would swoop in and make an arrest. As Nichols explained to the congressional committee, his department hoped to ‘perfectly blend men into the environment’ on a scale never before attempted. ‘With STRESS,’ he testified, ‘the criminal must fear the potential victim.’”

—Mark Binelli, “The Fire Last Time”

From lantern laws to sundown towns, from COINTELPRO and STRESS to stop and frisk and all the way up to the current regimes of technologically aided surveillance, the tracking of Blackness in this country has a long and sordid history.

Being surveilled has been, and continues to be, the de facto state of existence for marginalized populations in America; not coincidentally, it is often in private that movements within those communities start and gain momentum and power. My parents made this clear to me early in my life. Growing up in Detroit under the specter of STRESS, I had the sense that my neighborhood, my community, was always under a watchful eye whose purpose was not to protect me but to protect others from me and from people who looked like me.

Privacy for marginalized populations has never been, and will never be, an abstraction. Being surveilled, whether by private actors or the state, is often the gateway to very tangible harms: violence in the form of police brutality, incarceration, or deportation. And there can be more subtle, insidious impacts, too.

“The fear and uncertainty generated by surveillance inhibit activity more than any action by the police,” Joshua Franco, a senior research adviser and the deputy director of Amnesty Tech at Amnesty International, told Motherboard last year. “People don’t need to act, arrest you, lock you up and put you in jail. If that threat is there, if you feel you’re being watched, you self-police, and this pushes people out of the public space. It is so hard to operate under those types of conditions.”

Which brings me to a recent op-ed in the New York Times about a thought-provoking lesson in privacy. Kate Klonick, an assistant professor at St. John’s University Law School, described an optional and ungraded assignment in which she asked her students to eavesdrop on and surveil unsuspecting folks in public to see what information they could gather about them, using only Google search on their phones.

The point, Klonick wrote, was “to demonstrate to my students how the most common of technologies can be used to shatter the perceived protections of obscurity and, in turn, reveal the admittedly thin mechanisms by which privacy is actually protected.”


Some students told her the idea was “creepy.” “I assured them that the goal was not to eavesdrop on a purposefully private conversation,” she wrote, “or to do any ‘digging’ on the person, or to share or do anything with the information they found out. This was to be purely an exercise in whether people can actually be private in public places and whether they expect to be.”

The assignment was widely discussed, and Klonick appeared on NPR and CNN to describe the impact of the experiment on her students. “Most significantly,” she wrote in a Twitter thread that went viral, “a number who had clung to the idea of ‘I don’t care if anyone’s watching, I have nothing to hide’ were shocked into seeing the privacy issues. Including a future district attorney.”

Others, however, were shocked in different ways. “Folks, this is wrong,” Les Hutchinson, a PhD candidate who studies the impacts of surveillance on Chicana and Indigenous women, wrote on Twitter. “Do not make your students reproduce the conditions of privacy violations to teach them about privacy violations without the consent of those being violated.”

Estee Beck, a writer and English professor at the University of Texas, worried that “the ‘project’ teaches students how to orient in public, observe, use said information, put it in the consumerist and surveillant Google, and become people who don’t respect the dignity of others in public.”

Klonick later responded to some of the criticism on Twitter. “Is it a privacy violation? (Yes) even though it’s from public info and not used or disseminated? (likely yes) Does the instructional element outweigh possible ethical issues? (I think so),” she wrote.


For me, there is a valuable lesson here, just not the one that was intended. The idea that surveillance could be assigned as coursework, practiced on people with no opportunity to consent, speaks to how broken our ideas about consent have become; it trivializes what is, for many people, a matter of life and death.

On a more basic level, infringing on people’s privacy for your own purposes is exploitative. The assignment reflects the logic of digital platforms, which treats people’s data as raw material to be extracted and put to one’s own uses. While this extraction is currently legal, it is a “norm” inflicted upon us by platforms such as Google, Facebook, and Amazon. This kind of extraction is surveillance, and whether it is done online or in the “real world,” it intrudes upon people’s rights to move about in public in relative obscurity if they so wish.

“It wasn’t that long ago that nobody could easily investigate us in this way because the technology wasn’t there,” Larisa Kingston Mann, an assistant professor in media studies at Temple University, wrote in an email. “But nobody ever asked us if we wanted to become more searchable, technology designers just made that possible. They either assumed no harm would come of it, or that they weren’t responsible for any harms.”

The act of surveillance alone should be considered a harm, Mann argues.

“At a certain point we should be able to recognize that human dignity requires being allowed to just *be* without being watched,” she wrote. “Even if nothing (yet) is ‘done with’ data, there is still the license that watchers have to see without being seen, and to collect information or watch someone when they would not consent because they prefer not to be watched–especially for communities that have already been over-surveilled, even in the pre-digital era.”

The value of privacy for others, and indeed for oneself, may be hard to grasp. Because ideas about privacy have been so thoroughly undermined by tech platforms like Facebook and Google, it is sometimes difficult to have these discussions with students. But that is precisely why we need to have them. The shifting of norms around privacy benefits tech companies, which profit immensely from labeling extraction as “sharing” and “community.”


Until we can come to better terms with the disparate impacts of privacy harms, the privileged will continue to pay for luxury surveillance, in the form of Apple Watches, IoT toilets, quantified baby products, Ring doorbells, and Teslas, while marginalized populations will pay another price: surveillance, with the help of computer data, deployed against them in the form of ankle bracelets, license plate readers, drones, facial recognition, and cell-site simulators. As one group pays to be watched, other groups continue to pay the price for being watched.


Chris Gilliard, PhD, is a professor, speaker, and writer whose work focuses on privacy, surveillance, facial recognition, and digital redlining. He tweets at @hypervisible.
