To really protect our privacy, let’s put some numbers on it

We typically learn what our data privacy means only after it’s too late.

This story is part of The Privacy Divide, a series that explores the misconceptions, disparities, and paradoxes that have developed around our privacy and its broader impacts on society. Read the series here.

Like any other grad student with sketchy dial-up service in the late ’90s, I kept my hours of work stored on a local hard drive. The day it crashed, my ensuing panic was entirely focused on getting my draft thesis back. I didn’t care about my journals, budget, contacts, or my (failing) diet plan. But that thesis was the culmination of 11 months of research and writing.

I was lucky. I had paid for the “support plan,” and within a couple hours a tech showed up at my door. He popped out my drive, managed to restore most of the data, and popped in a new drive.

I went back to work, and he went on his merry way. Which would be the end of the story, except for a phone call I received seven months later from a friendly stranger asking if I wanted him to mail me back my hard drive. He had thought it had dummy data on it, but when his wife saw the diet plan, she told him I was a real person.

Not knowing much about computers at the time, I had a profound sense of ownership over that drive, and in due time it arrived in my mailbox. My career was born in that phone call, and after all these years I still can’t be certain what “privacy” is. But I am absolutely certain that everyone cares deeply about it. Try to open a locked stall door next time you’re in a public bathroom, and you’ll be certain too.

Privacy professionals have spent a lot of time debating the word, what the boundaries of legal rights are, and how to define accountability. But we still don’t have a good way of measuring how much privacy we have, how much we want, or how much we are willing to trade away for other benefits, like connecting with our friends and family on Facebook. Our information about privacy comes from lengthy legal documents: privacy policies, terms of use, end-user license agreements, acceptable use policies, etc. No wonder no one can make an informed decision. Privacy isn’t just contextual; it’s transitive.

We mediate what we say, to whom we say it, and where. These decisions are made the same way as any other expression of human values: an individual’s choice, informed by culture, tradition, and experience. This ability to control the expression of our values is critical, but we don’t often consider it until it’s lost. I certainly didn’t.

And what of other human values?

Humanity is missing from the technical infrastructure that the world has come to rely on. Machine learning and artificial intelligence applications further lay bare the human values that are absent from much of our digital technology. If even a small part of the systems that technology companies are working on today becomes the machinery of tomorrow, that machinery will function autonomously. We have a responsibility to teach this autonomous black box to be obsessed with humanity, to learn to express our values in its language. It’s particularly important because that black box is being fed our personal data, makes decisions with it, and impacts real people with those decisions.

How do you teach a machine to trust? To evaluate and make decisions based on protecting privacy? And how do you do that when those values vary so dramatically by person, state, religion, or any other organized group we have formed?

Much of our everyday technology–from the algorithms that sort us to the web ads that battle for our attention–was designed without any of these particular inputs. We know now that was a mistake. How do we design our next infrastructure holistically and at scale? Technologists, long operating in the Wild West of little to no meaningful regulation or social accountability for the tools we unleash, need to start taking measurements.

There’s no real number for a human value. There can be representative ones, however, and we can use those to demonstrate change. For example: “If you share this information, your available privacy will change from X to Y.” That’s meaningful. Computation is a physical process, and information theory provides a basis for measuring the bits and bytes of data flowing through systems.

Measuring privacy can build on that theory, incorporating known factors from different disciplines that impact how we make decisions about information disclosure and protection. For example, many studies in social psychology examine how people share information and under what circumstances. We are, after all, predictable.
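To make that idea concrete, here is a toy sketch of what a representative number might look like. It is purely illustrative, not a method proposed in this piece or settled anywhere in the field: it borrows Shannon entropy, measured in bits, as a stand-in “privacy number,” and every name and value in it is a hypothetical assumption.

import math
from collections import Counter

def entropy_bits(values):
    # Shannon entropy (in bits) of the distribution of values in a population.
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical population attribute: the birth years of eight people.
population = [1980, 1980, 1985, 1985, 1990, 1990, 1995, 1995]

# Uncertainty about the attribute before anything is shared.
before = entropy_bits(population)

# Suppose you disclose that your birth year is in the 1990s; an observer's
# uncertainty about you collapses to the matching subset of the population.
after = entropy_bits([year for year in population if year >= 1990])

print(f"If you share this, your available privacy changes from {before:.2f} to {after:.2f} bits.")

Run as written, this reports a change from 2.00 bits to 1.00 bit, which is a small, legible way of saying “sharing this halves your cover in this population,” the kind of before-and-after statement described above.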

Clarifying the real-world changes that might come about when we “agree” to share our data helps us all have a real conversation about what we’re giving up, and what we’re getting in return. By illustrating change, it can help us have a conversation about harm: not just the economic kind that our current regulation is obsessed with, but the broader societal and individual psychological harm as well. And if we can begin to measure the presence and absence of human values, we could create the basis for teaching those values to the machines that hold ever larger stores of our data.

Just as I gave no thought, all those years ago, to handing my hard drive to a stranger, we have no concrete notion of the companies, organizations, and governments that are monitoring our digital selves every minute of every day. Measuring will force us all to develop more discrete and accessible ways to determine the value we put on our data, and the costs of sharing it.

That’s measuring we’d be better off doing sooner rather than later, before our privacy is violated any further.


Tracy Ann Kosa (@tracyannkosa) is a privacy researcher, teacher, and advocate.
