Actually, I want to hand over even more of my personal data to big tech

But thanks to Facebook and Google, even good-intentioned tech companies are now afraid to get to know me too well.


This story is part of The Privacy Divide, a series that explores the fault lines and disparities–cultural, economic, philosophical–that have developed around digital privacy and its impact on society.


I want to give big tech companies even more of my personal data.

Yeah, I know that sounds contrarian. Because it is. Over the last two years, we’ve all been inundated with alarming stories about Facebook, Twitter, Instagram, Google, etc. taking the most personal data about our personalities, habits, and identities and using it for some nefarious purpose. And it’s prompted plenty of outrage, lawsuits, all-caps comments, and even a bipartisan consensus within the U.S. government to enact tough new rules on data privacy.

Everyone is waking up to the fact that big tech companies have been skimming personal data for years and not saying much about it. And don’t get me wrong, the tech companies deserve all the mistrust and scrutiny they’re getting.

But I hope they get a second chance with user data, because there’s so much cool stuff they could do with it, especially in the age of AI. I think they might find that many of us would be fine with giving up more of our personal data–if we get more in return.

I want my devices and services to know a lot about me. If I own a phone or use a service for five years of my life, I want the technology to get to know me as well as a friend or an assistant does. It should know my workflows, my entertainment habits, my health, how I travel, what I eat, and on and on. I want my tech to know enough about me to proactively remove the mundanities and friction points of everyday life, and know what I need practically before I do.


Here are some ways tech companies could improve our lives if they used personal information more wisely.

Biometrics. My smartphone is with me constantly, so it should know exactly how fast I walk. The same big tech company owns both my phone’s operating system (which reads the phone’s accelerometer) and its mapping app. Yet the mapping app uses a generic pace to estimate the time it’ll take me to walk somewhere. Based on my individual walking pace, it should know exactly how long the walk will take. Apps should take full advantage of the biometric data they can access about me, and find cool ways to customize services based on what my body tells them.
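For the technically curious, the idea is simple enough to sketch in a few lines of Python. The logged walks and the 5 km/h fallback below are invented for illustration; a real implementation would derive pace from the phone’s accelerometer and location history.

```python
# A minimal sketch: estimate walk time from a user's own logged pace
# samples instead of a one-size-fits-all default. All numbers are
# illustrative assumptions, not real mapping-app values.

DEFAULT_PACE_KMH = 5.0  # a generic pace, used until we have personal data

def personal_pace_kmh(logged_walks):
    """Average speed over the user's recorded walks: [(km, hours), ...]."""
    if not logged_walks:
        return DEFAULT_PACE_KMH
    total_km = sum(distance for distance, _ in logged_walks)
    total_hours = sum(duration for _, duration in logged_walks)
    return total_km / total_hours

def estimated_walk_minutes(distance_km, logged_walks):
    """Time estimate for a new route, based on the personal average pace."""
    return 60 * distance_km / personal_pace_kmh(logged_walks)

# Hypothetical accelerometer-derived logs: (distance km, duration hours)
walks = [(1.2, 0.20), (0.8, 0.13), (2.5, 0.42)]
print(round(estimated_walk_minutes(1.0, walks), 1))  # prints 10.0
```

With no history at all, the function falls back to the generic pace, which is roughly what mapping apps do today.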

Smart speakers. A health insurance company CEO recently told me that passive sensors in the home–not sensors that must be strapped to the wrist–will be most helpful in detecting adverse health events in the home, especially for at-risk people like the elderly. Many people already have sensitive microphone arrays in their homes. They’re inside the smart speakers that have become so popular, like Amazon’s Echo devices. But the smart speaker makers, for privacy reasons, won’t allow the devices to listen for anything other than human voice commands. In theory, the speakers could be trained to listen for things like falls or other warning sounds. Such a feature could be really useful to people who live far away from their elderly parents and aging friends and relatives, especially as that demographic is expected to grow at unprecedented rates in the coming decades.
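A production system would rely on a trained audio classifier, but the basic concept can be sketched with a toy energy-spike detector in Python. The frame energies below are made-up numbers standing in for microphone input.

```python
# A toy sketch of passive acoustic monitoring: flag a sudden loud sound
# that towers over the recent background level -- a crude stand-in for
# the trained fall classifiers a real smart speaker would need.

def detect_impact(frame_energies, baseline_window=5, spike_ratio=8.0):
    """Return indices of frames whose energy spikes far above the recent baseline."""
    alerts = []
    for i in range(baseline_window, len(frame_energies)):
        recent = frame_energies[i - baseline_window:i]
        baseline = sum(recent) / baseline_window
        if baseline > 0 and frame_energies[i] / baseline >= spike_ratio:
            alerts.append(i)
    return alerts

# Quiet room, then a sudden loud thud at frame 7:
energies = [0.02, 0.03, 0.02, 0.02, 0.03, 0.02, 0.03, 0.9, 0.04, 0.03]
print(detect_impact(energies))  # prints [7]
```

A real classifier would have to distinguish a fall from a dropped book, which is exactly why this kind of feature needs richer data than a simple threshold.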

Siri. Apple’s personal assistant may be the best example of a service that could be improved by using more of my personal information. For the most part, Apple stores the user’s personal data in a secure enclave within the iDevice’s processor and prohibits itself from accessing it. The AI that powers Siri also runs on the device’s processor. Critics have argued that Apple’s hands-off approach to personal data may seriously limit Siri’s usefulness. (Apple denies this.) Siri has lagged behind other assistants in its general knowledge and its ability to learn.

I want Apple to protect my data from hackers, cops, spying government agencies, and advertisers, but I also want Apple to use my personal data to make my devices, services, and especially my digital assistant more knowledgeable about me and more useful to me. After five years of using Siri, I don’t feel that the assistant knows me much better than it did at the end of the first week. I want Siri to know what I want before I do.


Health app. Apple’s Health app can use APIs to pull in all sorts of health information from other apps and devices. For instance, if you have a Nokia scale, its associated app can report your daily weight measurements to the Health app. So can non-Apple wearables and diet and exercise apps. And yet the Health app mostly just displays each metric on its own, without drawing meaningful connections between them. It won’t surface those connections proactively, nor will it make them at the user’s request. For instance, it won’t relate my body weight to my heart rate. Apple does, however, deserve credit for creating a place in the Health app to store your medical records.
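The kind of connection I’m asking for isn’t exotic. Here’s a minimal Python sketch that relates two daily metrics with a Pearson correlation coefficient; the readings are invented, and a real app would pull them in through HealthKit.

```python
# A minimal sketch of the connection the Health app could draw:
# correlating daily body weight with resting heart rate.
# The readings below are invented for illustration.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

weights_kg = [82.0, 81.5, 81.8, 80.9, 80.2, 79.8]  # hypothetical daily weigh-ins
resting_hr = [64, 63, 64, 62, 61, 60]              # hypothetical daily readings

r = pearson_r(weights_kg, resting_hr)
print(f"weight vs. resting heart rate: r = {r:.2f}")
```

Correlation isn’t causation, of course; the point is that the app already holds both series and could at least surface the pattern.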

The future is personalized

The services we’ll be using in the future will likely require even more of our personal information to live up to their potential.

Autonomous cars. Ride-hailing companies like Uber and Lyft are testing driverless cars now, and their autonomous ride services will launch in the near future. Each is essentially a technology service delivered to you in a space you don’t own. Your experience in that space will be different from that of every other rider. And you’ll be a captive audience: you’ll sit there during the ride with no driver to talk to. The autonomous car company will want to make that time as productive and/or enjoyable for you as possible, and it’ll need your personal information to do it. It’ll go something like this.

“When I get in, they’re able to show content on the various surfaces, maybe the windshield, to me,” explains Camera IQ CEO Allison Wood. “Or based on the fact that this vehicle knows I’ve been picked up at work and that I’m on my way home and I like to shop at Whole Foods, and they’re able to tell me when I’m still a mile away that Whole Foods has a sale going on or that they have my favorite coffee in stock. Or on my way home, they’ll know that I sometimes stop at this Pilates place, and do I want to swing by for a class?”

I may want to provide the car’s AI with even more information–like how fast I like to travel and what music, news shows, or podcasts I like to listen to while I’m riding. I may want the car to alert my computer at work that I’m almost there. If the service gets demonstrably better with the more personal information I provide, then I’m willing to supply it. The car should be able to see me coming, access my data and preferences, and turn on the in-car experience that I want.


Augmented reality. We’ll be hearing and seeing a lot about augmented reality in the not-so-distant future. AR apps overlay digital content on the real world as seen through a camera. Today this works mainly on smartphones, or via headsets like the Magic Leap One. But we’ll soon start seeing AR glasses that look more like the traditional eyeglasses we’re used to wearing. Through those lenses, you’ll see different digital content mixed in with the different places you visit. In a public park, you might find a virtual Easter egg hunt. In a retail space, you might find promotional holograms. In an airport, you might find animated signs and labels pointing the way through security and to your gate.

“If I’m wearing AR glasses, by walking through the door at Westfield Mall I’ve probably opted into this massive AR app, so it’s just a different [digital] environment,” explains Unity’s Timoni West, who specializes in AR experiences. “You can imagine, then, that every single space, every commercial space, public spaces and semi-private spaces are now the equivalent of just going to another website.”

West says that each one of these digital spaces will need to know a lot about you in order to serve up the right content in the right way. They’ll need to know your preferences about how you want the “site” to interact with you.

Privacy for the AI era

How much you care about your data privacy is a complicated and personal question. Though most people surveyed are very protective of their digital privacy, they differ widely in how that concern impacts their online choices.

A huge majority of privacy survey respondents (91%) feel they’re not in control of their personal data, and about half said they don’t trust tech companies to protect it, according to Pew. But only 61% said they’d like to do more to protect their privacy. That implies almost 4 in 10 don’t care enough about their data privacy to do anything about it. And that number is probably low, because it’s the kind of research question people might be embarrassed to answer negatively.


The real question is: How many people are concerned enough about their privacy to shut off the social media and other services they get for free in exchange for passively giving up their data? Many choose not to, because they’re getting a service they value and don’t experience direct material harm from giving up their data. They’re seeing ads, including some that follow them around the internet, but they can live with that. That’s why Facebook and Google are still making billions.

Looking at the big picture, there may be something about the internet itself that makes us feel nervous about our identity and privacy.

West points out that there was a time when every city and town published a big phone book that listed everybody’s name, address, and telephone number. For many years nobody saw it as a privacy risk. And to this day direct mail businesses gather tons of public and private data to target us for (physical) junk mail. And yet when we’re targeted in much the same way via the internet, it feels more intrusive.

This same anxiety applies to supplying our personal information to AIs. “There’s this sense of the Terminator, Skynet, and the more they know the more they will come and get us,” West says. “It’s interesting that humans think that the more computers get to know us, the less they will like us.”

The tech companies’ excessive harvesting of personal data–and lack of transparency about it–has created distrust and ill will among consumers and regulators. Presidential candidate Elizabeth Warren is calling for the breakup of big tech companies. Facebook’s Mark Zuckerberg, forever on the defensive, said the social network will now become a “privacy-focused” company (we’ll see about that). Federal privacy legislation is circulating in Washington, D.C. Companies are hustling to comply with Europe’s General Data Protection Regulation’s privacy rules. And tech companies of all kinds are now thinking very carefully about the data they collect and how long they keep it.


In a way, it’s a shame this is all happening now, as artificial intelligence is just starting to do cool and useful things with our personal data. But the situation won’t last forever. Society is readjusting its privacy views for the age of social media. Rather than getting more bad press by trying to kill or water down privacy laws, tech companies should focus on ways to use our personal data to actually benefit us, not advertisers. That might be the quickest way to earn back the public’s trust.

About the author

Fast Company Senior Writer Mark Sullivan covers emerging technology, politics, artificial intelligence, large tech companies, and misinformation. An award-winning San Francisco-based journalist, Sullivan's work has appeared in Wired, Al Jazeera, CNN, ABC News, CNET, and many others.