
TECH

Nine technologies creeping us out in 2019

The technologies that deserve our vigilance before creepy creeps into dangerous.


[Photo: Victor Garcia/Unsplash]

By Ruth Reader | 6 minute read

As reality in tech-heavy economies blends further into an unending choose-your-own-adventure episode of Black Mirror, the biggest, creepiest innovation may be the big data economy built on the back of the black mirror in your hand. It’s not just Google and Facebook and Amazon and the rest of Silicon Valley sucking up our digital exhaust: A vast array of companies are increasingly capturing information about your every move for profit, and in ways that can quietly and adversely affect you.

Even Sheryl Connelly, Ford’s generally optimistic futurist, is worried about what’s to come. Between surging economic inequality, a yawning digital divide, and persistent privacy violations, she says, “it’s a very 1984 moment, and you have to wonder when the other shoe will drop.”

In many countries, there is little legal framework surrounding the collection and potential abuse of personal data. Last year, just a few months after Facebook’s Cambridge Analytica scandal exploded, Europe began to grapple with new rules surrounding data collection through its new General Data Protection Regulation, or GDPR. Shortly after that, California passed the nation’s most far-reaching data privacy law, set to go into effect in 2020.

The new laws were important victories for personal privacy in a year that was otherwise marred by tech industry scandals, blunders, and all kinds of reminders of innovation’s creepy side. Here are a few recent developments worth keeping an eye on this year.

Face recognition. Airports, stores, casinos, and an untold number of other places are using facial recognition, even in real time, to search for suspicious people with the help of massive and obscure databases. Australia is launching a national face scanning system, and in China, facial recognition technology is catching criminals as they sip brews at a beer festival. In December, as Amazon continued to face criticism for the sale of its Rekognition service, one of the most prominent calls for regulation came from one of its AI competitors. Brad Smith, the president of Microsoft, wrote, “We don’t believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success.”

A police officer uses smart glasses to recognize the face of a suspect, as seen in a 2017 simulation by the U.S. Dept. of Homeland Security.

Affect recognition. So-called affect recognition software isn’t just being used to evaluate job candidates: Police are increasingly turning to AI-based systems to determine whether a person is a risk based on their facial gestures and their voice, in what some experts have described as a modern-day version of physiognomy.


Related: The 60 dumbest moments in tech in 2018


Digital humans. Lil Miquela, who charmed the world when her account first arrived on Instagram, was just the beginning. Now we have simulated TV news anchors and assistants like Google’s Duplex, which is capable of making phone calls to restaurants and hair stylists on a user’s behalf. The humanness of Duplex was so uncanny that many accused Google of fakery; when Fast Company tried the service, it seemed to work as advertised. While simulated people are being proposed for customer service and similar jobs, they also risk exacerbating an environment where living humans can’t tell the difference between fiction and reality.

Digital fakes. As with digital people, the technology surrounding deepfakes—videos intended to trick viewers into thinking someone said or did (or danced) something they didn’t—has recently given rise to new techniques, like deep video portraits and near photo-realistic simulations of physical places. While experts, backed by agencies like DARPA, wage battle against advanced digital fakery, some are also raising alarms about the prospect of a much less sophisticated kind of attack: the spread of fake data and fraudulent documents.

Alec Baldwin’s impersonation is replaced with the face of the real Donald Trump, using deepfakes. [Image: YouTube user Derpfakes]

Human botification. In a world of algorithmic suggestions, Google is now autocompleting our sentences. Convenient, sure, but nudging us a bit closer to what Google thinks we should write may also be nudging us humans into robot territory. “A lot of this predictive analytics is getting at the heart of whether or not we have free will,” tech ethicist David Polgar told Fast Company’s Mark Wilson. “Do I choose my next step, or does Google? And if it can predict my next step, then what does that say about me?”

Meanwhile, ride-hail drivers and other algorithmically guided workers are confronted by a similarly crucial question, writes Alex Rosenblat, the author of Uberland: “Given that Uber treats its workers as ‘consumers’ of ‘algorithmic technology,’ and promotes them as self-employed entrepreneurs, a thorny, uncharted, and uncomfortable question must be answered: If you use an app to go to work, should society consider you a consumer, an entrepreneur, or a worker?”

Workplace monitoring. Companies have been able to track employees’ locations and conversations for years, but the tracking is getting more invasive, and closer to workers’ bodies. Last year, Amazon received a patent for a wristband that uses haptic feedback to correct employees when they may be about to do something wrong. Other companies are deploying sensors around offices to track movement and space utilization.


Home surveillance. Smart home gadgets boomed in 2018, bringing more cameras and facial recognition technology into the home. Facebook launched Portal, Google expanded its Home Hub line, and Amazon acquired the smart doorbell maker Ring as part of its aggressive push into our homes. (Last week, the company said that more than 100 million devices with Alexa on board have been sold.) We were already concerned about Alexa’s ability to pick up on people’s conversations, but now the company is headed toward integrating facial recognition technology into its devices, which the ACLU said portended a “dangerous future.” Of course, it’s not just Facebook, Google, and Amazon reaching deeper into our homes: Our smart TVs are watching us, too.


Related: Get ready for the “splinternet”: The web might not be worldwide much longer


Genetic abuse. Genome-editing tools like CRISPR promise incredible improvements to human health, but they also raise incredible medical and ethical questions that threaten to overshadow their potential benefits. In November, a Chinese researcher announced he had used CRISPR to edit the genomes of twin human babies, aiming to make them resistant to HIV. That led to widespread condemnation by the global research community—germline gene editing and the implantation of edited embryos into a human mother’s womb are illegal in many countries—but the research was a reminder that the tools for genetic modification are spreading, and could pose significant risks to ecosystems in the process. In 2017, we spoke with CRISPR pioneer Jennifer Doudna about the threats that worried her most.

Genetic testing. As the DNA testing market has exploded—it’s expected to reach $310 million by 2022—concerns have mounted about the unexpected uses and abuses of genetic data. Many consumers don’t realize that the genetic data they get from DNA testing companies may be shared with an array of third parties like pharmaceutical companies, a detail that is buried inside hard-to-read privacy policies. The companies have insisted that they only share anonymous data with users’ consent, but regulators may not be convinced: In June, Fast Company reported that the Federal Trade Commission was investigating some of the DNA testing companies, possibly over privacy concerns. Meanwhile, DNA sites are also an appetizing target for hackers, raising the uncomfortable prospect of an Equifax-like leak of your genetic information.


Related: 7 digital privacy tools you need to be using now


As the public and privacy advocates call for more control over how companies collect personal data, and as more legal protections develop, another glimmer of hope has emerged from within the tech companies themselves: Through protests and open letters, workers are increasingly holding their employers accountable for their behavior and the risks of the products they sell. Without strong rules in place, it may be the people building creepy technology who are best positioned to keep it from getting dangerous.



ABOUT THE AUTHOR

Ruth Reader is a writer for Fast Company. She covers the intersection of health and technology.

