The Ethical Quandaries You Should Think About The Next Time You Look At Your Phone

Michael Sandel, Harvard professor and “rock-star moralist,” talks with Fast Company about urgent ethical questions for a digital age.

Harvard professor Michael Sandel in Boston’s Fenway Park, where he was photographed during HUBweek in October 2015. [Photos: Rose Lincoln, Harvard University; App Photo: TIZIANA FABI/AFP/Getty Images]

Michael Sandel is perhaps best known for his “Justice” course at Harvard, which is the most popular course in the university’s history. In his classroom, he and his students engage in heated debates about big moral issues we face in our everyday lives, including questions related to technological and scientific advances: Should we try to live forever? Buy our way to the head of the line? Create perfect children?

His lectures have been made freely available online and have been viewed by tens of millions of people around the world. On Sunday, in conjunction with HUBweek, Boston’s citywide innovation festival, he held a massive “master class” for an audience of thousands at Faneuil Hall and moderated a panel that included Huffington Post founder Arianna Huffington, cellist Yo-Yo Ma, and Veep writer Alexis Wilkinson. (The forum was scheduled to be held at Fenway Park, but had to be relocated because of the possibility of a tropical storm on the East Coast.)

One afternoon prior to the event, Fast Company spoke with Sandel about what he believes are some of the most significant and disturbing quandaries that new technologies raise in our society.

Fast Company: You spend your days talking through the big ethical questions of our time with your students. What are the key issues that come up with the technologies that we use every day?

Michael Sandel: The question of privacy looms larger and larger. We seem to be increasingly willing to trade privacy for convenience with many of the devices that we routinely use.

For example, many health insurance companies are now contemplating offering policyholders the following deal: If you wear a device like a Fitbit that measures your health—how much you exercise, the number of steps you take each day, what you eat, how long and when you sleep—and send that information to the health insurer, the insurer will offer you a big discount.

This raises the question: Is this a development we should welcome as a society? Or does it move us closer to a way of life where surveillance—not just government surveillance, but also surveillance by the companies we deal with—becomes more and more intrusive?

Giving up privacy is often presented to us as a choice, but after a while it becomes clear that there is a price to not giving up this information. At some point, while we weren’t looking, it stopped being a choice at all.

What you’re suggesting, and I would agree, is that the accumulated effects of what may seem to be individual choices may wind up creating conditions and expectations that actually leave us with very few choices in these matters.

You can see this in the health insurance example. It’s one thing to offer a discount on health insurance premiums if a person agrees to wear a device that reports health data to the insurer. But suppose an employer tells its employees, “This is very valuable; it will drive down our health insurance costs. We are requiring everyone to wear one or pay a fine.”

It seems like one question when it’s offered as a carrot, but what about when the carrot becomes a stick? What if there is a penalty for not agreeing to submit to a kind of surveillance? What at first seems like a simple matter of consumer choice becomes embedded in habits and expectations that leave little scope for genuine choice in how much personal data to share.

On a daily basis, most of us already offer up data about our preferences to companies without giving it a second thought. Amazon users, for instance, are willing to share their shopping habits in the hopes of getting more accurate product suggestions.

Yes, and this leads to another interesting set of questions.

To what extent can we and should we aspire to create machines that can outthink us? For example, Netflix has an algorithm that can predict what movies you will like based on the ones you’ve already seen and rated. Suppose a dating site were to develop a similar algorithm—maybe even a more sophisticated one—and predict with some accuracy which partner would be the best match for you. Whose advice would you trust more? The advice of the smart dating app or the advice of your parents or your friends?

If Netflix is going to feed me a constant stream of movies that I like, then I might never stumble on a movie that I didn’t know that I wanted to see. The implications in dating are so much more profound.

What that raises is a larger question: What is the role of accident in human affairs? Is this something that we should ideally overcome? Or are there certain limits to the project of mastery and control over our lives, such that something would be lost if we banished accident altogether?

You’ve spoken a lot about the role of accident when it comes to children and reproduction.

There have been developments in biotechnology that are spurring a drive to create designer babies. There is now a desire to improve or perfect our children through the use of genetic engineering.

This is not science fiction anymore: Something as straightforward as sex selection is already possible through preimplantation genetic diagnosis. If you’ve seen the movie Gattaca, a good part of that is possible now simply by walking into a fertility clinic.

It’s a slippery slope, though, isn’t it? It’s now fairly typical for pregnant women to do genetic screenings to see if their babies might have diseases.

Right. The question, it seems to me, is: Should we use new genetic technologies only to cure disease and repair injury, or also to make ourselves better than well? Should we aspire to become the masters of our natures to protect our children and improve their life prospects?

This goes back to the role of accident. Is the unpredictability of the child an important precondition of parents’ unconditional love for their children? My worry is that if we go beyond health, we run the risk of turning parenthood into an extension of the consumer society. We run the risk of treating our children as commodities or objects of our manufacture. That, I think, is the worry.

I imagine we’re only scratching the surface of things we need to worry about.

There is a constellation of questions that emerge from new technology. We need to find a way to have public discussions about the biggest ethical and even spiritual questions that are embedded in these technologies. But we rarely debate directly and openly with one another.

Why do you think this is?

There are two obstacles to having these conversations. One is that we have very few public venues and occasions for serious discussion of these questions. So much of the media is shouting matches and ideological food fights. It’s very hard to have reasoned discussion of these big ethical questions without creating opportunities to do so.

The second obstacle is that we have a tendency in our public life to shy away from hard, controversial moral questions. We often fear—and understandably so—that bringing moral or spiritual questions into public life runs the risk of intolerance or coercive outlooks and attitudes. We have a fear of moral judgment and moral argument because we know we live in pluralist societies where people disagree about values and ethics. There’s a tendency to believe that our public life could be neutral on those questions.

But I think that’s a mistaken impulse. It’s an instinct we should try to overcome. I think it is possible. Look at the response to the Pope’s visit and his attempt to engage in moral questions in a public way. I think there’s a great hunger among citizens to engage in more meaningful public debates about big ethical questions, including questions of values.

About the author

Elizabeth Segran, Ph.D., is a staff writer at Fast Company. She lives in Cambridge, Massachusetts.
