14 experts say how the net’s worst problems could be solved by 2035

The Pew Research Center surveyed numerous tech experts and opinion leaders on how truth and civility might be established on the future internet.

By Mark Sullivan | 8 minute read

In the early 21st century, the internet, and the social internet in particular, has enabled a more connected world. But it has also enabled and amplified some of humanity’s worst behaviors. Fringe, toxic opinions and outright disinformation proliferate. Antisocial behavior is normalized. Facts, when they can be recognized at all, are used to bolster preexisting opinions rather than to challenge assumptions. Kids (and adults) measure their self-worth in Instagram comments and follower counts. And expecting the huge tech companies that operate the platforms to proactively fix these problems becomes ever less workable as online communities grow into the billions.

When the Pew Research Center asked numerous tech opinion leaders to envision the internet of 2035 (which might end up being called the metaverse, or Web3, or something else entirely), many of them seized on poor online governance, lax content moderation, misaligned incentives, and a lack of trust as the key challenges. Their ideas on how to fix these problems, and on the progress that might be made over the next 12 years, are downright illuminating. In the interest of TL;DR, I’ve extracted nuggets from these experts’ sometimes lengthy comments.

Doc Searls, internet pioneer, coauthor of The Cluetrain Manifesto, author of The Intention Economy, and cofounder and board member of Customer Commons
“The new and improved digital realm of 2035 is one in which the Web still works but has been sidelined because there are better models on the Internet for people and organizations to get along, and better technologies than can be imagined inside the client-server model of browser-website interactions. To see what is likely by 2035, imagine having your own personal privacy policies and terms and conditions for engagement, rather than always agreeing to those of others. The Internet supports that. The Web does not. On the Web, only sites and services can have privacy policies or proffer terms and conditions. You are never the first party, but only the second—and a highly subordinate one as well.”

Zizi Papacharissi, professor of political science and professor and head of the communication department at the University of Illinois Chicago
“In most of the spaces we inhabit, humans have developed some form of curation. For example, a closed door may signify a preference for privacy; it may also signal a desire for security (and one of heightened security if the door is locked). Doors allow us to curate what enters our spaces and what remains out. Similarly, we humans have developed ways of chastising or punishing inappropriate behavior in commonly shared spaces. For example, if a person misbehaves in a bar, they are thrown out by a bouncer. We do not have a debate in this case about whether that person’s rights to free speech were violated because they started yelling in a bar. We simply kick them out. As of yet, we have no such types of broadly adopted rules for what appropriate behavior is online and how to enforce those rules online. When we try to establish them, we spark all kinds of debates about free speech. Yet free speech is not the same as free reach.”

Lucy Bernholz, director of Stanford University’s Digital Civil Society Lab and author of How We Give Now: A Philanthropic Guide for the Rest of Us
“Designing digital spaces for safety and serendipity is a next step. Enabling people to go in, out, and between such spaces as they choose is critical. And allowing groups of people to control information they generate about them is also important. Digital spaces need to become tools of the people, not of corporations and governments. They need to be fragmented, pluralistic, multitudinous, and interconnected at the will of people, not by profit-driven lock-in. . . . We need to remember and maintain the best of our physical spaces online—our parks, libraries, sidewalks, stoops, benches, busses, trains, and town squares—and bring that multiplicity of choice and its privacy within crowds, and safe serendipity into digital spaces.”

Ethan Zuckerman, director of the Initiative for Digital Public Infrastructure at the University of Massachusetts, Amherst
“I prefer to imagine a 2035 in which internet communities strengthen our civic and democratic muscles. Imagine a world in which most people are members of dozens of different online communities. Some are broad in membership and purpose, while others are narrowly focused and support small groups of users. These smaller groups, in particular, demand that their participants be involved in governing these spaces, acting as moderators, and the authors of the community guidelines. The larger networks use representative systems to nominate interested users to task forces that write and rewrite guidelines, and participants take shifts as moderators, using mechanisms similar to jury service. Through the rise of these community governance mechanisms, social networks not only become less toxic but become a training ground for full participation in a democracy.”

Vint Cerf, vice president and chief internet evangelist at Google and Internet Hall of Fame member
“[In 2035,] people will still do harmful things, but it will be much harder to get away with it.”

ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.

