
Users need to play a role in how we regulate tech giants

Regulation is coming in one form or another. We need multiple stakeholders–government, academia, and us, the users–working collaboratively to figure out the best solutions.

[Image: StudioM1/iStock]

By Anjana Susarla

On Friday, U.S. senator and presidential candidate Elizabeth Warren proposed a plan to break up tech giants Facebook, Google, and Amazon, arguing that they are monopolies with excessive market power. Warren proposes that we treat massive online platforms as utilities that should be subject to regulation, just as our water and electricity companies are.

Warren is right that the tech companies that control vast swaths of our communication, commerce, and media have become so large and influential that they should be regulated to protect the public interest. But we shouldn’t regulate tech giants the way we manage traditional utilities like water and electricity, and simply breaking companies up won’t solve many of the most intractable problems, such as fake news, privacy violations, hate speech, and political bias.

What we need instead is a collaborative, multi-stakeholder approach to constraining tech’s power and tackling its biggest challenges. Traditional top-down, government-managed approaches to regulation like Warren’s are unlikely to be effective in the fast-evolving, globalized tech industry. Technology advances on a daily basis, producing new and unforeseen problems as well as the tools to solve them. In addition to technical challenges, a patchwork of competing national regulations could present a nightmare for international companies with billions of users around the world.


It’s hard to imagine, for example, how government regulators from a single country could identify and impose a standardized, across-the-board solution to a problem like deep fakes: artificially generated photos and videos that portray real people saying or doing something fictitious. Computer scientists in academia and industry, however, are successfully training artificial intelligence networks to weed out these forgeries. The problem of deep fakes can be kept in check, but only by many different groups working continuously to combat an ever-evolving threat.

Effective regulation of tech giants will require a combination of externally and internally imposed constraints, produced by input, pressure, and incentives from numerous channels. We need multiple stakeholders working collaboratively to determine how to minimize harm without excessively limiting the free flow of ideas and information.

Politicians, government officials, researchers, journalists, consumer advocates, the tech companies themselves, and especially consumers and citizens all have a role to play. You may never have thought much about how your water company is regulated, but you’re likely to have a much bigger say in how Facebook is.

This kind of collaborative, multi-stakeholder regulation isn’t entirely new. The internet has long been managed by non-governmental bodies involving civil society, industry, and policymakers, such as ICANN, IGF, and GCCS. But these entities have been more concerned with creating the infrastructure and rules for the internet as a whole than with regulating activities on specific platforms or the internal workings of private companies.

What’s largely absent from traditional approaches to regulation, including Warren’s plan, is the end user. All of us, in our dual roles as consumers and citizens, have to play a part in setting the limits for the Big Five tech companies. Companies may be less motivated by government interference than by pressure to honor their “social contract” with users (and to please advertisers, who face their own pressure from consumers).


Many platforms have already begun to self-regulate their most egregious issues in response to public pressure. After a series of media exposés, public outcry, and an exodus of advertisers, YouTube recently announced it would stop recommending conspiracy videos and stop showing ads on videos of children that pedophiles could target.

Any business with a brand to protect is highly sensitive to public opinion. What makes consumers uniquely powerful with tech is that they can signal their displeasure almost instantly by modulating their use of different apps and broadcasting their concerns on the very platforms they are criticizing. When #DeleteFacebook starts trending on social media within hours of a scandal, company leaders do take notice (although they also have the data to determine whether people actually quit or reduce their use of Facebook as a result).


Consumer pressure can be a powerful tool, but on its own it is unlikely to be as informed or effective as it could be. Government, nonprofits, and the media can help draw attention to problems and motivate public action. Journalists and consumer watchdogs have uncovered some of the most egregious tech problems, like the Cambridge Analytica scandal and Russian interference in elections. Recent government hearings and investigations have also ratcheted up the pressure on companies like Facebook.

Rather than stepping in with top-down mandates as Warren’s plan proposes, the role of government in regulating tech may be about increasing transparency and exposing problems more than dictating solutions. Government could serve a function similar to media and nonprofits, but with added leverage from the power to compel transparency and the threat of harsher measures if companies don’t identify effective solutions. This is the role FCC Chairman Ajit Pai sees for his agency, calling for greater transparency while at the same time saying “strict, utility-style regulation” of tech giants doesn’t make sense.


This role for government opens up the potential for innovative approaches to addressing our toughest challenges. Rather than restricting the use of personal data, California Governor Gavin Newsom has proposed that companies pay a “data dividend” to consumers, sharing the wealth with those whose data they are monetizing. This approach wouldn’t ban the sharing of data, which would be too disruptive to many companies’ business models, but it would give users leverage to negotiate with companies about what they share and how–as well as a portion of the profits.

Researchers in academia and industry are also a key part of the regulatory puzzle, helping to define problems and identify solutions. Computer scientists and machine-learning experts can help companies solve vexing technical challenges like identifying fake news and bots, while social scientists can advise government and industry on questions such as how to define hate speech. For example, Facebook recently announced a partnership with the Social Science Research Council to give scholars access to data to study social media’s impact on elections and democracy.

While tech companies may have hoped they could go it alone, it’s become clear that they need outside expertise and insight to address the thorniest problems. Strictly technical solutions developed by programmers will, in many cases, be insufficient, as I’ve seen in my own research on combatting false healthcare information online. My colleagues and I developed a deep learning tool to track how inaccurate medical content spreads on social networks like YouTube. Machine-learning algorithms like this are incredibly valuable, but cannot understand the meaning of content and identify inaccuracies the way a human can. We’re now engaging clinical experts and patient advocates to incorporate their input into recommendation algorithms.

This kind of collaborative, multi-stakeholder approach gives us the best shot at fighting false information while preserving free speech. Wikipedia has drawn on the power of its community to collaboratively regulate information for years. Though it doesn’t operate at the scale and speed of content shared on Facebook, what Wikipedia gets right is the centrality of the user, something traditional regulation often ignores.

In the coming months, we’re likely to see more plans for regulating the tech industry, as the consequences of doing nothing seem increasingly urgent. Even the industry itself has begun to accept the reality of regulation. Facebook’s head of global affairs and communications, Nick Clegg, recently said that the question “is no longer about whether social media should be regulated, but how it should be regulated.” This regulation can take many forms, but it must be collaborative and it must involve consumers. As citizens of the internet, we all have a role to play in the regulation of tech giants.


Anjana Susarla is an associate professor of Information Systems at the Eli Broad College of Business at Michigan State University. Her research examines the economics of information systems and artificial intelligence, social media analytics, and how information spreads online. Anjana has a PhD in Information Systems from the University of Texas at Austin.
