The growing effort to battle the internet’s “public health crisis”

The Knight Foundation is funding a new field of research, with the not-so-modest aim of addressing “one of the biggest issues that democracy is facing in our time.”

[Photo: Daniel Korpai/Unsplash]

Fifty years after the first ARPANET connection, the internet has become famous for connecting and informing the world—and for stoking monopoly fears, privacy violations, and the spread of toxic speech, disinformation, and perhaps impeachment-worthy conspiracy theories. Don’t think of these problems as merely digital, says Sam Gill, a vice president at the John S. and James L. Knight Foundation: They amount to a modern-day public health crisis and demand the kind of effort undertaken in previous eras to counter disease, filth, and social and economic ills.

“At the turn of the 19th and 20th centuries, you not only had a new class of problems as the country densified and urbanized; you had problems that we didn’t quite know how to study and talk about,” he says. “And I think we feel the same way about the current moment—that there’s a new class of problems and they defy a lot of the traditional boundaries of research.”

Hence the Knight Foundation—typically known for funding local journalism, communities, and the arts—announced last week that it is putting more than $3.5 million into projects aimed at advancing an emerging field of study around internet governance, examining ways to limit the fakery, hate, and distrust spreading across social networks. The initiative is meant to add to public understanding—and generate basic, often difficult-to-obtain data—that can inform an ongoing barrage of critiques of the tech industry, from state lawsuits to federal investigations, from California’s privacy law to candidates’ calls for breaking up the companies.

We’re not just living through a tech backlash but the birth of a new discipline, says Gill. “We’re responding to researchers, scholars, policy makers who aim to define the field. They’re the ones grappling with the issues, whether it’s on the Hill or in an attorney general’s office in a state that’s looking at Facebook or Google, or at a university studying these issues in new ways.”

So far, the projects Knight will fund—selected through an ongoing open call for proposals—include an effort at Yale to map the economic impacts of new regulations on tech companies, a fellowship focused on disinformation at Harvard’s Berkman Klein Center for Internet & Society, and research projects at Utah State University, Stanford, and NYU studying content moderation, discrimination, and the governance of the commercial internet from the perspective of marginalized populations.

Professors Sarah T. Roberts and Safiya U. Noble, who run the UCLA Center for Critical Internet Inquiry, will apply Knight funds to their study of content moderation systems and the impacts that platform policies and algorithms have on vulnerable communities, people who “by design, bear the brunt of digital systems in the form of ‘technological redlining,’ uneven and inequitable applications of technologies on their communities.” For example, Facebook agreed to curtail discriminatory housing, employment, and credit ads earlier this year as part of a settlement with civil rights groups, but research has shown that the platform’s ad targeting systems make discrimination difficult to avoid.

The investment is part of a broader $300 million, five-year commitment aimed at reinvigorating trust in news and information, including $50 million to understand how technology is impacting democracy. Most of that cash will go toward local news organizations and helping them find new revenue models, Knight said in February, when it announced the initiative. But the foundation is also seeking to address questions about the kind of challenges that stretch from newsrooms to Silicon Valley boardrooms: “how to produce, to curate, to connect people to trusted information at the speed of the internet,” says Gill.

Knight’s funds, Gill acknowledges, are “just the tip of the iceberg” given the resources required to build a stronger information ecosystem—the kind of cross-sector, bipartisan effort once mounted for public health. “Think about the kinds of public and private resources that medical research takes,” he says. “We have an NIH, we have a National Science Foundation. That’s the scale we should be thinking around this research too.”

Knight also took care to sponsor researchers from different sides of the ideological spectrum in the hopes of bolstering a diverse, bipartisan conversation. The American Antitrust Institute is a recipient, as is the American Enterprise Institute. The Economic Security Project, a left-of-center nonprofit started by Facebook cofounder and former New Republic owner Chris Hughes, will get $250,000 to research the impacts of economic concentration by tech companies, while the Lincoln Network, a conservative tech group, will receive the same amount to host its annual conference in San Francisco focused on “innovation policy and governance.”

“A lot of the folks that we’re funding don’t agree about what the answers are,” Gill says. “Our feeling is, we don’t know the answer. What we need is evidence. What we need is thoughtful debate and discussion about various alternatives to inform ultimately what can be quality answers over a period of time.”

Not all of Knight’s efforts have panned out. Last month, the foundation was one of nearly a dozen funders that pulled their support for Social Science One, an ambitious effort to give academic disinformation researchers unprecedented access to Facebook data. This summer, Facebook said it had to limit the amount of raw data it had promised to share with researchers, citing concerns about user privacy. The Cambridge Analytica scandal that had motivated CEO Mark Zuckerberg to personally agree to the effort also fueled fears that academic researchers could once again sell valuable data to the highest bidder. The funders, who included the Charles Koch Foundation, Omidyar Network’s Tech and Society Solutions Lab, and the Alfred P. Sloan Foundation, balked at Facebook’s changes.

“The data that was originally intended to be made available wasn’t, and we think that’s too bad,” says Gill.

A Facebook official in charge of the project told me last month that the company was committed to providing quality data to researchers and was working to develop better techniques, based on differential privacy, that would sufficiently protect user identities while still exposing useful patterns. “We’re producing at a slower pace because we’re trying to move slowly and carefully and do this the right way,” said Chaya Nayak, head of Facebook’s Election Research Commission & Data Sharing efforts.
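
For readers unfamiliar with the term, differential privacy is a mathematical framework for releasing aggregate statistics with calibrated random noise added, so that overall patterns stay visible while any single person’s contribution is obscured. The short Python sketch below illustrates the standard Laplace mechanism on a simple counting query; the function name, the epsilon setting, and the example numbers are hypothetical illustrations, not a description of Facebook’s actual system.

```python
import numpy as np

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one user
    changes the result by at most 1), so Laplace noise with scale
    1/epsilon suffices: the released number stays useful in aggregate,
    but no single user's presence can be confidently inferred from it.
    """
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical query: how many users shared a given URL?
# A smaller epsilon means a stronger privacy guarantee but a noisier answer.
print(dp_count(true_count=1042, epsilon=0.5))
```

The tradeoff Nayak describes shows up here as the epsilon parameter: turning it down strengthens the privacy guarantee but makes every released statistic noisier, one reason data shared this way arrives more slowly and in coarser form than raw logs.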

Facebook’s transparency efforts have stood out, Gill thinks, but like other Silicon Valley giants, the company remains largely a black box to outside researchers. “As someone who’s been a party to some of this, I think there’s obviously a long way to go in terms of the availability of information about what’s happening inside Facebook or any of these companies—and they don’t have a legal obligation to turn it over,” he says. “But I think that until we understand better, it’s going to be very difficult as a society publicly to have a collaborative conversation about how these platforms can be a part of a thriving democracy.”

Alongside projects such as Social Science One, new laws could enforce more transparency at Facebook and other tech firms or give the companies more leeway to share data. For instance, “safe harbor” protections could allow digital platform companies to share sensitive data with researchers without legal repercussions. Some of Knight’s funding is aimed at exploring those options, Gill says.

“A paradigm is starting to cohere around what regulation could look like, and I think one of the elements of that is data transparency,” Gill says. Along with predictability and uniformity, corporations “want to know that the disclosures that they’re making aren’t going to be ultimately legally problematic for them. That’s a good role for a regulator.”

One proposal in Congress, the Honest Ads Act, would establish regulations for digital political advertising, including greater transparency from companies such as Facebook around the legitimacy of content and the authenticity of users. But Congress has passed no new laws governing internet platforms since the misinformation wars of 2016. One of the bill’s sponsors, Senator Mark Warner, a Democrat from Virginia, said in an email that Facebook and other social media platforms had “started making some efforts to address these challenges.” “However, there’s so much more we need to do to safeguard our democracy,” Warner wrote. “And if platforms refuse to comply, we need to be able to hold them responsible.”

The role of algorithms in enabling online extremism and violence has also forced lawmakers to consider modifying Section 230 of the Communications Decency Act, which shields tech companies from legal liability for user-generated content. The Federal Trade Commission and Department of Justice have ongoing investigations into the business practices of several tech companies, while almost all state attorneys general are investigating Google and Facebook for anticompetitive behavior. Most recently, Facebook’s political-ad policies have infuriated watchdogs and policy makers and exposed the challenges of policing—or not policing—“political” speech.

Fighting the “extraordinary asymmetry” of big tech

Part of the challenge of crafting new regulation is confusion over what a well-functioning digital marketplace should look like—in effect, what public health looks like in the internet era. Critics increasingly argue that antitrust law isn’t fit for the era of Bezos and Zuckerberg: for decades it has focused on “consumer welfare,” and before that, antitrust policy trained its sights on power and innovation in manufacturing.

“Our doctrines of understanding business aren’t particularly well attuned to control over data as a competitive advantage,” says Gill. “They’re not particularly well tuned to zero-cost services, and what those markets should look like. And they’re not particularly attuned to the kind of horizontal integration that these companies can very effectively effect.” Privacy laws are a prime example of an outmoded paradigm waiting to be updated: “a paradigm in which it was really about individual sovereignty, what could I keep or could someone else, as opposed to the use of that information,” which is how privacy is increasingly defined now.

As the industry amasses and bundles vast amounts of our data, its own data remains largely hidden. “There’s an extraordinary asymmetry between their understanding of how this technology works and everybody else’s understanding,” he says.

The failures of corporate transparency, especially in big tech, echo previous eras. “That was a really big problem toward the end of the 19th century,” Gill says. “The railroad company knew better than everybody else how it was influencing markets. Now we created something to respond to that, the administrative state: The state got more complex to understand how these markets were operating. So I think there’s no question that regulation is going to have to evolve.”

Regulation could compel Facebook, Google, and others to make their products safer, Gill argues, the way previous lawmaking made transportation less deadly. “Automakers today are a much more proactive part of the auto safety solution, not necessarily because better, more kind-hearted, safety-conscious people run those companies, but through the negotiation of regulation and self-regulation and consumer action, through the interplay of those forces. The parameters are different today than they were in the 1940s, when we really had a serious auto death problem.

“And you’ve got different structures in these companies,” Gill says of today’s auto industry. “We have safety engineers who work in these companies. They compete in markets where consumers now expect and are acculturated to certain kinds of safety features.”

Regulating big tech is one idea; Knight is also examining new business models for internet platforms and media organizations—including nonprofit and publicly funded approaches—to build a healthy information ecosystem. The question comes down to: Is this a public good?

“And if it is, do we need to approach different ways of supporting it? Should there be public subsidy for these things? Should there be philanthropic support? Should we just let the market determine the future of these organizations? I think this is a very serious question for us as a society to confront. It may be that the true public good, which is deep, verified, contextualized community information, is not alone going to be supported by the market. I think that’s possible.”

Still, Gill stresses that strong internet governance—and new rules—needn’t divide people along predictable liberal-conservative lines. “What’s interesting about public health as an analogy is, the fact that it became a problem at a public scale didn’t mean that the solution was exclusively public. We still have a private healthcare system. It just meant that at that moment, there were questions that we had to ask as a society, management questions to which companies and hospitals and insurers and government, doctors, patients, et cetera, were all party.”

The problems wrought by big tech require similar collaboration and problem-solving. “If this really is one of the biggest issues that democracy is facing in our time, then this is just the smallest amount of work that’s going to be necessary to grapple with that.”

About the author

Alex is a contributing editor at Fast Company, the founding editor and editor at large of Motherboard at Vice, and a freelance writer and producer with a focus on the intersections of science, technology, media, politics, and culture.
