
THE NEW RULES OF AI

5 simple rules to make AI a force for good

The rise of AI has led to tattered privacy protections and rogue algorithms. Here’s what we can do about it.



By Sean Captain | Long read

This article is part of Fast Company’s editorial series The New Rules of AI. More than 60 years into the era of artificial intelligence, the world’s largest technology companies are just beginning to crack open what’s possible with AI—and grapple with how it might change our future. Click here to read all the stories in the series.


Consumers and activists are rebelling against Silicon Valley titans, and all levels of government are probing how they operate. Much of the concern is over vast quantities of data that tech companies gather—with and without our consent—to fuel artificial intelligence models that increasingly shape what we see and influence how we act.

If “data is the new oil,” as boosters of the AI industry like to say, then scandal-challenged data companies like Amazon, Facebook, and Google may face the same mistrust as oil companies like BP and Chevron. Vast computing facilities refine crude data into valuable distillates like targeted advertising and product recommendations. But burning data pollutes as well, with faulty algorithms that make judgments on who can get a loan, who gets hired and fired, even who goes to jail.

The extraction of crude data can be equally devastating, with poor communities paying a high price. Sociologist and researcher Mutale Nkonde fears that the poor will sell for cheap the rights to biometric data, like scans of their faces and bodies, to feed algorithms for identifying and surveilling people. “The capturing and encoding of our biometric data is going to probably be the new frontier in creating value for companies in terms of AI,” she says.

The further expansion of AI is inevitable, and it could be used for good, like helping take violent images off the internet or speeding up the drug discovery process. The question is whether we can steer its growth to realize its potential benefits while guarding against its potential harms. Activists will have different notions of how to achieve that than politicians or heads of industry do. But we’ve sought to cut across these divides, distilling the best ideas from elected officials, business experts, academics, and activists into five principles for tackling the challenges AI poses to society.

1. Create an FDA for algorithms

Algorithms are shaping our world in powerful but not easily discernible ways. Robotic systems aren’t yet replacing soldiers as in The Terminator, but they are slowly supplanting the accountants, bureaucrats, lawyers, and judges who decide benefits, rewards, and punishment. Despite the grown-up jobs AI is taking on, algorithms continue to use childish logic drawn from biased or incomplete data.

Cautionary tales abound, such as a seminal 2016 ProPublica investigation that found law enforcement software was overestimating the chance that black defendants would re-offend, leading to harsher sentences. In August, the ACLU of Northern California tested Rekognition, Amazon’s facial-recognition software, on images of California legislators. It matched 26 of 120 state lawmakers to images from a set of 25,000 public arrest photos, echoing a test the ACLU did of national legislators last year. (Amazon disputes the ACLU’s methodology.)

Faulty algorithms charged with major responsibilities like these pose the greatest threat to society—and need the greatest oversight. “I advocate having an FDA-type board where, before an algorithm is even released into usage, tests have been run to look at impact,” says Nkonde, a fellow at Harvard University’s Berkman Klein Center for Internet & Society. “If the impact is in violation of existing laws, whether it be civil rights, human rights, or voting rights, then that algorithm cannot be released.”

Nkonde is putting that idea into practice by helping write the Algorithmic Accountability Act of 2019, a bill introduced by U.S. Representative Yvette Clarke and Senators Ron Wyden and Cory Booker, all of whom are Democrats. It would require companies that use AI to conduct “automated decision system impact assessments and data protection impact assessments” to look for issues of “accuracy, fairness, bias, discrimination, privacy, and security.”

These would need to be in plain language, not techno-babble. “Artificial intelligence is . . . a very simple concept, but people often explain it in very convoluted ways,” says Representative Ro Khanna, whose Congressional district contains much of Silicon Valley. Khanna has signed on to support the Algorithmic Accountability Act and is a co-sponsor of a resolution calling for national guidelines on ethical AI development.

Chances are slim that any of this legislation will pass in a divided government during an election year, but it will likely shape the debate to come; Khanna, for instance, co-chairs Bernie Sanders’s presidential campaign.

2. Open up the black box of AI for all to see

Plain-language explanations aren’t just wishful thinking by politicians who don’t understand AI, according to someone who certainly does: data scientist and human rights activist Jack Poulson. “Qualitatively speaking, you don’t need deep domain expertise to understand many of these issues,” says Poulson, who resigned his position at Google to protest its development of a censored, snooping search engine for the Chinese market.

To understand how AI systems work, he says, civil society needs access to the whole system—the raw training data, the algorithms that analyze it, and the decision-making models that emerge. “I think it’s highly misleading if someone were to claim that laymen cannot get insight from trained models,” says Poulson. The ACLU’s Amazon Rekognition tests, he says, show how even non-experts can evaluate how well a model is working.




AI can even help evaluate its own failings, says Ruchir Puri, IBM Fellow and the chief scientist of IBM Research who oversaw IBM’s AI platform Watson from 2016 to 2019. Puri has an intimate understanding of AI’s limitations: Watson Health AI came under fire from healthcare clients in 2017 for not delivering the intelligent diagnostic help promised—at least not on IBM’s optimistic timeframe.

“We are continuously learning and evolving our products, taking feedback, both from successful and, you know, not-so-successful projects,” Puri says.

IBM is trying to bolster its reputation as a trustworthy source of AI technology by releasing tools that make the technology easier to understand. In August, the company released open-source software to analyze and explain how algorithms come to their decisions. That follows open-source software it released in 2018 that looks for bias in the data used to train AI models, such as those assigning credit scores.

“This is not just, ‘Can I explain this to a data scientist?'” says Puri. “This is, ‘Can I explain this to someone who owns a business?'”
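To make the idea concrete, here is a minimal, hypothetical sketch of the kind of check such bias-detection tools automate: comparing a credit model’s approval rates across groups. The data, group names, and threshold below are invented for illustration and are not IBM’s actual software.

```python
# Toy bias check: compare approval rates of a hypothetical credit model
# across two groups and compute a disparate impact ratio.
from collections import Counter

# (group, model_decision) pairs; decision is 1 for "approve", 0 for "deny".
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

approved, total = Counter(), Counter()
for group, decision in decisions:
    total[group] += 1
    approved[group] += decision

rate_a = approved["group_a"] / total["group_a"]   # 0.75 in this toy data
rate_b = approved["group_b"] / total["group_b"]   # 0.25 in this toy data

# Disparate impact ratio; the common "four-fifths rule" flags values below 0.8.
ratio = rate_b / rate_a
print(f"approval rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:
    print("potential disparate impact; examine the training data")
```

Real toolkits run many such metrics across a full training dataset and try to explain which inputs drove an individual decision, which is what makes them legible to a business owner rather than only a data scientist.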

3. Value human wisdom over AI wizardry

The overpromise of IBM Watson indicates another truth: AI still has a long way to go. And as a result, humans should remain an integral part of any algorithmic system. “It is important to have humans in the loop,” says Puri.

Part of the problem is that artificial intelligence still isn’t very intelligent, says Michael Sellitto, deputy director of Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI). “If you take an algorithm out of the specific context for which it was trained, it fails quite spectacularly,” he says.
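As a rough illustration of what Sellitto means, consider this toy sketch: a simple model trained on synthetic data where the outcome depends on one feature, then evaluated in a “new context” where the rule has changed. Everything here is invented for illustration only.

```python
# Toy example of context shift: a model that works where it was trained
# and collapses when the underlying relationship changes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def in_context(n):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(int)   # outcome driven by the first feature
    return X, y

def out_of_context(n):
    X = rng.normal(size=(n, 2))
    y = (X[:, 1] > 0).astype(int)   # same inputs, but the rule has changed
    return X, y

X_train, y_train = in_context(5000)
model = LogisticRegression().fit(X_train, y_train)

X_same, y_same = in_context(1000)
X_new, y_new = out_of_context(1000)
print("accuracy in original context:", model.score(X_same, y_same))  # near 1.0
print("accuracy in new context:", model.score(X_new, y_new))         # near 0.5
```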

That’s also the case when algorithms are poorly trained with biased or incomplete data—or data that doesn’t prepare them for nuance. Khanna points to Twitter freezing the account of Senate Majority Leader Mitch McConnell’s campaign for posting a video of people making “violent threats.” But the people in the video were protesters against McConnell, and his team posted it to condemn the threats, not endorse them.

Because of AI’s failings, human judgment will always have to be the ultimate authority, says Khanna. In the case of Twitter’s decision to freeze McConnell’s account, “it turns out that the context mattered,” he says. (It’s not clear if Twitter’s decision was based on algorithms, human judgment, or both.)




But the context of humans making decisions also matters. For instance, Khanna is collaborating with Stanford HAI to develop a national AI policy framework, which raises its own questions of bias. The economy of Khanna’s district depends on the AI titans, whose current and former leaders dominate HAI’s Advisory Council. Industry leaders who have bet their future on AI will likely have a hard time making fair decisions that benefit everyone, not just businesses.

“That’s why I am putting so much effort into advocating for them to have more members of civil society in the room and for there to be at least some accountability,” says Poulson. He led a petition against an address by former Google CEO Eric Schmidt planned for HAI’s first major conference in October.

Stanford has since added two speakers—Algorithmic Justice League founder Joy Buolamwini and Stony Brook University art professor Stephanie Dinkins—whom Poulson considers to be “unconflicted.” (Stanford says that it was already recruiting the two as speakers before Poulson’s petition.)

Humans are also making their voices heard within big tech companies. Poulson is one of many current and former Googlers to sound the alarm about the ethical implications of the company’s tech development, such as the Maven program to provide AI to the Pentagon. And tech worker activism is on the rise at other big AI powerhouses, such as Amazon and Microsoft.

4. Make privacy the default

At the heart of many of these issues is privacy—a value that has long been lacking in Silicon Valley. Facebook founder Mark Zuckerberg’s motto, “Move fast and break things,” has been the modus operandi of artificial intelligence, embodied in Facebook’s own liberal collection of customer data. Part of the $5 billion FTC settlement against Facebook was for not clearly informing users that it was using facial-recognition technology on their uploaded photos. The default is now to exclude users from face scanning unless they choose to participate. Such opt-ins should be routine across the tech industry.
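As a design pattern, “privacy by default” is straightforward: sensitive processing stays off until the user explicitly turns it on. Here is a minimal, hypothetical sketch; the setting names are invented and do not reflect any company’s actual code.

```python
# Toy "privacy by default" settings: nothing sensitive happens without
# an explicit opt-in from the user.
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    # Defaults favor the user; both features start disabled.
    face_recognition_enabled: bool = False
    share_data_with_partners: bool = False

def can_scan_faces(settings: PrivacySettings) -> bool:
    # Only an explicit opt-in enables face scanning.
    return settings.face_recognition_enabled

new_user = PrivacySettings()                                # opted out by default
opted_in = PrivacySettings(face_recognition_enabled=True)   # explicit choice

print(can_scan_faces(new_user))   # False
print(can_scan_faces(opted_in))   # True
```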

“We need a regulatory framework for data where, even if you’re a big company that has a lot of data, there are very clear guidelines about how you can use that data,” says Khanna.

That would be a radical shift for Big Tech’s freewheeling development of AI, says Poulson, especially since companies tend to incentivize quick-moving development. “The way promotions work is based upon products getting out the door,” he says. “If you convince engineers not to raise complaints when there is some fundamental privacy or ethics violation, you’ve built an entire subset of the company where career development now depends upon that abuse.”

In an ideal world, privacy should extend to never collecting some data in the first place, especially without consent. Nkonde worked with Representative Yvette Clarke on another AI bill, one that would prohibit the use of biometric technology like face recognition in public housing. Bernie Sanders has called for a ban on facial recognition in policing. California is poised to pass a law that bans running facial recognition programs on police body camera footage. San Francisco, Oakland, and Somerville, Massachusetts, have banned the use of facial recognition technology by city government, and more cities are likely to institute their own bans. (Still, these are exceptions to the widespread use of facial recognition by cities across the United States.)




Tech companies tend to argue that if data is anonymized, they should have free rein to use it as they see fit. Anonymization is central to Khanna’s strategy to compete with China’s vast data resources.

But it’s easy to recover personal information from purportedly anonymized records. For instance, a Harvard study found that 87% of Americans can be identified by their unique combination of birth date, gender, and zip code. In 2018, MIT researchers identified Singapore residents by analyzing overlaps in anonymized data sets of transit trips and mobile phone logs.
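Here is a minimal sketch of how such a linkage attack works, using made-up records: a dataset with names stripped is joined to a public one on the same quasi-identifiers, re-attaching identities. All data below is invented for illustration.

```python
# Toy linkage attack: join "anonymized" records to a named dataset on
# shared quasi-identifiers (birth date, gender, ZIP code).
import pandas as pd

# Public dataset that still carries names (e.g., a voter roll).
voters = pd.DataFrame({
    "name":       ["A. Jones", "B. Smith", "C. Lee"],
    "birth_date": ["1980-03-14", "1975-07-02", "1992-11-30"],
    "gender":     ["F", "M", "F"],
    "zip":        ["02139", "94103", "10027"],
})

# "Anonymized" records with names removed but quasi-identifiers intact.
medical = pd.DataFrame({
    "birth_date": ["1980-03-14", "1992-11-30"],
    "gender":     ["F", "F"],
    "zip":        ["02139", "10027"],
    "diagnosis":  ["asthma", "diabetes"],
})

# The join re-attaches names to the supposedly anonymous records.
reidentified = medical.merge(voters, on=["birth_date", "gender", "zip"])
print(reidentified[["name", "diagnosis"]])
```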

5. Compete by promoting, not infringing, civil rights

The privacy debate is central to the battle between tech superpowers China and the United States. The common but simplistic view of machine learning is that the more data, the more accurate the algorithm. China’s growing AI prowess benefits from vast, unfettered information collection on 1.4 billion residents, raising doubts about whether a country with stricter privacy safeguards can amass enough data to compete.

But China’s advantage comes at a huge price, including gross human rights abuses, such as the deep surveillance of the Uighur Muslim minority. Omnipresent cameras tied to facial recognition software help track residents, for instance, and analysis of their social relationships is used to assess their risk to the state.

Chinese citizens voluntarily give up privacy far more freely than Americans do, according to Taiwanese-American AI expert and entrepreneur Kai-Fu Lee, who leads the China-based VC firm Sinovation Ventures. “People in China are more accepting of having their faces, voices, and shopping choices captured and digitized,” he writes in his 2018 book AI Superpowers: China, Silicon Valley, and the New World Order.




That may be changing. The extensive data collection by viral Chinese face-swapping app Zao provoked outrage not only in the West, but in China as well, forcing Zao to update its policy.

And the country with the most data doesn’t automatically win, anyway. “This is more of a race for human capital than it is for any particular data source,” says Sellitto of Stanford’s HAI. While protecting privacy rights may slightly constrain data collection, it helps attract talent.

The United States has the largest share of prominent AI researchers, and most of them are foreign born, according to a study by the Paulson Institute. The biggest threat to America’s AI leadership may not be China’s mass of data or the talent developed in other countries, but newly restrictive immigration policies that make it harder for that talent to migrate to the U.S. The Partnership on AI, a coalition of businesses and nonprofits, says that a prohibitive approach to immigration hurts AI development everywhere.

“In the long run, valuing civil liberties is going to attract the best talent to America by the most innovative people in the world,” says Khanna. “It allows for freedom of creativity and entrepreneurship in ways that authoritarian societies don’t.”


ABOUT THE AUTHOR

Sean Captain is a business, technology, and science journalist based in North Carolina. Follow him on Twitter @seancaptain.

