
A skeptic’s guide to thinking about AI

From marketing hype to fuzzy ethics.


By Katharine Schwab

This week, at the research institute AI Now’s annual symposium, experts debated some of the most critical issues in society through the lens of AI. The event brought together law and politics professors, lawyers, advocates, and writers to discuss how we as a community can ensure that the technology doesn’t destabilize justice and equity.

For the audience, the discussions offered a compelling introduction to the false claims and ethical dilemmas that surround AI right now. It was a valuable primer for anyone, from designers who are starting to work with machine learning to users who simply have questions about the way their data is being used in society.

Here are four insights about how all of us, from developers to designers to users, can see more clearly through the hype. They cast a skeptical eye on AI’s true abilities, its efficiency-oriented value systems, and the way technology companies are approaching ethics.


AI is not neutral

Virginia Eubanks, associate professor of political science at SUNY Albany, studies how algorithms affect the way welfare is distributed in the U.S. to people living below the poverty line. In her book Automating Inequality, Eubanks explores how algorithms are brought in to decide who is worthy of receiving benefits and who is not.

While they provide a veneer of neutrality and efficiency, these tools are built on what she calls the “deep social programming of the United States”: discrimination against poor people, racial discrimination, and gendered discrimination.

“Particularly in my work in public services, these tools get integrated under the wire. We see them as being administrative changes, not as consequential political decisions,” she says. “We have to reject that they’re just creating efficiencies . . . It buys into this idea that there’s not enough for everyone. But we live in a world of abundance, and there is enough for everyone.”

In other words, AI might sound “efficient” on the surface–after all, machines are supposed to be impartial–but in practice, it’s anything but. Chances are, it’s simply faster at making decisions that are rife with the same systemic injustices. Beware of any claim that AI is neutral.


“AI” usually relies on a lot of low-paid human labor

If someone claims their product, service, or app is using AI, don’t necessarily believe it. AI is often used as a marketing tool today–and obscures the humans who are really doing the work.

“So much of what passes for automation isn’t really automation,” says writer and documentarian Astra Taylor. She describes a moment when she was waiting to pick up her lunch at a cafe, and another customer walked in, awestruck, wondering aloud how the app knew that his order was ready 20 minutes early. The woman behind the counter just looked at him and said, “I just sent you a message.”

“He was so convinced that it was a robot,” Taylor says. “He couldn’t see the human labor right in front of his eyes.”

She calls this process fauxtomation: “Fauxtomation renders invisible human labor to make computers seem smarter than they are.”


Don’t just talk about ethics–think about human rights

“Ethics” has become the catchall term for thinking about how algorithms and AI are going to impact society. But for Philip Alston, a law professor at NYU who is currently serving as the UN Human Rights Council’s Special Rapporteur on extreme poverty and human rights, the term is too fuzzy.

“In the AI area we’re well accustomed to talking about inequality, the impact of automation, the gig economy,” Alston says. “But I don’t think the human rights dimension comes in very often. One of the problems is that there’s neglect on both sides. The AI people are not focused on human rights. There’s a tendency to talk about ethics, which is undefined and unaccountable, conveniently. And on the human rights side, it’s out of our range.”


In a 2017 report, Alston documented his travels across the U.S. studying extreme poverty. He questions whether policymakers are giving enough thought to how machine learning technology impacts the most vulnerable people in the country, and whether it violates human rights. “It is extremely important for an audience interested in AI to recognize that when we take a social welfare system and . . . put on top of it ways to make it more efficient, what we’re doing is doubling down on injustices,” he says.

Despite rhetoric that claims AI will solve a host of human ills, we have to be careful it doesn’t enable or exacerbate them first.


We need to hold government and corporations accountable, too

While there’s a lot of talk about how to hold AI technology accountable, Sherrilyn Ifill, the president and director-counsel of the NAACP Legal Defense and Educational Fund, reminds us that government and corporations need to be held accountable as well.

“The government relies on corporations to develop technology,” she says. “Once it’s unleashed, it’s hard to put back in the box.”

Ifill spoke primarily about the increasing prevalence of facial recognition algorithms, which are being used by police departments to unfairly target people of color. By deploying the technology thoughtlessly and acting as if it were “just another consumer of a corporation,” she argues, the government has stopped acting like government at all.

“[The government] is supposed to hold the public trust,” Ifill says. “We’re seeing the government falling down on the job.”


If you’re designing with AI, ask yourself these questions

Eubanks says she is often asked for a “five-point plan to build better tech.”

While she doesn’t think that tech is the answer to the problem of policy (instead, “we need to move toward political systems that act like universal floors, not moral thermometers”), there are a few questions she always asks designers to think about as they’re building technology.

  1. Does this tool increase the self-determination and dignity of the people it targets?
  2. If it were aimed at someone other than poor people, would it be acceptable?

While these questions are tailored toward Eubanks’s focus on welfare distribution algorithms, the first, in particular, is a question that every designer should be asking of their work. When you’re trying to help people using technology, you need to ensure, first and foremost, that your tool is going to affirm the self-determination and dignity of your users.


ABOUT THE AUTHOR

Katharine Schwab is the deputy editor of Fast Company’s technology section. Email her at kschwab@fastcompany.com and follow her on Twitter @kschwabable.

