CO.DESIGN

7 problems with Trump’s “American AI” Initiative

The president has a plan for AI in America. I spoke to four researchers about what it gets right–or wrong.

[Source Image: StudioM1/iStock]

By Katharine Schwab | 8 minute read

On Monday, President Donald Trump signed an executive order on the “American AI Initiative,” a set of sweeping guidelines aimed at increasing the United States’ global competitiveness in the cutting-edge technology.

The policy has five key elements, detailed on the website of the White House Office of Science and Technology Policy: investing in AI R&D, providing more resources (though not money) to experts, setting ethical standards for AI development and use, prioritizing training for people who could lose their jobs to automation, and working with other countries–so long as AI “is developed in a manner consistent with our nation’s values and interests.”

Fast Company reached out to experts who work on AI to get their first reactions to these vague–but revealing–elements of the initiative. Here’s what they had to say.

More resources for AI projects could be good–but only if they’re shared responsibly

One element of Trump’s initiative calls for federal agencies to open up their data, algorithms, and computing power to AI experts in industry and academia.

For Janelle Shane, research scientist and creator of the AI Weirdness blog, that could be a good thing. “Giving researchers more resources? That’s a nice thing, especially if it gets released to students working in the area, or people who have a smallish project they want to try,” she says. “I’ve seen a lot of people, hobbyists, artists, or students who have projects they want to do and they can only make some progress because they don’t have the computing power, or they sign up for a cloud computing service and don’t realize how expensive it is.”

But for Nikki Stevens, a researcher at Arizona State University who studies software engineering ethics, opening up datasets for use can have negative consequences.

“I’m concerned about giving researchers access to federal datasets and algorithms,” she says. “We know that datasets and algorithms can be biased, so offering these up uncritically for AI research will perpetuate that bias.”

Ethical standards are an empty gesture

A key element of Trump’s proposal includes setting technical standards for AI development and use to ensure that the technology is “reliable, robust, trustworthy, secure, portable, and interoperable.” While the executive order doesn’t use the word “ethics” specifically, standards that ensure the trustworthiness and security of technology are clearly related to and influenced by ethical considerations. But as Stevens says, that doesn’t mean much:

“The inclusion of ethical standards as a main priority is an important, but empty, gesture. Government bodies that have a history of disregard for citizens’ rights are not the best choice for drafting ethical codes. I would look for this initiative to include a racially diverse group of academic, religious, and community leaders. I would especially hope that the administration would include members of groups that are most impacted by unjust AI: African American, Indigenous, disabled, and immigrant communities.”

Instead, Morgan Klaus Scheuerman, a researcher at the University of Colorado Boulder who studies gender recognition algorithms, believes that the government should work toward actual policies to hold agencies accountable for their AI development, not just technical guidelines:

The current proposed guidelines of reliable, robust, trustworthy, secure, portable, and interoperable AI systems certainly address some of the expected functionality of AI systems, but concretely legislating the way these systems can be used against citizens should also be a priority for policymakers. We should also be focused on the social and ethical standards we expect from research and development. Right now, academic researchers are self-regulating by following a still rather fuzzy set of guidelines in the FAT (Fairness, Accountability, Transparency) Framework. However, these are very hard to implement in practice, and the standards for implementation vary wildly. Thus, I hope setting AI governance standards includes the creation of legal policies, not just technical standards.

Its approach to AI ethics is all wrong

Another element of the technical standards proposal is “Setting AI Governance Standards,” under which “Federal agencies will foster public trust in AI systems by establishing guidance for AI development and use across different types of technology and industrial sectors.” In the same section, the order states that the National Institute of Standards and Technology (NIST), a government agency, will “lead the development of appropriate technical standards for reliable, robust, trustworthy, secure, portable, and interoperable AI systems.”

But for Os Keyes, a University of Washington researcher who focuses on the impact of data science and AI on trans people, that’s a big problem:

“For an organization to produce ethical guidelines or ethical regulations, it needs two things: first, expertise, and second, a degree of independence and power from organizations that have their own interests in what algorithmic systems they’re allowed to make. In terms of ethics expertise, NIST really doesn’t have any. They’ve been engaged in [testing and evaluating] AI systems, like facial recognition, for literally decades. But their work rarely considers the ethical considerations of what’s being done. I’ve never seen any indication they have ethicists on staff or any indication they intend to have one.

“NIST isn’t independent…. They’re funded [in part] by the FBI. NIST isn’t just a government standards organization–it’s part of the Department of Commerce. Its job is to create standards with an eye to how they can enable the American free market and U.S. companies to flourish.

“So in the case of AI, what people are worried about is two things: the first is that it will enable an authoritarian government dystopia, and the second is it will enable a corporate surveillance and monetization dystopia. To give the job of preventing those things to NIST is to give it to an organization which takes funding from government law enforcement agencies that promote control and authoritarianism, and an organization whose remit is to promote and boost the degree to which monetization happens and to which corporate entities and private sector concerns play a role in American society.

“I think the result is going to be something that won’t constrain government from doing whatever it wants… and it’s not going to constrain other entities from doing whatever they want.”

After this story was published, NIST clarified that it “collaborates with other agencies and receives some funding from them for specific programs and projects but is independent of those agencies.” The executive order explicitly charges NIST with working on technical standards and not matters of ethics or morality. Of course, the way AI works technically is fundamentally tied to questions of ethics, given that these algorithms often use biased datasets.

Who determines what America’s “values and interests” are?

Trump’s plan also advances the idea that AI research should be a global project, with different countries cooperating to advance the good of everyone. But there’s a caveat: The initiative’s announcement states that the president is open to this international cooperation so long as development is “consistent with our Nation’s values and interests.” For Shane, that raises a serious question:

“There’s already tension now between AI researchers in industry or academia working on projects that may or may not be used for defense, or surveillance, or used to target civilians with drones from above. When we see things about American values and interests, under ‘international outreach,’ the concern is: Who gets to determine what those are? Will it be done in a way that represents all Americans, and not just some subset of them? Will it be done in a way that researchers will be happy to work on these problems and won’t be conflicted about how their research might be used?”

We shouldn’t approach automation with fear

Another element of Trump’s American AI Initiative tackles retraining the U.S. population as jobs are automated. But Stevens points out that the way Trump frames his solution–educational programs and training sessions–is ultimately harmful:

“The approach to [automation] is fear-based. Framing AI as a technology that will take people’s jobs is a disempowering and deterministic narrative. As [the government] moves to operationalize this, they have the freedom to position this instead as ‘how do we limit AI to protect citizen jobs’ rather than assuming that vast swaths of the working population will be displaced.”

Just because something can be automated doesn’t mean it should be

The last element of the initiative states that federal agencies will prioritize investment in AI technologies in their R&D budgets. But Keyes sees this directive as ultimately detrimental:

“It worries me to hear the federal government, which controls housing, medical care for a large number of Native Americans, law enforcement regulation, and boring stuff like highway standards, but also important stuff like Social Security payments, say that their priority in this kind of research is removing human factors and humanity from how decisions are made.

“You’re going to get a replication of a lot of the ethical issues we’ve seen with AI, and you’re going to get this attitude that if something exists or could be replaced with an algorithm, that means that it should be. That’s dangerous to me because, amongst other things, that’s often not the case. And it’s something where doing it leads to terrifying errors.

“If you say that Housing and Human Services should have its budget prioritize AI investments, they [could] come up with an algorithm to determine where houses should be built or who should get houses that replaces human beings. You get two problems: the factors that this algorithm can factor in are fundamentally things that can be quantified, which will constrain and change how those decisions are made. And the second thing is that we’re told [AI is] so much faster, it can make a million decisions in an hour. But even if it’s only wrong 1% of the time, with a million decisions, that’s 10,000 errors that need to be appealed somewhere and addressed and handled.”

Immigration has to be part of the AI conversation

Finally, one of the biggest problems with the American AI Initiative is what it doesn’t address. For Shane, keeping America at the cutting edge of AI is dependent on immigration policy, and convincing the world’s brightest minds to come and stay in the country:

“If you talk to people who are actually in the field doing this research, one of the major things they say is that the U.S. needs to do better on immigration policies and not make things so miserable to come over here and study, or stay here and work. In recent years our immigration policies have become so hostile to these students that it’s turning people away from studying in the U.S. or coming to the U.S. for conferences. Conferences [are] moving to different countries. This executive order doesn’t address that. It’s not surprising, but that would be an essential part of any meaningful move to make the U.S. more of a leader in AI or in any science area.”

ABOUT THE AUTHOR

Katharine Schwab is the deputy editor of Fast Company’s technology section. Email her at kschwab@fastcompany.com and follow her on Twitter @kschwabable.

