

In wake of Project Maven backlash, Google unveils new AI policies

[Photo: Noah_Loverbear/Wikimedia Commons]

By Steven Melendez · 1 minute read

Today Google unveiled a new set of principles guiding its approach to artificial intelligence, including a pledge not to build AI weapons, “technologies that gather or use information for surveillance violating internationally accepted norms” or ones “whose purpose contravenes widely accepted principles of international law and human rights.”

The statement comes amid employee discontent over Google’s involvement in Project Maven, a controversial Pentagon AI program that seeks, among other things, to use machine learning to quickly detect and categorize objects in images captured by drones. After several employees quit and thousands of others signed a petition against the program, Google said it will end its participation when its current contract expires next year. Other tech companies have been close-mouthed about their roles in the program.

In a blog post, Google CEO Sundar Pichai said the company won’t stop working with the military entirely: It may still work with the armed forces in areas including cybersecurity, recruitment and training, veterans’ healthcare, and search and rescue. Google is also widely seen as a potential contender for a massive contract to move Defense Department systems to cloud servers.

“These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe,” he wrote.


In general, the company says it will avoid using AI in ways that “cause or are likely to cause overall harm” or that create or reinforce “unfair bias.” Researchers have warned that AI systems can pick up racial, gender, and other biases from society at large, depending on the training data and methods used to teach them to process information. Google also vowed to incorporate “strong safety and security practices” and “privacy design principles,” and to ensure its AI systems are “accountable to people.”


ABOUT THE AUTHOR

Steven Melendez is an independent journalist living in New Orleans.
