“Holy F**K”: When Facial Recognition Algorithms Go Wrong

[Screenshot: via Google Photos]

Google’s new Photos service, which uses machine learning to automatically tag photos, made a huge miscalculation on Monday when it automatically tagged two African-Americans as “gorillas.” Developers at Google immediately apologized for the gaffe and then worked to fix the app’s database.

The user in question, computer programmer Jacky Alciné, reported the problem via Twitter after he found that Google Photos had created an album labeled “gorillas” composed exclusively of photos of him and his African-American friend. In less than two hours, Google chief social architect Yonatan Zunger had responded to Alciné’s tweets and begun investigating the issue.

Facial and image recognition technology, which involves training computer programs to recognize faces and objects from databases of labeled images, has caused problems for other services before. Flickr’s image-tagging system, for example, recently labeled an African-American man and a white woman wearing face paint as “apes” and “animals.”

One of the biggest problems companies like Yahoo (Flickr’s parent) and Google face is that image recognition systems are only as good as the training data and algorithms behind them, both of which are still in the early stages of evolution. If a machine learning system misclassifies a house as a retail store, or a dog as a cat, it’s merely a failure of technology. When people are involved, however, it’s another story. To Google’s credit, the company quickly moved both to diagnose what caused the machine-learning foul-up and to stop it from happening again.
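To make that point concrete, here is a minimal sketch, not Google’s or Flickr’s actual pipeline, of how an automatic photo tagger assigns labels using a publicly available classifier (a pretrained ResNet from torchvision). Every tag it can produce comes from the classes and examples in its training set, so gaps or biases in that data surface directly in the output. The photo path in the usage line is hypothetical.

    # Illustrative auto-tagging sketch using a classifier pretrained on ImageNet.
    # The model can only output the 1,000 labels it was trained on, and its
    # accuracy depends entirely on the examples behind those labels.
    import torch
    from torchvision import models, transforms
    from PIL import Image

    weights = models.ResNet50_Weights.IMAGENET1K_V2
    model = models.resnet50(weights=weights)
    model.eval()

    preprocess = weights.transforms()  # resizing/normalization the model expects

    def tag_photo(path: str, top_k: int = 3) -> list[tuple[str, float]]:
        """Return the model's top-k (label, confidence) guesses for one photo."""
        image = Image.open(path).convert("RGB")
        batch = preprocess(image).unsqueeze(0)           # shape: (1, 3, H, W)
        with torch.no_grad():
            probs = torch.softmax(model(batch)[0], dim=0)
        conf, idx = probs.topk(top_k)
        labels = weights.meta["categories"]              # ImageNet class names
        return [(labels[int(i)], float(c)) for i, c in zip(idx, conf)]

    # Hypothetical usage: print the tags the system would auto-apply.
    # print(tag_photo("vacation_photo.jpg"))

The sketch returns label-and-confidence pairs rather than a single hard tag; production systems typically apply a confidence threshold and a blocklist of sensitive labels before surfacing any tag to users.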

[via Ars Technica]
