
The world’s most-advanced AI can’t tell what’s in these photos. Can you?

Researchers from UC Berkeley, the University of Washington, and the University of Chicago are building the ultimate archive of photos that confuse AI.


[Photo: Harshil Gudka/Unsplash, Mathew Schwartz/Unsplash]

By Mark Wilson | 4 minute read

Is that a manhole cover or dragonfly sitting on a table? Is that a green iguana or just a squirrel running with some nuts? Is that a unicycle or a crocodile crossing the road? To humans, the answer is obvious. But the best image-identifying artificial intelligence in the world hasn’t a clue.

That’s because each of these images was carefully selected to fool state-of-the-art image recognition technology. They’re part of a collection of 7,000 images curated by researchers from UC Berkeley, the University of Washington, and the University of Chicago.

“Current [machine learning] models are brittle,” says Dan Hendrycks, a PhD student in computer science at UC Berkeley who was an author on the paper. “While other research uses artificial data to study robustness, we show that models make egregious and highly consistent mistakes on real data [with real photos].”

Unicycle [Image: courtesy Dan Hendrycks]

To understand why that matters, let’s rewind. Over the past few years, image recognition has gotten really good, really fast. That’s largely thanks to an ever-growing open data set called ImageNet, created at Stanford University. The collection now consists of over 14 million photos, each labeled with identifiers like “tree” and “sky.” This massive database is a training set, or a reference for new AI systems to learn how to identify images, much like a toddler can reference a picture book to slowly learn new words. AI systems trained on ImageNet—which you’d probably know best from Microsoft services like Bing—have gotten extremely accurate, able to identify objects with accuracy as high as 95%. That’s actually better than humans performing the same job!
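To make that concrete, here is a minimal sketch of what an ImageNet-trained classifier does with a photo, using an off-the-shelf torchvision model. It is an illustration only: the model choice and the input file name “squirrel.jpg” are assumptions, not the specific systems or photos the researchers used.

```python
# Sketch: label a photo with a stock classifier pretrained on ImageNet.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2     # pretrained on ImageNet
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()                   # resize, crop, normalize
labels = weights.meta["categories"]                 # the 1,000 ImageNet class names

image = Image.open("squirrel.jpg").convert("RGB")   # hypothetical input photo
with torch.no_grad():
    probs = torch.softmax(model(preprocess(image).unsqueeze(0)), dim=1)

confidence, class_idx = probs.max(dim=1)
print(labels[class_idx.item()], f"{confidence.item():.1%}")
```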

But closing that last 5% accuracy gap is an extremely hard problem. Since 2017, computers haven’t gotten meaningfully more accurate at identifying images. That’s why researchers are exploring ways to understand the images that computers can’t seem to parse. The team behind the new collection scoured Flickr by hand, looking for photos they thought might confuse software. They tested each one against AI models trained on ImageNet, and if a photo proved confusing, it was added to their new data set—which they dubbed ImageNet-A. It’s basically the anti-ImageNet. The 7,000 photos in the collection drop AI accuracy from over 90% to a mere 2%. Yes, you read that right: 98 times out of 100, the best vision AI models in the world will be confused by these photos.
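Conceptually, the curation step works like an adversarial filter: run a candidate photo through an ImageNet-trained model and keep it only if the model’s top guess disagrees with the human label. The sketch below illustrates that idea with a stock pretrained model and a hypothetical folder of hand-gathered candidates; it is not the researchers’ actual pipeline.

```python
# Sketch: keep only the candidate photos that an ImageNet-trained model misclassifies.
import os
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

def model_is_fooled(path, true_label):
    """Return True if the model's top-1 guess disagrees with the human label."""
    image = Image.open(path).convert("RGB")
    with torch.no_grad():
        logits = model(preprocess(image).unsqueeze(0))
    return labels[logits.argmax(dim=1).item()] != true_label

# "candidates/" is an assumed layout: hand-gathered Flickr photos sorted into
# folders named after their true class, e.g. candidates/squirrel/1234.jpg
hard_examples = []
for true_label in os.listdir("candidates"):
    for fname in os.listdir(os.path.join("candidates", true_label)):
        path = os.path.join("candidates", true_label, fname)
        if model_is_fooled(path, true_label):
            hard_examples.append((path, true_label))

print(f"kept {len(hard_examples)} confusing photos")
```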

The question of why AI systems don’t understand these images is complex. Teaching AI today tends to involve throwing a lot of data into a black box—in other words, you can only judge its accuracy based on its final conclusion, not the process it took to get there. If that black box sees enough variations of a tree that it begins identifying new trees in new photos, we consider it successful. (This repetitious task is known as machine learning.) The problem is, we don’t know why the AI has decided that a tree is a tree. Is it the shape? Color? Context? Texture? Is it because trees have some unifying core geometry that humans have never recognized? We don’t know. AI is judged by its answers, not its reasoning. That means we can get all sorts of unexpected bias from AI, which poses a major problem when AI systems are being used in technology like autonomous cars or fields like criminal justice. It also means that image recognition systems aren’t intelligent in any real way; they’re more like match game savants.

Sea Lion [Image: courtesy Dan Hendrycks]

Building ImageNet-A is about tricking AI in order to discover why certain images confuse these systems. For example, when an AI mistakes one of the images—of a squirrel—for a sea lion, the lack of deeper intelligence and reasoning starts to become clear. The system is relying on the animals’ texture alone, not considering their relative size or shape. “Photos which require understanding an object’s shape seem most likely to trick a model,” says Hendrycks.

With ImageNet-A, researchers have successfully found 7,000 blind spots in vision AI. Does that mean these images can just go into a new training set and fix their shortcomings? Probably not. “As there is much diversity and complexity in the real world, training on these images would likely not teach models how to robustly manage the full range of visual inputs,” says Hendrycks. “Collecting and labeling, say, 1 trillion images may resolve some model blind spots, but patching each previous blind spot will likely fail when novel scenarios occur and when the world changes.”

In other words, simply adding more and more photos to current machine learning data sets cannot fix the core shortcomings of the models’ logic. There will always be images a computer hasn’t seen enough of before to identify accurately. So what can researchers do to close that lingering 5% gap? Hendrycks says they need to develop new methodologies, outside the bounds of modern-day machine learning, to create more sophisticated AI systems. Or, you know, they could not do that—and let us humans keep our smug superiority over machines for just a little bit longer.


ABOUT THE AUTHOR

Mark Wilson is the Global Design Editor at Fast Company. He has written about design, technology, and culture for almost 15 years.
