I Hate Pictures Of Food. Why Doesn’t Facebook Understand That Yet?

Algorithms and machine learning aside, the social network still has a long way to go when it comes to anticipating users’ image preferences—and aversions.

[Photo: Unsplash user Jorge Zapata]

This confession is long overdue, but here goes: I hide every single picture of food that shows up in my Facebook news feed—even yours. Your steamy steak shots. Your proud Thanksgiving panoramics. Your ill-composed close-ups of whatever that flaky dessert is supposed to be. I’ve been hiding them ever since I joined Facebook in 2009, and please understand it’s nothing personal. You see, thanks to some strange idiosyncrasy, I can’t stand looking at pictures of food. They gross me out on a visceral level that borders on pathological.


Cursed with this aversion, I naturally try to avoid the triggering stimuli as much as possible, but even after almost eight years of willfully hiding food pictures on Facebook, I’m still assaulted with a fresh barrage each day. Restaurant promotions. Ads for Seamless. My cousin’s homemade quiche. Without fail, they flood my news feed, staring out at me in all their unappetizing glory.

Why is this? For all the talk of Facebook’s powerful algorithm and its ability to learn and adapt to our personal tastes, why hasn’t the world’s most advanced social network figured out my basic disdain for food porn? Clearly Facebook knows I’m hiding the pictures, but it doesn’t connect the dots. For whatever reason, the algorithm doesn’t see food as the common thread.

[Photo: Unsplash user Erol Ahmed]

Accounting For Taste

I set out to investigate this curious flaw by speaking with people at Facebook who are familiar with its news feed and machine-learning efforts. As it turns out, Facebook does have the basic ability to understand what’s happening inside a photograph. The company uses advanced “computer vision” technology that can identify and segment objects and even entire scenes. Is that photo you just “liked” a scenic view of the Space Needle or a half-eaten hot dog? Facebook’s AI robots are starting to figure it out. And according to Facebook spokesman Ari Entin, they’re getting smarter all the time. “From a fundamental tech perspective, certain capabilities are there,” Entin tells me.

The problem is, the technology isn’t being applied in a way that would protect me from the tyranny of graphic mealies. As we’ve written before at Fast Company, computer vision is already being used in exciting ways at Facebook. Notably, the technology is improving the accuracy of alt-text features that describe images for people who are visually impaired. Entin says computer vision also enhances Facebook’s search engine and helps keep the site free of objectionable content like violent images or pornography. (Facebook recently announced it was using image-recognition tech to combat revenge porn.) He says Facebook developers are just beginning to unlock the potential uses for the technology.

But food pictures in the news feed? That’s not on its radar. To understand why, you have to know a little about how Facebook’s algorithm works. What you see in your news feed is determined by thousands of signals based on your Facebook behavior—who your friends are, what pages you like, what you comment on—and there’s a hierarchy to this ranking system. If Facebook knows I tend to comment on my cousin’s photos, it’s going to show me more of them. The system gets more complex as Facebook collects more data on my habits and is able to draw from more signals to further personalize my feed.

Computer vision is not part of this equation—at least not at the moment. When it comes to learning what you like, Facebook’s algorithm cares about who shares a photo, not what’s in it. This is why my hiding food pictures all these years hasn’t made a bit of difference. Let’s say my cousin happens to be a frequent traveler who posts beautiful photos of far-off locales. I’d probably comment on lots of her pictures, because I like travel. So even though I may hide a few shots of that homemade quiche (because gross), those signals aren’t strong enough to counteract the hundreds of travel photos I’ve already engaged with. Facebook just knows I like my cousin’s photos.
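The dynamic described above—per-author engagement drowning out photo-level "hide" feedback—can be illustrated with a toy sketch. To be clear, this is not Facebook's actual ranking system; the function, the weights, and the numbers are all invented for illustration.

```python
# Toy illustration (not Facebook's real algorithm) of why hiding a few
# photos barely dents a score dominated by per-author engagement.

def rank_score(author_engagements: int, hides: int) -> float:
    """Score a post mostly by how often the viewer engages with its author.

    Hides are counted against the author overall, not against what the
    hidden photos depicted -- so the 'why' of each hide is lost.
    """
    return author_engagements / (author_engagements + hides + 1)

# A cousin whose travel photos I comment on constantly...
travel_heavy = rank_score(author_engagements=300, hides=5)

# ...still ranks highly even after I hide a pile of her quiche pictures.
food_hidden = rank_score(author_engagements=300, hides=25)

print(travel_heavy)  # ~0.98
print(food_hidden)   # ~0.92 -- her food photos keep coming
```

Because the hides are attributed to the author rather than to the food in the frame, the handful of negative signals never outweighs the hundreds of positive ones.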


The same holds true for pages I’m connected with. Whenever the New York Times pollutes my feed with one of its recipe articles, I naturally get disgusted and hide it. But that’s just a small percentage of what the New York Times shows me. Sometimes I may even click the “see less from” button, but all that does is tell Facebook I want to see less from the New York Times, which really isn’t true. So I end up sending a false signal and making the news feed less useful.

This is where computer vision could conceivably play a big role, by adding a contextual layer that bridges the divide between disinterest and disgust. After all, Facebook is constantly telling us that the goal of its news feed is to “show people the stories that are most relevant to them.” And this isn’t just about food. Imagine the possibilities if Facebook’s algorithm knew you didn’t want to see pictures of scary dogs, or babies, or people who look like your ex.
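What might that contextual layer look like? Here is a minimal sketch, again entirely hypothetical: it assumes the computer-vision system can tag each photo with a concept label ("food", "travel", and so on), and it keeps a per-concept tally of the viewer's hides instead of a per-author one. The names and the threshold are assumptions, not anything Facebook has described.

```python
# Hypothetical sketch of a concept-aware hide filter: track *what* a viewer
# hides, not just *whose* posts they hide. The concept labels are assumed
# to come from an image classifier; none of this reflects Facebook's code.

from collections import Counter

hide_history = Counter()  # concept label -> times this viewer hid it

def record_hide(concept: str) -> None:
    """Log that the viewer hid a photo tagged with this concept."""
    hide_history[concept] += 1

def should_filter(photo_concept: str, threshold: int = 3) -> bool:
    """Suppress a photo once its concept has been hidden often enough."""
    return hide_history[photo_concept] >= threshold

# Eight years of hiding quiche, compressed:
for _ in range(4):
    record_hide("food")

print(should_filter("food"))    # food photos get suppressed
print(should_filter("travel"))  # travel photos still come through
```

The design point is that the hide signal is attached to the photo's content rather than its author, so hiding the quiche never penalizes the cousin's travel shots.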

[Photo: Unsplash user Brooke Lark]

Disgust Is Irrational

Granted, this is a tall order. Social scientists say the emotion of disgust first evolved as a simple coping mechanism to prevent us from ingesting things that would kill us, but in modern humans, it manifests in ways that are often deeply personal, complex, and—let’s be honest—downright irrational. My aversion to food pictures began with a childhood drenched in messy red-sauce Italian-American dishes that taught me to associate food with uncleanliness. That progressed to a reflexive instinct to flip over copies of Gourmet magazine at the dentist’s office or to change the channel whenever those cooking segments came on Good Morning America. To this day, I can’t sit through an episode of Anthony Bourdain: Parts Unknown.

For years, I was able to quarantine myself in a self-imposed filter bubble, but not anymore. By the time we entered the age of social media, I knew I had lost the battle against gratuitous culinary documentation. Food pictures on Facebook, Instagram, and elsewhere are as commonplace as selfies and pet shots. They’ve earned their place among established societal norms, whether I like them or not. My only hope is smarter, better automatic filters.

I asked Facebook if it has any plans to incorporate computer vision into the news feed’s signaling process, but the company is not keen on talking about its future plans. A spokeswoman simply said they wouldn’t rule it out. Whatever the case, it’s clear computer vision is going to be a big part of Facebook’s future. Just this week at Facebook’s annual F8 developers conference, CTO Mike Schroepfer stood on stage and touted a new Facebook technology called Mask R-CNN, which can detect moving objects in photos. Another exec at the conference said Facebook is even developing computer vision tech that can analyze video.

For now, I’m not holding my breath that any of this will solve my problem anytime soon. Maybe that’s because learning what we like is more complex than learning what we “like.”


That’s not to downplay the inconceivably difficult task of managing a product used by 1.6 billion people. Facebook deserves credit for being able to make some sense of the chaos that goes with all that content. Still, whenever I hear talk of its enormous power, or the scary amount of data it knows about us, or the bold promise of artificial intelligence to understand our emotions, I always go back to my cousin’s homemade quiche. How can I not? It’s staring me right in the face.


About the author

Christopher Zara is a news editor for Fast Company and obsessed with media, technology, business, culture, and theater. Before coming to FastCo News, he was a deputy editor at International Business Times, a theater critic for Newsweek, and managing editor of Show Business magazine.