How To Take A Perfect Selfie (According To A Neural Network)

Using 2 million self-portraits culled from the web, Andrej Karpathy trained a neural network to classify the good from the bad and ugly.

Crop out your forehead, follow the rule of thirds, apply a filter, oh, and be a woman. These are the winning traits of a “good” selfie according to Andrej Karpathy, a Stanford computer science graduate student who trained a convolutional neural network to be an ace photo arbiter.


Karpathy used a network with roughly 140 million parameters and fed it millions of photos to arrive at his conclusions.

The experiment started by running a script to collect web images tagged with #selfie. Karpathy then narrowed that initial 5-million-image sample down to 2 million photographs that contained at least one face. To decide which ones counted as good versus bad, Karpathy ranked the number of positive responses—i.e., likes—relative to audience size—i.e., number of followers. Working in groups of 100 images at a time, he labeled the 50 that proportionally got the most likes as positive selfies and the 50 that proportionally got the fewest as negative selfies.
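The like-normalization above can be sketched in a few lines of Python. This is an illustrative reconstruction, not Karpathy's actual code: the record layout (`image_id`, `likes`, `followers`) and the `group_size` parameter are assumptions made for the example.

```python
# Hypothetical sketch of the labeling step: bucket images by audience
# size so likes are compared among accounts with similar follower
# counts, then label the top half of each bucket positive.

def label_selfies(records, group_size=100):
    """records: list of (image_id, likes, followers) tuples.
    Returns {image_id: 1 for positive, 0 for negative}."""
    # Sort by follower count so each group holds similar audience sizes.
    by_followers = sorted(records, key=lambda r: r[2])
    labels = {}
    for i in range(0, len(by_followers), group_size):
        group = by_followers[i:i + group_size]
        # Within a group, more likes means a better selfie.
        group.sort(key=lambda r: r[1], reverse=True)
        half = len(group) // 2
        for image_id, _, _ in group[:half]:
            labels[image_id] = 1  # positive selfie
        for image_id, _, _ in group[half:]:
            labels[image_id] = 0  # negative selfie
    return labels
```

The point of the grouping is fairness: an account with 10,000 followers will rack up more raw likes than one with 100, so only like counts among comparable audience sizes are ranked against each other.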

Karpathy looked at what the ConvNet deemed the top 100 out of 50,000 selfies and noticed a lot of similarities.

1. All of the top 100 selfies were of women. No boys allowed, apparently.

2. The majority of these images followed the classic rule of thirds with the face occupying the top 1/3 of each image.

3. The subjects typically cropped out their foreheads in the image.


4. Most of the subjects have long hair.

5. Most of the images are overexposed.

6. Filters—be they color or black-and-white—were frequently applied.

7. There’s often a border around the image.

Karpathy noted that the top 100 photos of males did not hew to all of these parameters: a good male selfie included the full head and shoulders, along with “a fancy hair style with slightly longer hair combed upwards.” The rules for lighting still applied.

As for bad selfies, Karpathy noted that the worst ones shared the following traits:


1. Dim lighting.

2. A group shot.

3. A head that occupies most of the frame.

As an experiment, he also had the ConvNet figure out the optimal crop for images, which is shown below.
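One common way to find an “optimal crop” with a trained scorer is to slide a window over the image and keep the crop the model rates highest. The sketch below illustrates that idea only; the `score_fn` callable stands in for the trained ConvNet (not shown), and the stride and window sizes are arbitrary choices for the example.

```python
# Illustrative brute-force crop search: try every window position on a
# grid and return the top-left corner of the highest-scoring crop.
# `image` is a 2D list of pixel values; `score_fn` is any function that
# maps a crop to a quality score (here, a stand-in for the ConvNet).

def best_crop(image, crop_h, crop_w, score_fn, stride=16):
    img_h, img_w = len(image), len(image[0])
    best_pos, best_score = None, float("-inf")
    for top in range(0, img_h - crop_h + 1, stride):
        for left in range(0, img_w - crop_w + 1, stride):
            crop = [row[left:left + crop_w]
                    for row in image[top:top + crop_h]]
            score = score_fn(crop)
            if score > best_score:
                best_score, best_pos = score, (top, left)
    return best_pos
```

With a brightness-summing `score_fn`, for instance, the search simply homes in on the brightest region of the image; swapping in a neural scorer turns the same loop into an automatic reframing tool.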

In conclusion, “a good portion of the variability between what makes a good or bad selfies can be explained by the style of the image, as opposed to the raw attractiveness of the person,” Karpathy writes. “Also, with some relief, it seems that the best selfies do not seem to be the ones that show the most skin. I was quite concerned for a moment there that my fancy 140-million ConvNet would turn out to be a simple amount-of-skin-texture-counter.”

As one person commenting on the blog writes: “I will be the boss of Tinder with this knowledge.”


Wield these lessons carefully, friends—there’s no excuse for crappy selfies now.

Read about the whole experiment on Karpathy’s blog.


About the author

Diana Budds is a New York–based writer covering design and the built environment.