Facebook engineers say the auto-enhance feature newly added to the service’s iOS app wouldn’t have been possible even a few years ago, when phones simply lacked the necessary processing power.
But the actual goal of the tool—cleaning up images so they look more like what the photographer sees and wants to capture and share—is the same challenge that last century’s masters of film photography would have faced in the darkroom, says Facebook Engineering director Brian Cabral.
“Certainly we were inspired by the masters of imaging,” he says. “They confronted sort of the same problems every photographer did—they’re trying to artistically express emotions or memories, whether it’s in a grand way like Ansel Adams did at Yosemite, or in a more personal way with your friends and family.”
Ultimately, the challenge is that cameras just don’t see the way people do. Our visual systems automatically adjust for different levels of brightness and shadow, along with relative colors, across a real-life, three-dimensional scene. If you’re looking at a group of people standing in the light in front of a darker background, for instance, you’ll be able to see and remember the whole scene.
“Your eye paints a picture,” says Cabral. “Your eye actually adapts to the relative brightness and compensates for it.”
But a piece of film or a digital sensor doesn’t have that level of smarts by itself.
“Digital sensors, in particular, are very linear: A certain number of photons come in, and they fire off a certain number of electrons,” says Cabral. “You have a nice linear range, but that’s not how you remember it; that’s not how the brain accumulated that image.”
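The standard way to bridge the gap Cabral describes, between a sensor’s linear response and the eye’s nonlinear one, is a gamma (or log) encoding step. A minimal sketch in Python, using the common sRGB-style exponent of roughly 1/2.2 (this illustrates the general technique, not Facebook’s actual pipeline):

```python
def gamma_encode(linear, gamma=2.2):
    """Map a linear sensor value in [0, 1] to a perceptual value.

    Raising to 1/gamma spends more of the output range on shadows,
    roughly matching how the eye responds to brightness.
    """
    return linear ** (1.0 / gamma)

# A patch reflecting 18% of the light (a photographer's "middle gray"
# card) lands near the middle of the encoded range, much as we see it:
print(round(gamma_encode(0.18), 3))
```

The point of the curve is exactly the mismatch Cabral names: the sensor records photons linearly, but the encoding redistributes those values to match how brightness is remembered.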
Cabral and his colleagues want Facebook users to be able to share images that look more like their memories, and the techniques the app uses—leveling out highlights and shadows and selectively smoothing out noise while sharpening edges—are comparable to what Adams and other giants of film did manually by controlling exposure in the developing process, he says.
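The “smoothing noise while sharpening edges” part of that list is usually done with an edge-preserving filter: average a pixel only with neighbors of similar brightness, so flat regions flatten further while real edges survive. A crude one-dimensional, bilateral-style sketch (hypothetical names and thresholds, for illustration only):

```python
def smooth_preserving_edges(pixels, radius=2, edge_threshold=30):
    """Average each pixel with nearby pixels of similar brightness only.

    Neighbors differing by more than edge_threshold are treated as
    lying across an edge and excluded, so noise is smoothed out
    while the edge itself stays crisp.
    """
    out = []
    for i, p in enumerate(pixels):
        lo, hi = max(0, i - radius), min(len(pixels), i + radius + 1)
        similar = [q for q in pixels[lo:hi] if abs(q - p) <= edge_threshold]
        out.append(sum(similar) / len(similar))
    return out

# A noisy dark region abutting a noisy bright one: the noise within
# each region is averaged away, but the jump between them is kept.
noisy_edge = [10, 12, 9, 11, 200, 198, 201, 199]
print(smooth_preserving_edges(noisy_edge))
```

A naive blur would drag the two regions toward each other and soften the edge; excluding dissimilar neighbors is what keeps the adjustment sensible in context.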
“When you look at the physical process you say, ‘Oh, they’re doing something really different,’ but the techniques are really analogous,” Cabral says.
For instance, while film photographers used physical filters to vary the light exposure, and thus the level of light or darkness, of different sections of a print, a digital tool like Facebook’s app can create the same effect mathematically.
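Digitally, that darkroom dodging and burning reduces to multiplying each region of the image by an exposure factor. A hypothetical sketch of the idea on a single row of 8-bit pixel values (the function and mask here are illustrative, not the app’s actual method):

```python
def apply_exposure_mask(pixels, mask):
    """Brighten or darken each pixel by its mask factor, like dodging
    (factor > 1, more exposure) or burning (factor < 1, less exposure)
    in the darkroom; results are clamped to the 8-bit range."""
    return [min(255, round(p * m)) for p, m in zip(pixels, mask)]

row = [40, 40, 200, 200]       # dark subject, bright background
mask = [1.5, 1.5, 0.8, 0.8]    # dodge the shadows, burn the highlights
print(apply_exposure_mask(row, mask))  # [60, 60, 160, 160]
```

Where a film printer would hold a card over part of the print to vary its exposure time, the mask plays the same role numerically, region by region.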
In the past hundred years or so, there have been formal scientific studies of human vision and color perception, but artists have developed and shared their intuitive understandings for hundreds of years, going back at least as far as Leonardo da Vinci, Cabral says. And, he says, members of the Facebook team working on the photo app have read both scientific papers and artists’ writings.
But translating even well-understood analog techniques to digital photography is by no means trivial, since the possibilities depend on the particulars of the hardware available, including camera sensors, processors, and screens or printers, and how they’re being used, says Cabral.
“We’re not only overcoming the input device—we’re overcoming the output device too, because the output device has limited dynamic range,” he says.
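One standard way to squeeze a scene’s wide luminance range onto a limited-range output device is a global tone-mapping curve such as the classic Reinhard operator, L / (1 + L), which leaves shadows nearly untouched and rolls highlights off smoothly. A sketch of that technique (not necessarily what Facebook’s app uses):

```python
def compress_range(luminance):
    """Map unbounded scene luminance into [0, 1) for a limited-range
    display: the Reinhard global operator L / (1 + L). Dim values pass
    through almost linearly; very bright values saturate toward 1."""
    return luminance / (1.0 + luminance)

# Dim values barely change; a value 100x over "white" still fits:
for L in (0.05, 1.0, 100.0):
    print(L, "->", round(compress_range(L), 3))
```

The asymmetry is the point: the input (the scene) can be arbitrarily bright, but the output device cannot, so the curve spends its limited range where the eye is most sensitive.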
And they’re relying on chips powerful enough to scan through images not just pixel-by-pixel but regionally and globally, putting together areas of the picture to make sure adjustments and noise corrections really make sense in context. Facebook’s also reacting to increased demand, as phone photography has gone from a novelty to a part of everyday life, with users snapping and sharing more photos every day.
“Five years ago I think it would have been hard to do this—not because we learned that much more about the masters in the last five years, it’s just that you have to have the right computers,” he says. “You have to have the right confluence of technology and need.”