It can turn a cityscape from day into night, or paint a green forest with the warm colors of fall. It can make your drab kitchen look like it’s straight out of an Ikea catalog, or add a sunset to just about anything.
And it can do it all automatically, without the need to learn Photoshop's complex tools. Dubbed Deep Photo Style Transfer and spotted by The Next Web today, it's the latest AI-driven image editing technology developed by Adobe and Cornell.
All you need to do is pick a source photo that you'd like to edit, then pick an inspirational photo with the stylistic qualities you'd like to copy. The Deep Photo Style Transfer algorithm combines the two with uncanny, photorealistic accuracy. In user tests, the new method beat out similar technologies 80% of the time. It's incredible, and in the age of fake news, it's frankly quite scary.
The technique leverages a rapidly popularized concept called style transfer. We've seen similar Van Gogh-style "filters" applied to photographs in smartphone apps like Prisma, a technique that depends on layers of analysis within the virtual brain of a neural network. Cornell and Adobe researchers did the same thing, but they built upon this earlier style transfer research by adding constraints, essentially another layer of logic that ensures the output is not just stylized but also photorealistic. (You can see the full equations at work in this paper.)
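For the curious, the core idea behind style transfer can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: in the real method the features come from a deep network such as VGG-19, while here they are plain NumPy arrays, and the function names are hypothetical.

```python
import numpy as np

def gram_matrix(features):
    """Correlations between feature channels; this is the 'style' representation.
    features: array of shape (channels, height * width)."""
    return features @ features.T / features.shape[1]

def style_content_loss(output_feats, content_feats, style_feats, style_weight=1e3):
    """Weighted sum of a content term (match the source photo's features)
    and a style term (match the reference photo's Gram matrix)."""
    content_loss = np.mean((output_feats - content_feats) ** 2)
    style_loss = np.mean((gram_matrix(output_feats) - gram_matrix(style_feats)) ** 2)
    # The Cornell/Adobe paper adds a third, photorealism regularization term
    # (built from a Matting Laplacian) that penalizes warped, painterly
    # distortions; it is omitted here for brevity.
    return content_loss + style_weight * style_loss
```

An optimizer then adjusts the output image to drive this loss down, which is why the result keeps the source photo's content while borrowing the reference photo's look.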
As a result, Deep Photo Style Transfer can accomplish some crazy feats, copying the time of day, weather, season, and even artistic stylization from one photo to another, and crucially, doing so with such high fidelity that it's convincing to your gut. In fact, I'd argue the most incredible photo in the paper is of the simplest of things: three apples, with their skin textures swapped among them. Such is the power of Deep Photo Style Transfer when a user narrows its focus down to just part of an image.
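That apple trick works because style statistics can be computed per region rather than over the whole image; the paper matches regions between the two photos using semantic segmentation. A minimal sketch of the idea, assuming simple boolean masks in place of real segmentation maps (all names here are illustrative):

```python
import numpy as np

def masked_gram(features, mask):
    """Gram matrix computed only over the pixels where mask is True,
    so each region gets its own style statistics.
    features: (channels, n_pixels); mask: boolean array of shape (n_pixels,)."""
    region = features[:, mask]
    if region.shape[1] == 0:  # empty region: no style statistics to match
        return np.zeros((features.shape[0], features.shape[0]))
    return region @ region.T / region.shape[1]

def regional_style_loss(output_feats, style_feats, mask_pairs):
    """Sum of per-region style losses; each pair of masks links a region in
    the output (e.g. one apple's skin) to its counterpart in the style photo."""
    return sum(
        np.mean((masked_gram(output_feats, m_out) - masked_gram(style_feats, m_sty)) ** 2)
        for m_out, m_sty in mask_pairs
    )
```

Restricting the loss this way is what lets one apple borrow another's skin without the transfer bleeding into the background.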
The new research isn't part of any Adobe products yet, but it plays to Adobe's new direction with artificial intelligence. Whereas present-day Photoshop allows highly skilled artists to master its tools to create all sorts of fake imagery that passes as real, the next era of Photoshop uses AI to be both more automated and more convincing. We're seeing Adobe train software to build websites, and to clone people's voices through its Project VoCo. And now, the company is training software to transform the most fundamental elements of a photo.
From a design perspective, Adobe's new approach makes sense. Why not leverage AI to enable users without specialized skill or training to do whatever they can imagine with a piece of media? Wouldn't that be the perfect piece of software for Adobe to make?
Perhaps. But it also means that, soon, anyone will be able to fake just about anything you see online. Imagine putting a politician's voice into Adobe's VoCo software and programming it to "confess" to treason or fabricate a scandal. Imagine how easy it will become to falsify supposed intelligence, as North Korea once faked its missile armament, but so realistically that no analysis could easily detect the forgery. In the age of the internet, misinformation is a political weapon. We've already seen what's happened at Facebook and its peers, and the challenges of battling fake news in a world where everyone holds a microphone. Imagine what happens when anyone can make it appear that anyone has done anything.
Of course, Adobe isn't intentionally developing some platform for malevolence. But like any other research group studying new technologies, it cannot distance itself from the consequences of its own work, especially in a digitized political climate, where discerning what is true and what is false feels more important than ever before.