What does an algorithm think a human body looks like?
Gaze long enough into Mario Klingemann’s work, and you’ll get your answer. The artist has used machine learning in his art for years, experimenting with nascent tools that let him work with data and algorithms the way a portraitist might mix colors on a palette. On his Twitter feed, he shares the results almost daily: ghostly faces from models trained on 19th-century photographs, or frilled, glitchy portraits from a dataset of paintings by Old Masters.
He compares the process to photography: instead of using a camera to capture images, he trains neural networks to create them.
I started retraining my Old Masters GAN with some semantic hair markers. The hair segmenter is not really good yet but I believe that there is already some more variation in the generated hairstyles. pic.twitter.com/kxt2f11fqh
— Mario Klingemann (@quasimondo) July 4, 2018
In a sense, you could compare the process to drawing a live model: the piece begins as a stick figure generated from a dataset of 150,000 human poses. Klingemann gives that pose to the real star of the show, a Generative Adversarial Network, or GAN. GANs are fascinating and increasingly common fixtures in AI; they typically consist of two neural networks trained on thousands of images of a particular thing (in this case, the human body). One neural network generates images of that thing, and the other judges whether they’re convincing based on the data it was trained on.
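That two-network tug-of-war can be sketched in miniature. The toy program below is a hand-rolled illustration, not Klingemann’s actual models and far simpler than a real GAN: here the “real” data are just numbers clustered near 5.0, the generator has a single learnable parameter, and the discriminator scores a sample by how close it sits to the real data it has observed.

```python
import random

random.seed(0)

def real_sample():
    # "Real" data: numbers clustered around 5.0 (stand-in for real images)
    return 5.0 + random.uniform(-0.5, 0.5)

class Generator:
    """Produces fake samples from a single learnable parameter, mu."""
    def __init__(self):
        self.mu = 0.0

    def sample(self):
        return self.mu + random.uniform(-0.5, 0.5)

class Discriminator:
    """Judges how 'real' a sample looks by tracking a running mean of
    the real data; samples closer to that mean score higher."""
    def __init__(self):
        self.real_mean = 0.0
        self.n = 0

    def observe_real(self, x):
        self.n += 1
        self.real_mean += (x - self.real_mean) / self.n

    def score(self, x):
        return -abs(x - self.real_mean)  # higher = more convincing

gen, disc = Generator(), Discriminator()
for step in range(2000):
    disc.observe_real(real_sample())  # discriminator studies real data
    # The generator nudges its parameter in whichever direction the
    # discriminator finds more convincing — the adversarial feedback loop.
    if disc.score(gen.mu + 0.01) > disc.score(gen.mu - 0.01):
        gen.mu += 0.01
    else:
        gen.mu -= 0.01

print(f"learned mu = {gen.mu:.2f}")  # ends close to the real cluster at 5.0
```

A real GAN replaces the running mean and the hand-tuned nudges with two deep networks trained by gradient descent on thousands of images, but the adversarial structure is the same: one side fabricates, the other side critiques, and both improve together.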
In this case, the data it’s trained on is pornography. Why? “[It’s] one reliable and abundant source of data that shows people with their entire body,” Klingemann explains over email. (“Another source would have been sports imagery, but I must admit that I am not really into sports,” he adds.)
The GAN generates a body for the stick figure, and then a different GAN steps in to refine the low-res image with new textures and details, a process Klingemann calls “transhancement,” comparing it to the 1990s-era TV trope of “enhancing” a photograph of a crime scene. He’ll look through tens of thousands of images before choosing a select few to refine. “I picked about 20 where the combination of pose, composition, colors, and textures felt ‘right’ to me,” he explains. “This curation process is not easy since in particular in the beginning a lot of images look very novel and interesting and you might be tempted to keep them all, but over time I learn to see the fine nuances and become more critical in my choices.”
GANs are becoming much more common in the art world, stirring debate over whether AI is being used as a gimmick to overhype work. In August, the 250-year-old auction house Christie’s declared itself “the first auction house to offer a work of art created by an algorithm,” since it plans to auction off a piece called Portrait of Edmond Belamy later this month. The piece was created by a trio of artists who used a GAN to generate the portrait.
So, are artists like Klingemann trying to move beyond GANs? Not exactly. “They are like painters’ brushes, which have also never really gone out of fashion through the centuries,” he says. “The problem is that because GANs are relatively easy to use and the default low-resolution convolutional ‘style’ they produce looks very novel and ‘AI-ish,’ that’s where a lot of people stop.”
He’s more intrigued by the way machine intelligence learns: its ability to create the “uncanny,” and his ability to control the finished product. “Also most importantly,” he adds, “I want my images to be interesting even if one does not know that they are ‘made with AI.’”