Neural networks are already good at simulating the styles of famous artists like Picasso and Van Gogh. Swedish artist Jonas Lund, who previously designed a Chrome extension that let people track his web surfing, wondered if neural networks could actually generate “better” art than he could. So he trained one on his collected works and had it start spitting out paintings, all in an effort to teach an AI to “think” like he does.
The three paintings that make up Lund’s “New Now” series look almost like blurry photographs of iridescent inks dropped in milk, or some kind of psychedelic lava lamp. What they actually are is a computer’s Deep Dream-style hallucination of the patterns and colors it thinks it sees in Lund’s previous works–or at least those works represented by Los Angeles’ Steve Turner gallery. In the paintings, you can almost (but not quite!) see abstract snatches of Lund’s Strings Attached, a series of 24 text-based paintings on fabric wallpaper, or The Paintshow, an exhibition Lund did featuring paintings made on his Microsoft Paint-like website, Paintshop.biz. In all, the neural network trained on 222 images, including photographs of his work from multiple angles.
Where things get interesting, though, is how Lund can tweak his neural network to output different kinds of paintings. Lund’s neural network didn’t just train on high-resolution images of his work, but also on information about price, reception, and style. For example, Lund could instruct the neural network to weigh a composition more strongly toward elements in his works that have sold quickly, and for high prices. In other words, he could train it to create a more commercially successful painting. He could also go the other route, telling it to work in a more experimental or abstract style. “By weighing the works differently depending on their success in the market, the neural network can be trained to become more commercial, or more conceptual, or more decorative,” Lund says.
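Lund hasn’t published his training code, but the weighting he describes can be sketched as a sampling bias over the training set: works that sold quickly and for high prices get oversampled, so the network sees more of them and drifts toward their look. The sketch below is purely illustrative–the metadata fields, titles, numbers, and the `commercial_weight` formula are all hypothetical assumptions, not Lund’s actual method.

```python
import random

# Hypothetical metadata for a few works: sale price (USD) and days on
# the market before selling. All values here are invented for illustration.
works = [
    {"title": "Strings Attached #4", "price": 12000, "days_to_sell": 14},
    {"title": "Paintshow piece",     "price": 3000,  "days_to_sell": 120},
    {"title": "Untitled study",      "price": 800,   "days_to_sell": 400},
]

def commercial_weight(work):
    """Weight a work higher the more it sold for and the faster it sold."""
    return work["price"] / (1 + work["days_to_sell"])

def sampling_distribution(works, weight_fn):
    """Normalize raw weights into probabilities for sampling training images."""
    raw = [weight_fn(w) for w in works]
    total = sum(raw)
    return [r / total for r in raw]

probs = sampling_distribution(works, commercial_weight)

# Draw a training batch: commercially successful works dominate the mix,
# biasing what the network learns to reproduce. Swapping in a different
# weight function (e.g. favoring slow sellers) would push it "conceptual".
batch = random.choices(works, weights=probs, k=8)
```

Plugging in a different weight function is all it takes to steer the same pipeline toward a more commercial or more experimental output, which matches Lund’s description of tuning the network by market success.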
Of course, 222 images isn’t really a big enough sample to train a neural network to generate anything that “looks” like the work of a human hand. By comparison, Google’s Deep Dream AI trained on a database of 14 million images. But neural networks may eventually reach a point where they can learn an artist’s style more intelligently from a smaller sample set, letting artists influence how and what they paint. To get there, neural networks will have to evolve toward human-level artificial general intelligence–and that’s years, even decades, in the future. In the meantime, Lund is content to use neural networks to explore “optimizing a seemingly non-optimizable process”–the process of creating perfect art, even in the absence of any set definition of what “perfect” art would look like.
All Images: Courtesy the Artist/Steve Turner, Los Angeles