Facebook uses neural networks to discern faces and identify your friends. Google’s Deep Dream goes a step further; its algorithms can actually see crazy mutant animals like pig-snails and camel-birds in the clouds. (How? The software is trained on what pigs, snails, camels, and birds look like, and if it thinks it spots an animal, it filters and enhances the source image until it looks even more like that animal, essentially reinforcing its own hallucination.)
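That feedback loop can be sketched in a few lines. This is a toy illustration, not Google's actual code: the linear "detector" below is a hypothetical stand-in for a trained network, and the loop simply nudges the image in whatever direction makes the detector respond more strongly, the same reinforce-the-hallucination idea.

```python
import numpy as np

rng = np.random.default_rng(1)
detector = rng.normal(size=64)   # pretend "animal-ness" template (stand-in for a trained net)
image = rng.normal(size=64)      # pretend cloud photo, flattened to a vector

def response(img):
    # how strongly the detector "sees" an animal in the image
    return detector @ img

before = response(image)
for _ in range(50):
    # gradient ascent: the gradient of (detector @ image) w.r.t. image
    # is just `detector`, so each step makes the image more animal-like
    image += 0.1 * detector
after = response(image)
```

After the loop, `after` is strictly larger than `before`: the image has been pushed toward whatever the detector was trained to find.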
So far, feeding images to Deep Dream has just made some really ugly net art. But a new paper by a team of German researchers, called “A Neural Algorithm of Artistic Style,” details a methodology to actually leverage a neural network to cross-synthesize photos with famous works of art. The process is extremely technical, involving the deconstruction of various layers of each image (even this explainer might leave you scratching your head). But the end result is clear as day: software referencing a piece like Starry Night or The Scream can transform a photo through the paint strokes of a master.
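Stripped to its core, the paper's trick is two competing losses: match the photo's raw feature maps (content) while matching the artwork's feature correlations, its Gram matrices, which capture texture and brush-stroke statistics independent of layout. Here is a minimal numpy sketch of just those losses; the random arrays below are toy stand-ins for real convolutional-layer activations, and the weighting is illustrative, not the paper's exact setup.

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, height*width) activations from one network layer
    return features @ features.T

def content_loss(f_photo, f_generated):
    # squared error between raw feature maps preserves the photo's layout
    return 0.5 * np.sum((f_generated - f_photo) ** 2)

def style_loss(f_art, f_generated):
    # matching Gram matrices (channel-to-channel correlations) transfers
    # the artwork's texture, ignoring where things sit in the frame
    c, n = f_art.shape
    g_art, g_gen = gram_matrix(f_art), gram_matrix(f_generated)
    return np.sum((g_gen - g_art) ** 2) / (4 * c**2 * n**2)

# toy "activations" standing in for a real network's feature maps
rng = np.random.default_rng(0)
photo = rng.normal(size=(8, 64))
art = rng.normal(size=(8, 64))
generated = 0.5 * photo + 0.5 * art  # an image partway between the two

total = content_loss(photo, generated) + 1000 * style_loss(art, generated)
```

In the full method, `generated` starts as noise and is iteratively optimized to drive `total` down, which is what slowly paints the photo in the artwork's strokes.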
Even though the research is just over a week old, coder-artists are already playing with the methodology themselves. Kyle McDonald has posted some early results to his Twitter feed, and in a satisfying twist, they’re even in motion.
You can easily imagine how quickly we could burn out on any one artist’s aesthetic, and how the snobs amongst us will complain about millennials appropriating the most significant moments of art history. But for now, I’m just appreciating “Starry Night In NYC” as a rare, striking bit of media, and a tease of how much creativity is lurking in the next wave of smart image processing to come.
[via Prosthetic Knowledge]