Algorithmic art has been around as a distinct discipline for decades. Hungarian artist Vera Molnár created an early form of it back in the 1960s with computer drawings fashioned according to strict algorithmic rules. And American artist Roman Verostko followed with The Magic Hand of Chance, a program of non-repeating digital drawings written in BASIC on a first-generation IBM PC. But in a new exhibition called Gradient Descent—now on view at Nature Morte gallery in New Delhi—a group of curators, a gallerist, and seven artists are attempting to define, both aesthetically and conceptually, a more recent development in algorithmic creativity: art made by artificial intelligence.
The brainchild of brothers Raghava KK and Karthik Kalyanaraman, alongside Nature Morte co-director Aparajita Jain, Gradient Descent features artworks made by seven international artists in collaboration with AI. Unlike the rather primitive AI art of Google’s DeepDream image generator, these works are aesthetically striking. Tom White’s Electric Fan (2018), for instance, calls to mind Expressionist works of the 1950s, with its cream background and splashes of gray, black, and white gestures that approximate, yes, a fan. Anna Ridler’s Untitled (from the First Training Set) is similarly expressionistic, with a female form rendered in black watercolor. Others, like Mario Klingemann’s 79530 Self Portraits and Memo Akten’s Deep Meditations, as well as Nao Tokui’s black-and-white photo installation, Imaginary Landscapes, are more warped in appearance.
The inspiration for the exhibition, as Raghava KK tells Fast Company, was a conversation with his brother about transcendence into a post-human age. While Raghava approached AI art from an artist’s angle, Karthik assumed the perspective of a data scientist obsessed with art. Jain decided to host the exhibition after a discussion with Raghava and another friend about how 97% of all jobs could be replaced by AI, while the remaining 3% would be left to the irreplaceable human will. The conversation quickly turned to creativity, which Jain argued remains in the realm of humankind.
That is, until she learned that artificial intelligences were also being creative. “If the future is here, and is so hugely impactful to all of humanity, I believe we need to debate and address this now,” Jain says. “I don’t think we can afford to wait.”
Gradient Descent was also, in part, a response to Google’s 2015 release of DeepDream and to cGAN (Conditional Generative Adversarial Nets) in early 2017—neural networks, built on AI, that composite multiple images into hallucinatory new ones (see: DeepDream’s viral “puppy slug” faces). While this may be an exciting curiosity to the public, Karthik says the aesthetic richness of these networks’ synthesized images was fairly limited. But in the last year and a half, Karthik and Raghava have seen the field expand to include more conceptually rich and aesthetically varied work. They wanted to define it as a genre: to think about its limitations and possibilities, to understand the practice of this new breed of artist-programmers who have started engaging with AI, and to start pondering important questions about creativity.
Creating something from “everything”
Akten tells Fast Company that he first began messing around with algorithms on his BBC Micro B computer at the age of 10, equating it to playing with Legos or drawing. What was a hobby throughout his childhood and teens eventually became art. By the late 1990s, Akten was experimenting with neural networks. In the mid-2000s he began to explore machine learning via pattern detection with Haar cascades, a basic technique for detecting faces in an image or video. Deep Meditations, Akten’s contribution to Gradient Descent, is a culmination of his research, both artistic and technical, into AI over the past several years.
“It’s a deep dive into, and controlled exploration of the inner world of an artificial neural network trained on everything, the world, the universe, space, mountains, oceans, flowers; trained on art, life, love, faith, ritual, god,” says Akten. “It is literally trained on ‘everything.’ I scraped Flickr for images tagged with ‘everything’ (as well as all those other tags).”
To create the work, Akten used deep learning—specifically, a generative adversarial network for the visuals, and a variational auto-encoder for the audio. For the latter, he used a custom architecture and system he calls “grannma” (Granular Neural Music and Audio), which he has been developing as part of his PhD at Goldsmiths, University of London.
“When you train a deep neural network on a ton of data, it hopefully learns something about that data, it contains some ‘knowledge,’ [and] that knowledge is laid out in an almost infinitely large space,” Akten explains. “In this case, it’s laid out on the surface of a sphere, but not a normal 3D sphere—a 512 dimensional hyper-sphere. The question that then arises is: How are we puny mortal humans, only able to conceive of a mere three-dimensional space . . . navigate that vast space and find what we’re looking for, or even just find things of interest?”
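The navigation problem Akten describes can be illustrated with a small sketch. This is not his actual code, only an assumption of how latent-space exploration on a hyper-sphere typically works: sample points on a 512-dimensional unit sphere, then move between them along the surface with spherical linear interpolation (slerp), so every intermediate point stays a valid latent vector.

```python
import math
import random

DIM = 512  # dimensionality of the latent hyper-sphere described in the article
rng = random.Random(0)

def random_latent(dim=DIM):
    """Sample a point uniformly on the unit hyper-sphere (Gaussian trick)."""
    v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def slerp(a, b, t):
    """Spherical linear interpolation: moves along the sphere's surface."""
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    omega = math.acos(dot)
    if omega < 1e-9:
        return list(a)
    sin_omega = math.sin(omega)
    sa, sb = math.sin((1 - t) * omega), math.sin(t * omega)
    return [(sa * x + sb * y) / sin_omega for x, y in zip(a, b)]

# a slow, smooth path between two points of the latent space
start, end = random_latent(), random_latent()
path = [slerp(start, end, i / 29) for i in range(30)]
```

In a real system, each vector along the path would be fed to the trained generator to render one frame, so the journey across the sphere becomes a slowly evolving animation like Deep Meditations.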
The title, Deep Meditations, comes from the idea that this AI is exploring—with Akten as the guide—its “inner self,” which Akten calls a form of meditation. But while the AI has been trained on “everything,” it hasn’t been told what anything is. In other words, there are no “class labels,” as they are known in machine learning research.
“Unable to semantically distinguish bacteria from nebulae, the neural network just analyzes everything based purely on aesthetics, and creates really abstract images that have characteristics of all of these different objects,” says Akten. “It doesn’t know what anything is. It just fuses everything together based on what it thinks they look like. Then when we look at the resulting images, which aren’t anything really, we project the meaning back onto them, based on what we think they look like.”
Akten likens the results to a “very slowly evolving Rorschach inkblot test.” He sees multiple levels of meditation, or even spiritual experience, in this process: for the neural network, the viewer, and indeed the artist.
The machine wants what it wants
With Closed Loop, artist Jake Elwes also plays with AI image interpretation and generation for his Gradient Descent offering. One model, DenseCap, has been trained on hand-captioned images to describe pictures in language; it converses with another model, PPGN, which has been trained on 14 million photographs from ImageNet to generate images from scratch by interpreting the input text. The two are then fed back into each other ad infinitum.
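The feedback loop at the heart of Closed Loop can be sketched schematically. DenseCap and PPGN are real research models, but the two functions below are illustrative stand-ins, not their actual APIs; only the loop structure reflects the piece.

```python
def describe(image):
    """Stand-in for a captioning model such as DenseCap (hypothetical body)."""
    return f"a caption of {image}"

def generate(text):
    """Stand-in for a text-conditioned generator such as PPGN (hypothetical body)."""
    return f"an image made from '{text}'"

def closed_loop(seed_image, steps):
    """Feed each model's output back into the other, as in Elwes's Closed Loop.

    In the installation this runs ad infinitum; here we cap it at `steps`.
    """
    image, trace = seed_image, []
    for _ in range(steps):
        text = describe(image)   # one model puts the image into words
        image = generate(text)   # the other turns those words back into an image
        trace.append((text, image))
    return trace

trace = closed_loop("seed image", 3)
```

Because neither model ever sees a human correction, small misreadings compound with each cycle, which is how the loop drifts onto the “strange and mysterious tangents” Elwes describes.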
“I feel the work called into question my preconceptions of agency,” Elwes says. “I, as the artist, had no idea what images and text was going to emerge. I decided to never edit or curate the output, allowing the machine to often go off on strange and mysterious tangents that weren’t necessarily perceivable to a human spectator. This relinquishing of control was what excited me about this piece and collaborating with a machine.”
Another artist in the exhibition, Harshit Agrawal, comes from MIT Media Lab’s Fluid Interfaces group. Harshit tells Fast Company that he, much like Elwes, is interested in “exploring and evolving the creative agency continuum between man and machine,” to create an intimate artistic process and embrace the results—a “cyborg artist.”
Harshit first became interested in how humans can share creative agency with machines while working with a drawing drone for his work A Flying Pantograph. It’s something he terms the “human-machine creative-agency continuum,” where the artist instructs the machine to do something based on his or her artistic intentions. “The machine’s outputs in turn guide your creative process, and you work together with it to achieve the final artwork you want to,” he explains.
With The Anatomy Lesson of Dr. Algorithm, Harshit references one of Rembrandt’s earliest and most well-known paintings, The Anatomy Lesson of Dr. Nicolaes Tulp, in which the Dutch master painted a dissection of an arm being performed in public. Rembrandt painted this work in a time of troubled fascination with medical technology. Harshit sees a more contemporary parallel with AI: How much of human beings should machines be exposed to, and how much and what should they be allowed to learn?
Working off these questions, Harshit wanted to create a work that exposed the machine to the hardware of the human body, letting it create its own impressions. The result of this artistic inquiry is a panel of 20 prints generated by the machine, created by learning from a dataset of 60,000 images of different human surgical dissections, which Harshit curated.
“I use the AI algorithm called GAN (Generative Adversarial Network) to train the machine on the dataset of images, and then sample from the machine as it learns,” Harshit notes. “I sample imagery from it at different times of the learning process, resulting in different visual aesthetics.”
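Harshit’s process of sampling at different points in training can be sketched as a checkpointed loop. Everything below is a hypothetical stub: the “generator” does no real learning, and the class and its methods are invented for illustration; only the structure (train, then periodically sample) mirrors the described method.

```python
class StubGenerator:
    """Placeholder for a GAN generator network (no real training happens)."""

    def __init__(self):
        self.epoch = 0

    def train_one_epoch(self, dataset):
        # A real implementation would update the generator's and
        # discriminator's weights against the dataset here.
        self.epoch += 1

    def sample(self, n):
        # A real generator would map random latent vectors to images;
        # earlier epochs yield blurrier, stranger imagery than later ones.
        return [f"epoch-{self.epoch}-print-{i}" for i in range(n)]

dataset = ["surgical_dissection_image"] * 60000  # stands in for the curated set
gen = StubGenerator()

panels = []
for epoch in range(1, 101):
    gen.train_one_epoch(dataset)
    if epoch % 25 == 0:              # pause at four stages of learning
        panels.extend(gen.sample(5)) # 4 stages x 5 images = 20 prints
```

Sampling the same model at several stages is what gives the panel its range of “different visual aesthetics”: each checkpoint has a different, partial understanding of the dataset.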
A human touch
Despite the technological magic of AI creating art, Harshit emphasizes that an AI’s artwork is incomplete, or nonexistent, without the human artist. At least for now.
“The intention of the work is the human’s, curating or creating the datasets, creating or choosing the algorithm, tweaking its parameters, all to create a final work that they are satisfied with visually and conceptually,” Harshit says. “In some sense, AI is a sophisticated paintbrush for me with which I love to paint.”
Indeed, it is this impulse that unites the artists in Gradient Descent: the desire to use AI as a tool, a paintbrush. While that may not be so very different from early algorithmic and generative art, at least on a conceptual level, the code governing these AIs is only getting more refined and expressive.
“This is the first show that treats art produced by AI as a distinct genre, worthy of being considered as such,” Kalyanaraman says. “This is significant because we are not just talking about the future impact of AI on society but actually thinking about what AI art means for the art world in particular as well.”