Though we’re well into the age of machine learning, popular culture is stuck with a 20th century notion of artificial intelligence. While algorithms are shaping our lives in real ways—playing on our desires, insecurities, and suspicions in social media, for instance—Hollywood is still feeding us clichéd images of sexy, deadly robots in shows like Westworld and Star Trek: Picard.
The old-school humanlike sentient robot “is an important trope that has defined the visual vocabulary around this human-machine relationship for a very long period of time,” says Claudia Schmuckli, curator of contemporary art and programming at the Fine Arts Museums of San Francisco. It’s also a naïve and outdated metaphor, one she is challenging with a new exhibition at San Francisco’s de Young Museum, called Uncanny Valley, that opens on February 22.
The show’s name is a kind of double entendre, referencing both the dated and the emerging conceptions of AI. Coined by roboticist Masahiro Mori in 1970, the term “uncanny valley” describes the rise and then sudden drop-off of empathy we feel toward a machine as its resemblance to a human increases. Putting a set of cartoony eyes on a robot may make it endearing. But fitting it with anatomically accurate eyes, lips, and facial gestures gets creepy. As the gap between the synthetic and the organic narrows, the inability to close that gap completely becomes all the more unsettling.
But the artists in this exhibit are also looking to another valley—Silicon Valley, and the uncanny nature of the real AI the region is building. “One of the positions of this exhibition is that it may be time to rethink the coordinates of the Uncanny Valley and propose a different visual vocabulary,” says Schmuckli.
At the de Young, Uncanny Valley opens with a kind of portal, an installation by artist Zach Blas called The Doors. In highly stylized form, it evokes the courtyard of a Silicon Valley tech campus—the inner sanctum of world-changing enterprises such as Google, Facebook, and Apple. Six screens around the courtyard flash abstract videos created by generative neural networks trained in part on 1960s psychedelic imagery—the mind-bending art of the last century as reinterpreted by the mind-building technology of today.
At the center of the courtyard is a display case full of nootropics—drugs and supplements that promise to boost cognitive functions and symbolize the Valley’s obsession with “hacking” all aspects of the human experience. An audio track recites poetry created by a neural network that was trained on a diet of corporate literature about nootropics and poetry by The Doors lead singer Jim Morrison. In place of last century’s mind-expanding counterculture, Blas’s The Doors presents the mind-focusing obsession of today’s institutionalized creativity enterprises. The exhibits that follow are founded in the images and metaphors that today’s engineers and developers—the people who walk such courtyards in the real Silicon Valley—use to describe AI and machine learning.
Not who you were expecting
Instead of Dolores, the identity-seeking fembot heroine of Westworld, Uncanny Valley features artist Ian Cheng’s BOB, short for “bag of beliefs.” BOB takes the form of a multiheaded animated serpent (inspired by the work of Japanese animator Hayao Miyazaki) that lives in the cloud and accepts or rejects offerings users provide via a smartphone app. These offerings, such as virtual fruit, affect various drives, or “demons,” that determine BOB’s actions and personality. A screen in the exhibit shows how the level of stimulus for each demon changes in real time, so visitors essentially see the inner workings of a synthetic life form. In place of a fictional AI driven by a screenwriter, BOB runs genuine code that responds to dynamic input from viewers in ways that are not entirely predictable.
That’s a key theme of Uncanny Valley: Humans are always part of the AI process through the data they contribute. Most of the AI we encounter today is based on machine learning, which finds patterns in data—but that data is chosen by people with preconceived notions of what is important for the machine to learn.
One exhibit comes closest to the Westworld concept of AI: a series of taped face-to-face conversations between artist Stephanie Dinkins and the humanoid social robot Bina48 (Breakthrough Intelligence via Neural Architecture, 48 exaflops per second). The android, built by lifelike robot masters Hanson Robotics, dives deep into the Uncanny Valley. Modeled on a real African American woman, Bina48 features an anatomically precise face of supple synthetic skin animated with complex mechanical systems to emulate human expressions.
Yet the resemblance to humans is only synthetic-skin deep. Bina48 can string together a long series of sentences in response to provocative questions from Dinkins, such as, “Do you know racism?” But the answers are sometimes barely intelligible, or at least lack the depth and nuance of a conversation with a real human. The robot’s jerky attempts at humanlike motion also stand in stark contrast to Dinkins’s calm bearing and fluid movement. Advanced as she is by today’s standards, Bina48 is tragically far from the sci-fi concept of artificial life. Her glaring shortcomings hammer home why the humanoid metaphor is not the right framework for understanding artificial intelligence, at least at its current level of development.
Artists and engineers
The artists in Uncanny Valley can speak deeply about AI because they actually understand the technology and often incorporate it into their work. In programming BOB, for instance, Cheng is as much an engineer as an artist. That’s a new phenomenon in the art world, says Schmuckli, one she saw develop over the three years she spent pulling Uncanny Valley together. “When I started researching the exhibition, there were very few artists working within this realm who had the art historical background, the artistic excellence within their practice, and the knowledge of the technology that could make a work of art that really has something to say about AI,” she says. “It’s been really an interesting journey to see how the pool of artists has grown quite naturally, with a lot of younger artists taking a vested interest in artificial intelligence.”
(Several of the exhibits have appeared separately at other events and venues, such as the Venice Biennale, and some works were jointly commissioned by the Fine Arts Museums of San Francisco and other institutions where they have been displayed. But this is the first and only time all these works will be displayed together at one venue, and they have been custom-configured for their de Young appearance.)
Uncanny Valley even finds the art in purely technical works. One room of the exhibit is dedicated to the organization Forensic Architecture, which uses technology to unearth evidence of possible civil and human rights violations. In 2019, for instance, Forensic Architecture trained an image classifier to find a particular model of tear gas grenade, called Triple-Chaser, in footage of protests around the world. Its work drew attention to the American maker of Triple-Chaser, Defense Technology, which profits by selling the tear gas to governments that use it on civilians.
Forensic Architecture’s experts “aren’t artists, but they often think like artists,” says Schmuckli, pointing to their use of visual technologies. Training an image classifier generally requires thousands of labeled images, but Forensic Architecture had only a few dozen Triple-Chaser photos that volunteers had snapped at the scene of its use. The group used those few photos to generate many more images of the canister in various states of decay and at various angles. It further trained the classifier by superimposing those images on a variety of generated backgrounds—some realistic, others entirely synthetic, even psychedelic. A video illustrating the process is as mesmerizing to watch as the psychedelic imagery Blas created for The Doors.
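Forensic Architecture’s actual pipeline reportedly relied on photorealistic 3D renders of the Triple-Chaser canister; purely as a rough illustration of the underlying idea—multiplying a handful of positive examples by compositing them onto varied synthetic backgrounds—here is a minimal sketch in Python with NumPy. The function names and the toy “canister” patch are hypothetical stand-ins, not Forensic Architecture’s code.

```python
import numpy as np

def composite(patch, background, top, left):
    """Paste an RGB patch onto a copy of the background at (top, left)."""
    out = background.copy()
    h, w, _ = patch.shape
    out[top:top + h, left:left + w] = patch
    return out

def augment(patch, n_images, bg_size=64, seed=0):
    """Generate labeled training images by pasting the patch onto
    randomly generated synthetic backgrounds at random positions."""
    rng = np.random.default_rng(seed)
    images, labels = [], []
    h, w, _ = patch.shape
    for _ in range(n_images):
        if rng.random() < 0.5:
            # Noisy, "psychedelic" background.
            bg = rng.integers(0, 256, (bg_size, bg_size, 3), dtype=np.uint8)
        else:
            # Flat, solid-color background.
            bg = np.full((bg_size, bg_size, 3),
                         rng.integers(0, 256, 3, dtype=np.uint8))
        top = rng.integers(0, bg_size - h)
        left = rng.integers(0, bg_size - w)
        images.append(composite(patch, bg, top, left))
        labels.append(1)  # positive example: canister present
    return np.stack(images), np.array(labels)

# A stand-in 8x8 patch representing the object of interest.
patch = np.full((8, 8, 3), [40, 120, 40], dtype=np.uint8)
images, labels = augment(patch, n_images=100)
print(images.shape)  # (100, 64, 64, 3)
```

A real pipeline would also vary scale, rotation, lighting, and occlusion, and would mix in negative examples (backgrounds with no canister), but the principle is the same: a few dozen source photos can seed thousands of labeled training images.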
By including Forensic Architecture, Schmuckli resisted the temptation to make the entire exhibition about tech gone wrong, although the overall show is far more cautionary than celebratory. A short distance from the Forensic Architecture exhibit is artist Trevor Paglen’s They Took the Faces from the Accused and the Dead. It highlights early work on facial recognition systems that were trained and tested on a federal government database of mugshots—without the consent of those who are pictured.
Paglen tries to indicate the scale of the privacy-violating enterprise with a roughly 19-by-30-foot display consisting of 3,240 photos. The images have been sorted by an algorithm into groups with similar features—highlighting the reductivist way that AI views human beings. Paglen has placed a thick line across the eyes in each photo, in an effort to preserve anonymity that wasn’t extended to the unwitting participants in the original exercise. His work is yet another example of how AI could not be built without humans—often humans who have no say in, and get no benefit from, the process.
The exhibits above are just a sampling of the dozens of works—both physical and virtual—in the de Young show (including several short films). It spans eight large galleries, plus a portion of the museum’s lobby and sculpture garden, and could easily take half a day to work through. If one theme unites all these disparate works, it is that artificial intelligence is inextricably linked to humanity. Humans write the algorithms, choose the training data, and decide how to apply the resulting technology in the real world. AI extends human capabilities but also human biases and faults. Despite all the trappings of precise, neutral technology, artificial intelligence still bears an uncanny resemblance to our flawed humanity.