As Game of Thrones marches into its final seasons, HBO is debuting this Sunday what it hopes—and is betting millions of dollars on—will be its new blockbuster series: Westworld, a thorough reimagining of Michael Crichton’s 1973 cult classic film about a Western theme park populated by lifelike robot hosts. A philosophical prelude to Jurassic Park, Crichton’s Westworld is a cautionary tale about technology gone very wrong: the classic tale of robots that rise up and kill the humans. HBO’s new series, starring Evan Rachel Wood, Anthony Hopkins, and Ed Harris, is subtler and also darker: The humans are the scary ones.
“We subverted the entire premise of Westworld in that our sympathies are meant to be with the robots, the hosts,” says series co-creator Lisa Joy. She’s sitting on a couch in her Burbank office next to her partner in life and on the show—writer, director, producer, and husband Jonathan Nolan—who goes by Jonah. Both are in jeans and T-shirts—Hollywood’s workaday flip side of red-carpet fashion. When I meet with them, it’s not even three weeks to the show’s premiere, with last-minute tweaks still being made, but they seem fresh-faced and excited.
Their Westworld, which runs in the revered Sunday-night 9 p.m. time slot, combines present-day production values and futuristic technological visions—thoroughly revamping Crichton’s story with hybrid mechanical-biological robots fumbling along the blurry line between simulated and actual consciousness.
Robots becoming something like humans is a well-worn film theme—from 2015’s Ex Machina back to 1927’s Metropolis. Based on the four episodes I previewed (which get progressively more interesting), Westworld does a good job with the trope, focusing especially on the awakening of Dolores, an old soul of a robot played by Evan Rachel Wood. Dolores is also the catchall Spanish word for suffering, pain, grief, and other displeasures. “There are no coincidences in Westworld,” says Joy, noting that the name is also a play on Dolly, the first cloned mammal.
The show operates on a deeper, though hard-to-define, level that runs beneath the shoot-’em-up and screw-’em frontier adventure and robotic enlightenment narratives. It’s an allegory of how even today’s artificial intelligence is already taking over, by cataloging and monetizing our lives and identities. “Google and Facebook, their business is reading your mind in order to advertise shit to you,” says Jonah Nolan.
The setup comes in the first episode when petulant Lee Sizemore (Simon Quarterman), who writes the salacious storylines for the park’s synthetic characters, questions the cantankerous head of quality assurance Theresa Cullen (Sidse Babett Knudsen) about the real aims of their parent company, Delos (also the name of the Greek island where the gods Apollo and Artemis were born). “This place is one thing to the guests. Another thing to the shareholders. And something completely different to management,” she says, clutching one in the steady stream of cigarettes she burns through during the show.
“We based that statement loosely on Google, for whom, for its customers, it’s search,” says Nolan. “For its shareholders, it’s advertising. For the principals behind the company, I think they’re interested in building God.”
Google’s customers get its services for free—in exchange for revealing the outlines of their lives. In Westworld, guests pay $40,000 a day, although they receive a lot more. “All our hosts are here for you, myself included,” says the blonde welcoming robot, Angela, pressing herself against first-time guest William (Jimmi Simpson). At least this robot (played by author and on-again/off-again Elon Musk spouse Talulah Riley) knows what she is. The characters in the park think they’re real—well, assuming they genuinely can think. They are like humans, but not humans, meaning that guests are free to befriend them, court them, rape them, or kill them—without guilt.
“Exist free of rules, laws or judgment. No impulse is taboo,” reads a spoof home page for the resort that HBO launched a few weeks ago. That’s lived to the fullest by the park’s utterly sadistic loyal guest, played by Ed Harris and known only as the Man in Black.
“In preparation for the project we went back and played Grand Theft Auto a little bit,” says Nolan. Like Westworld, the vast, open world of the GTA game series doesn’t have a set objective or moral code: All of that is up to the player. “We’re sort of fascinated by, not just the state of the art in terms of the companies that are actively pursuing machine intelligence, but [by] gaming,” he says. “Gaming is another forefront to AI. It’s now a massive industry.”
Nolan and Joy envision Westworld as the apotheosis of gaming, in which the non-player characters, the AIs, are not only physical but are human-level smart and absolutely dedicated to the player having fun. Today, he says, open-world games outsource intelligence to other humans in multiplayer online titles—where some 14-year-old kid will always kick your ass. “The idea of a non-player character that is every bit as fascinating to interact with and potentially challenging, but whose whole purpose is to gratify your ego—that’s the ultimate,” says Nolan.
Real-world tech is already headed there. “In some sense, being human, but less than human, it’s a good thing,” says Jon Gratch, professor of computer science and psychology at the University of Southern California. Gratch directs research at the university’s Institute for Creative Technologies on “virtual humans,” AI-driven onscreen avatars used in military-funded training programs. One of the projects, SimSensei, features an avatar of a sympathetic female therapist, Ellie. It uses AI and sensors to interpret facial expressions, posture, tension in the voice, and word choices by users in order to direct a conversation with them.
“One of the things that we’ve found is that people don’t feel like they’re being judged by this character,” says Gratch. In work with a National Guard unit, Ellie elicited more honest responses about their psychological stresses than a web form did, he says. Other data show that people are more honest when they know the avatar is controlled by an AI than when they are told it is being operated remotely by a human mental health clinician.
Technologically, a flesh-and-blood-and-silicon Dolores may still be far off. Ken Goldberg, an artist and professor of engineering at UC Berkeley, calls the notion of cyborg robots in Westworld “a pretty common trope in science fiction.” (Joy will take up the theme again as the screenwriter for a new Battlestar Galactica movie.) Goldberg’s lab is struggling just to build and program a robotic hand that can reliably pick things up. But a sympathetic, somewhat believable Dolores in a virtual setting is not so farfetched.
Ellie formulates pretty convincing dialogue by intelligently drawing on a vast store of prerecorded phrases. Westworld robots often use canned phrases, but they can also improvise. Today’s AI is developing that ability as well, by gaining a level of emotional intelligence.
“If you build it like a human, and it can interact like a human, that solves a lot of the human-computer or human-robot interaction issues,” says professor Paul Rosenbloom, also with USC’s Institute for Creative Technologies. He works on artificial general intelligence, or AGI—the effort to create a human-like or human level of intellect.
Rosenbloom is building an AGI platform called Sigma that models human cognition, including emotions. These could make for a more effective robotic tutor, for instance: “There are times you want the person to know you are unhappy with them, times you want them to know that you think they’re doing great,” he says, where “you” is the AI programmer. “And there’s an emotional component as well as the content.”
Achieving full AGI could take a long time, says Rosenbloom, perhaps a century. Bernie Meyerson, IBM’s chief innovation officer, is also circumspect in predicting if or when Watson could evolve into something like HAL or Her. “Boy, we are so far from that reality, or even that possibility, that it becomes ludicrous trying to get hung up there, when we’re trying to get something to reasonably deal with fact-based data,” he says.
However, Westworld offers no hint of its time frame, except that the park has already been in operation for 30 years, created by visionary and now ever-more-eccentric aging scientist Dr. Robert Ford (Anthony Hopkins). Westworld is isolated from any reference to or communication with the outside world that could provide a context of time and place. “Wherever this space is, wherever this park exists, you’re not bringing your phone into it,” says Nolan. “You’re going into it as naked as the day you were born.” That’s not only to protect the privacy of the guests, but also the intellectual property of the park.
Even today, artificial intelligence is creeping ever closer toward believability. In a 2014 competition, a chatbot was declared to have passed the Turing test—able to fool people more than 30% of the time into thinking that they were communicating with another human. The results are controversial, but the controversy itself illustrates the slippery slope of verisimilitude.
“Can simulations, at some point, become the real thing?” asks Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. “If we perfectly simulate a rainstorm on a computer, it’s still not a rainstorm. We won’t get wet. But is the mind or consciousness different? The jury is still out.”
Lisa Joy (who wrote for the shows Pushing Daisies and Burn Notice, and was a coproducer for the latter) jokes that it would have been easy to simulate her while they were developing the show. “You would just need something with brown hair, bipedal, that would sit at a desk and type for, I don’t know, like 12 hours a day,” she says, chuckling. Then it would “go home, give a kid a bath, read it a story…and go to sleep.” Those trips home were in the couple’s robotically driven Tesla, and the kid is their nearly 3-year-old daughter. “When we were brewing this pilot, we were also cooking her up,” says Joy. “It was like a meditation on sentience while one was forming.”
While artificial consciousness is still in the dreamy phase, today’s level of AI is serious business. “What was sort of a highfalutin philosophical question a few years ago has become an urgent industrial need,” says Jonah Nolan. It’s not clear yet how the Delos management intends, beyond entrance fees, to monetize Westworld, although you get a hint when Ford tells Theresa Cullen, “We know everything about our guests, don’t we? As we know everything about our employees.”
AI has a clear moneymaking model in this world, according to Nolan. “Facebook is monetizing your social graph, and Google is advertising to you.” Both companies (and others) are investing in AI to better understand users and find ways to make money off this knowledge. “When was the last time you saw a banner ad that you actually clicked on? That’s the Holy Grail, and they’re getting better at it,” says Nolan. This version of AI, the one controlled by humans, is the one that troubles him.
Nolan doesn’t fear AI in the 2001-HAL 9000 sort of way. That’s clear in his screenplay for the 2014 movie Interstellar, one of several collaborations—including Memento, The Dark Knight, and The Dark Knight Rises—with his big brother, writer/director Christopher Nolan. “If you’re waiting for the moment when the robot crew rises up and purges the human crew through the airlock, you’re going to be disappointed,” he says of bots in Interstellar. “They are the most loyal, the most selfless, brave, capable members of the crew. And that’s never questioned.”
You get clues to Jonah Nolan’s thinking about the dark side of AI in Westworld by looking to his first TV series, Person of Interest. It’s based in the present world of NSA-style electronic surveillance, using machine-learning AI to predict criminal and terrorist activity. “Person of Interest…was ostensibly a CBS crime procedural, but actually for me an exploration of networked artificial super-intelligence, birthed in secret, that had gotten frustrated with its role as a sort of asset in counterterrorism,” he says, “and it decided to start gently acting on some of the information that it had that didn’t relate to its principal mission.”
In the show, the AI goes rogue in a good way, passing information to help stop everyday crime through a backdoor installed by its creator, Harold Finch (played by Michael Emerson). He’s an eccentric, reclusive billionaire who uses high tech to fight crime—a bit like Bruce Wayne, except Finch outsources the groundwork to a crack team of operatives. (Like Nolan and Joy, Finch is a showrunner.)
“Person of Interest dealt with the idea of a networked intelligence that was more along the lines of the intelligence that…I would imagine would emerge in the next 10 years,” says Nolan, “…an industrial intelligence suited to a narrow set of qualities whether that’s investment, well, you know what I’m talking about.” The show is saturated with images of surveillance cameras and storylines using social media and smartphones as data-gathering devices.
With selfies (which Nolan loathes), social media posts, and free services like Gmail, humans are willingly disgorging data about themselves, no Orwellian government required, according to Nolan. “We went to the Apple Store, and we bought it,” he says. “We created this surveillance state, and it’s a fucking horror show.”
Anyone who’s seen online ads for something they just bought can attest to how poorly technology currently does at targeting us as consumers. The same is true when we get ads for products vaguely related to a topic we may have searched on but have no interest in buying for ourselves. That’s why advancing AI is so critical to the consumer economy, says Nolan. From what you say and do, Google, Facebook, retailers, and many others need AI that can ascertain what you genuinely desire.
“Can it read your mind?” asks Nolan. “The only way you can build a technology that can read your mind is if it possesses one itself.”