It’s just another morning for me in my Los Angeles apartment. When I get out of bed on November 18, I put on a flannel shirt, a large sweater, and sweatpants. I spread almond butter and grape jelly on a pita. I make coffee.
And then, looking very much like Daniel Day-Lewis at the end of There Will Be Blood, I enter the Defrag Conference (“the intimate event for strategically-oriented technologists, and C-level leaders,” per their website) in Broomfield, Colorado.
I do this by checking into one of Double Robotics’ Double telepresence robots, a sort of $2,500 iPad-on-a-stick, attached to a little wheel. At Defrag, people can see my face, sleep-addled and confused, on the screen of the iPad; at my apartment in Los Angeles, through my own iPad, I can communicate with and observe the conference in Broomfield.
Double Robotics was founded in 2011 with the idea of focusing on user experience in the field of practical robotics, and the iPad robot I’m driving launched in May 2013, partly on the strength of funding from Y Combinator.
Using the robot—piloting the robot?—is a little like a touch-screen video game: I steer with left, right, forward, and back arrows that show up on my interface, and I try not to stare at the image of myself in the upper-right-hand corner, because of course. As soon as I “arrive” at the conference, an employee of Double Robotics gives me the rundown, and he tells me to make sure that I lower the iPad-on-a-stick before I start scooting around, for aerodynamic and practical reasons. I can raise the iPad-on-a-stick when I’m speaking to people, so as to better impersonate a human being.
In the long hallway leading to the ballroom in which keynotes will be given, I promptly run into a chair. My robot recoils. I worry that I’ve hurt him. I realize I’ve gendered my robot. I decide to call him Greg.
The Double telepresence robot is trying to solve one of the greatest dilemmas people, and in particular people of commerce, face in everyday life: how to be somewhere that you aren’t. As a species, we’ve been trying to fix this problem since the beginning, first with smoke signals and rudimentary messages, then with letters and the telegram, then with phone calls and emails and texting and FaceTime.
The Double skirts the question of distance and access by providing a physicality these other forms of communication lack. Someone can ignore your phone call or your email, but they cannot ignore you if you’re a little angsty robot, scooting up to them with your face suspended in air.
Conferences, with their mingling over coffee and breakout sessions and keynote speakers and happy hours, provide an ideal set of circumstances for a Double robot. You attend these things to hand out business cards and put faces to names, to mingle and hear smart people talk about your industry’s future. (At least, that’s what I imagine? Defrag is the first professional conference of my life. I’m a writer.) But before I can network, I very, very, very deliberately steer my little robot into the ballroom to hear ex-Wired editor-in-chief Chris Anderson speak about “3D Mapping the World With Drones.”
I can’t hear a damn word.
One of the interesting aspects of robo-telepresence is that it’s technologically sophisticated to an extreme degree. We’re on the cutting edge here. And while the robot may be prepared for this new world, I, the writer, sitting at my desk and staring into my iPad, do not look like the cutting edge. I can’t quite get the sound right; everything sounds like the teacher in Peanuts.
“I think successful technologies are a product not only of the vision, but also the timing,” Helen Greiner, CEO of CyPhy Works and co-founder of iRobot—creator of the Roomba!—tells me (read: tells Greg) later. Now she’s focused on drones, but Greiner developed telepresence robots back in 2000 at iRobot, and back then, she’d give people tours of her apartment over the rudimentary Internet of the day.
But to speak over the Internet in the early 2000s, you had to hook up to a phone system at the same time. The infrastructure and technology weren’t ready for the robot to thrive. Now, I sense that my own equipment lags behind the $2,500 robot I am driving around a conference in Colorado.
This is all fine: I’m not here to listen to speeches. Even if I could’ve heard Anderson, I probably wouldn’t have understood what he was talking about. My relationship with sophisticated robotics, and technology in general, is a little like the beginning of 2001: technology’s the monolith, I’m the ape, and I’m yelling, loudly, while brandishing a stick.
After the morning keynotes end, folks stream by me, gawking. At this point, I’m alone, no human handler in sight—just a robot going about its conference. I wait for the audience to leave the ballroom before I do, so as to avoid bumping into anyone. Of course a bunch of tech and computing people would be fascinated by the sight of a human being driving around a stripped-down Segway, but I’m startled to discover the weirdness of people looking at me as an object of curiosity. Conference-goers take pictures. One guy asks me if he can take a selfie, then does the telepresence equivalent of giving me his card, holding it up to the screen—my face—so that the letters resolve themselves out of the general blur into something I can actually read. He tries to pitch me on his hackathon.
“Vacuuming robots, of course, those are the most commercially successful,” Greiner says, explaining other uses for physical robots than what I’m doing. “I think all kinds of chores, like window washing—that’s something that costs a lot of money and takes a lot of time, and while humans will do a better quality, the difference, like with Roomba, is you can do the job every day—the filth doesn’t build up. There are all kinds of applications that can just make people’s lives easier.”
Max Versace, the CEO of Neurala, has similar thoughts about the future of robotics. Or, at least, I think he does. When I speak to him, I haven’t yet figured out that unless I mute the microphone on my end while others are speaking, ambient sound in my apartment will trigger my mic and black out whatever they’re saying. After our discussion, I email him apologizing and asking him to reiterate what he’d said about the technology his company is developing.
“It will allow a new class of uses for robots, where they won’t need to be driven by hand, but will be able to produce a whole new set of behaviors thanks to artificial brains operating them,” he wrote. “E.g., they could follow a person that the user says ‘follow that guy,’ or ‘go to the Defrag talk’ or to the ‘IBM stand,’ or for a drone ‘find rust on the pipe you are inspecting.’ In essence, amplify human productivity— 1: many (bots) rather than requiring 1:1 attention from users.”
Versace predicts that by 2015, this technology will start to spread on a limited basis, and that by 2016-17, it will be widespread, meaning that a future in which semi-autonomous robots are standing in for us, or facilitating our day-to-day life, is near.
As the conference whirs into post-lunch action—I disconnected during lunch, since Greg can neither 1) go down stairs nor 2) go outside—I park myself near a large plant, waiting for the next speech to start. Despite being present in robot form, I’ve quickly fallen into the same existential dread I experience in real life—I can’t break down the enormous conference, and all of its patrons, into manageable parts. But while you might expect the robot intermediary to act as a kind of pressure valve, the dread is actually more acute. Without my physical presence to help orient my sense of the room and the people, I feel helpless. I don’t know who anyone is, and I’m not oriented enough to decipher my surroundings. In real life, I would just ask for directions or advice; here I feel aggressively othered. I’m supposed to interview another person, but I don’t know what he looks like, and ID’ing faces is tough. I wait for him by the plant, but he never shows.
At one point, someone picks Greg up and moves him, freaking me out entirely. I develop a great anxiety that someone will steal me. Without my face on the screen, people don’t register my presence; two men sit down next to me and start talking, and I can hear every word they’re saying.
These challenges aside, the robot works shockingly well for its intended purpose. I can do all the things I would nominally need to do at one of these conferences: I’m mobile, I can hear and see and think and speak. But you don’t realize how much you miss small things. My lack of peripheral vision grates on my brain, constantly reminding me that something is wrong. I can’t clap or shake hands. I am, by nature, obtrusive when I want to be covert. I’m a straight white man—I’m not used to standing out.
In a future of robots, though, this could be different. After piloting the Double, I can imagine a world in which these machines are everywhere, standing in for people across the country or overseas. Greiner said we’ll soon have robots that can manipulate their surroundings, not just observe. There’s nothing impractical about them. Mentally, it’s a different story. The comparison to a video game was apt; nothing felt entirely real.
On the bright side: At no point did I have to deal with catching an Uber to the airport, taking off my shoes at security, getting a rental car, or bad hotel coffee. Excluding the hypothetical cost of the robot, I spent zero dollars to be present 1,000 miles away. And when you’re actually at a conference, you can’t dip out to Trader Joe’s or do laundry, and you must always be wearing real pants.