Astro Teller is sharing a story about something bad. Or maybe it's something good. At Google X, it's sometimes hard to know the difference.
Teller is the scientist who directs day-to-day work at the search giant's intensely private innovation lab, which is devoted to finding unusual solutions to huge global problems. He isn't the president or chairman of X, however; his actual title, as his etched-glass business card proclaims, is Captain of Moonshots—"moonshots" being his catchall description for audacious innovations that have a slim chance of succeeding but might revolutionize the world if they do. It is evening in Mountain View, California, dinnertime in a noisy restaurant, and Teller is recounting over the din how earlier in the day he had to give some unwelcome news to his bosses, Google cofounder Sergey Brin and CFO Patrick Pichette. "It was a complicated meeting," says Teller, 43, sighing a bit. "I was telling them that one of our groups was having a hard time, that we needed to course-correct, and that it was going to cost some money. Not a trivial amount." Teller's financial team was worried; so was he. But Pichette listened to the problem and essentially said, "Thanks for telling me as soon as you knew. We'll make it work."
At first, it seems Teller's point is that the tolerance for setbacks at Google X is uncharacteristically high—a situation helped along by his bosses' zeal for the work being done there and by his parent company's extraordinary, almost ungodly, profitability. But this is actually just part of the story. There happens to be a slack line—a low tightrope—slung between trees outside the Google X offices. After the meeting, the three men walked outside, took off their shoes, and gave the line a go for 20 minutes. Pichette is quite good at walking back and forth; Brin slightly less so; Teller not at all. But they all took turns balancing on the rope, falling frequently, and getting back on. The slack line is groin-high. "It looked like a fail video from YouTube," Teller says. And that's really his message here. "When these guys are willing to fall, groan, and get up—and they're in their socks?" He leans back and pauses, as if to say: This is the essence of Google X. When the leadership can fail in full view, "then it gives everyone permission to be more like that."
Failure is not precisely the goal at Google X. But in many respects it is the means. By the time Teller and I speak, I have spent most of the day inside his lab, which no journalist has previously been allowed to explore. Throughout the morning and afternoon I visited a variety of work spaces and talked at length with members of the Google X Rapid Evaluation Team, or "Rapid Eval," as they're known, about how they vet ideas and test out the most promising ones, primarily by doing everything humanly and technologically possible to make them fall apart. Rapid Eval is the start of the innovative process at X; it is a method that emphasizes rejecting ideas much more than affirming them. That is why it seemed to me that X—which is what those who work there usually call it—sometimes resembled a cult of failure. As Rich DeVaul, the head of Rapid Eval, says: "Why put off failing until tomorrow or next week if you can fail now?" Over dinner, Teller tells me he sometimes gives a hug to people who admit mistakes or defeat in group meetings.
X does not employ your typical Silicon Valley types. Google already has a large lab division, Google Research, that is devoted mainly to computer science and Internet technologies. The distinction is sometimes framed this way: Google Research is mostly bits; Google X is mostly atoms. In other words, X is tasked with making actual objects that interact with the physical world, which to a certain extent gives logical coherence to the four main projects that have so far emerged from X: driverless cars, Google Glass, high-altitude Wi-Fi balloons, and glucose-monitoring contact lenses. Mostly, X seeks out people who want to build stuff, and who won't get easily daunted. Inside the lab, now more than 250 employees strong, I met an idiosyncratic troupe of former park rangers, sculptors, philosophers, and machinists; one X scientist has won two Academy Awards for special effects. Teller himself has written a novel, worked in finance, and earned a PhD in artificial intelligence. One recent hire spent five years of his evenings and weekends building a helicopter in his garage. It actually works, and he flew it regularly, which seems insane to me. But his technology skills alone did not get him the job. The helicopter did. "The classic definition of an expert is someone who knows more and more about less and less until they know everything about nothing," says DeVaul. "And people like that can be extremely useful in a very focused way. But these are really not X people. What we want, in a sense, are people who know less and less about more and more."
If there's a master plan behind X, it's that a frictional arrangement of ragtag intellects is the best hope for creating products that can solve the world's most intractable issues. Yet Google X, as Teller describes it, is an experiment in itself—an effort to reconfigure the process by which a corporate lab functions, in this case by taking incredible risks across a wide variety of technological domains, and by not hesitating to stray far from its parent company's business. We don't yet know if this will prove to be genius or folly. There's actually no historical model, no precedent, for what these people are doing.
But in some ways that makes sense. Google finds itself at a juncture in history that has not come before, and may not come again. The company is almost unimaginably rich and stocked with talent; it is hitting its peak of influence at a moment when networks and computing power and artificial intelligence are coalescing in what many technologists describe as (to borrow the Valley's most popular meme) "the second machine age." In addition, it is trying hard to develop another huge core business to augment its massive search division. So why not do it through X? To Teller, this failure-loving lab has simply stepped into the breach. Small companies don't feel they have the resources to take moonshots. Big companies think it'll rattle shareholders. Government leaders believe there's not enough money, or that Congress will characterize a misstep or failure as a scandal. These days, when it comes to Hail Mary innovation, "Everyone thinks it's somebody else's job," Teller says.
It is worth noting that X's moonshots are not as purely altruistic as Google likes to make them sound. While self-driving cars will almost certainly save lives, for instance, they will also free up drivers to do web searches and use Gmail. Wi-Fi balloons could result in a billion more Google users. Still, it's hard not to appreciate that these ideas, along with others coming from X, are breathtakingly idealistic. When I ask Teller why Google has chosen to invest in X rather than something that might appeal more to Wall Street, he dismisses the premise. Then he cracks a smile. "That's a false choice," he says. "Why do we have to pick?"
Google X is situated at the edge of the Google campus, housed mostly in a couple of three-story red-brick buildings. The lab has no sign in front, just as it has no official website ("What would we put on the website, anyway?" Teller asks). The main building's entrance leads into a small, self-serve coffee bar. The aesthetic is modern, austere, industrial. To the left is a cavernous room with dozens of cubicles and several conference rooms; to the right is a bike rack and a lunchroom with a stern warning posted that only X employees are allowed. Otherwise, there's little indication you're in a supersecret lab. Most of the collaborative workshops are downstairs, in high-ceilinged rooms with whimsical names such as "Castle Grayskull," and are cluttered with electronic paraphernalia and Xers bent over laptops.
The origins of X date to around 2009, when Brin and Google cofounder Larry Page conceived of a position called Director of Other; this person would oversee ideas far from Google's core search business. This notion evolved into X around 2010, thanks to Google engineer Sebastian Thrun's effort, backed by Brin and Page, to build a driverless car. The X lab grew up around that endeavor, with Thrun in charge. Thrun chose Teller as one of his codirectors, but when Thrun was drawn deeper into developing the car technology (and later into his online educational startup, Udacity), he stepped back from overseeing other X projects. That's when Teller assumed day-to-day responsibilities.
There are differing explanations for what the X actually stands for. At first it was simply a placeholder for a better name, but these days it usually denotes the search for solutions that are better by a factor of 10. Some of the Xers I met, however, think of the X as representing an organization willing to build technologies that are 10 years away from making a large impact.
This in itself is unusual. Once upon a time, corporate labs invested a chunk of their R&D budget in risky, long-term projects, but an increasing focus on quarterly earnings, and the realization that it can be exceedingly hard to recoup an investment in far-off research, ended almost all such efforts. These days, it's considered more sensible for a company to fund short-term research—or if it wants to think far into the future, to either buy rights to an embryonic idea that arises from university research or a government lab, or to swallow up an innovative startup. Teller and Brin are not averse to doing this; for example, the wind-energy company Makani was recently bought by Google and folded into X. But Google and X have often rejected the conventional business wisdom in favor of hatching their own wild-eyed research schemes, and then waiting patiently for them to mature. Recently, when Page was challenged on an earnings call about the sums he was pouring into R&D, he made no effort to excuse it. "My struggle in general is to get people to spend money on long-term R&D," he said, noting that the amounts he was investing were modest in light of Google's profits. Then he chided the financial community: Shouldn't they be asking him to make more big, risky, long-term investments, not fewer?
Generally speaking, there are three criteria that X projects share. All must address a problem that affects millions—or better yet, billions—of people. All must utilize a radical solution that has at least a component that resembles science fiction. And all must tap technologies that are now (or very nearly) obtainable. But to DeVaul, the head of Rapid Eval, there's another, more unifying principle that connects the three criteria: No idea should be incremental. This sounds terribly clichéd, DeVaul admits; the Silicon Valley refrain of "taking huge risks" is getting hackneyed and hollow. But the rejection of incrementalism, he says, is not because he and his colleagues believe it's pointless for ideological reasons. They believe it for practical reasons. "It's so hard to do almost anything in this world," he says. "Getting out of bed in the morning can be hard for me. But attacking a problem that is twice as big or 10 times as big is not twice or 10 times as hard."
DeVaul insists that it's often just as easy, or easier, to make inroads on the biggest problems "than to try to optimize the next 5% or 2% out of some process." Think about cars, he tells me. If you want to design a car that gets 80 mpg, it requires a lot of work, yet it really doesn't address the fundamental problem of global fuel resources and emissions. But if you want to design a car that gets 500 mpg, which actually does attack the problem, you are by necessity freed from convention, since you can't possibly improve an existing automotive design by such a degree. Instead you start over, reexamining what a car really is. You think of different kinds of motors and fuels, or of space-age materials of such gossamer weight and iron durability that they alter the physics of transportation. Or you dump the idea of cars altogether in favor of a substitute. And then maybe, just maybe, you come up with something worthy of X.
DeVaul is leaning back on a chair in a big ground-floor conference room at X. He's brought me here to demonstrate how the Rapid Eval team discusses ideas. We're joined around an oblong wood table by two of his colleagues, Dan Piponi and Mitch Heinrich. The men are a study in intellectual contrasts. Piponi, 47, is soft-spoken, laconic, British—a mathematician and theoretical physicist and the winner of those Oscars. Even among the bright minds at Google X, he's regarded as freakishly smart. Heinrich, the lab's young design guru, gives off an affable art-school vibe. On his own initiative, he's built what's known as the design kitchen, a large fabrication shop that's stocked with 3-D printers, table saws, and sophisticated lathes in a building adjacent to the primary X lab. He brings a plastic tub stuffed with old eyeglass frames to the Rapid Eval session. "These were some early prototypes for Glass," he explains, randomly pulling out some circuit boards and a few terrifically ugly designs. They weren't intended for the market, he says, but to show his colleagues that what they were conceptualizing could indeed be built.
DeVaul, 43, completes the trio. He has a PhD from MIT and worked at Apple for several years before coming to Google. It is difficult to figure out precisely what he studied in college—after 10 minutes of explaining, it sounds like some mashup of design, physics, anthropology, and machine learning. As such, he can talk a blue streak on a dazzling range of topics: crime, communications, computers, material science, robotics. It was DeVaul, in fact, who came up with the idea for Project Loon, as those Wi-Fi balloons are officially known. He tried desperately to make it fail on technological grounds but found he could not, so he agreed to run the project for about a year before returning to Rapid Eval.
In some respects, watching his group in action is like watching an improv team warm up—ideas are bounced about quickly, analytically, kinetically, in an effort to make them stick or lead toward something better. Most Rapid Eval sessions involve about half a dozen people, including DeVaul, Piponi, and Heinrich (and sometimes Teller); they meet for lunch once a week to discuss suggestions that have bubbled up from within X or have filtered in from outside—from their parent company, say, or somebody's acquaintance in academia. Later in the week, one or two of the best suggestions are brought up again more formally for further consideration. Mostly the team looks at the scale of the issue, the impact of the proposed fix, and the technological risks. Will it really solve the problem? Can the thing actually be built? Then they consider the social risks. If we can build it, will it—can it—actually be used?
There's a reason they factor these questions into their early calculus. When you're explicitly trying to imagine products that have no real counterparts in our culture, the obstacles have to be imagined, too. With driverless cars, for instance, there remain unresolved complexities of state laws, infrastructure, and insurance; for Google Glass, there are huge ongoing privacy issues. But if the team believes these kinds of hurdles are surmountable and is still sufficiently curious about a technology by the end of the discussion, they'll ask Heinrich or Piponi to build a crude prototype, ideally in a few days. Once they're satisfied that it can work, they move toward getting the brass to officially commission the project. They will not say how often this has happened, except that it's exceedingly rare. "It's a really high bar to say, 'This is going to be a new Google X project,' " says DeVaul. And that doesn't mean it won't be killed as it evolves. It's a much higher bar to actually launch a Google X project, he points out. "Sometimes the problems at Google X are very easy to frame, such as two-thirds of the world does not have reliable, affordable Internet access." That's what led him to Project Loon. "But some problems are easier to see in the rearview mirror. Imagine how hard it would be to explain to your pre-smartphone self how much this is going to change your life." DeVaul says this is the type of thinking that led to Google Glass. "It's a matter of looking back from the future, where everyone walks around with smart glasses and no one leaves their house without them. And then it becomes obvious: 'Well, of course I want to be connected to information, but in a way that's minimally invasive, and minimally imposes on my attention.'"
He makes it sound quite reasonable. But this is also the point in the conversation when we start talking, quite seriously, about hoverboards and space elevators.
DeVaul is an avid skateboarder, and building a hoverboard is something that he has long imagined. "I just wanted one," he tells me, shrugging. When he brought it up for discussion last year—"If there's a completely crazy, lame idea, then it's probably coming from me," he says—the group actually discerned some practical applications. In industrial settings, moving heavy things on a frictionless platform could be not only valuable but transformative. "Imagine a giant fulfillment center like Amazon's, where all the pallets can levitate and move around," DeVaul says. "Or what about a lab where all the heavy equipment would come to me?"
"Dan, show him the hoverboard you built," says Heinrich.
"Right," says Piponi, sitting up and clearing his throat. In front of him is a small, shiny rectangle, about the size of a hardcover book. On the surface is a tight configuration of circular magnets. "So the first question here relates to the physics," Piponi says. "Can you actually have an object hovering about? And so people try really hard with magnets—to find some arrangement that keeps something hovering." This is the logic behind the superfast magnetic-levitation trains now used in China and Japan. But these "mag-lev" systems have a stabilizing structure that keeps trains in place as they hover and move forward in only one direction. That couldn't quite translate into an open floor plan of magnets that keep a hoverboard steadily aloft and free to move in any direction. One problem, as Piponi explains, is that magnets tend to keep shifting polarities, so your hoverboard would constantly flip over as you floated around moving from a state of repulsion to attraction with the magnets. Any skateboarder could tell you what that means: Your hoverboard would suck.
But that's exactly the sort of problem X is designed to attack. "There are loopholes in this theorem that you have to find," Piponi says. "There are materials that are kind of weird, that don't behave like magnets normally do." Piponi discovered that a very thin slice of a certain type of graphite would actually work well on a small bed of magnets. So he built one for the Rapid Eval team. He pushes his small hoverboard across the table to me, and I try it. The graphite slice, not much larger than a quarter, floats slightly above the magnets, gliding in any direction with the most ethereal push. When DeVaul first saw this, he tells me, he was astounded.
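The arithmetic behind that floating chip is easy to sketch. As a back-of-envelope check (the material constants below are typical published values for pyrolytic graphite, not figures from X), a diamagnet levitates when the magnetic force per unit volume balances its weight:

```python
# Rough levitation condition for the "loophole" material Piponi used.
# Assumed constants (typical published values, not from Google X):
#   pyrolytic graphite susceptibility |chi| ~ 4.5e-4 (dimensionless, SI)
#   density rho ~ 2200 kg/m^3
# A diamagnet floats when the magnetic force per unit volume,
# (|chi| / mu0) * B * dB/dz, matches its weight, rho * g.
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
G = 9.81                   # gravitational acceleration, m/s^2
CHI = 4.5e-4               # |susceptibility| of pyrolytic graphite (assumed)
RHO = 2200.0               # density of pyrolytic graphite, kg/m^3 (assumed)

def required_field_gradient_product():
    """Minimum B * dB/dz (in T^2/m) needed to float the graphite."""
    return MU0 * RHO * G / CHI

req = required_field_gradient_product()
print(f"Need B * dB/dz >= {req:.0f} T^2/m")

# For scale: near a small NdFeB magnet array, B ~ 0.5 T falling off over
# ~5 mm gives B * dB/dz ~ 0.5 * 100 = 50 T^2/m, the same ballpark --
# which is why a quarter-size chip floats but a rider-size board cannot
# rely on the field near the magnet surface.
```

The required field-gradient product comes out around 60 T²/m, which small neodymium magnet arrays can just about supply within millimeters of their surface, and which nothing supplies at rider scale.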
Yet by that point, Piponi had already moved on. As he did the calculations involved in expanding the small hoverboard up to a usable size, the physics suggested that at a certain point the board's own weight would overwhelm the magnetic cushion holding it aloft. Other technologies could conceivably help (you might try using special materials at supercool temperatures), but the team decided that would create huge additional costs and complications—costs that would not be justified by the project's relatively modest social and economic impact. So the Google X hoverboard was shelved. "When we let it go, it's a positive thing," DeVaul says. "We're saying, 'This is great: Now we get to work on other things.'"
Like space elevators, something X was widely rumored to be working on but had never confirmed until now. "You know what a space elevator is, right?" DeVaul asks. He ticks off the essential facts—a cable anchored to Earth and attached to a satellite parked in geostationary orbit, tens of thousands of miles up. To DeVaul, it would no doubt satisfy the X criteria of something straight out of sci-fi. And it would presumably be transformative by reducing space travel to a fraction of its present cost: Transport ships would clip on to the cable and cruise up to a space station. One could go up while another was heading down. "It would be a massive capital investment," DeVaul says, but after that "it could take you from ground to orbit with a net of basically zero energy. It drives down the space-access costs, operationally, to being incredibly low."
Not surprisingly, the team encountered a stumbling block. If scaling problems are what brought hoverboards down to earth, materials-science issues crashed the space elevator. The team knew the cable would have to be exceptionally strong—"at least a hundred times stronger than the strongest steel that we have," by Piponi's calculations. He found one material that could do this: carbon nanotubes. But no one has manufactured a perfectly formed carbon nanotube strand longer than a meter. And so elevators "were put in a deep freeze," as Heinrich says, and the team decided to keep tabs on any advances in the carbon nanotube field.
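Piponi's "hundred times stronger than steel" figure is easy to sanity-check. The sketch below uses standard Earth constants; the steel density and strength are assumed typical values, not his numbers. It computes the ground-level stress in an untapered cable hanging from geostationary orbit, where each element of cable must carry the weight (minus the centrifugal relief) of everything above it:

```python
# Sanity check on the space-elevator cable requirement. For an untapered
# cable of density rho stretching from the ground to geostationary orbit,
# the stress at the bottom is
#     sigma = rho * integral_{R_E}^{R_GEO} (GM/r^2 - w^2 * r) dr
# i.e., gravity minus the centrifugal term, integrated over the cable.
G_M = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_E = 6.371e6       # Earth's radius, m
R_GEO = 4.2164e7    # geostationary orbital radius, m
OMEGA = 7.292e-5    # Earth's rotation rate, rad/s

def cable_stress(rho):
    """Ground-level stress (Pa) in an untapered cable of density rho."""
    gravity_term = G_M * (1.0 / R_E - 1.0 / R_GEO)       # J/kg
    spin_term = OMEGA ** 2 * (R_GEO ** 2 - R_E ** 2) / 2  # J/kg
    return rho * (gravity_term - spin_term)

# Assumed typical figures for high-strength steel (not Piponi's numbers):
steel_rho, steel_strength = 7900.0, 2e9   # kg/m^3, ~2 GPa breaking stress

sigma = cable_stress(steel_rho)
print(f"Stress in a steel cable: {sigma / 1e9:.0f} GPa, "
      f"about {sigma / steel_strength:.0f}x its breaking strength")
```

The stress comes out near 400 GPa, roughly 200 times the breaking strength of high-end steel, consistent with Piponi's "at least a hundred times" estimate. Real designs taper the cable toward geostationary altitude, which eases but does not escape the requirement.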
The larger lesson here is that any Google X idea that hinges on some kind of new development in materials science cannot proceed. This is not the case with electronics—X could go forward with a device that depends upon near-term improvements in computing capability, because Moore's law predicts an exponential increase in computing power. That is why DeVaul's team is confident that Google Glass will get less awkward with each passing year. But there is no way to accurately predict when a new material or manufacturing process will be invented. It could happen next year, or it could take 100 years.
The conversation eventually drifts to how the team had at one point debated the pros and cons of taking on teleportation. Yes, like in Star Trek. As with that show's Transporter, the molecules of a person or thing could theoretically be "beamed" across a physical distance with the help of some kind of scanning technology and a teleportation device. None of which really exists, of course. Piponi, after some study, concluded that teleportation violates several laws of physics. But out of those discussions came a number of insights—too complicated to explain here—into encrypted communications that would be resistant to eavesdropping, a matter of great interest to Google (especially in light of recent NSA spying revelations). So bad ideas lead to good ideas, too. "I like to look at these problems as ladders," DeVaul says.
At the moment, the Rapid Eval team is watching the work of certain academics who are attempting to create superstrong, ultralight materials.
One Caltech professor, Julia Greer, is working on something called "nanotrusses" that DeVaul is particularly enthusiastic about. "It would completely change how we build buildings," he says. "Because if I have something that's insanely strong and incredibly compact, maybe I could prefabricate an entire building; it fits into a little box, I take it to the construction site, and it unfolds like origami and becomes a building that is stronger than anything we have right now and holds a volume as big as this building." There's a moment of silence in the room.
"I know that sounds completely insane," he adds. But I'm not sure it sounds crazy to him.
At one point, DeVaul asks if I have any ideas of my own for Rapid Eval consideration. I had been warned in advance that he might ask this, and I came prepared with a suggestion: a "smart bullet" that could protect potential shooting victims and reduce gun violence, both accidental and intentional. You have self-driving cars that avoid harm, I say. Why not self-driving ballistics? DeVaul doesn't say it's the stupidest thing he's ever heard, which is a relief. What ensues is a conversation that feels like a rapid ascent up that imaginary ladder. We quickly debate the pros and cons of making guns intelligent (that technology already exists to a certain degree) versus making bullets intelligent (likely much more difficult). We move from a specific discussion of "self-pulverizing" bullets with tiny, embedded hypodermic needles that deliver stun-drugs (DeVaul's idea) to potentially using sensors and the force of gravity to bring a bullet to the ground before it can strike the wrong target (Heinrich's). Then comes the notion of separating the bullet's striker from the explosive charge with a remote disabling electronic switch (Piponi). The tenor soon changes, though. We start talking about smart holsters for police officers, and then intelligent gun sights—something that firearms owners might actually want to buy. They think that idea might even be worth a rapid prototype. But we also debate the political and marketplace viability of bullet technology—who would purchase it, who would object to it, what kind of impact it might have. Eventually it becomes clear that in many ways, appearances often to the contrary, Google X tries hard to remain on the practical side of crazy.
Later in the day, I take a walk around the Google campus with Obi Felten, 41, who is the team member who tries to keep the group grounded. In fact, DeVaul refers to her as "the normal person" in Rapid Eval meetings, someone who can bring everyone back to earth by asking simple questions like, Is it legal? Will anyone buy this? Will anyone like this? Felten is not an engineer; she worked in marketing for Google in Europe before coming to X. "My actual title now," she tells me, "is Head of Getting Moonshots Ready for Contact With the Real World." One thing Felten struggles with is that there's no real template for how a company should bring these kinds of radical technologies to market. ("If you find a model," she says, "let me know.") Fortunately for X, not everything has to evolve into a huge source of revenue. "The portfolio has to make money," Felten explains, but not necessarily each product. "Some of these will be better businesses than others, if you want to measure in terms of dollars. Others might make a huge impact on the world, but it's not a massive market."
Later this year, X hopes to announce a top-secret new project that is likely to fall into that latter category. What will it be? There are no discernible clues. In my own conversations, I could only glean certain hints—that they're extremely curious about transportation and clean energy, and that they are especially serious about creating better medical diagnostics, rather than medical treatments, because they see a far greater impact. At one point, I walked through a Google X user-experience lab, where psychologists gain insights from volunteers trying possible forthcoming technologies. A large object, about, oh, the size of the Maltese Falcon, had been wrapped in black plastic. Go figure.
Meanwhile, consider that X has an overwhelming task on its hands already. The organization must move all of its unveiled projects at least one square ahead this year. Project Loon—which has not finalized a business plan yet—has apparently drawn interest from most of the telecom companies in the world, but is still not technically ready for scaling up. (It was unveiled in part because the patents were about to be made public, and Google preferred to disclose it on its own terms.) Google Glass, the X product closest to commercialization, and self-driving cars, which are much farther away, have both sparked extraordinary public interest, yet it is impossible to say if or when they'll succeed as businesses, or whether they'll have that 10-times impact within a 10-year period.
That evening at dinner with Teller, I bring up all of these issues. To me, the fundamental challenge of fashioning extreme solutions to very big problems is that society tends to move incrementally, even as many fields of technology seem to advance exponentially. An innovation that saves us time or money or improves our health might always have a fighting chance at success. But with Glass, we see a product that seems to alter not only our safety and efficiency—as with self-driving cars—but our humanity. This seems an even bigger obstacle than some of the more practical issues that the lab grapples with, but the Xers don't seem overly concerned. Teller, in fact, contends that Glass could make us more human. He thinks it solves a huge problem—getting those rectangles out of our pockets and making technology more usable, more available, less obstructive. But isn't it possible that Glass is the wrong answer to the right problem? "Of course," Teller says. "But we're not done. And it's possible that we missed. I mean, I know we missed in some ways."
The part of the X process that colleagues like Obi Felten think about, he says, is also meant to be iterative. "It's to say to the world: What do you think? How can we make this better? It's part of us being open to being wrong, because it's way easier, and way cheaper, and way more fun to find out now that we missed than to find out years from now, with an incredible amount of additional expense and emotional investment." Teller says he calls X's ideas "moonshots" for a reason. "If one of Google X's projects were a home run, became everything we wanted, I would be really happy," he says. "I would be overjoyed if it happened with two."
At one point, I mention my own moonshot to Teller, that smart bullet that DeVaul's team had talked through earlier in the day. It wasn't a disaster, I say, but it wasn't much of a success, either. "Well, that's entirely appropriate," Teller says, sympathetically. "Most ideas don't work out. Almost all ideas don't work out. So it's okay if yours didn't work out." He thinks for a moment. "How about instead of a bullet it delivers a deadly toxin that could be reversed in a week?" It wouldn't stop bad guys immediately, he says, but once they were shot, they would have to go turn themselves in to get the antidote. He mulls it over for a moment more. "I don't know," he says, already seeing the obstacles ahead. "I'm just brainstorming."