
Douglas Rushkoff: Prove you’re not a robot

In “Team Human,” media theorist and veteran cyberpunk Douglas Rushkoff reminds us why we wanted progress in the first place.

[Image: monsitj/iStock]

Among the few things that made me want to be a technology journalist was reading Douglas Rushkoff’s 2010 book Program or Be Programmed, right after I finished college. In it, Rushkoff argued that humans need to understand how technology works or they risk being manipulated by it. In the age of algorithmic recommendations determining our every desire and voters being targeted and manipulated by the likes of Cambridge Analytica, Rushkoff’s warning from nearly a decade ago seems like an understatement.

In his new book, Team Human, Rushkoff argues that our focus on technological development means we’ve lost sight of what was supposed to be the whole point of all this tech in the first place: making life better for humans–or maybe, just maybe, making humans a better version of themselves. The book reads as a manifesto, spanning the entire development of human civilization from the creation of language to the invention of smartphones. But for a book about technology written by a renowned media theorist, Team Human surprisingly doesn’t say much about tech itself. Instead, it focuses on the ways that humans can reconnect and reclaim our collective humanity in the midst of a digital ecosystem that Rushkoff argues is designed to exploit and divide us rather than enhance and unite us.

[Photo: Rebecca Ashley]

Rushkoff, who is a professor of media theory at CUNY/Queens College and host of the “Team Human” podcast, sat down with Fast Company to talk about his book, his shifting perspective on the media ecosystem, and how tech tycoons might avoid further tearing apart the social fabric. That Q&A is below, edited for clarity and length.

Fast Company: Why did you write Team Human?

Douglas Rushkoff: I guess the main reason was the way people are talking about humans these days. It’s as if humans are the problem and technology is the solution.

The real impetus was when I was on a panel with a transhumanist who was talking about how human beings have to pass the torch to technology and these are our successors and we need to get over ourselves and all. Then when I made my big long case for humanity, he said, “Oh Rushkoff, you’re only saying that ’cause you’re a human,” as if it was this act of hubris. And in the moment, I was like, “Oh, no, no, no. It’s not that I’m a human” and tried to defend myself.

And then I realized, so what? Is it hubris when I declare, “No, I’m on team human?” I think that that’s okay. We’ve got to keep humans around. I think we’re cool. But then I wanted to go back and look at this technological renaissance or whatever that we’re in, which was supposed to be about the unleashing of a new collective human flourishing. How did it become about atomization and isolation and human hatred? And how do we retrieve the human?

FC: So what was your conclusion?

DR: I mean, the real thesis of this book is that human beings alone are weak and that together–united, connected–we’re happier and stronger and more real. Like, humanity is this team sport, and we have all these inventions that originally come up and are sold to us as new ways to connect to other people. But then they become ways of isolating and atomizing us, Facebook being maybe the latest, clearest example.

It’s certainly not an anti-tech book. It’s a pro-tech book. I’m just arguing we have to retrieve the human values and embed them in the technological infrastructure rather than forget them.

FC: It’s interesting to hear you say it’s a “pro-tech” book, because it sounds like you’ve become a little less optimistic about technology and techno-solutionism. You used to talk more about the prospects of liberatory technology, from open-source platforms to even blockchain solutions.

DR: Well, each one of these technologies or media seems so promising at first. That’s why I go through even the development of language and texts and radio and TV. Each one seems so promising for humanity, but then it’s used to find and hack the exploits in other humans. And there was some balance in that, I guess, until capitalism. And then capitalism took on such a life of its own and found, in technology, a new body for itself. But I don’t blame the tech so much as how we’re programming it. Most algorithms are the enemy at this point, but it’s not their fault.

FC: You think it’s more a function of capitalism than technology itself?

DR: Yeah. And capitalism then goes back to something else probably. It’s this unresolved set of fears we have. In some sense, the book is a last-ditch attempt to save us from ourselves and our inventions, you know, before we reach some weird point of no return that we’re just programmed into submission by algorithms that evolve so much faster than the human psyche can develop defense mechanisms. You know, when your computer is crying because you’re not doing something, it’s drawing on some instinct that it doesn’t know you have, but it’s happened upon by chance.

FC: What were those early digital pet things?

DR: Tamagotchis?

FC: Yeah. It’s like we’re living with much more advanced Tamagotchis. Persuasive design has evolved so much recently.

DR: Yup. And I meet the people behind that all the time. I’ll go to a boot camp or something and meet the guy that developed this terrible Las Vegas slot machine or some new social media app, and then he’s like, “Well, now I’m trying to undo it. Now I’m donating to Rudolf Steiner schools,” or whatever.

FC: Yeah. What do you make of those people?

DR: I mean, I’m conflicted. The problem is–and I feel bad ’cause they’re on the good side and they’re humans and they’re trying–but when I hear a construction of humane technology, I think of it like cage-free chickens or something. It’s humane. “Let’s be as humane to people as we can while we still do this stuff to them. We’re just gonna do it more humanely.” It’s like, humane? You don’t talk about “humane” unless you’re hurting somebody. You know, “What’s a more humane solution?” It’s like, no, no, no. It’s a human. A human solution would be better than a humane solution. So the problem is that they’re looking for a way to sell “how we could be less cruel to users” to the CEO of a major social media company. But it’s simple. Don’t abuse users.

Protect yourself or be programmed

FC: A few years ago you started the Team Human podcast. What made you want to start a podcast?

DR: I wanted to use the platform I’ve developed over the past 25 years to promote other people’s work and ideas, rather than just my own. I wanted to share the kinds of conversations I get to have. I wanted to expose people to as many models for retrieving and reclaiming humanity, prosperity, and dignity as I could. I wanted to return to radio, to audio, which is the most physical and intimate of the media. Team Human, the podcast, is a way of finding and connecting the others.

Rushkoff and Mark Pesce taping an episode of the Team Human podcast at New York’s Civic Hall in December. [Photo: Erin Locascio]
FC: You have a whole chapter in the book on spirituality, which I think for people who know you from Program or Be Programmed or Throwing Rocks at the Google Bus might be a little surprising. I mean, you talk a bit about spirituality in Present Shock, but it’s not so much a focus of the book. But you did write those comics, Testament . . .

DR: —which is still one of my favorite things I’ve done. People think it’s confusing and weird, but I find it so simple.

FC: And they show you’ve been thinking about the relationship between technology and religion for a while. How are you thinking about that intersection now, more than a decade later?

DR: In some ways what I’ve been trying to do is compensate for Judaism’s linearity. Judaism was, in part, a product of text: We could write something down, then you can have a contract; you can write down your history, and you can project into the future. Now you had a before and an after. The problem with monotheism, until then, was if there’s one God in charge of everything, then why is the world so screwed up? But once you have linear time you could say, “Oh, well, it’s because He’s making it better and eventually it’s gonna be fine.”

But the problem with that is it ended up promoting a very future-based, end-justifies-the-means cause-and-effect understanding of culture, of society, of politics, of science.

FC: It starts emphasizing the idea of progress over everything else.

DR: And a certain kind of progress. You lose the circular. Whereas if you look back at the myth of Eternal Return in all of the pre-Judaic religions, it’s all very circular, down to the belief in reincarnation. And they also had the idea that there’s no truly original human act, that all you can do is reenact something the gods have done. So if you stop worrying about anything being original, if everything’s a reenactment and everything comes back, you stop screwing things up so much. You can’t just wreck everything behind you in the name of progress because it’s still there. It’s not the past; it’s still here. You can’t screw over other people because you’re going to see them again in the next lifetime.

FC: So how does that connect to Team Human?

DR: So once you take on a circular understanding, you don’t have to look at inputs and outputs so much as, well: every output is another input. Everything in nature recycles. Nothing in linear technology recycles. The spiritual sensibility was a way of saying that, by leaving behind these very human sensibilities that we had over hundreds of thousands of years, we end up robbing ourselves of one of the great keys to a sustainable future.

FC: I also want to talk about Program or Be Programmed a little bit. I’m not sure, but I suspect that you might have some mixed feelings about how it’s been interpreted in the intervening years…

DR: Yeah, it’s funny. I would’ve thought the problem with Program or Be Programmed is that it was a little binary and adversarial even as a construction. If you’re not doing the programming, somebody’s programming you. So it was setting up this kind of technological arms race between people and their would-be manipulators. You know, if you don’t see what Mark Zuckerberg is doing, he’s gonna just do it with impunity. And I didn’t mean it necessarily that you have to learn to code, although that’s a good thing. I meant it more that you have to have an almost liberal arts, critical thinking understanding of code. And the book ended up becoming, for a while, the sort of rallying cry of the code literacy phenomenon, the Code for America movement, all the learn-to-code schools. And that stuff, for the most part, is bunk.

FC: Well, and those jobs almost seem like they’re becoming akin to what assembly-line jobs of yesteryear were for the previous generations.

DR: Right, absolutely. And some of the code literacy movement people missed the broader picture. Coding is not everything. I just saw a great piece on Medium, “You can’t build an iPhone with Python.” I mean, what I say in there, in Program or Be Programmed, is in a couple of weeks you can kinda learn enough about code to understand what are the limitations of the code and what are the choices of the people who are doing this thing to me. And that’s sort of what I wanted, was this sense of the user as the creator rather than as just this passive thing. Because I’m from the old days of AUTOEXEC.BAT files and things, where you knew how things worked, different command lines. To use a computer was to program a computer. Even if you couldn’t do it well, you could do it well enough to at least open a program or use the thing and do some key commands.

FC: So I wanted to bring up this interview you did with Wired about Program or Be Programmed almost a decade ago.

DR: You know, I think that’s the first time I showed up in there, because [founder Louis] Rossetto hated me because I was not libertarian. I was on the other side of that. I was on the Mondo 2000 side.

FC: Which makes sense, because in Team Human there’s a sort of retrospective looking back at the early cyberpunk ethos, kind of your equivalent of a non-libertarian “A Declaration of the Independence of Cyberspace.” And in this interview with Wired, they asked you:

“What separates a computer user from a computer programmer?”

And you say,

“I guess the program through which they are communicating and interacting. The computer programmer creates the environment and terms through which the computer user functions. In some cases, the computer user is aware that his or her actions have been completely circumscribed by a programmer (who may, in turn, be working for some other person or corporation to achieve a particular purpose). But in many–even most–cases these days, users are unaware of the programmer and the agendas underlying the functionality that has been afforded him. He thinks this is simply what the computer can do. So the real difference is the programmer understands that the machine can model almost anything. The user only knows how to behave inside that model.”

And then in Team Human–by the way, this is not meant as a gotcha, it’s more like . . .

DR: . . . No, no, no. It’s cool. Like what’s changed? Yeah.

FC: So in Team Human you have this compelling idea of how we’re always one step behind in the media revolutions in terms of seizing power and the elites are always a step ahead of us. And so when you talk about programming, you write:

“With computers came the potential to program things to online networks. The masses gained the ability to write and publish their own blogs and videos. But this capability, writing, was the one enjoyed by the elites in the prior revolution. Now the elites had moved up another level and were controlling the software through which this all happened. Today, people are finally being encouraged to learn to code. But programming is no longer the skill required to rule the media landscape. Developers can produce any app they want, but its operation and distribution is entirely dependent on access to walled gardens, cloud servers, and closed devices under the absolute control of just three or four corporations.”

DR: Right. It went up another level. I mean, a decade ago I was still thinking about computers as computers.

FC: As, like, personal devices?

DR: In a way, yeah, even though they were connected and all. I would still look at Facebook as the equivalent of an online program. In a sense, what I’m saying now is Amazon’s cloud servers are more important than any of the programs or platforms. It went up another notch.

Remember when O’Reilly wrote that thing about Web 2.0? It’s no longer the thing. Now it’s the platform. So you wanna own the Amazon or the eBay or whatever. And now those platforms themselves are subsumed by this bigger, bigger thing. You know, and it has to do with the gates between networks. I mean, on the other hand, it goes down all these levels too. You know, you’re writing a piece of software, but that’s sitting on an operating system. The operating system’s sitting on a machine language. The machine language is sitting on a chip architecture. So it’s like you’ve gotta trust each level down.

FC: And then how many people really understand all that?

DR: Right. But that’s also how civilization works. You always have to trust someone else–that’s how we build knowledge. But the original premise of Program or Be Programmed that’s wrong is that I’m basically saying, “If you don’t know how this thing works, you are gonna be used by it.” And you can’t live your life like that, you know, where everything is manipulating you on one level or another. “Oh, I gotta know how language works because someone’s gonna use their rhetoric against me. I gotta know how the streets are designed to make me walk on one side versus the other. I’ve got to know mall architecture to know when the signage of a store is triggering some impulse in me.”

I mean, my god! There’s a certain point at which you can’t strengthen your resilience by learning all these things. Instead, what you have to do is boost your immune system.

Find the others—and find 10 minutes

FC: Or is it not even the responsibility of the individual to have that sort of critical thinking? I mean, critical faculty is great and all . . .

DR: . . . I’m starting to think no. I mean, critical faculty is important, especially for whatever area you’re gonna professionalize in. But the real answer is to build your resiliency to assaults of all kinds. And you get that through solidarity and connection and confidence. If you’re an atomized individual wandering in the algorithmic landscape, you’re gonna get freaking clobbered.

There’s just no way. It’s like, I tried to play . . . I’m such a ninny. I tried to play Fortnite, right? You’ve heard of it?

FC: Of course. Yeah.

DR: It’s this game that’s sweeping the nation.

FC: You’re basically the dad in that SNL skit right now.

DR: So I was in there, and I’m walking around and walking around. And then somebody calls to me and says that I could be their friend and we can go wandering together. And I go to her and then she kills me. It’s like, what?! She just killed me like that.

FC: You’re like, “I trusted you.”

DR: “I trusted you because you said that we could do this together and it would be more fun.” I would have loved to just go around there with her and learn how to kill. And I would have helped her. I wouldn’t have stabbed her. And if in the end, if it was just me and her, I’d let her kill me, but . . . come on.

But then I realized that regular people, they’re just walking around in the world like I was. They’re just walking into Facebook, walking into a political party. They’re just getting clobbered. It’s one big Nigerian email scam out there, looking for the exploits.

And the saddest part is that what they’re treating as exploits are the most important and evolved human mechanisms for rapport and empathy. And we’re learning not to express those because they’re too dangerous. And if we get trained in the virtual world to be that way, then we start doing that out here, in the real world.


FC: So what are some of the specific solutions or ways out of our current tech-induced human isolation that you see?

DR: To begin with–and I know it sounds simple–people should do whatever they can to actually be with others. For many, it’s a tremendous challenge, both in terms of time and in terms of intimacy. I’m asking people to try to sit with another person for 10 minutes. To begin with, just once each week. Try to find 10 full minutes that you can be with someone else.

The objective would be to not even use a device to do this. You’d be in real space, in the same room, with the other person. And through the entire 10 minutes, you wouldn’t use media. You couldn’t answer a text message or check email or anything for the whole time. Ideally, you should try to make eye contact with the other person. You can talk or something, but try to look in their eyes if it’s not too intolerable. Hold the gaze for longer each time.

Slowly, you’d see if you could do it more than once a week. Get up to once or even twice a day. It restores so many neglected social mechanisms. It allows you to establish rapport. Your mirror neurons will start firing, oxytocin will go through your bloodstream, and your body will start learning the very opposite things about people than our social media are trying to teach us. Social media developers try to make us fear one another, so that we depend on the platforms for contact. Direct experience of other people contradicts that messaging, and is tremendously empowering.

From the other side, if platform developers found a way to sustain profitable businesses–to be satisfied with revenue, rather than selling their companies–they wouldn’t have to resort to these scorched-earth, anti-human practices. If Twitter could have been satisfied with $2 billion a year, they wouldn’t be searching for other, less pro-social ways of doing things. If Zuckerberg had been satisfied just being a billionaire, he wouldn’t have had to destroy the social fabric of his network.

So I encourage people to create business plans that simply let them become small billionaires, rather than giant billionaires. A couple of billion is more than enough.

About the author

Jay is a freelance journalist, formerly a staff writer for Fast Company. He writes about technology, inequality, and the Middle East. He read a lot of Walter Benjamin in college and his favorite sci-fi author is Ursula K
