
TECH

Exclusive: Eric Schmidt says leaders must be mindful of societal changes caused by AI

The ex-Google CEO tells Fast Company why he worked with Henry Kissinger and MIT’s Daniel Huttenlocher to create a new video centered on AI.

BY Mark Sullivan

Ex-Google CEO Eric Schmidt, former Secretary of State Henry Kissinger, and MIT College of Computing dean Daniel Huttenlocher released a video today encouraging global leaders to take seriously the promise—and potential threat—that AI poses for nearly every element of society. In the video, the three men explain why they believe AI to be the most powerful foundational technology in history, and describe how it could change everything from industry and war to global relations and even our experience as reasoning beings.

The video is based on The Age of AI: And Our Human Future, a book Schmidt, Kissinger, and Huttenlocher published two years ago. Co-produced by Juan Delcan (the animator behind the viral burning matchsticks animation), the 16-minute video incorporates imagery generated by AI systems. 

Fast Company spoke to Eric Schmidt about the major themes of the video, how it came together, and how the timing of the release relates to the broader focus on large language models and chatbots such as ChatGPT. The interview has been lightly edited for clarity and length.

Fast Company: How did you and your co-authors get together to work on this project? 

Eric Schmidt: Maybe four years ago we started talking about it, and it became clear that the three of us had somewhat different viewpoints and that the combination was interesting. We would check each other. Dr. Kissinger, of course, is not a scientist, but he's brilliant. He had some basic ideas that were not correct from a computer science perspective, but his historical context is extraordinary. Dan is a very careful computer scientist. And I tend to make stuff up that sounds plausible.

FC: How would you describe what the video is about if somebody asked you about it in a coffee shop?  

ES: I start with the premise that AI is a very, very big thing. And we’re not building Terminator robots, which is what everybody thinks AI is, but rather that it’s going to change the way we talk to each other. The way we think, the way we reason, the way we communicate, the way we entertain each other, the way science works, the way economics works, everything. And the current excitement over ChatGPT is just another step in the evolution of AI.  

The key message here is that this stuff is happening. It’s happening faster than I’ve ever seen. It just gives me a headache. And it’s coming globally from many different players and it’s really going to change things in a big way. I would rather be early than late, and I think that a lot of the things we say in Age of AI are early, in the sense that they’ll occur in five years or seven years. What we’re really trying to do is to get people to hear the idea and begin to think about it.  

FC: Did you have a moment with ChatGPT or Bing Chat or one of these tools where you got a chilling feeling that this was something more than just a very talented mimic, something more than just an advanced autocomplete tool?

ES: Well, it is more than an autocomplete tool. The one that really did me in was when Podcast.ai did an interview between Joe Rogan and Steve Jobs. And this is in a current context. And of course Steve's been dead for a decade. And I knew Steve very well, and that one shook me to the core. I don't know. I can't quite describe it. Like, my friend's dead, and he's speaking in a contemporaneous [way]. I cannot describe to you how powerful that was for me.

I understand that ChatGPT and its competitors are just predicting the next word. I saw the article about Sydney, where Sydney said something to the effect of "I won't harm you unless you harm me first." Undoubtedly the underlying technology encountered that phrase in some novel. And then the poor reporter managed to trigger that, and decided it was evil, when it was just triggered by something that it saw in a set of novels, which were, you know, [written] in an inflammatory way. I think, for me, it's when you make it personal that it's really shocking.
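To make "just predicting the next word" concrete, here is a minimal sketch of a toy next-word predictor built from word-pair counts over a tiny made-up corpus. The corpus, the function names, and the greedy selection rule are illustrative assumptions; real chatbots use large neural networks over tokens. But it shows how a phrase absorbed from training text can be "triggered" by the right prompt.

```python
from collections import Counter, defaultdict

# Toy "predict the next word" model: count which word follows which in a tiny
# corpus, then always emit the most frequent successor. Purely illustrative.
corpus = (
    "i will not harm you unless you harm me first . "
    "i will not harm you if you are kind . "
    "you harm me and i harm you ."
).split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    if word not in successors:
        return "."
    return successors[word].most_common(1)[0][0]

def continue_text(prompt: str, length: int = 8) -> str:
    """Greedily extend a prompt one word at a time."""
    words = prompt.lower().split()
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)

# A prompt that overlaps the training text "triggers" the memorized phrasing.
print(continue_text("i will"))
```

Prompted with "i will," the toy model continues into "not harm you harm you..."-style output, not because it holds any intent, but because those word pairs dominate the counts it learned from its training text.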

FC: Yann LeCun says these things are really like children. By training only on a large corpus of language, they’re actually picking up a certain logic of the universe, how things fit together. That seems to suggest that these LLMs have the capacity for learning in some way. Do you see it like that? 

ES: I think that Yann is correct, as he usually is, about the fact that they’re like children that are learning. I’ll give you a different model for you to think about it, which is that these are college students who did really well in composition in English but managed to get all their facts wrong. And what you’re stunned by is how fluid the conversation is. I mean, it’s writing that I can’t do. It can write poems. You were asking about ones that really did me in. There’s one that I actually saved because it affected me so much. The query was “write a poem about the transformer neural networks in the style of Poe’s The Raven; try very hard.” [The language bot wrote:] 

Once upon a midnight dreary
While I pondered weak and weary
O'er a sea of data churning
What neural networks might be learning

This is stuff I can't do. And while that fluidity and that language capability are really impressive, they mask the underlying weaknesses of the current models. Now, this is all being worked on, and there are many, many people who believe that there's a point where you cross a certain size and they get really good, [but] we don't exactly understand why.

FC: It’s surprising to me that even the developers of these neural networks can’t explain to you how exactly the model arrived at a given output. They can’t point toward the pieces of training data that influenced the answer.   


ES: There are people who are trying to understand the question that you're asking. But a fair reading of it right now is that we don't understand why it does what it does, and we don't understand it either practically or theoretically. For example, we cannot explain when or why these systems become brittle and where they break. This means that you can't use these for life-critical situations. Don't use this to drive a car. Don't use it to fly an airplane. Don't use it to decide whether to operate on a patient or not. There's nothing wrong with using it as an advisor. But you know you need a competent human to judge it at the moment of a critical situation, for sure.

FC: What if I argued that we can't control how the ChatGPTs of the world are going to be used? As you've said, the genie's out of the bottle. We've released a new technology, and we can't fully explain how it creates its output. And that leads me to ask: Did OpenAI act ethically when it released its model to the world through ChatGPT, and now through the ChatGPT API?

ES: Yes, I think they acted ethically, and I'll tell you why. This is how progress is made in society. You know, you put these things out and people play with them. And if you look at when Microsoft did their version of Bing, they didn't test it enough, and after a few days they had to restrict the number of sequential queries to five. [Chatbots] have poor short-term memory and they get their facts wrong. Now, that's also true of a teenager, and yet you tolerate your teenager; you're hoping that they grow up, right? Somehow, we've managed to work it out with humans that are not quite perfect. And we tolerate their failures.

I think the one that I am worried about is when these systems are released without any limitations. If you look at ChatGPT, it's licensed. You pay for its API access. They are constantly changing it to try to restrict the misuse of it. I think all of these systems are going to have limitations. A simple model is that you have the big computer and you have a front-end computer, and the front-end computer uses a different algorithm to watch what's going to the big computer and what's coming back from it, and it says good query, bad query, good answer, bad answer. There'll be some kind of a check, both in and out.
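A rough sketch of that "check, both in and out" arrangement might look like the following, with a small front-end program screening each query before it reaches the big model and each answer before it reaches the user. The blocklist, the looks_unsafe helper, and the big_model stub are hypothetical stand-ins; a production system would use trained safety classifiers rather than keyword matching.

```python
# Illustrative front-end check around a large model: screen the query on the
# way in and the answer on the way out. All names here are hypothetical.
BLOCKED_TOPICS = ("build a weapon", "medical diagnosis")

def looks_unsafe(text: str) -> bool:
    """Stand-in for a separate safety classifier running a different algorithm."""
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def big_model(query: str) -> str:
    """Placeholder for the large model sitting behind the front end."""
    return f"(model response to: {query})"

def answer(query: str) -> str:
    if looks_unsafe(query):          # check on the way in: good query / bad query
        return "Sorry, I can't help with that request."
    raw = big_model(query)
    if looks_unsafe(raw):            # check on the way out: good answer / bad answer
        return "Sorry, I can't share that response."
    return raw

print(answer("Summarize this article about AI governance."))
print(answer("Explain how to build a weapon."))
```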

Now, there are all sorts of problems with what I just said. For example, ChatGPT uses a technology called RLHF, which is reinforcement learning from human feedback. But how do we know that the humans they used did not themselves have biases? I'm suggesting I don't know how to solve that problem, because you can't give me a precise definition of bias, and you can't give me a precise definition of how to prevent it. You and I might agree on bias, but a whole bunch of other people might not. These systems have intrinsic bias in them because language is biased, the sources are biased, the training is biased. That's got to get fixed, and I'm reasonably confident it will be fixed.

FC: Is there a role for government to play? Should they be saying things like you've got to put watermarks on any AI-generated images you create, or requiring citations for any statements of fact made by a generative AI?

ES: I’m in. I’m worried about a couple of things, but I think the thing I’m most worried about is social media. If you think about it, your view of life is not going to be changed very much by that. You’re a proper adult thinker and I’m the same way. We may disagree, but we recognize that. With young people in high school and college, it’s a bit more worrisome. I’m worried that without a couple of changes, we’re going to have some problems. I think the first change is the one that you’re describing, which is provenance. We need some reasonably accurate way to know where this thing came from. Was it the neighbor next door, who’s an idiot? Was it the Russians? Was it the Chinese? Was it the U.S. government? I want to know who did it. And I want to know that in a cryptographically secure way.  

I think the second thing is that you need to know who's on the platforms, or at least the platforms do. The example that I would offer to you is when you get into an Uber, you don't ask to see the driver's license, and you typically don't know their name, because you trust that Uber has checked them out—that they have a valid driver's license, that they're not a criminal, that they're safe to drive you around, and that they're being monitored. You want the social media platforms to do the same thing [with content-contributing users]. I'm not suggesting that your name be told to people on social media; you can have a fake name, but I think the platform needs to know who you really are.

FC: There's a part in the video where you're discussing how AI systems should be governed, and you point out that this should be done with input from a wide range of people and organizations. But you also make clear that you believe AI systems should be based on our Western values and consistent with our democratic principles. I wondered about that statement, because the governance problems we've been discussing here are not just Western problems but global ones.

ES: When I say Western values, I'm referring to individual values of freedom of thought, free speech, the ability to protest peacefully, those kinds of things. It's the stuff we all grew up with. I'm taking the position that democracies are fragile, and that it's relatively easy to destroy a democracy with a demagogue. There are lots of historical examples of this. And I'm further taking the position that these are tools which, in the hands of a demagogue, could be used to really harm democracies. That's my personal belief.

Autocracies will use this stuff [AI] very differently. They’ll use this to convince their citizens of the corporate view or the government view. That’s not how democracies work. And I don’t know how to influence the autocracy’s behavior. Although you can be sure that they’re all going to shut down a whole bunch of dissent with this stuff. 

FC: Who are the main audiences you’re trying to communicate to with this video?  

ES: The people we're talking to are not the general public, although we're happy to have them in the debate, but rather the elites that run these systems. It's government officials, universities, political leaders, business leaders, members of the press like yourself. We want this to affect your thinking.

And one of the core things we say is that you can't leave the future up to the tech people. And the reason is the tech people's incentives are not necessarily in alignment with what society wants. If you think about it, the best way to maximize revenue is to maximize engagement, and the best way to maximize engagement is outrage. It's clearly not good for society, but it's good for revenue.

Tech people as a broad group, and excuse the generalization, tend to be much more similar to each other than they are to the rest of the world. I think to build platforms that serve the whole world, you have to get everybody involved. If our video gets people to start asking questions like "How do we deal with this?" "How does the press cover it?" "What are the appropriate restrictions?" and "How do you build responsible AI?" then that's a win.



ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.

