At Austin’s SXSW festival, Garry Kasparov, who many consider the greatest chess player of all time, made an appearance to speak on ethics in AI and warn of security threats.
Kasparov reigned as world chess champion from 1985 through 2000, and in 1997 he played a six-game match against IBM’s Deep Blue computer. When he lost, he became a symbol of the rise of smart machines, which has lately led some people to question whether the advance of AI will ultimately mean game over for humanity.
Kasparov has spent the last two decades advocating for better human-computer interactions. Writing prolifically and speaking around the world, he has emerged as a freedom fighter for democracy. A staunch critic of the Kremlin, in 2008 he attempted to run as an opposition candidate against Vladimir Putin. With tensions rising in 2012, he was arrested and beaten while trying to attend the sentencing of the outspoken rock band Pussy Riot. Soon after, he left Russia and became a Croatian citizen.
He currently lives in New York City, is active on Twitter, and chairs the Human Rights Foundation, which fights for the rights of political prisoners. He is also a spokesperson for Avast, the security software unicorn which last year had the largest IPO in the history of the London Stock Exchange.
I spoke with Kasparov about data privacy, trust, and accountability. An edited transcript of our conversation follows.
“At first, we think, ‘No machine can ever do that’”
Fast Company: It’s been more than twenty years since you were defeated by IBM’s Deep Blue computer. Today, consumer AI is everywhere. Elon Musk says AI is the greatest known threat to our species. What are your thoughts on the coming AI apocalypse? Are we doomed or is AI going to save us all?
Garry Kasparov: We make the mistake of misunderstanding the nature of computers by ascribing human qualities to them. Machines know the odds but always work within the parameters that have been originally installed. There’s a fine line where machine intelligence begins and human creativity ends.
The [Deep Blue] match taught me that human-machine relations always go through the same stages. At first we think, “No machine can ever do that. It’s impossible.” Then we believe it can do it, but it’s weak. There’s a short window where we can compete, and then machines are much better.
I realized in 1997 that it was a matter of time before machines could actually conquer the game of chess and not because they can solve for it mathematically. It’s not about being perfect, it’s about being better. As long as you operate within closed systems, machines will always prevail because they have a steady hand. Machines make fewer mistakes, with no psychological pressure. For instance, when we talk about self-driving cars, people say, “Oh but this self-driving car had an accident.” Yes, but something like 40,000 people are killed in this country each year in car accidents. It’s not about machines being perfect, because there’s no perfection in this universe. Machines perform better, that’s it.
Since 1998, I’ve been talking about humans working with machines, how we can get the best out of them. How can we merge human intuition and human creativity with machines’ brute force and memory? How can we make the most effective cooperation between humans and machines? What is the human’s role in the future?
We have to start realizing that we should not have such high expectations for computers to solve all the problems. We need to add elements that can compensate for machine deficiencies, because every computer can be put to a specific task, but it will require human intervention to be most effective.
Transferring knowledge from a closed system to an open-ended system will require human assistance. Machines will never recognize the moment when they enter diminishing returns. Machines can ask questions, but they don’t know which questions are relevant. So there is still plenty of room for humans to strategize and guide these machines. Machines are covering bigger and bigger territory, but it will never be a hundred percent. There will always be a little room, in the last few decimal points, for human intervention.
Elon Musk, Stephen Hawking, and other doomsayers have talked about the end of the world, yet I see not a shred of evidence that we are even close to that. It’s absolute nonsense to suggest that we should worry today about killer robots. The reality is far less ominous.
“Existing political systems are under huge pressure”
FC: What about the ascent of DAOs—decentralized autonomous organizations that are transacting among themselves with no human interaction?
GK: We’ve moved so much of our lives into these digital realities. It’s like ethics as electricity, like DNA being transferred between parents and their children. You can’t expect the creation to be more ethical than its creators. We are reaching a point where there is a responsibility on our side to make sure these creations will not be worse than our expectations.
Humans still have the monopoly on evil. The problem today that we’re facing is not the Terminator. The problems today are bad humans, the evil that exists in this imperfect world, using technology invented in the free world to undermine the very foundation of the free world. That’s a real problem.
FC: As a human rights activist, do you see the emergence of a global, decentralized cryptocurrency that is borderless and requires no human governance, like Bitcoin, as a means to escape rogue regimes? Will blockchain technology usher in the end of the era of nations?
GK: Every new technology that has been brought into our civilization has created this kind of political disruption. Existing political systems are under huge pressure because they were not built to operate in this environment. If you look at the political structure of the world today, its foundation goes back to 1945. The creation of many nations was a result of World War II. The global stage is now populated by so many players that simply don’t fit the old framework. I think some things are inevitable, because we are now facing a historical moment of shifting from that world order to something new. If you go back in history to the defeat of Napoleon, the framework couldn’t fit the expectations and political movements that were created in the following years. Empowering people with this technology is very important.
“The Facebooks and Googles … have so much power”
FC: What keeps you up at night?
GK: Enormous power is being allocated to big corporations that now collect data. That is one of my greatest concerns. While the big corporations, Google, Amazon, Apple, and Microsoft, follow rigid regulations like GDPR in Europe and its counterparts in America, they are far less scrupulous and respectful of privacy when it comes to China or Russia. As a human rights activist and someone who grew up in the Soviet Union, I feel really bad when Google, for instance, rejects cooperation with the Pentagon on moral grounds over Project Maven but keeps working with the Chinese government on creating Dragonfly, the system that connects your social account with your mobile phone, which could be really damaging for millions upon millions of people in China. There’s not much difference between Google data collection and KGB data collection. There are consequences for people in China, Russia, Iran, Turkey, and other undemocratic countries. We need to hold the corporations responsible for how they deal with this data. The Facebooks and Googles of the world have so much power now, we should look at the ways they are dealing with this information.
I hope that people will be empowered to control their data and use it for their benefit, which is why the concept of blockchain will eventually conquer the world in whatever form, and however long, it takes. I can see how we’re at the end of one chapter and the beginning of a new one.
FC: What is your role at Avast?
GK: My role is to inform the public about the threats and what needs to be done. I have a reputation as a human rights activist, and I am also defending the individual consumer.
I’m terrified by the fact that the general public is so slow in recognizing this. What can be done on the most primitive level is digital hygiene. We wash our hands, we brush our teeth. We should have the same good hygiene on our devices.
You can’t protect yourself against all digital viruses, any more than against all real-life viruses, but at least 90% of the threats can be eliminated if you follow elementary procedures, like putting antivirus software on your network.
Martine Paris is a San Francisco-based tech reporter who covers AI, consumer tech, gaming, crypto and blockchain for The FinTech Times, Modern Consensus, Pocket Gamer’s Blockchain Gamer, and Hacker Noon. Follow her on Twitter: @contentnow.