Chess grandmaster Garry Kasparov on what happens when 'machines reach the level that is impossible for humans to compete'
- Chess grandmaster Garry Kasparov sat down with Business Insider for a long discussion about advances in artificial intelligence since he first lost a match to the IBM chess machine Deep Blue in 1997, 20 years ago.
- He told us how it felt to lose to Deep Blue, and why the human propensity for making mistakes will make it "impossible for humans to compete" against machines in the future.
- We also talked about whether machines could ever be programmed with "intent" or "desire," making them capable of acting independently, without human instructions.
- And we discussed his newest obsessions: privacy and security, and whether - in an era of data collection - Google is like the KGB.
LISBON - Garry Kasparov knew as early as 1997 - 20 years ago - that humans were doomed, he says. It was in May of that year, in New York, that he lost a six-game chess match against IBM's Deep Blue, the most powerful chess computer of its day.
Today, it seems obvious that Kasparov should have lost. A computer's ability to calculate moves in a game by "brute force" is vastly greater than a human's. But people forget that the Deep Blue challenge was a pair of matches, and Kasparov won the first one, in 1996, in Philadelphia. In between the two matches, IBM retooled its machine, and Kasparov accused IBM of cheating. (He later retracted some of his accusations.)
In fact, Kasparov could have won the second match had he not made a mistake in game 2, when he failed to see a move that could have forced a draw. Deep Blue also made a mistake, in game 1, which, at the time, Kasparov wrongly put down to Deep Blue's "superior intelligence" giving it the ability to make counterintuitive moves.
Nonetheless, in a conversation with Business Insider at Web Summit in Lisbon this year, Kasparov said that was the point at which he first realised that humans were "doomed" in the field of games.
"I could see the trend. I could see that it's, you know, a one-way street. That's why I was preaching for collaboration with the machines, recognising that in the games environment humans were doomed. So that's why I'm not surprised to see the success of AlphaGo, or Elon Musk's Dota player AI [an AI player for the video game Dota 2], because even with limited knowledge that these machines receive, they have the goal. It's about setting the rules. And setting the rules means that you have the perimeter. And as long as a machine can operate in the perimeter knowing what the final goal is, even if this is the only piece of information, that's enough for machines to reach the level that is impossible for humans to compete," he says.
Kasparov has written a book on AI, titled "Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins." He is also currently an ambassador for Avast, the digital security firm.
Our first question was about "brute force," and whether AI has moved beyond the problem of being reliant on vast databases to make choices instead of real "thinking" or "learning."