Eric Schmidt dismissed the AI fears raised by Stephen Hawking and Elon Musk


Former Google CEO Eric Schmidt. Rob Kim/Getty Images

Google executive chairman Eric Schmidt has questioned whether renowned scientist Stephen Hawking and SpaceX billionaire Elon Musk are in a position to accurately predict the future of artificial intelligence.


Hawking told the BBC in 2014 that AI could end mankind, while Musk tweeted that same year that AI could be more dangerous than nuclear weapons after reading a book called "Superintelligence."

Schmidt was asked at the Brilliant Minds conference in Stockholm on Thursday what he made of their predictions.


In response, he said: "In the case of Stephen Hawking, although a brilliant man, he's not a computer scientist. Elon [Musk] is also a brilliant man, though he too is a physicist, not a computer scientist."

Schmidt highlighted how Musk has invested in a $1 billion artificial intelligence research company called OpenAI, adding that it is "promoting precisely AI of the kind we are describing."


On the possibility of an artificial superintelligence trying to destroy mankind in the near future, Schmidt added:

The scenario you're just describing is the one where the computers get so smart that they want to destroy us at some point in their evolving intelligence due to some bug. My question to you is: don't you think the humans would notice this, and start turning off the computers? We'd have a race between humans turning off computers, and the AI relocating itself to other computers, in this mad race to the last computer, and we can't turn it off, and that's a movie. It's a movie. The state of the earth currently does not support any of these scenarios.

DeepMind, Google's AI lab in London, has developed an AI off-switch - described as a "big red button" in an academic paper - in a bid to ensure humans remain in control of machines.
