Stephen Hawking says we're going to get left in the dust by AI

Oct 9, 2015, 23:37 IST

Image credit: AP Photo/Elizabeth Dalziel

Stephen Hawking has been vocal about the dangers of artificial intelligence (AI) and how it could pose a threat to humanity.


In his recent Reddit AMA, the famed physicist explained how that might happen.

When a user asked how AI could become smarter than its creator and pose a threat to the human race, Hawking sketched a terrifying vision of the future.

That vision relies on a concept called the intelligence explosion. It posits that once an AI with human-level intelligence is built, it can recursively improve itself until it far surpasses human intelligence, reaching what's known as superintelligence. The scenario is also described as the technological singularity.

According to Thomas Dietterich, an AI researcher at Oregon State University and president of the Association for the Advancement of Artificial Intelligence, this scenario was first described in 1965 by I.J. Good, a British mathematician and cryptologist, in an essay titled "Speculations Concerning the First Ultraintelligent Machine."


"An ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind," Good wrote. "Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."

It's hard to believe that humans would be able to control a machine whose intelligence far surpasses ours. But Dietterich has a few bones to pick with this idea, going so far as to call it a misconception. He told Tech Insider in an email that the intelligence explosion ignores realistic limits.

"I believe that there are informational and computational limits to how intelligent any system (human or robotic) can become," Ditterich wrote. "Computers could certainly become smarter than people - they already are, along many dimensions. But they will not become omniscient!"
