Stephen Hawking says we're going to get left in the dust by AI
In his recent Reddit AMA, the famed physicist explains how that might happen.
When asked by a user how AI could become smarter than its creator and pose a threat to the human race, Hawking wrote:
It's clearly possible for something to acquire higher intelligence than its ancestors: we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents. The line you ask about is where an AI becomes better than humans at AI design, so that it can recursively improve itself without human help.
If this happens, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails.
This terrifying vision of the future relies on a concept called the intelligence explosion. It posits that once an AI with human-level intelligence is built, it can recursively improve itself until it surpasses human intelligence, reaching what's called superintelligence. The scenario is also described as the technological singularity.
According to Thomas Dietterich, an AI researcher at Oregon State University and president of the Association for the Advancement of Artificial Intelligence, this scenario was first described in 1965 by I.J. Good, a British mathematician and cryptologist, in an essay titled "Speculations Concerning the First Ultraintelligent Machine."
"An ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind," Good wrote. "Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."
It's hard to believe that humans would be able to control a machine whose intelligence far surpasses ours. But Dietterich has a few bones to pick with this idea, even going so far as to call it a misconception. He told Tech Insider in an email that the intelligence explosion ignores realistic limits.
"I believe that there are informational and computational limits to how intelligent any system (human or robotic) can become," Dietterich wrote. "Computers could certainly become smarter than people - they already are, along many dimensions. But they will not become omniscient!"