Why Elon Musk, Peter Thiel and Stephen Hawking Fear the Advent of Artificial Intelligence

Artificial intelligence should be painstakingly evaluated before we start deploying it for human convenience. Then again, in our eagerness to play god, the maker of the universe, we humans may end up destroying ourselves.

The fear of artificial intelligence is not going to fade any time soon. Hear it from the experts themselves:

Elon Musk

Elon Musk has established himself as a leading voice among top technologists raising concerns about how artificial intelligence could threaten humanity. Now, he and his peers are putting their money where their mouths are.
Musk has been floating some confronting, futurist tech ideas lately, for instance how we might form a government on Mars, why we may all be living in a simulation like The Matrix, and how he wants to launch SpaceX rockets at the remarkable rate of once every two weeks.

He is leading a group of tech elites pooling together $1 billion to back a new nonprofit research company, OpenAI, aimed at advancing AI technology in ways that benefit humanity. The company's lab will initially work out of space provided by Y Combinator in San Francisco, and its discoveries and developments will be made available on an open-source basis.


Stephen Hawking

In an interview with the BBC, Prof. Stephen Hawking said, "The development of full artificial intelligence could spell the end of the human race." He issued the warning when asked about the advancement of AI technology.

AI is not just about machines reaching human-level intelligence; it is about learning, and if the data processing is fast enough, there is no doubt it will surpass humans. Hawking agrees, saying, "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

Sam Altman

“Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity,” says Sam Altman in his blog post ‘Why Should You Fear Machine Intelligence’.
He further explained that evolution will continue forward, and if humans are no longer the most-fit species, we may go away. In some sense, this is the system working as designed. But as a human programmed to survive and reproduce, “I feel we should fight it.”

How can we survive the development of SMI? Altman offers one unsettling answer: “One of my top four favorite explanations for the Fermi paradox is that biological intelligence always eventually creates machine intelligence, which wipes out biological life and then for some reason decides to make itself undetectable.”

Peter Thiel

Peter Thiel is not as scared as his pal Elon Musk. He has a different perspective on the growth of machine intelligence. In an interview, when asked about AI and the future, he said: “I'm super pro-technology in all its forms. I do think that if AI happened, it would be a very strange thing. Generalized artificial intelligence. People always frame it as an economic question, it'll take people's jobs, it'll replace people's jobs, but I think it's much more of a political question. It would be like aliens landing on this planet, and the first question we ask wouldn't be what does this mean for the economy, it would be are they friendly, are they unfriendly? And so I do think the development of AI would be very strange. For a whole set of reasons, I think it's unlikely to happen any time soon, so I don't worry about it as much, but it's one of these tail risk things, and it's probably the one area of technology that I think would be worrisome, because I don't think we have a clue as to how to make it friendly or not.”