
One of the 'godfathers' of AI says that today's systems don't pose an existential risk, but warned that things could get 'catastrophic'

Grace Dean   

  • There's a chance that AI development could get "catastrophic," Yoshua Bengio told The New York Times.
  • "Today's systems are not anywhere close to posing an existential risk," but they could in the future, he said.

The future of artificial intelligence remains murky, but there's a chance things could get "catastrophic," an expert in the field told The New York Times.

"Today's systems are not anywhere close to posing an existential risk," Yoshua Bengio, a professor at the Université de Montréal, told the publication. The so-called AI "godfather" was part of the three-person team that won the Turing Award in 2018 for breakthroughs in machine learning.

"But in one, two, five years? There is too much uncertainty," Bengio continued. "That is the issue. We are not sure this won't pass some point where things get catastrophic."

Anthony Aguirre, a cosmologist at the University of California, Santa Cruz, told The Times that as AI became more autonomous it could "usurp decision making and thinking from current humans and human-run institutions."

"At some point, it would become clear that the big machine that is running society and the economy is not really under human control, nor can it be turned off, any more than the S&P 500 could be shut down," he continued.

Experts have warned that more attention needs to be paid to the ethics of AI and many have called for closer global regulation of the technology. The recent proliferation of AI has centered around generative forms like OpenAI's ChatGPT and Microsoft Bing.

Though there are many positive use cases — including helping to summarize research notes, write shopping lists, and draft job applications — there are concerns that the technology is developing too quickly. It's been known to develop bias, "hallucinate," and lead to the spread of deepfake images.

Aguirre was one of the founders of the Future of Life Institute, which in March put out an open letter calling for a pause on the development of advanced AI systems. Signatories included Bengio, Elon Musk, and Apple cofounder Steve Wozniak.

Bengio also signed a statement released by the Center for AI Safety last month which said: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Influential tech and business figures have weighed in on the debate. Former Google CEO Eric Schmidt told The Wall Street Journal's CEO Council that he had concerns that "reasonably soon" AI would be able to "find zero-day exploits in cyber issues or discover new kinds of biology."

But some have been considerably more positive about the technology's development, like Bill Gates, who noted that while there were "understandable" concerns about AI, it could have major positive effects on healthcare, education, and the fight against the climate crisis.

Venture capitalist Marc Andreessen went further, describing "AI risk doomers" as a "cult."

"AI doesn't want, it doesn't have goals, it doesn't want to kill you, because it's not alive," he wrote. "And AI is a machine — is not going to come alive any more than your toaster will."

"The development and proliferation of AI – far from a risk that we should fear – is a moral obligation that we have to ourselves, to our children, and to our future," Andreessen continued.


