AI poses 'extinction' risk comparable to nuclear war, CEOs of OpenAI, DeepMind, and Anthropic say

Sam Altman, CEO of OpenAI, at a conference in Idaho. Kevin Dietsch/Getty Images
  • The CEOs of three top AI companies have signed a warning about the risks of artificial intelligence.
  • The Center for AI Safety's statement compares the risks posed by AI with nuclear war and pandemics.

The CEOs of three leading AI companies have signed a statement issued by the Center for AI Safety (CAIS) warning of the "extinction" risk posed by artificial intelligence.

Per CAIS, OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei have all signed the public statement, which compared the risks posed by AI with nuclear war and pandemics.

The CAIS statement reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."


AI experts including Geoffrey Hinton and Yoshua Bengio are among the statement's signatories, along with executives at Microsoft and Google.

CAIS told Insider it had verified signatories and used additional measures for high-profile names to ensure they had actually signed.


Representatives for Altman and Amodei did not immediately respond to Insider's request for comment, made outside normal working hours.

DeepMind told Insider in a statement: "Artificial intelligence will have a transformative impact on society and it's essential that we recognize the potential opportunities, risks and downsides enabled by it."

It added that it was committed to developing AI responsibly, as guided by its AI Principles. It also highlighted its work with policymakers and others to harness the benefits of "technological breakthroughs" while mitigating risks.

Warnings about the potential risks of advanced AI have been coming thick and fast in recent months. Experts and industry leaders have been voicing dramatic concerns about the technology following the viral popularity of OpenAI's ChatGPT and the subsequent release of competing AI-powered products.

The general public is also wary of the tech world's new obsession. A recent poll of more than 4,000 US adults conducted by Reuters found that 61% of Americans believe AI could threaten civilization.


AI fears have been partly spurred by an arms race between Big Tech companies. Google, Meta, and Microsoft have been pouring investment into AI development and have rushed to launch new AI-powered products.

Governments around the world have taken note of the acceleration and are looking to regulate the technology. In Europe, lawmakers are pushing ahead with an "AI Act," which would ban uses of AI deemed to pose an unacceptable risk.
