
OpenAI is so worried about AI causing human extinction, it's putting together a team to control 'superintelligence'

Jul 7, 2023, 13:04 IST
Business Insider
OpenAI is so worried about AI causing human extinction, it's putting together a team to control 'superintelligence.'Beata Zawrzel/NurPhoto via Getty Images
  • OpenAI fears that superintelligent AI could lead to human extinction.
  • It is putting together a team to ensure that superintelligent AI aligns with human interests.

ChatGPT creator OpenAI is so worried about the potential dangers of smarter-than-human AI that it's putting together a new team to ensure these advanced systems work for the good of people, not against them.

In a July 5 blog post, OpenAI said that though "superintelligent" artificial intelligence seemed far off, it could arrive within the decade.

"Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue," OpenAI co-founder Ilya Sutskever and Jan Leike, who will co-lead the new team, wrote in the blog post.


And though this technology could help solve many of the world's most pressing problems, superintelligent AI "could lead to the disempowerment of humanity or even human extinction," the authors wrote.

The new team — called Superalignment — plans to develop AI with human-level intelligence that can supervise superintelligent AI within the next four years.


OpenAI is currently hiring for the team and plans to dedicate 20% of its computing power to the research, per the blog post.

OpenAI CEO Sam Altman has long been calling for regulators to address AI risk as a global priority.

In May, Altman joined hundreds of other key figures in tech in signing an open letter containing one sentence: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Other key figures in tech, including Elon Musk, have also sounded the alarm over AI regulation and called for a six-month pause on AI development, though some saw the pause as a ploy by Musk to play catch-up.

To be sure, not everyone shares OpenAI's concerns about future problems posed by superintelligent AI.


In a letter published by the Distributed AI Research Institute on March 31, prominent AI ethicists called attention to concrete, present-day harms that AI companies are already exacerbating.
