ChatGPT's makers say AI could surpass humanity within the next 10 years as 'superintelligence' starts to exceed other powerful technologies

Sam Altman and his fellow OpenAI cofounders discussed the "existential risk" of advanced AI in a blog post. Elizabeth Frantz/Reuters
  • The developers of ChatGPT have a stark warning: AI could surpass humans within the next 10 years.
  • Leaders at ChatGPT developer OpenAI said AI with "superintelligence" needed to be managed.

The creators of ChatGPT say AI could surpass humanity in most domains within the next 10 years as "superintelligence" becomes more powerful than any technology the world has seen.

Cofounders of ChatGPT developer OpenAI, including CEO Sam Altman, said in a blog post on Monday that it was conceivable AI could exceed the "expert skill level" of humans in most areas and "carry out as much productive activity as one of today's largest corporations."

"Superintelligence will be more powerful than other technologies humanity has had to contend with in the past," the OpenAI executives said. "We can have a dramatically more prosperous future; but we have to manage risk to get there."


Since the release of ChatGPT, industry leaders have issued increasingly serious warnings about the potential for powerful AI to disrupt society by displacing jobs and helping fuel a wave of misinformation and criminal activity.

In particular, concerns have grown as the release of generative AI tools like ChatGPT has fueled an AI arms race, putting companies such as Microsoft and Google in direct competition.


These concerns have prompted calls for AI to be regulated. OpenAI's leaders said in the blog post that "given the possibility of existential risk," a proactive approach to managing the technology's potential harms was needed.

"Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example," they said. "We must mitigate the risks of today's AI technology too, but superintelligence will require special treatment and coordination."

Last week, Altman made his first appearance before Congress to address concerns from lawmakers about the lack of rules in place to govern the development of AI.

In the post, Altman and his colleagues suggested that an organization akin to the International Atomic Energy Agency would eventually be needed to oversee the development of AI "above a certain capability," through measures such as audits and safety-compliance tests.

OpenAI did not immediately respond to Insider's request for comment made outside of normal working hours.