ChatGPT's makers say AI could surpass humanity within the next 10 years, with 'superintelligence' eclipsing other powerful technologies
- The developers of ChatGPT have a stark warning: AI could surpass humans within the next 10 years.
- Leaders at ChatGPT developer OpenAI said AI with "superintelligence" needed to be managed.
The creators of ChatGPT say AI could surpass humanity in most domains within the next 10 years as "superintelligence" becomes more powerful than any technology the world has seen.
Cofounders of ChatGPT developer OpenAI, including CEO Sam Altman, said in a blog post on Monday that it was conceivable AI could exceed the "expert skill level" of humans in most areas and "carry out as much productive activity as one of today's largest corporations."
"Superintelligence will be more powerful than other technologies humanity has had to contend with in the past," the OpenAI executives said. "We can have a dramatically more prosperous future; but we have to manage risk to get there."
Since the release of ChatGPT, industry leaders have issued increasingly serious warnings about the potential for powerful AI to disrupt society by displacing jobs and fueling a wave of misinformation and criminal activity.
In particular, concerns have grown as the release of generative AI tools like ChatGPT has fueled an AI arms race that has put companies such as Microsoft and Google in direct competition with each other.
The concerns have prompted calls for AI to be regulated. OpenAI's leaders said in the blog post that "given the possibility of existential risk," there needed to be a proactive approach to managing the technology's potential harms.
"Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example," they said. "We must mitigate the risks of today's AI technology too, but superintelligence will require special treatment and coordination."
Last week, Altman made his first appearance before Congress to address concerns from lawmakers about the lack of rules in place to govern the development of AI.
In the post, Altman and his colleagues suggested that there would eventually need to be an organization like the International Atomic Energy Agency to oversee the advancement of AI "above a certain capability," through measures such as audits and safety compliance tests.
OpenAI did not immediately respond to Insider's request for comment made outside of normal working hours.