Sam Altman says he worries making ChatGPT was 'something really bad' given potential AI risks

OpenAI CEO Sam Altman. OpenAI developed ChatGPT and its more refined successor, GPT-4. Jason Redmond / AFP via Getty Images
  • Sam Altman said he worried creating ChatGPT was "something really bad" given the risks AI posed.
  • The OpenAI CEO was speaking to Satyan Gajwani, the vice chairman of Times Internet, in New Delhi.

OpenAI CEO Sam Altman says he loses sleep over the dangers of ChatGPT.

In a conversation during a recent trip to India, Altman said he was worried that he did "something really bad" by creating ChatGPT, which was released in November and sparked a surge of interest in artificial intelligence.

"What I lose the most sleep over is the hypothetical idea that we already have done something really bad by launching ChatGPT," Altman told Satyan Gajwani, the vice chairman of Times Internet, at an event on Wednesday organized by the Economic Times.


Altman said he was worried that "maybe there was something hard and complicated" that his team had missed when working on the chatbot.

Asked whether AI should be regulated similarly to atomic energy, Altman said there had to be a better system to audit the process.


"Let's have a system in place so that we can audit people who are doing it, license it, have safety tests before deployment," he said.

The risks are high

Numerous tech leaders and government officials have raised concerns about the pace of the development of AI platforms.

In March, a group of tech leaders, including Elon Musk and the Apple cofounder Steve Wozniak, signed an open letter from the Future of Life Institute warning that powerful AI systems should be developed only once there was confidence that their effects would be positive and their risks manageable.

The letter called for a six-month pause in training AI systems more powerful than GPT-4.

Altman said the letter "lacked technical nuance about where we need the pause."


Earlier this month, Altman was among a group of more than 350 scientists and tech leaders who signed a statement expressing deep concern about the risks of AI.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement read.
