The 'Godfather of AI' quit Google — and says he now regrets his role creating technology that poses a threat to humanity
- Geoffrey Hinton quit his job at Google and told The New York Times he regrets his role in pioneering AI.
- Hinton said he's worried the technology will disseminate false information and eliminate jobs.
After recently leaving behind his decade-long career at Google, Geoffrey Hinton, nicknamed "the Godfather of AI," told The New York Times he has regrets around the foundational role he played in developing the technology.
"I console myself with the normal excuse: If I hadn't done it, somebody else would have. It is hard to see how you can prevent the bad actors from using it for bad things," Hinton, who worked at Google for more than a decade, told the Times.
Hinton's departure from the company arrives at a time when the race to develop generative AI-powered products, like Google's Bard chatbot and OpenAI's ChatGPT, is heating up. Hinton, whose developments in the AI field decades ago helped pave the way for the creation of these chatbots, told the Times he's now concerned the tech could harm humanity.
He also voiced concern about the AI race currently underway among tech giants and questioned whether it had already progressed too far to slow down.
On Monday, following the publication of his interview with The New York Times, Hinton tweeted that he left Google so he could "talk about the dangers of AI without considering how this impacts Google," adding that "Google has acted very responsibly."
"Geoff has made foundational breakthroughs in AI, and we appreciate his decade of contributions at Google," Jeff Dean, chief scientist at Google, said in a statement to Insider.
Hinton did not immediately respond to Insider's request for additional comment ahead of publication.
Some Googlers are reportedly worried about the company's AI chatbot, Bard
While Hinton did not appear to single out Google in his overall critique of the AI landscape, other employees from Google have reportedly expressed concern about the company's AI chatbot.
After Google employees were tasked with testing the Bard chatbot, some said they thought the technology could be dangerous, as reported by Bloomberg. Employees who spoke with Bloomberg said they believed Google wasn't prioritizing AI ethics and was instead rushing to develop the tech to catch up to OpenAI's ChatGPT. Two employees tried to stop the company from releasing Bard, per previous reporting from the Times.
"We remain committed to a responsible approach to AI. We're continually learning to understand emerging risks while also innovating boldly," Dean said in a statement, referring Insider to the company's AI Principles and two blog posts from the company detailing how it is developing AI.
In his interview with the Times, Hinton said he's worried generative AI products will lead to the dissemination of fake information, photos, and videos across the internet — and the public will not be able to identify what is true or false.
Hinton also spoke about how AI technologies could eventually eliminate human labor, including jobs for paralegals, translators, and assistants. This is a concern that OpenAI CEO Sam Altman, among other critics of the technology, has echoed.
In March, Goldman Sachs released a report that estimated 300 million full-time jobs could be "impacted" by AI systems like ChatGPT, namely legal and administrative workers, although the level of that impact could vary. Concern is also growing among software engineers who worry their jobs will be replaced by AI.