'Godfather of AI' warns that 'bad actors like Putin or DeSantis' could use AI to win wars or manipulate voters

Geoffrey Hinton told MIT Tech Review about his worries over how AI tools, which he helped pioneer, will be used. Noah Berger/Associated Press
  • Geoffrey Hinton told MIT Tech Review he worries how AI tools he helped create will be used.
  • He said "bad actors like Putin or DeSantis" could use AI tools in wars and elections.

The "Godfather of AI" said he's worried about how people will use the AI tools he helped create, name-dropping Russian President Vladimir Putin and Florida Governor Ron DeSantis in an interview with the MIT Technology Review.

"Look, here's one way it could all go wrong," Geoffrey Hinton told the MIT Tech Review. "We know that a lot of the people who want to use these tools are bad actors like Putin or DeSantis. They want to use them for winning wars or manipulating electorates."

Hinton told the MIT Tech Review he thinks generative AI tools will soon be able to set their own "subgoals," meaning the machines could work out on their own the intermediate steps needed to achieve larger objectives — a capability that could be put to harmful use.


"Don't think for a moment that Putin wouldn't make hyper-intelligent robots with the goal of killing Ukrainians," Hinton said. "He wouldn't hesitate. And if you want them to be good at it, you don't want to micromanage them — you want them to figure out how to do it."

Hinton declined to comment further when contacted by Insider. Representatives for DeSantis and Putin did not immediately respond to Insider's request for comment ahead of publication.


Putin spoke about the importance of AI in a 2017 talk to students, saying that "whoever becomes the leader in this sphere will become the ruler of the world," Russian outlet RT reported.

After a decade at Google, Hinton announced he was leaving the company on Monday. He told The New York Times he has regrets about his role as a pioneer of AI technology. Hinton, along with two of his graduate students at the University of Toronto, laid the foundation for much of today's AI development.

"I console myself with the normal excuse: If I hadn't done it, somebody else would have," Hinton told the Times.

Hinton said on Twitter that he left Google not to criticize the company but so he "could talk about the dangers of AI without considering how this impacts Google," and noted that "Google has acted very responsibly."

In his interview with the Times, Hinton shared his worries about the dangers of AI.


"It is hard to see how you can prevent the bad actors from using it for bad things," Hinton said.

Other AI experts, including Elon Musk, who cofounded ChatGPT creator OpenAI in 2015 and is currently building his own rival AI project, signed an open letter in March calling for a six-month pause on training AI systems more powerful than OpenAI's latest release, GPT-4.

The letter notes that the authors are not calling for a pause "on AI development in general" but "merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities."

In a March interview, OpenAI CEO Sam Altman said he and the company are "a little bit scared" of what artificial intelligence might be able to do in the future.

"I think if I said I were not, you should either not trust me, or be very unhappy I'm in this job," he told ABC News.


Hinton, for his part, told the Times that he thinks it might be too late to stop the tech giants' AI race, and that careful collaboration and discussion around controlling the technology may be the only solution.

"I don't think they should scale this up more until they have understood whether they can control it," he said.
