Google Has An Internal Committee To Discuss Its Fears About The Power Of Artificial Intelligence


Demis Hassabis. (Image: MIT)

Google has assembled a team of experts in London who are working to "solve intelligence." They make up Google DeepMind, the US tech giant's artificial intelligence (AI) company, which it acquired in 2014.


In an interview with MIT Technology Review, published yesterday, Demis Hassabis, the man in charge of DeepMind, spoke out about some of the company's biggest fears about the future of AI.

Hassabis and his team are creating opportunities to apply AI to Google services. The field is about teaching computers to think like humans, and improved AI could help forge breakthroughs across many of Google's services. It could enhance YouTube recommendations for users, for example, or make the company's mobile voice search better.


But it's not just Google product updates that DeepMind's cofounders are thinking about. Worryingly, cofounder Shane Legg thinks the team's advances could be what finishes off the human race. He told the LessWrong blog in an interview: "Eventually, I think human extinction will probably occur, and technology will likely play a part in this." He adds that he thinks AI is the "No. 1 risk for this century". It's ominous stuff. (Read about Elon Musk discussing his concerns about AI here.)

People like Stephen Hawking and Elon Musk are worried about what might happen as a result of advancements in AI. They're concerned that robots could grow so intelligent that they could independently decide to exterminate humans. And if Hawking and Musk are fearful, you probably should be too.


Hassabis showcased some DeepMind software in a video back in April. In it, a computer learns how to beat Atari video games - it wasn't programmed with any information about how to play, just given the controls and an instinct to win. AI specialist Stuart Russell of the University of California says people were "shocked".
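To give a rough sense of how that kind of learning works, here is a minimal, self-contained Python sketch of tabular Q-learning, a basic form of reinforcement learning. It is illustrative only: DeepMind's actual Atari system used a deep neural network fed raw screen pixels and the game score, and the toy one-dimensional "game", the parameters, and the function names below are invented for this example.

```python
# Minimal tabular Q-learning sketch (illustrative only: DeepMind's Atari
# system replaced the table below with a deep neural network reading raw
# screen pixels, but the principle is the same - the agent is told nothing
# about the game except which actions exist and what the score is).
import random
from collections import defaultdict

# Toy "game": a track of 10 cells. The agent starts at cell 0 and scores
# a point only when it reaches cell 9. Actions: 0 = left, 1 = right.
N_CELLS = 10
ACTIONS = [0, 1]

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = max(0, min(N_CELLS - 1, state + (1 if action == 1 else -1)))
    done = (nxt == N_CELLS - 1)
    return nxt, (1.0 if done else 0.0), done

# Q-table: estimated future score for each (state, action) pair.
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount, exploration

def greedy(state):
    """Best-known action for a state, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for episode in range(500):
    state, done = 0, False
    while not done:
        # Mostly exploit what has been learned, occasionally explore.
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward the reward plus the
        # discounted value of the best action available in the next state.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# After training, the learned policy heads straight for the goal (all 1s).
print([greedy(s) for s in range(N_CELLS - 1)])
```

The key ingredient is the same one that impressed researchers in the Atari demo: the agent is never told how to play, only which actions it can take and how the score responds.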


Google is also concerned about the "other side" of developing computers in this way. That's why it set up an "ethics board", tasked with making sure AI technology isn't abused. As Hassabis explains: "It's [AI] something that we or other people at Google need to be cognizant of." Hassabis does concede that "we're still playing Atari games currently" - but as AI moves forward, so does the fear.

The main point of Google DeepMind's AI, says Hassabis, is to create computers that can "solve any problem". "AI has huge potential to be amazing for humanity," he says in the Technology Review interview. Accelerating the way we combat disease is one idea. But it's exactly this kind of technology, capable of such brilliance, that makes people so afraid.