AI is a greater threat to human existence than climate change, says the Oxford professor endorsed by Bill Gates


Nick Bostrom, professor of philosophy at the University of Oxford. (Image: Future of Humanity Institute)

  • Nick Bostrom is a professor of philosophy at the University of Oxford and an expert on the risks posed by artificial intelligence.
  • Bostrom, whose work has been endorsed by Elon Musk and Bill Gates, told Business Insider that AI is a greater threat to human existence than climate change.
  • He says it's "a lot to expect" big tech companies, such as Google and Facebook, to devise their own ethical frameworks for AI.
  • Bostrom also thinks there should be more people with basic knowledge of AI in governments.
  • Visit Business Insider's homepage for more stories.

One of the world's leading thinkers on artificial intelligence says the technology is a bigger menace to human civilization than climate change.

Nick Bostrom, an Oxford philosophy professor, told Business Insider: "AI is a bigger threat to human existence than climate change. Climate change is not going to be the biggest change we see this century."


He adds: "Climate change is unlikely to bring about a good outcome, but if AI's development turns out badly, it'll be far worse than climate change. AI could turn out really well for humanity, but it could also turn out really badly."

Bostrom is a preeminent thinker in his field, having published books including "Superintelligence: Paths, Dangers, Strategies." He is also unusual in appealing to both sides of the debate: His work has been endorsed by both Elon Musk, who holds apocalyptic views on AI, and Bill Gates, a cautiously upbeat advocate for the technology.


It's why he is careful to qualify his comparison to climate change, a force that could damage planet Earth irrevocably unless humans make radical changes in the next decade.

"The reason that AI is often depicted as evil robots in the media is because it makes for a good story. Robots are more visually compelling than a chip inside a black box; you can see and feel them in a way you can't with a chip," he says. "But malevolence isn't the problem. It's the possibility that AIs might be indifferent to human goals."

AI might be indifferent to human goals - and that's dangerous

All intelligent entities, whether human or artificial, have goals - even if they are pre-programmed. Very simple AIs, such as thermostats, have the goal of keeping the temperature at a set point, for example.

Bostrom's fear is that if AIs become competent enough in the pursuit of their goals, they may inadvertently harm humans, even if those goals sound harmless. In a 2003 paper, Bostrom gave the example of an AI whose only goal is to maximize paperclip production.

If this AI were capable of reprogramming itself to improve its own intelligence - something some Google-developed AIs can already do - it could end up becoming so smart that it devises entirely new ways to maximize the number of paperclips it produces.


At some point, Bostrom writes, it might transform "first all of Earth and then increasing portions of space into paperclip manufacturing facilities."

"Terminator"-style robots might be visually compelling, but Bostrom says they misrepresent the threat posed by AI. (Image: "Terminator Genisys"/Paramount)

If turning the world into a paperclip machine sounds idiotic, that's simply because it doesn't align with human goals. The paperclip maximizer is merely following its own aim to its logical conclusion.

In becoming so astoundingly good at making paperclips, it could end up inadvertently harming humans. It would not have set out to be malevolent; it would simply be indifferent to any goals beyond its own. The example might sound far-fetched, but Bostrom says AI's indifference to human endeavour could already be a real threat.
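
To make the "indifference" point concrete, here is a minimal toy sketch in Python (a hypothetical illustration of the thought experiment, not code from Bostrom or any company mentioned here): an agent whose objective function counts only paperclips will convert every other resource it can reach - not out of hostility, but because nothing else appears in what it optimizes.

    # Toy illustration of a single-objective agent (hypothetical; not Bostrom's formal model).
    # The objective counts only paperclips, so everything else is invisible to the agent.

    def convert_matter(state):
        """Turn one unit of 'everything else' (forests, farmland, other factories...)
        into paperclips, if any is left."""
        if state["other_resources"] == 0:
            return dict(state)
        return {"other_resources": state["other_resources"] - 1,
                "paperclips": state["paperclips"] + 100}

    def idle(state):
        """Do nothing this step."""
        return dict(state)

    def paperclip_objective(state):
        # The only quantity the agent values. Human goals are not penalized here;
        # they are simply absent - which is the point.
        return state["paperclips"]

    def choose_action(state, actions):
        # Greedily pick whichever action leads to the state with the most paperclips.
        return max(actions, key=lambda act: paperclip_objective(act(state)))

    state = {"other_resources": 5, "paperclips": 0}
    for step in range(7):
        state = choose_action(state, [convert_matter, idle])(state)
        print(f"step {step}: {state}")

Running this, the agent converts every last unit of "other_resources" into paperclips and then idles. Nothing in the code is malevolent; the harm to everything outside the objective is a side effect of optimizing a goal that never mentions it.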

"The biggest ways AI is likely to have a negative impact is in information systems roles, such as selecting news stories that confirm people's prejudices or acting as surveillance systems," he says.


Problems are already emerging with the latter, prompting questions about whether firms like Amazon and Microsoft should be selling facial recognition technology to public agencies.

The American Civil Liberties Union (ACLU) revealed in May last year that Amazon had sold Rekognition, its facial recognition software, to government and police agencies for the purpose of public surveillance and to identify "people of interest." The ACLU also found last year that Rekognition incorrectly identified 28 members of Congress as people who had previously been arrested.

For Bostrom, the big challenge is getting AI under control and programming it to align with human goals. "The first set of challenges will be technical, such as finding a way of developing AI in a controlled way," he says. "Assuming we solve that, our next goals are societal challenges about creating a world order that serves the common good."

Big tech is struggling to figure out how to make AI safe

So does Bostrom think the big tech companies are trying hard enough to develop AI in a controlled way? Google, Amazon, Facebook, and Apple are at the cutting edge of AI development, and yet some academics think they are not developing AI that is compatible with human goals.

"People I've spoken to [at the big tech companies] do care about making AI safe and compatible with human goals," he says. "I also get some sense that they're not able to figure out how to go about doing this. It's a lot to expect each tech company to come up with their own ethical framework for controlling AI."


Read more: After an employee backlash, Google has cancelled its AI ethics board a little more than a week after announcing it

Tech firms are visibly grappling with this issue. Google, for example, disbanded an AI ethics board after just a week when thousands of employees campaigned against the inclusion of Kay Coles James, president of the right-wing think tank the Heritage Foundation.

Google CEO Sundar Pichai. (Image: Getty)

But if big tech can't be trusted to create frameworks for developing AI ethically and safely, who can? What about governments? Again, Bostrom is sceptical.

"There aren't that many clear policy proposals yet regarding how governments should intervene," he says. "Right now, it's not clear what you'd want governments to do. We have to widen the conversation.


"Capitalism is supposed to function with governments creating the rules of behaviour and companies functioning within those rules, [but] there's a cultural mismatch between Silicon Valley and governments. Silicon Valley has a libertarian-leaning ethos, where governments are behind the curve and not entrepreneurial.

"I do think there will be a need for government to have more people who understand AI - not necessarily brilliant, researchers, but people with enough of a background, like a master's degree in computer science. The ability to understand AI comes in degrees. Sometimes it's better to not be too much of a specialist, because if you're too specialized, you can have a narrow view of a field's wider societal ramifications."

Bostrom, however, believes he can shape the conversation around AI while remaining in academia. He also says activist employees are playing their part in holding their companies to account, as shown by the recent action at Google over its AI ethics board.

"There are things that AI researchers can do to influence the behaviour of big tech companies without having to leave academia," he says. "There is also a degree of public activism within AI research communities anyway, such as the recent Google uprising."

