DeepMind has hired AI safety experts to protect us from dangerous machines
The London-based AI lab, which was acquired by Google in 2014 for £400 million, is building computer systems that can learn and think for themselves.
So far the company's algorithms have defeated humans at complex board games like Go and helped Google cut its huge electricity bill. But DeepMind doesn't plan to stop there; ultimately it wants to "solve intelligence" and use it to "make the world a better place."

In a bid to reduce the chance of creating dangerous artificial intelligence, DeepMind has hired Viktoriya Krakovna, Jan Leike, and Pedro Ortega into its AI safety group. It's currently unclear when the group was formed.
Some of the world's smartest minds, including physicist Stephen Hawking and Tesla founder Elon Musk, have warned that "superintelligent" machines, a concept explored by Oxford University philosopher Nick Bostrom in his book "Superintelligence," could end up being one of the greatest threats to humanity. They're concerned that such machines could outsmart humans within a matter of decades and decide that we're no longer necessary.
Krakovna, who joins DeepMind as a research scientist, holds a PhD in statistics from Harvard University and cofounded the Boston-based Future of Life Institute with MIT cosmologist Max Tegmark and Skype cofounder Jaan Tallinn.
The institute, which counts Hawking and Musk as board advisors, was created to mitigate existential risks facing humanity, particularly existential risk from advanced artificial intelligence (AI).
While at DeepMind, the former Google engineer will carry out technical research on AI safety, according to her LinkedIn profile.
In addition to his role at DeepMind, Leike is also a research associate at Oxford University's Future of Humanity Institute, which is led by Bostrom.
On his website, Leike writes: "My research aims at making machine learning robust and beneficial. I work on problems in reinforcement learning orthogonal to capability: How do we design or learn a good objective function? How can we design agents such that they are incentivised to act in our best interests? How can we avoid degenerate solutions to the objective function?"
Ortega, also a research scientist at DeepMind, holds a PhD in machine learning from Cambridge University. A short bio on his personal website says his "work includes the application of information-theoretic and statistical mechanical ideas to sequential decision-making."
DeepMind did not immediately respond to Business Insider's request for comment.