Here's why we should be building killer robots

On July 27, more than a thousand artificial intelligence researchers co-signed an open letter urging the United Nations (UN) to ban the development and use of autonomous weapons.

The letter, presented at the 2015 International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, was signed by prominent artificial intelligence (AI) researchers, including Google's director of research Peter Norvig, alongside Tesla and SpaceX CEO Elon Musk and physicist Stephen Hawking. Since Monday, more than 16,000 additional people have signed it, according to the Guardian.

The letter states that the development of autonomous weapons, or weapons that can target and fire without a human at the controls, could bring about a "third revolution in warfare," much like gunpowder and nuclear arms before it.

While killer robots sound terrifying, there are some real reasons that weapons powered by sophisticated AI might even be preferable to human soldiers.

Autonomous weapons would take human soldiers out of the line of fire and could reduce the number of casualties in war. Killer robots would also be better soldiers all around: faster, more accurate, more powerful, and able to withstand more physical damage than humans.

Stuart Russell, an AI researcher and the co-author of "Artificial Intelligence: A Modern Approach," is a vocal advocate of a ban on autonomous weapons. Though he fears such weapons could fall into the wrong hands, he admits there are some valid arguments in their favor.

"I've spent quite a long time thinking about what position I should take," Russell told Tech Insider. "They can be incredibly effective, they can have much faster reactions than humans, they can be much more accurate. They don't have bodies so they don't need life support ... I think those are the primary reasons various militaries, not just the UK but the US, are doing this."

Autonomous weapons wouldn't become afraid, freeze up, or lose their tempers; they could do their jobs without emotions coloring their actions. IEEE Spectrum's Evan Ackerman wrote that autonomous weapons could also be programmed to follow the rules of engagement and the other laws that govern war.

"If a hostile target has a weapon and that weapon is pointed at you, you can engage before the weapon is fired rather than after in the interests of self-protection," he wrote. "Robots could be even more cautious than this, you could program them to not engage a hostile target with deadly force unless they confirm with whatever level of certainty that you want that the target is actively engaging them already."

Robot ethicist Sean Welsh echoes this idea in The Conversation, where he writes that killer robots would be "completely focused on the strictures of International Humanitarian Law and thus, in theory, preferable even to human war fighters who may panic, seek revenge, or just plain [mess] stuff up."

Ackerman suggests doing away with the misconception that technology is either "inherently good or bad" and focusing instead on how it's used. He proposes finding a way to make "autonomous armed robots ethical."

"Any technology can be used for evil," he wrote. "Banning the technology is not going to solve the problem if the problem is the willingness of humans to use technology for evil: we'd need a much bigger petition for that."

Heather Roff, a contributor to the open letter and a professor at the University of Denver's Josef Korbel School of International Studies, wouldn't disagree with him.

"The United States military doesn't want to give up smart bombs," Roff told Tech Insider. "I, frankly, probably wouldn't want them to give that up. Those are very discriminate weapons. However, we do want to limit different weapons going forward [that] have no meaningful human control."

And this recent public outcry may not be enough to stop an international war machine that is already building semi-autonomous weapons that can identify and aim at targets by themselves. Many of these, like the Australian Navy's anti-missile and close-in weapons systems, attract no scrutiny or objections.

"Why? Because they're employed far out at sea and only in cases where an object is approaching in a hostile fashion," defense researcher Jai Gaillot wrote for The Conversation. "That is, they're employed only in environments and contexts whereby the risk of killing an innocent civilian is virtually nil, much less than in regular combat."

And, as Ackerman writes, it might be impossible to stop the tank now that it's rolling: the "barriers keeping people from developing this kind of system are just too low."
