The Moral Implications Of Robots That Kill


ATLAS, a robot by Boston Dynamics, competing in the Robotics Challenge Trials. Andrew Innerarity / Reuters

Lethal autonomous weapons, robots that can kill people without human intervention, aren't yet on our battlefields, but the technology to build them already exists.


As you can imagine, the prospect of killer robots raises concerns about wartime strategy, morality, and philosophy. The debate is probably best summarized by these questions from The Washington Post: "Who is responsible when a fully autonomous robot kills an innocent? How can we allow a world where decisions over life and death are entirely mechanized?"

These are questions the United Nations is taking quite seriously; it discussed them in depth at a meeting last month. Nobel Peace Prize laureates Jody Williams, Archbishop Desmond Tutu, and former South African President F.W. de Klerk are among a group calling for an outright ban on such technology, but others are skeptical that a ban would work, pointing to historical precedent that weapons bans are difficult to enforce:


While some experts want an outright ban, Ronald Arkin of the Georgia Institute of Technology pointed out that Pope Innocent II tried to ban the crossbow in 1139, and argued that it would be almost impossible to enforce such a ban. Much better, he argued, to develop these technologies in ways that might make war zones safer for non-combatants.

Arkin suggests that "if these robots are used illegally, the policymakers, soldiers, industrialists and, yes, scientists involved should be held accountable." In other words, if a robot kills a person outside its rules or boundaries, the people involved in that robot's creation are responsible. But he offers this hedge from a 2007 book called "Killer Robots":


"It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield. But I am convinced that they can perform more ethically than human soldiers."

This is one of several questions we'll have to resolve as the technology continues its rapid advance.