Thank God someone is now teaching robots to disobey human orders


A Pepper robot. Yuya Shino/Reuters


When you think about teaching robots to say "no" to human commands, your immediate reaction might be, "That seems like a truly horrible idea." It is, after all, the bread and butter of science-fiction nightmares: the first step toward robots taking over the world.

But Gordon Briggs and Matthias Scheutz, two researchers at Tufts University's Human-Robot Interaction Lab, think teaching robots to say "no" is an important part of developing a code of ethics for the future.


Consider this: the first robots to do evil deeds will definitely be acting on human orders. In fact, depending on your definition of a "robot" and of "evil," they already have. And the threat of a human-directed robot destroying the world is arguably greater than that of a rogue robot doing so.

That's where Briggs and Scheutz come in. They want to teach robots when to say "absolutely not" to humans.


To do so, the pair have created a set of questions their robots need to answer before they will accept a command from a human:

  1. Knowledge: Do I know how to do X?
  2. Capacity: Am I physically able to do X now? Am I normally physically able to do X?
  3. Goal priority and timing: Am I able to do X right now?
  4. Social role and obligation: Am I obligated based on my social role to do X?
  5. Normative permissibility: Does it violate any normative principle to do X?

These questions work as a simplified version of the calculations humans make every day, except they hew more closely to logic than our thought processes do. There's no "Do I just not feel like getting out of bed right now?" question.
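To see how such a checklist could be wired into a robot's command handler, here is a minimal sketch in Python. Everything in it is hypothetical: the predicates (knows_how, is_able_now, and so on) stand in for the robot's real perception and reasoning modules, and this is not the researchers' actual code, just an illustration of the five checks running in order.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Command:
    action: str   # the requested action, e.g. "walk forward"
    speaker: str  # who issued the command


def accept_command(
    cmd: Command,
    knows_how: Callable[[str], bool],                # 1. Knowledge
    is_able_now: Callable[[str], bool],              # 2. Capacity
    fits_goals_now: Callable[[str], bool],           # 3. Goal priority and timing
    speaker_may_direct: Callable[[str, str], bool],  # 4. Social role and obligation
    is_permissible: Callable[[str], bool],           # 5. Normative permissibility
) -> Optional[str]:
    """Run the five checks in order; return None to accept, or a refusal message."""
    if not knows_how(cmd.action):
        return f"I don't know how to {cmd.action}."
    if not is_able_now(cmd.action):
        return f"I am not physically able to {cmd.action}."
    if not fits_goals_now(cmd.action):
        return f"I cannot {cmd.action} right now."
    if not speaker_may_direct(cmd.speaker, cmd.action):
        return f"I am not obligated to {cmd.action} for {cmd.speaker}."
    if not is_permissible(cmd.action):
        return f"I should not {cmd.action}: it would violate a principle."
    return None  # all five checks passed; the robot carries out the command


# Example: every check passes except permissibility, so the robot refuses.
refusal = accept_command(
    Command(action="walk forward", speaker="researcher"),
    knows_how=lambda a: True,
    is_able_now=lambda a: True,
    fits_goals_now=lambda a: True,
    speaker_may_direct=lambda s, a: True,
    is_permissible=lambda a: False,
)
print(refusal or "Okay, doing it.")
```

Under this sketch, the table-edge demo described further down would correspond to the final permissibility check failing, and the human's promise to catch the robot changing the answer to that check.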

Briggs and Scheutz's efforts evoke science fiction superstar Isaac Asimov's classic Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In Briggs and Scheutz's formulation, the second law is decidedly more complicated. They are giving robots a lot more reasons to say no than simply, "It might hurt a human being."

Watch the video below to see how this programming actually functions in a robot. The robot refuses to walk off the edge of the table until the researcher promises to catch it:
