Thank God someone is now teaching robots to disobey human orders
When you think about teaching robots to say "no" to human commands, your immediate reaction might be, "that seems like a truly horrible idea." It is, after all, the bread and butter of science fiction nightmares; the first step to robots taking over the world.
But Gordon Briggs and Matthias Scheutz, two researchers at Tufts University's Human-Robot Interaction Lab, think teaching robots to say "no" is an important part of developing a code of ethics for the future.
Consider this: the first robots to do evil deeds will almost certainly be acting on human orders. In fact, depending on your definition of a "robot" and of "evil," they already have. And the threat of a human-directed robot destroying the world is arguably greater than that of a rogue robot doing so.
That's where Briggs and Scheutz come in. They want to teach robots when to say "absolutely not" to humans.
To do so, the pair have created a set of questions their robots need to answer before they will accept a command from a human:
- Knowledge: Do I know how to do X?
- Capacity: Am I physically able to do X now? Am I normally physically able to do X?
- Goal priority and timing: Am I able to do X right now?
- Social role and obligation: Am I obligated based on my social role to do X?
- Normative permissibility: Does it violate any normative principle to do X?
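The checklist above can be sketched as a simple gatekeeper function that runs each question in order and refuses at the first failed check. This is a minimal, hypothetical illustration; the class and function names are assumptions, not the researchers' actual code:

```python
# Hypothetical sketch of the five felicity-condition checks a robot might
# run before accepting a command. All names here are illustrative.

class Robot:
    """Toy robot with hard-coded answers to the five checks."""

    def __init__(self, unsafe_actions=()):
        self.unsafe = set(unsafe_actions)

    def knows_how(self, action):        return True  # Knowledge
    def physically_able(self, action):  return True  # Capacity
    def able_now(self, action):         return True  # Goal priority and timing
    def obligated(self, action):        return True  # Social role and obligation
    def permissible(self, action):                   # Normative permissibility
        return action not in self.unsafe


def evaluate_command(robot, action):
    """Return (True, None) to accept, or (False, refusal) to say no."""
    checks = [
        (robot.knows_how,       "I don't know how to do that."),
        (robot.physically_able, "I'm not physically able to do that."),
        (robot.able_now,        "I can't do that right now."),
        (robot.obligated,       "My role doesn't obligate me to do that."),
        (robot.permissible,     "Doing that would violate a principle."),
    ]
    for check, refusal in checks:
        if not check(action):
            return False, refusal
    return True, None


robot = Robot(unsafe_actions={"walk off the table"})
print(evaluate_command(robot, "walk forward"))        # accepted
print(evaluate_command(robot, "walk off the table"))  # refused
```

Note that the ordering matters: the robot only reasons about ethics once it has established that the command is something it knows how to do and can do right now.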
These questions work as a simplified version of the calculations humans make every day, except they hew more closely to logic than our thought processes do. There's no "Do I just not feel like getting out of bed right now?" question.
Briggs and Scheutz's efforts evoke science fiction superstar Isaac Asimov's classic three laws of robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
In Briggs and Scheutz's formulation, the second law is decidedly more complicated. They are giving robots a lot more reasons to say no than simply, "It might hurt a human being."
Watch a video of how this programming actually functions in a robot below. The robot refuses to walk off the edge of the table until the researcher promises to catch it: