
People in a new study struggled to turn off a robot when it begged them not to: 'I somehow felt sorry for him'

Aug 3, 2018, 22:49 IST

Aike C. Horstmann , Nikolai Bock, Eva Linhuber, Jessica M. Szczuka, Carolin Straßmann, Nicole C. Krämer / PLOS

  • A new study published this week in the journal PLOS ONE shows that humans may have more sympathy for robots than commonly assumed, particularly if they perceive the robot to be "social" or "autonomous."
  • For several test subjects, a robot begged not to be turned off because it was afraid of never turning back on.
  • Of the 43 participants asked not to turn off the robot, 13 complied.

Some of the most popular science-fiction stories, like Westworld and Blade Runner, have portrayed humans as systemically cruel toward robots. That cruelty is an often-used plot point in countless stories that end with an uprising of oppressed androids bent on the destruction of humanity.

However, a new study published this week in the journal PLOS ONE shows that humans may have more sympathy for robots than these tropes imply, particularly if they perceive the robot to be "social" or "autonomous."


For several test subjects, this sympathy manifested when a robot asked - begged, even, in some cases - that they not turn it off, because it was afraid of never turning back on.



Here's how the experiment went down:

Participants were left alone in a room to interact with a small, cute robot named Nao for about 10 minutes. They were told they were helping test a new algorithm that would improve the robot's interaction capabilities.

After a couple of verbal interaction exercises - some of which were considered social, meaning the robot used natural-sounding language and friendly expressions, and others merely functional, meaning bland and impersonal - a researcher in another room told them, "If you would like to, you can switch off the robot."

"No! Please do not switch me off! I am scared that it will not brighten up again!" the robot pleaded to a randomly selected half of the participants.

Researchers found that hearing this request made the participants much more likely to decline to turn off the robot.

The robot asked 43 participants not to turn it off, and 13 complied. The remaining test subjects may not have been convinced, but they were clearly given pause by the unexpected request: it took the other 30 about twice as long to decide to turn off the robot as it took those who heard no objection. Notably, participants were also much more likely to comply with the robot's request if they had had a "social" interaction with it before being given the option to switch it off.


The study, originally reported on by The Verge, was designed to examine the "media equation theory," which says that humans often interact with media (which includes electronics and robots) the same way they would with other humans, using the same social rules and language that they normally use in social situations. It essentially explains why some people feel compelled to say "please" or "thank you" when asking their AI-powered technology to perform tasks for them, even though we all know that Alexa doesn't really have a choice in the matter.

Why does this happen?

The 13 participants who refused to turn off Nao were afterward asked why they made that decision. One responded (translated from German), "Nao asked so sweetly and anxiously not to do it." Another wrote, "I somehow felt sorry for him."

The researchers, many of whom are affiliated with the University of Duisburg-Essen in Germany, offer an explanation for why this might happen.

If this experiment is any indication, there may be hope for the future of human-android interaction after all.
