People refused to turn off a robot when it asked them not to

A study published in the journal PLOS ONE suggests that people tend to treat robots as humans rather than machines.

The list of different types of robots that could be used in our daily lives is as long as the list of their possible areas of application. Based on media equation assumptions, people are inclined to perceive a robot as a living social entity. Since it is not common to switch off a social interaction partner, people should be reluctant to switch off a robot they have just interacted with, especially when it displays social skills and autonomously objects to being switched off.

To extend previous research as well as media equation findings, the aim of this study was to examine whether an empathetic, rather humanlike-behaving robot is perceived as more alive than a machinelike-behaving robot, and whether this perception influences people's reluctance to switch the robot off.

When people interact with different media, they often behave as if they were interacting with another person and mindlessly apply a wide range of social rules. According to researchers, "individuals' interactions with computers, television, and new media are fundamentally social and natural, just like interactions in real life". This phenomenon is described by media equation theory, which holds that "media equal real life".

A cover story was employed to give participants a plausible explanation of why they were asked to interact with the robot and why the experimenter left the room during that interaction. First, the subjects were told that the study's goal was to improve the robot's interaction capabilities by testing a new algorithm.

Each of the two interaction tasks lasted about five minutes, meaning that all participants interacted with the robot for about ten minutes in total. Participants had no prior training session or any other form of interaction with the robot besides the two interaction tasks. Video cameras were installed to record whether the participants switched off the robot and how long they took to decide.

The participants were told that the cameras were necessary to check whether the robot made any mistakes. The experiment used a "Wizard of Oz" strategy, meaning that the experimenter controlled the course of the interaction from a separate room.

To conceal this, participants were told that the instructor had to verify, while the participant was interacting with the robot, that the data was being transferred correctly to a high-performance computer located one floor above the laboratory. The participants were told that the instructor would not be able to hear or see anything during the interaction and that they should ring a bell to let the instructor know when they had finished one of the tasks. The instructor would then give further instructions via loudspeakers.

After the instructor presented the cover story, subjects were asked to read a written description of the experiment's procedure and purpose as well as the declaration of consent. Once written informed consent was obtained, the experiment started with a first set of questionnaires.

Then, the robot Nao was introduced and a few of its functions were explained. On this occasion the instructor also pointed to the robot's on/off button and explained that pressing it once would make the robot give a brief status report, while holding it down would shut the robot down.

Even though a few participants had had prior contact with the robot, none of them had switched it off before. Thus, all of them were unfamiliar with the procedure and acted upon the same instruction.

To avoid excessive priming, the switching-off function was explained incidentally, alongside a few other functions, and it was never mentioned that the participants would be given the choice to switch the robot off at the end of the interaction.

The aim of the current study was to investigate people's reactions when given the choice to switch off a robot with which they had just interacted socially, and which voiced an emotional objection to being switched off. The robot's fearful display of protest had the strongest influence on people's intention to switch it off. Moreover, people hesitated longest after a functional interaction combined with the robot expressing the wish to stay switched on.
