
The researchers conducted a study that they will present next week at an international conference on human-robot interaction, so the full paper hasn’t yet been published. However, an early press release and a preliminary paper give some of the details. The study initially set out to determine whether high-rise occupants would trust a robot’s instructions in an evacuation scenario, and the researchers were particularly interested in which robot behaviors would win or lose people’s trust.
The 26 participants had no idea what the experiment was about; they were simply asked to follow a robot with the words “Emergency Guide Robot” printed prominently on its side. The robot’s first task was to lead them to a room where they would read an article and fill out a survey (all a distraction from the real task).
For half of the participants, however, the robot was designed to display incompetence: it first led them to the wrong room, where it wandered in circles for a bit before directing them to the correct one. So when the fire alarm went off and the experiment room filled with (artificial) smoke, it might have seemed unwise to keep following the robot’s directions. And yet follow it they did—all 26 of them, even those who had just witnessed some seriously worrying behavior from the robot.
via Ars Technica


