Client: Human-Robot Interaction Length: 4 months Team: Joyce Ker, Christopher Perry, Emily Zhen
We wanted to test reciprocity in human-robot relationships. Reciprocity is fundamental to human relationships, and understanding it in human-robot relationships can help build acceptance of social and caretaker robots. We created a robot, 'Snackbot', and tested whether people would be more willing to help the robot if it gave them a snack first.
We created Snackbot using the Wizard of Oz technique to mimic a real robot. We focused on how to allow the robot to initiate contact and offer the snack before requesting help. After engaging the user, we observed whether they were more willing to help after being offered a clementine. We then gave the user two tasks, one lower effort (plugging in the robot) and one higher effort (finding the researcher), and measured the level of effort people put into helping the robot in both situations. Overall, we found that people were not very willing to help the robot. Outside of the dry run, the only person who went to find the researcher was part of the group which did not receive a snack. Certain biases due to time and location may have affected our results, which might differ if we ran this experiment elsewhere.
We believe that if a robot does a person a favor, the person is going to be more willing to put in effort to help out the robot. In our case, we are testing if people are more willing to help out a robot if the robot offers a snack first.
As a researcher, I conducted secondary research and conceptualized and executed the study (proposed the concept, helped with the design and fabrication of the body, coded the Arduino for the power light, and acted as the robot's creator during the study).
We explored research into reciprocity in both human-human and human-robot relationships. In Bartlett and DeSteno's paper, Gratitude and Prosocial Behavior: Helping When It Costs You, gratitude was found to be a strong factor in influencing a person to help others. Gratitude is important in creating prosocial reciprocal behavior, which is necessary for human relationships. In Sandoval, Brandstetter, Obaid, and Bartneck's paper, Reciprocity in Human-Robot Interaction: A Quantitative Approach Through the Prisoner's Dilemma and the Ultimatum Game, the authors found that although humans would reciprocate with robots in a game situation, they were less willing to cooperate with robots and found robots less agreeable than humans.
We conducted our experiment in Hunt Library, where the robot, Snackbot, would ask participants for help. We wanted to see if people would be more willing to help the robot if it offered them a snack first.
Independent Variable: Whether or not Snackbot offers a clementine first.
Dependent Variable: Level of help offered to the robot
Robot Design Highlights
Due to time constraints, we used Wizard of Oz techniques to conduct our experiment. We built a weighted exterior shell for the robot, with caster wheels on the bottom to help wheel it into position. We mounted an RGB LED visible from the outside to indicate the robot's battery level. The LED was connected to an Arduino and a button mounted underneath the robot, which allowed the researcher to reset the blinking red LED and make it turn solid green for a full minute. This delay allowed the researcher to quickly leave after fixing the robot. Next to the LED was a hatch that opened to let the research participant pull out the robot's plug. The face was a mounted iPad mini controlled through a shared screen on Skype, and the voice came from an internally mounted Bluetooth speaker connected to the operator's computer. The operator sent speech through a text-to-speech function.
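The battery-light behavior described above amounts to a tiny state machine: blinking red by default, solid green for a full minute after the hidden button is pressed. The actual implementation was an Arduino sketch; here is a minimal Python simulation of the same logic (timing and naming are illustrative assumptions):

```python
import time

class BatteryLight:
    """Simulates Snackbot's battery indicator: blinking red by default,
    solid green for 60 seconds after the hidden reset button is pressed."""

    GREEN_DURATION = 60  # seconds of solid green after a reset (assumed)

    def __init__(self, now=time.monotonic):
        self._now = now          # injectable clock, for testing
        self._green_until = 0.0  # timestamp when the green window ends

    def press_reset(self):
        # Researcher presses the button mounted under the robot.
        self._green_until = self._now() + self.GREEN_DURATION

    def color(self):
        # Solid green while the reset window is active, blinking red otherwise.
        if self._now() < self._green_until:
            return "solid green"
        return "blinking red"
```

The one-minute green window is what gave the Researcher time to walk away before the "low battery" state reappeared.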
Each of the team members had a different role, Researcher, Note taker and Operator:
Note taker: Wrote down the different interactions and the length of interactions people had with the robot. This information included who did and did not engage with the robot and the number of clementines taken.
Operator: Controlled the robot’s face and voice
Researcher: Pushed the robot into position and placed the snacks on the pedestal before going downstairs. They acted as the creator of the robot and would periodically come up to unplug the robot and restock the clementines.
We found a high-traffic location near the lending desk in Carnegie Mellon University's Hunt Library to place Snackbot. The Note taker and Operator sat near the robot, while the Researcher sat near the door of Hunt Library after pushing the robot into place.
When a person came within 8 feet of the robot, the robot would look at the person and use the following script to engage the person:
"Hello, would you like a snack?"
When the person approached the robot, it would say:
"Oh dear, it seems that I’m running low on power, could you please plug me in?"
"My plug is in the back, and there should be an outlet nearby…"
If the participant refused to help, the robot would say:
"Thank you, have a nice day!"
If the participant plugged in the robot, the robot would say the following:
"Something seems to be wrong. Can you go find my creator, Joyce? She told me she would be downstairs."
"Have a nice day"
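The branching script above is effectively a small dialogue state machine that the Operator stepped through by hand via text-to-speech. A sketch of that flow (the state and event names are our framing, not part of the original study):

```python
# Snackbot's scripted dialogue as a simple state machine.
# The actual study had a human Operator sending each line manually.

SCRIPT = {
    "greet":       "Hello, would you like a snack?",
    "ask_plug":    ("Oh dear, it seems that I'm running low on power, "
                    "could you please plug me in? My plug is in the back, "
                    "and there should be an outlet nearby…"),
    "refused":     "Thank you, have a nice day!",
    "ask_creator": ("Something seems to be wrong. Can you go find my creator, "
                    "Joyce? She told me she would be downstairs."),
    "done":        "Have a nice day",
}

# (state, participant action) -> next state
TRANSITIONS = {
    ("greet", "approached"):   "ask_plug",
    ("ask_plug", "refused"):   "refused",
    ("ask_plug", "plugged"):   "ask_creator",
    ("ask_creator", "left"):   "done",
}

def next_state(state, event):
    """Advance the dialogue given the participant's action; stay put otherwise."""
    return TRANSITIONS.get((state, event), state)
```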
We had initially intended to run the test on 12-20 subjects who would plug in the robot, with half of the subjects offered a snack and the other half not. We ended up with 6 participants in the snack condition and 5 in the no-snack condition who plugged in the robot.
The purple portion shows the number of people who helped plug the robot in: 6 with a snack and 5 without. Quantitatively, there is not a large difference between the number of people who helped. However, when we take into account the total number of people who stopped and look at the proportion who helped, only 15% of the people who stopped ended up helping plug the robot in when there was a snack, while without a snack, 26.3% helped. Additionally, outside of the dry run, the only person who tried to find the researcher was part of the no-snack group. This was the opposite of what we had hypothesized, as we thought the reciprocity of a snack would result in more help. This is probably due to the selection bias we encountered: with a snack, the robot attracts both robot enthusiasts and people who simply want a snack; without a snack, it attracts only robot enthusiasts, who are more likely to help the robot.
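The percentages above come from dividing helpers by stoppers. The helper counts (6 and 5) are reported directly; the stopper totals below are back-calculated from the reported 15% and 26.3% rates, so treat them as approximate reconstructions rather than raw data:

```python
def helper_rate(helped, stopped):
    """Fraction of people who stopped at the robot and went on to help."""
    return helped / stopped

# Helper counts are from the study; stopper totals are back-calculated
# from the reported percentages (assumption, not raw data).
with_snack = helper_rate(helped=6, stopped=40)     # 0.15  -> 15%
without_snack = helper_rate(helped=5, stopped=19)  # ~0.263 -> 26.3%
```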
• One boy put the clementine back after snackbot asked for help.
• A few groups of girls wanted to help and apologized because they didn’t want to touch the robot to help it.
• A lot of people tried to touch the screen or talk to it.
• Outside of dry run, the only person who went to get the researcher was from the non-snack run.
• One boy said “what, that’s it?” when robot thanked him for plugging it in.
• There seemed to be some sort of feeling-based reciprocation.
• Only during the dry run (with a snack) and one non-snack scenario did someone go to get the researcher.
• Plugging in the robot, our low effort task, was a bigger deal than we expected, and people would often refuse.
• Snackbot became a Facebook celebrity, where people asked for help through Facebook.
• When the lid was on the clementines, no one was willing to grab a snack. We had to remove the lid which instantly biased whoever decided to interact with the snackbot.
• It was taking too long for the robot to introduce itself, so before it was done talking people would have taken the clementine and walked away before the snackbot could ask for help. After the dry run we added a sign to introduce the robot.
• People would stay next to the robot for long periods of time, making it difficult and suspicious for the Researcher to come unplug the robot. We had to send other students in the same class to unplug the robot.
• It was difficult to control the face and voice at the same time.
• People studying next to the robot began to get irritated.
• Technical difficulties, such as unstable internet and a weak Bluetooth connection.
We should redesign Snackbot so that people approach the robot without knowing whether or not it has a snack, reducing the bias in who approaches. We should also run this test in different locations at different times; the timing and location (Carnegie Mellon University during finals period) were not representative of the greater population and affected who would actually stop to help the robot. We should also test other types of snacks or favors; perhaps a higher-value gift would cause a greater feeling of gratitude or obligation. Finally, we should explore in depth other factors, such as robot appearance or single/group dynamics, and how they might have affected the study.