
Systematic analysis of video data from different human–robot interaction studies: a categorization of social signals during error situations

Overview of attention for article published in Frontiers in Psychology, July 2015

About this Attention Score

  • Good Attention Score compared to outputs of the same age (73rd percentile)
  • Good Attention Score compared to outputs of the same age and source (66th percentile)

Mentioned by

  • 3 X users
  • 1 Wikipedia page

Citations

  • 56 citations (Dimensions)

Readers on

  • 95 readers (Mendeley)
DOI 10.3389/fpsyg.2015.00931
Authors

Manuel Giuliani, Nicole Mirnig, Gerald Stollnberger, Susanne Stadler, Roland Buchner, Manfred Tscheligi

Abstract

Human–robot interactions are often affected by error situations caused by either the robot or the human. Robots would therefore profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show during error situations in human–robot interaction experiments. To that end, we analyzed 201 videos from five human–robot interaction user studies with varying tasks, drawn from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction; technical failures are caused by technical shortcomings of the robot. The video analysis shows that study participants use many head movements and very few gestures, but often smile, when in an error situation with the robot. Participants also sometimes stop moving at the beginning of error situations. We further found that participants talked more during social norm violations and less during technical failures. Finally, participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking) when they interact with the robot alone, with no experimenter or other human present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human–robot interaction systems: builders should consider adding modules for the recognition and classification of head movements to the robot's input channels, and evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.

X Demographics

The data shown below were collected from the profiles of 3 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 95 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown    95       100%

Demographic breakdown

Readers by professional status    Count    As %
Student > Ph.D. Student           19       20%
Student > Master                  15       16%
Researcher                        12       13%
Student > Bachelor                 9        9%
Student > Doctoral Student         4        4%
Other                             14       15%
Unknown                           22       23%

Readers by discipline             Count    As %
Computer Science                  19       20%
Psychology                        17       18%
Engineering                       17       18%
Social Sciences                    6        6%
Linguistics                        4        4%
Other                              9        9%
Unknown                           23       24%
Attention Score in Context

This research output has an Altmetric Attention Score of 5. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 15 July 2020.
  • All research outputs: #6,044,976 of 22,816,807 outputs
  • Outputs from Frontiers in Psychology: #8,590 of 29,760 outputs
  • Outputs of similar age: #69,247 of 262,361 outputs
  • Outputs of similar age from Frontiers in Psychology: #182 of 552 outputs
Altmetric has tracked 22,816,807 research outputs across all sources so far. This one has received more attention than most of them and is in the 73rd percentile.
So far Altmetric has tracked 29,760 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 12.5. This one has received more attention than average, scoring higher than 70% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 262,361 tracked outputs that were published within six weeks on either side of this one in any source. This one has received more attention than average, scoring higher than 73% of its contemporaries.
We're also able to compare this research output to 552 others from the same source that were published within six weeks on either side of this one. This one has received more attention than average, scoring higher than 66% of its contemporaries.