
The Voice of Emotion across Species: How Do Human Listeners Recognize Animals' Affective States?

Overview of attention for article published in PLOS ONE, March 2014

About this Attention Score

  • In the top 5% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (95th percentile)
  • High Attention Score compared to outputs of the same age and source (92nd percentile)

Mentioned by

  • Blogs: 2
  • X users: 42

Citations

  • Dimensions: 44

Readers on

  • Mendeley: 151
Title
The Voice of Emotion across Species: How Do Human Listeners Recognize Animals' Affective States?
Published in
PLOS ONE, March 2014
DOI 10.1371/journal.pone.0091192
Authors

Marina Scheumann, Anna S. Hasting, Sonja A. Kotz, Elke Zimmermann

Abstract

Voice-induced cross-taxa emotional recognition is the ability to understand the emotional state of another species based on its voice. In the past, induced affective states, experience-dependent higher cognitive processes, and cross-taxa universal acoustic coding and processing mechanisms have all been proposed as the basis of this ability in humans. The present study sets out to distinguish the influence of familiarity and phylogeny on voice-induced cross-taxa emotional perception in humans. For the first time, two perspectives are taken into account: the self-perspective (i.e. the emotional valence induced in the listener) versus the others-perspective (i.e. correct recognition of the emotional valence of the recording context). Twenty-eight male participants listened to 192 vocalizations of four different species (human infant, dog, chimpanzee and tree shrew). Stimuli were recorded either in an agonistic (negative emotional valence) or an affiliative (positive emotional valence) context. Participants rated the emotional valence of the stimuli from both the self- and the others-perspective using a 5-point version of the Self-Assessment Manikin (SAM). Familiarity was assessed based on subjective rating, objective labelling of the respective stimuli and interaction time with the respective species. Participants reliably recognized the emotional valence of human voices, whereas the results for animal voices were mixed. The correct classification of animal voices depended on the listener's familiarity with the species and the call type/recording context, with less influence from induced emotional states and phylogeny. Our results provide the first evidence that explicit voice-induced cross-taxa emotional recognition in humans is shaped more by experience-dependent cognitive mechanisms than by induced affective states or cross-taxa universal acoustic coding and processing mechanisms.

X Demographics

The data shown below were collected from the profiles of 42 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 151 Mendeley readers of this research output.

Geographical breakdown

Country         Count   As %
United Kingdom      3     2%
Hungary             2     1%
Germany             2     1%
Japan               2     1%
Austria             1    <1%
Chile               1    <1%
United States       1    <1%
Unknown           139    92%

Demographic breakdown

Readers by professional status   Count   As %
Student > Master                    34    23%
Student > Ph.D. Student             29    19%
Researcher                          23    15%
Student > Bachelor                  12     8%
Other                               10     7%
Other                               27    18%
Unknown                             16    11%

Readers by discipline                        Count   As %
Agricultural and Biological Sciences            48    32%
Psychology                                      33    22%
Veterinary Science and Veterinary Medicine       8     5%
Neuroscience                                     8     5%
Arts and Humanities                              6     4%
Other                                           21    14%
Unknown                                         27    18%
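
As a quick sanity check on the breakdowns above, the "As %" column is each count divided by the 151 total Mendeley readers, apparently rounded to the nearest whole percent with sub-1% shares shown as "<1%". A minimal sketch of that rule in Python (the display convention is inferred from the printed values, not taken from Altmetric documentation):

```python
# Minimal sketch: reproduce the "As %" column from the raw Mendeley counts.
# TOTAL_READERS and the spot-check counts come from the tables above; the
# rounding/"<1%" display rule is an assumption inferred from those values.

TOTAL_READERS = 151

def as_percent(count: int, total: int = TOTAL_READERS) -> str:
    pct = 100 * count / total
    return "<1%" if pct < 1 else f"{round(pct)}%"

# Spot checks against the breakdowns above:
assert as_percent(34) == "23%"   # Student > Master: 34/151 = 22.5%
assert as_percent(139) == "92%"  # Unknown country: 139/151 = 92.1%
assert as_percent(1) == "<1%"    # Austria, Chile, United States
```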
Attention Score in Context

This research output has an Altmetric Attention Score of 38. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 29 February 2024.
All research outputs:                  #1,070,853 of 25,394,764 outputs
Outputs from PLOS ONE:                 #13,757 of 221,086 outputs
Outputs of similar age:                #10,331 of 235,737 outputs
Outputs of similar age from PLOS ONE:  #414 of 5,769 outputs
Altmetric has tracked 25,394,764 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 95th percentile: it's in the top 5% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 221,086 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 15.7. This one has done particularly well, scoring higher than 93% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 235,737 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 95% of its contemporaries.
We're also able to compare this research output to 5,769 others from the same source and published within six weeks on either side of this one. This one has done particularly well, scoring higher than 92% of its contemporaries.
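
Each percentile quoted above follows directly from its rank/total pair: for example, #1,070,853 of 25,394,764 leaves about 95.8% of tracked outputs scoring below this one, reported as the 95th percentile. A minimal sketch of that arithmetic in Python, using the ranks and totals from the table above (flooring to a whole percentile is an assumption inferred from the quoted figures, not documented Altmetric behaviour):

```python
# Minimal sketch: derive the percentile claims from the rank/total pairs
# reported above. Flooring the percentile to a whole number is an
# assumption that matches how the figures are quoted on the page.
import math

def percentile(rank: int, total: int) -> int:
    # Percentage of tracked outputs that scored lower than this one.
    return math.floor(100 * (total - rank) / total)

contexts = {
    "All research outputs":                 (1_070_853, 25_394_764),
    "Outputs from PLOS ONE":                (13_757, 221_086),
    "Outputs of similar age":               (10_331, 235_737),
    "Outputs of similar age from PLOS ONE": (414, 5_769),
}

for name, (rank, total) in contexts.items():
    print(f"{name}: top {100 * rank / total:.1f}%, "
          f"higher than {percentile(rank, total)}% of peers")
# Prints 95, 93, 95 and 92, matching the percentages quoted above.
```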