
Feeling backwards? How temporal order in speech affects the time course of vocal emotion recognition

Overview of attention for article published in Frontiers in Psychology, January 2013

About this Attention Score

  • Average Attention Score compared to outputs of the same age
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

twitter
2 X users
googleplus
1 Google+ user

Citations

dimensions_citation
35 Dimensions

Readers on

mendeley
54 Mendeley
DOI 10.3389/fpsyg.2013.00367
Authors

Simon Rigoulot, Eugen Wassiliwizky, Marc D. Pell

Abstract

Recent studies suggest that the time course for recognizing vocal expressions of basic emotion in speech varies significantly by emotion type, implying that listeners uncover acoustic evidence about emotions at different rates (e.g., fear is recognized most quickly, whereas happiness and disgust are recognized relatively slowly; Pell and Kotz, 2011). To investigate whether vocal emotion recognition is dictated largely by the amount of time listeners are exposed to speech or by the position of critical emotional cues in the utterance, 40 English participants judged the meaning of emotionally inflected pseudo-utterances presented in a gating paradigm. Utterances were gated as a function of their syllable structure in segments of increasing duration from the end of the utterance (i.e., gated syllable by syllable from the offset rather than the onset of the stimulus). Accuracy for detecting six target emotions in each gate condition, and the mean identification point for each emotion in milliseconds, were analyzed and compared to the results of Pell and Kotz (2011). We again found significant emotion-specific differences in the time needed to accurately recognize emotions from speech prosody, along with new evidence that utterance-final syllables tended to facilitate listeners' accuracy in many conditions compared to utterance-initial syllables. The time needed to recognize fear, anger, sadness, and neutral expressions from speech cues was not influenced by how utterances were gated, whereas happiness and disgust were recognized significantly faster when listeners heard the end of utterances first. Our data provide new clues about the relative time course for recognizing vocally expressed emotions within the 400–1200 ms time window, while highlighting that emotion recognition from prosody can be shaped by the temporal properties of speech.

X Demographics


The data shown below were collected from the profiles of 2 X users who shared this research output.
Mendeley readers


The data shown below were compiled from readership statistics for 54 Mendeley readers of this research output.

Geographical breakdown

Country          Count     %
India                1    2%
United Kingdom       1    2%
Mexico               1    2%
Argentina            1    2%
Japan                1    2%
Unknown             49   91%

Demographic breakdown

Readers by professional status   Count     %
Student > Ph. D. Student            13   24%
Researcher                          10   19%
Student > Master                     6   11%
Professor > Associate Professor      4    7%
Other                                4    7%
Other                               10   19%
Unknown                              7   13%
Readers by discipline                  Count     %
Psychology                                22   41%
Neuroscience                               6   11%
Agricultural and Biological Sciences       5    9%
Arts and Humanities                        4    7%
Computer Science                           3    6%
Other                                      6   11%
Unknown                                    8   15%
Attention Score in Context


This research output has an Altmetric Attention Score of 3. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 02 July 2013.
All research outputs:                                  #14,215,275 of 24,027,644
Outputs from Frontiers in Psychology:                  #13,856 of 32,249
Outputs of similar age:                                #165,520 of 288,044
Outputs of similar age from Frontiers in Psychology:   #556 of 968
Altmetric has tracked 24,027,644 research outputs across all sources so far. This one is in the 40th percentile – i.e., 40% of other outputs scored the same or lower than it.
So far Altmetric has tracked 32,249 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 12.8. This one has received more attention than average, scoring higher than 55% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 288,044 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 42nd percentile – i.e., 42% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 968 others from the same source and published within six weeks on either side of this one. This one is in the 41st percentile – i.e., 41% of its contemporaries scored the same or lower than it.
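The percentile figures in this section follow directly from each rank/total pair; a minimal sketch of that arithmetic (the function name and the flooring convention are assumptions for illustration, not Altmetric's published method):

```python
def percentile_from_rank(rank: int, total: int) -> int:
    """Percentage of outputs ranked at or below this one, floored.

    A rank of #165,520 of 288,044 means 288,044 - 165,520 outputs
    scored the same or lower, so the percentile is that count as a
    (floored) percentage of the total.
    """
    return int(100 * (total - rank) / total)


# Outputs of similar age: ranked #165,520 of 288,044
print(percentile_from_rank(165_520, 288_044))        # 42

# All research outputs: ranked #14,215,275 of 24,027,644
print(percentile_from_rank(14_215_275, 24_027_644))  # 40
```

The first two rank/total pairs above reproduce the quoted 42nd and 40th percentiles under this convention; small discrepancies (e.g., the 41st percentile for same-source contemporaries) are plausible because the rankings were snapshotted when the output was last mentioned.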