
On the Acoustics of Emotion in Audio: What Speech, Music, and Sound have in Common

Overview of attention for article published in Frontiers in Psychology, January 2013

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (89th percentile)
  • Good Attention Score compared to outputs of the same age and source (71st percentile)

Mentioned by

  • 8 X users
  • 3 patents

Citations

  • 156 (Dimensions)

Readers on

  • 268 Mendeley
Title
On the Acoustics of Emotion in Audio: What Speech, Music, and Sound have in Common
Published in
Frontiers in Psychology, January 2013
DOI 10.3389/fpsyg.2013.00292
Authors

Felix Weninger, Florian Eyben, Björn W. Schuller, Marcello Mortillaro, Klaus R. Scherer

Abstract

Without doubt, there is emotional information in almost any kind of sound received by humans every day: be it the affective state of a person transmitted by means of speech; the emotion intended by a composer while writing a musical piece, or conveyed by a musician while performing it; or the affective state connected to an acoustic event occurring in the environment, in the soundtrack of a movie, or in a radio play. In the field of affective computing, there is currently some loosely connected research concerning each of these phenomena, but a holistic computational model of affect in sound is still lacking. For tomorrow's pervasive technical systems, including affective companions and robots, it is expected to be highly beneficial to understand the affective dimensions of "the sound that something makes," in order to evaluate the system's auditory environment and its own audio output. This article takes a first step toward a holistic computational model: starting from standard acoustic feature extraction schemes in the domains of speech, music, and sound analysis, we assess the value of individual features across these three domains, considering four audio databases with observer annotations in the arousal and valence dimensions. In our results, we find that, by selection of appropriate descriptors, cross-domain arousal and valence regression is feasible, achieving significant correlations with the observer annotations of up to 0.78 for arousal (training on sound and testing on enacted speech) and 0.60 for valence (training on enacted speech and testing on music). The high degree of cross-domain consistency in encoding the two main dimensions of affect may be attributable to the co-evolution of speech and music from multimodal affect bursts, including the integration of nature sounds for expressive effects.
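To make the cross-domain evaluation concrete, the sketch below illustrates the general recipe the abstract describes: extract acoustic descriptors, train an arousal regressor on one domain, test it on another, and report the Pearson correlation with the observer annotations. It is an illustration only, not the authors' pipeline: the feature set (a few librosa descriptors), the SVR regressor, and the synthetic stand-in "corpora" are all assumptions made here; the paper uses its own large-scale acoustic descriptor sets and four annotated databases.

```python
# Illustrative sketch of cross-domain affect regression (NOT the paper's
# pipeline): the features, regressor, and synthetic data are placeholders.
import numpy as np
import librosa
from scipy.stats import pearsonr
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def acoustic_features(y, sr):
    """Per-clip functionals (mean, std) of a few standard low-level descriptors."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # spectral envelope
    rms = librosa.feature.rms(y=y)                      # frame energy
    zcr = librosa.feature.zero_crossing_rate(y)         # noisiness
    lld = np.vstack([mfcc, rms, zcr])                   # low-level descriptors
    return np.concatenate([lld.mean(axis=1), lld.std(axis=1)])

# Synthetic stand-ins for two domains (e.g., train on sound, test on speech);
# a real experiment would load annotated clips from the respective databases.
rng = np.random.default_rng(0)
sr = 16000
clips = lambda n: [rng.standard_normal(sr) for _ in range(n)]  # 1-second clips
X_train = np.array([acoustic_features(y, sr) for y in clips(40)])
X_test = np.array([acoustic_features(y, sr) for y in clips(20)])
y_train = rng.uniform(-1, 1, 40)  # placeholder observer arousal ratings
y_test = rng.uniform(-1, 1, 20)

# Train on one domain, evaluate on the other via Pearson correlation.
scaler = StandardScaler().fit(X_train)
model = SVR().fit(scaler.transform(X_train), y_train)
r, _ = pearsonr(model.predict(scaler.transform(X_test)), y_test)
print(f"cross-domain arousal correlation: r = {r:.2f}")
```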

X Demographics

The data shown below were collected from the profiles of 8 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 268 Mendeley readers of this research output.

Geographical breakdown

Country          Count   As %
United States        2    <1%
United Kingdom       2    <1%
Switzerland          1    <1%
France               1    <1%
Germany              1    <1%
India                1    <1%
Canada               1    <1%
Italy                1    <1%
Japan                1    <1%
Other                3     1%
Unknown            254    95%

Demographic breakdown

Readers by professional status     Count   As %
Student > Ph.D. Student               59    22%
Researcher                            41    15%
Student > Master                      39    15%
Student > Bachelor                    24     9%
Professor > Associate Professor       12     4%
Other                                 36    13%
Unknown                               57    21%
Readers by discipline    Count   As %
Computer Science            54    20%
Psychology                  45    17%
Engineering                 29    11%
Arts and Humanities         25     9%
Social Sciences             11     4%
Other                       39    15%
Unknown                     65    24%
Attention Score in Context

This research output has an Altmetric Attention Score of 11. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 04 July 2023.
All research outputs                                   #3,095,902 of 24,187,594 outputs
Outputs from Frontiers in Psychology                   #5,889 of 32,513 outputs
Outputs of similar age                                 #31,553 of 288,811 outputs
Outputs of similar age from Frontiers in Psychology    #275 of 968 outputs
Altmetric has tracked 24,187,594 research outputs across all sources so far. Compared to these, this one has done well and is in the 87th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 32,513 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 12.8. This one has done well, scoring higher than 81% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 288,811 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 89% of its contemporaries.
We're also able to compare this research output to 968 others from the same source and published within six weeks on either side of this one. This one has gotten more attention than average, scoring higher than 71% of its contemporaries.
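The percentile figures above follow directly from the ranks and cohort sizes quoted on this page; here is a minimal sketch, assuming (our assumption, not Altmetric's published definition) that the percentile is simply the share of the cohort this output scores higher than.

```python
# Recompute the quoted percentiles from rank within each cohort.
# Assumption: percentile = share of cohort outputs this one outscores.
def percentile(rank: int, total: int) -> float:
    return (1 - rank / total) * 100

cohorts = {
    "all research outputs":     (3_095_902, 24_187_594),  # quoted: 87th
    "Frontiers in Psychology":  (5_889, 32_513),          # quoted: 81st
    "outputs of similar age":   (31_553, 288_811),        # quoted: 89th
    "similar age, same source": (275, 968),               # quoted: 71st
}
for name, (rank, total) in cohorts.items():
    print(f"{name}: #{rank:,} of {total:,} -> "
          f"{percentile(rank, total):.1f}th percentile")
```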