
Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training

Overview of attention for an article published in Frontiers in Neuroscience, January 2013

Mentioned by

2 X users

Readers on

137 Mendeley
2 CiteULike
Title
Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training
Published in
Frontiers in Neuroscience, January 2013
DOI 10.3389/fnins.2013.00034
PubMed ID
Authors

Lynne E. Bernstein, Edward T. Auer, Silvio P. Eberhardt, Jintao Jiang

Abstract

Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called "reverse hierarchy theory" of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning.
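
The abstract says the speech was degraded "by vocoding" but does not give the vocoder's parameters. For readers unfamiliar with the technique, the sketch below is a generic noise-excited channel vocoder of the kind widely used in speech perception studies: each frequency band's temporal envelope is kept while its fine structure is replaced with band-limited noise. The band count, frequency range, and filter settings here are illustrative assumptions, not the paper's actual settings.

```python
# Minimal noise-excited channel vocoder sketch (illustrative only;
# the paper's actual vocoder parameters are not stated in the abstract).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def vocode(signal, fs, n_bands=8, f_lo=100.0, f_hi=7000.0):
    """Degrade speech by keeping each band's amplitude envelope and
    replacing its fine structure with band-limited noise."""
    # Log-spaced band edges between f_lo and f_hi (an assumption).
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    out = np.zeros(len(signal), dtype=float)
    noise = np.random.randn(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))       # band amplitude envelope
        carrier = sosfiltfilt(sos, noise)      # band-limited noise carrier
        out += envelope * carrier
    return out / (np.max(np.abs(out)) + 1e-12)  # normalize to avoid clipping
```

In vocoders of this type, fewer bands generally produce more severe degradation, which is why studies often manipulate the band count to control intelligibility.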

X Demographics

The data shown below were collected from the profiles of 2 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 137 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
United States 4 3%
Japan 2 1%
France 1 <1%
Germany 1 <1%
India 1 <1%
Unknown 128 93%

Demographic breakdown

Readers by professional status Count As %
Student > Ph.D. Student 36 26%
Researcher 24 18%
Student > Master 14 10%
Student > Doctoral Student 11 8%
Professor 10 7%
Other 24 18%
Unknown 18 13%
Readers by discipline Count As %
Psychology 41 30%
Neuroscience 19 14%
Medicine and Dentistry 13 9%
Linguistics 12 9%
Engineering 8 6%
Other 21 15%
Unknown 23 17%
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 25 March 2013.
All research outputs: #19,962,154 of 25,394,764 outputs
Outputs from Frontiers in Neuroscience: #8,675 of 11,544 outputs
Outputs of similar age: #221,450 of 289,149 outputs
Outputs of similar age from Frontiers in Neuroscience: #169 of 246 outputs
Altmetric has tracked 25,394,764 research outputs across all sources so far. This one is in the 18th percentile – i.e., 18% of other outputs scored the same or lower than it.
So far Altmetric has tracked 11,544 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 11.0. This one is in the 18th percentile – i.e., 18% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 289,149 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 20th percentile – i.e., 20% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 246 others from the same source and published within six weeks on either side of this one. This one is in the 24th percentile – i.e., 24% of its contemporaries scored the same or lower than it.
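
As a worked illustration of the percentile wording used above ("N% of outputs scored the same or lower than it"), the sketch below computes that figure from a list of scores. The function name and tie handling are illustrative assumptions, not Altmetric's published algorithm.

```python
# Illustrative percentile calculation matching the phrasing
# "N% of outputs scored the same or lower than it". This is an
# assumption about the arithmetic, not Altmetric's actual code.
def attention_percentile(score: float, all_scores: list[float]) -> float:
    """Percent of outputs whose score is <= the given score."""
    same_or_lower = sum(1 for s in all_scores if s <= score)
    return 100.0 * same_or_lower / len(all_scores)

# Example: a score of 1 among peers where many scored 0 or 1.
scores = [0, 0, 0, 1, 1, 2, 5, 11, 40, 100]
print(attention_percentile(1, scores))  # 50.0 in this toy sample
```

Because low scores are heavily tied (many outputs score 0 or 1), a tie-inclusive count like this may not match naive rank arithmetic (position divided by total), which could explain small discrepancies between the ranks and percentiles quoted above.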