
Observed effects of “distributional learning” may not relate to the number of peaks. A test of “dispersion” as a confounding factor

Overview of attention for article published in Frontiers in Psychology, September 2015

About this Attention Score

  • Average Attention Score compared to outputs of the same age

Mentioned by

  • 3 X users

Citations

  • 10 Dimensions

Readers on

  • 13 Mendeley
DOI 10.3389/fpsyg.2015.01341

Authors

Karin Wanrooij, Paul Boersma, Titia Benders

Abstract

Distributional learning of speech sounds is learning from simply being exposed to frequency distributions of speech sounds in one's surroundings. In laboratory settings, the mechanism has been reported to be discernible after only a few minutes of exposure, in both infants and adults. These "effects of distributional training" have traditionally been attributed to the difference in the number of peaks between the experimental distribution (two peaks) and the control distribution (one or zero peaks). However, none of the earlier studies fully excluded a possibly confounding effect of the dispersion in the distributions. Additionally, some studies with a non-speech control condition did not control for a possible difference between processing speech and non-speech. The current study presents an experiment that corrects both imperfections. Spanish listeners were exposed to either a bimodal distribution encompassing the Dutch contrast /ɑ/∼/a/ or a unimodal distribution with the same dispersion. Before and after training, their accuracy of categorization of [ɑ]- and [a]-tokens was measured. A traditionally calculated p-value showed no significant difference in categorization improvement between bimodally and unimodally trained participants. Because of this null result, a Bayesian method was used to assess the odds in favor of the null hypothesis. Four different Bayes factors, each calculated under a different prior belief in the truth value of previously found effect sizes, indicated the absence of a difference between bimodally and unimodally trained participants. The implication is that "effects of distributional training" observed in the lab are not induced by the number of peaks in the distributions.
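The Bayesian step the abstract describes can be illustrated with a small sketch. This is not the authors' actual analysis; it assumes one common recipe (a Dienes-style Bayes factor) in which the alternative hypothesis is a normal prior centred on a previously reported effect size, so that each choice of prior mean and spread yields a different Bayes factor, much as the four calculations mentioned in the abstract do.

```python
import math

def normal_pdf(x, mu, sd):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def bf01(obs_diff, se, prior_mean, prior_sd):
    """Bayes factor in favor of H0 (no true difference) over H1.

    H0 predicts the observed group difference via N(obs_diff | 0, se).
    H1 places a normal prior N(prior_mean, prior_sd) on the true effect;
    integrating the sampling distribution over that prior gives
    N(obs_diff | prior_mean, sqrt(se^2 + prior_sd^2)).
    All names and parameter choices here are illustrative, not the paper's.
    """
    m0 = normal_pdf(obs_diff, 0.0, se)
    m1 = normal_pdf(obs_diff, prior_mean, math.sqrt(se ** 2 + prior_sd ** 2))
    return m0 / m1

# A near-zero observed difference, judged against a prior that expects a
# sizeable effect (as earlier studies reported), yields BF01 > 1: the
# data favour the null, mirroring the pattern reported in the abstract.
evidence_for_null = bf01(obs_diff=0.01, se=0.05, prior_mean=0.2, prior_sd=0.1)
```

Trying several `prior_mean`/`prior_sd` pairs, one per previously found effect size, is what produces several Bayes factors from the same data.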

X Demographics

The data shown below were collected from the profiles of 3 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 13 Mendeley readers of this research output.

Geographical breakdown

Country  Count  As %
Unknown     13  100%

Demographic breakdown

Readers by professional status  Count  As %
Student > PhD Student               7   54%
Student > Bachelor                  1    8%
Lecturer                            1    8%
Professor                           1    8%
Student > Postgraduate              1    8%
Other                               0    0%
Unknown                             2   15%
Readers by discipline                 Count  As %
Psychology                                5   38%
Linguistics                               3   23%
Agricultural and Biological Sciences      1    8%
Neuroscience                              1    8%
Unknown                                   3   23%
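The "As %" columns appear to be each count's share of the 13 readers rounded to the nearest whole percent, which is why the professional-status column sums to 101% rather than 100%. A minimal sketch of that rounding, assuming this is indeed how the figures were derived:

```python
def as_percent(counts, total):
    """Round each count's share of the total to the nearest whole percent,
    as the Mendeley breakdown tables appear to do."""
    return [round(100 * c / total) for c in counts]

# PhD students, Bachelor, Lecturer, Professor, Postgraduate, Other, Unknown
shares = as_percent([7, 1, 1, 1, 1, 0, 2], 13)
# → [54, 8, 8, 8, 8, 0, 15]; note the rounded shares sum to 101.
```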
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 05 October 2015.
All research outputs: #15,346,908 of 22,828,180 outputs
Outputs from Frontiers in Psychology: #18,679 of 29,801 outputs
Outputs of similar age: #157,512 of 268,887 outputs
Outputs of similar age from Frontiers in Psychology: #395 of 572 outputs
Altmetric has tracked 22,828,180 research outputs across all sources so far. This one is in the 22nd percentile – i.e., 22% of other outputs scored the same or lower than it.
So far Altmetric has tracked 29,801 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 12.5. This one is in the 31st percentile – i.e., 31% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 268,887 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 32nd percentile – i.e., 32% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 572 others from the same source and published within six weeks on either side of this one. This one is in the 26th percentile – i.e., 26% of its contemporaries scored the same or lower than it.
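The percentile statements above can be read as the share of tracked outputs whose Attention Score is the same as or lower than this one's. A minimal sketch under that tie-inclusive definition (ties and scores changing over time mean this will not exactly reproduce Altmetric's published ranks):

```python
def percentile_from_scores(score, all_scores):
    """Percent of outputs whose Attention Score is <= the given score."""
    same_or_lower = sum(1 for s in all_scores if s <= score)
    return 100 * same_or_lower / len(all_scores)

# Toy example: a score of 1 among four outputs scoring 0, 0, 1 and 2
# places the output at the 75th percentile under this definition.
toy_percentile = percentile_from_scores(1, [0, 0, 1, 2])
```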