
How many trials are required for parameter estimation in diffusion modeling? A comparison of different optimization criteria

Overview of attention for article published in Behavior Research Methods, June 2016

About this Attention Score

  • Average Attention Score compared to outputs of the same age
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

3 X users

Citations

111 Dimensions

Readers on

147 Mendeley
1 CiteULike
Title
How many trials are required for parameter estimation in diffusion modeling? A comparison of different optimization criteria
Published in
Behavior Research Methods, June 2016
DOI 10.3758/s13428-016-0740-2
Pubmed ID
Authors

Veronika Lerche, Andreas Voss, Markus Nagler

Abstract

Diffusion models (Ratcliff, 1978) make it possible to identify and separate different cognitive processes underlying responses in binary decision tasks (e.g., the speed of information accumulation vs. the degree of response conservatism). This is possible because diffusion models exploit a large share of the available information: parameter estimation draws not only on mean response times and error rates, but also on the full response time distributions of both correct and error responses. In a series of simulation studies, the efficiency and robustness of parameter recovery were compared for models differing in complexity (i.e., in the number of free parameters) and in trial numbers (ranging from 24 to 5,000), using three different optimization criteria (maximum likelihood, Kolmogorov-Smirnov, and chi-square) that are all implemented in the latest version of fast-dm (Voss, Voss, & Lerche, 2015). The results revealed that maximum likelihood is superior for uncontaminated data, but in the presence of fast contaminants, Kolmogorov-Smirnov outperforms the other two methods. For most conditions, chi-square-based parameter estimation leads to less precise results than the other optimization criteria. The performance of the fast-dm methods was also compared to the EZ approach (Wagenmakers, van der Maas, & Grasman, 2007) and to a Bayesian implementation (Wiecki, Sofer, & Frank, 2013). Recommendations for trial numbers are derived from the results for models of different complexities. Interestingly, under certain conditions even small numbers of trials (N < 100) are sufficient for robust parameter estimation.
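The EZ approach referenced in the abstract (Wagenmakers, van der Maas, & Grasman, 2007) recovers drift rate, boundary separation, and non-decision time in closed form from just three summary statistics: accuracy and the mean and variance of correct response times. As a rough illustration of why a simple model can be fit with relatively few trials, here is a minimal Python sketch of those closed-form estimators. The function name and the example numbers are illustrative assumptions, not taken from the article or from fast-dm; the scaling parameter s = 0.1 is the convention used in the EZ paper.

```python
import math

def ez_diffusion(prop_correct, rt_var, rt_mean, s=0.1):
    """Closed-form EZ-diffusion estimates (Wagenmakers et al., 2007).

    prop_correct : proportion of correct responses (assumes 0.5 < Pc < 1)
    rt_var       : variance of correct response times, in seconds^2
    rt_mean      : mean of correct response times, in seconds
    s            : scaling parameter of the diffusion process (0.1 by convention)
    Returns (v, a, ter): drift rate, boundary separation, non-decision time.
    """
    # Logit of accuracy
    L = math.log(prop_correct / (1.0 - prop_correct))
    # Intermediate quantity combining accuracy and RT variance
    x = L * (L * prop_correct**2 - L * prop_correct + prop_correct - 0.5) / rt_var
    # Drift rate (signed by whether accuracy is above or below chance)
    v = math.copysign(1.0, prop_correct - 0.5) * s * x ** 0.25
    # Boundary separation
    a = s**2 * L / v
    # Mean decision time, subtracted from mean RT to give non-decision time
    y = -v * a / s**2
    mdt = (a / (2.0 * v)) * (1.0 - math.exp(y)) / (1.0 + math.exp(y))
    ter = rt_mean - mdt
    return v, a, ter

# Illustrative input values (not taken from the article):
# 80% correct, RT variance 0.112 s^2, mean RT 0.723 s
print(ez_diffusion(0.8, 0.112, 0.723))
```

The sketch is only meant to make the closed-form EZ estimators concrete; the simulation studies in the article compare such summary-statistic methods with the distribution-based fast-dm criteria and a Bayesian implementation across trial numbers and model complexities.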

X Demographics

The data shown below were collected from the profiles of the 3 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 147 Mendeley readers of this research output.

Geographical breakdown

Country          Count  As %
United States        1   <1%
Germany              1   <1%
Unknown            145   99%

Demographic breakdown

Readers by professional status          Count  As %
Student > Ph. D. Student                   33   22%
Researcher                                 24   16%
Student > Master                           24   16%
Student > Bachelor                         15   10%
Student > Doctoral Student                 10    7%
Other                                      25   17%
Unknown                                    16   11%

Readers by discipline                   Count  As %
Psychology                                 65   44%
Neuroscience                               25   17%
Linguistics                                 3    2%
Computer Science                            3    2%
Agricultural and Biological Sciences        3    2%
Other                                      20   14%
Unknown                                    28   19%
Attention Score in Context

This research output has an Altmetric Attention Score of 2. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 20 June 2016.
All research outputs: #15,739,010 of 25,371,288 outputs
Outputs from Behavior Research Methods: #1,421 of 2,524 outputs
Outputs of similar age: #203,062 of 360,128 outputs
Outputs of similar age from Behavior Research Methods: #23 of 40 outputs
Altmetric has tracked 25,371,288 research outputs across all sources so far. This one is in the 37th percentile – i.e., 37% of other outputs scored the same or lower than it.
So far Altmetric has tracked 2,524 research outputs from this source. They typically receive more attention than average, with a mean Attention Score of 8.1. This one is in the 41st percentile – i.e., 41% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 360,128 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 42nd percentile – i.e., 42% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 40 others from the same source and published within six weeks on either side of this one. This one is in the 40th percentile – i.e., 40% of its contemporaries scored the same or lower than it.