The Quality of Response Time Data Inference: A Blinded, Collaborative Assessment of the Validity of Cognitive Models

Overview of attention for article published in Psychonomic Bulletin & Review, February 2018

Mentioned by
10 X users

Citations
110 Dimensions citations

Readers on
141 Mendeley readers

Title
The Quality of Response Time Data Inference: A Blinded, Collaborative Assessment of the Validity of Cognitive Models

Published in
Psychonomic Bulletin & Review, February 2018

DOI
10.3758/s13423-017-1417-2

Authors

Gilles Dutilh, Jeffrey Annis, Scott D. Brown, Peter Cassey, Nathan J. Evans, Raoul P. P. P. Grasman, Guy E. Hawkins, Andrew Heathcote, William R. Holmes, Angelos-Miltiadis Krypotos, Colin N. Kupitz, Fábio P. Leite, Veronika Lerche, Yi-Shin Lin, Gordon D. Logan, Thomas J. Palmeri, Jeffrey J. Starns, Jennifer S. Trueblood, Leendert van Maanen, Don van Ravenzwaaij, Joachim Vandekerckhove, Ingmar Visser, Andreas Voss, Corey N. White, Thomas V. Wiecki, Jörg Rieskamp, Chris Donkin

Abstract

Most data analyses rely on models. To complement statistical models, psychologists have developed cognitive models, which translate observed variables into psychologically interesting constructs. Response time models, in particular, assume that response time and accuracy are the observed expression of latent variables including (1) ease of processing, (2) response caution, (3) response bias, and (4) non-decision time. Inferences about these psychological factors hinge upon the validity of the models' parameters. Here, we use a blinded, collaborative approach to assess the validity of such model-based inferences. Seventeen teams of researchers analyzed the same 14 data sets. In each of these two-condition data sets, we manipulated properties of participants' behavior in a two-alternative forced choice task. The contributing teams were blind to the manipulations and had to infer what aspect of behavior was changed using their method of choice. The contributors chose to employ a variety of models, estimation methods, and inference procedures. Our results show that, although conclusions were similar across different methods, these "modeler's degrees of freedom" did affect the teams' inferences. Interestingly, many of the simpler approaches yielded inferences as robust and accurate as those from the more complex methods. We recommend that, in general, cognitive models become a typical analysis tool for response time data. In particular, we argue that the simpler models and procedures are sufficient for standard experimental designs. We finish by outlining situations in which more complicated models and methods may be necessary, and discuss potential pitfalls when interpreting the output from response time models.
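
The latent variables listed in the abstract are parameters of sequential-sampling models such as the drift diffusion model. As a concrete illustration of the kind of simple procedure the abstract contrasts with more complex methods, the sketch below implements the EZ-diffusion back-transformation (Wagenmakers, van der Maas, & Grasman, 2007), which maps three summary statistics of one condition of a two-alternative forced choice task onto drift rate (ease of processing), boundary separation (response caution), and non-decision time. EZ diffusion assumes an unbiased starting point, so response bias is not recovered. This is an illustrative sketch, not the analysis pipeline of any contributing team; the function name and the edge-case policy are choices of this example.

    import math

    def ez_diffusion(prop_correct, rt_var, rt_mean, s=0.1):
        # EZ-diffusion back-transformation (Wagenmakers et al., 2007).
        # Inputs: proportion correct, variance and mean of correct RTs (seconds).
        # Returns drift rate v, boundary separation a, and non-decision time Ter.
        # s is the conventional scaling parameter (0.1).
        if not 0.0 < prop_correct < 1.0 or prop_correct == 0.5:
            raise ValueError("apply an edge correction to accuracy first")
        L = math.log(prop_correct / (1.0 - prop_correct))   # logit of accuracy
        x = L * (L * prop_correct**2 - L * prop_correct + prop_correct - 0.5) / rt_var
        v = math.copysign(s * x**0.25, prop_correct - 0.5)  # drift rate
        a = s**2 * L / v                                    # boundary separation
        y = -v * a / s**2
        mdt = (a / (2.0 * v)) * (1.0 - math.exp(y)) / (1.0 + math.exp(y))
        ter = rt_mean - mdt                                 # non-decision time
        return {"drift": v, "boundary": a, "nondecision": ter}

    # Worked example from Wagenmakers et al. (2007): Pc = .80, VRT = .112, MRT = .723
    # recovers v of about 0.099, a of about 0.140, and Ter of about 0.300.
    print(ez_diffusion(0.80, 0.112, 0.723))

The appeal of this transformation is that it requires no iterative fitting at all, which is one reason simple methods can be competitive with full likelihood-based estimation for standard two-condition designs.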

X Demographics

The data shown below were collected from the profiles of the 10 X users who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for the 141 Mendeley readers of this research output.

Geographical breakdown

Country    Count   As %
Brazil         1    <1%
Unknown      140    99%

Demographic breakdown

Readers by professional status    Count   As %
Student > Ph.D. Student              34    24%
Student > Master                     18    13%
Researcher                           17    12%
Student > Bachelor                    9     6%
Student > Doctoral Student            8     6%
Other                                24    17%
Unknown                              31    22%

Readers by discipline                   Count   As %
Psychology                                 66    47%
Neuroscience                               16    11%
Agricultural and Biological Sciences        3     2%
Social Sciences                             3     2%
Business, Management and Accounting         2     1%
Other                                       8     6%
Unknown                                    43    30%