
Participants shift response deadlines based on list difficulty during reading-aloud megastudies

Overview of attention for an article published in Memory & Cognition, February 2017

Mentioned by
1 X user

Citations
3 Dimensions

Readers on
11 Mendeley
Title
Participants shift response deadlines based on list difficulty during reading-aloud megastudies
Published in
Memory & Cognition, February 2017
DOI 10.3758/s13421-016-0678-8
Authors

Michael J. Cortese, Maya M. Khanna, Robert Kopp, Jonathan B. Santo, Kailey S. Preston, Tyler Van Zuiden

Abstract

We tested the list homogeneity effect in reading aloud (e.g., Lupker, Brown, & Colombo, 1997) using a megastudy paradigm. In each of two conditions, we used 25 blocks of 100 trials. In the random condition, words were selected randomly for each block, whereas in the experimental condition, words were blocked by difficulty (e.g., easy words together, etc.), but the order of the blocks was randomized. We predicted that standard factors (e.g., frequency) would be more predictive of reaction times (RTs) in the blocked than in the random condition, because the range of RTs across the experiment would increase in the blocked condition. Indeed, we found that the standard deviations and ranges of RTs were larger in the blocked than in the random condition. In addition, an examination of items at the difficulty extremes (i.e., very easy vs. very difficult) demonstrated a response bias. In regression analyses, a predictor set of seven sublexical, lexical, and semantic variables accounted for 2.8% more RT variance (and 2.6% more zRT variance) in the blocked than in the random condition. These results indicate that response deadlines apply to megastudies of reading aloud, and that the influence of predictors may be underestimated in megastudies when item presentation is randomized. In addition, the CDP++ model accounted for 0.8% more variance in RTs (1.2% in zRTs) in the blocked than in the random condition. Thus, computational models may have more predictive power on item sets blocked by difficulty than on those presented in random order. The results also indicate that models of word processing need to accommodate response criterion shifts.
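
The key comparison in the abstract is the amount of item-level RT variance a fixed predictor set accounts for in the blocked versus the random condition. The sketch below illustrates that kind of comparison in Python; the file names, column names, and predictor labels are assumptions for illustration only, not the authors' actual materials or their seven-variable predictor set.

```python
# Minimal sketch of an item-level regression comparison like the one described
# in the abstract: fit the same predictor set to mean RTs from the blocked and
# random conditions and compare the variance accounted for (R^2).
# File names, column names, and predictors are illustrative assumptions.
import pandas as pd
import statsmodels.api as sm

# Stand-ins for the seven sublexical, lexical, and semantic variables.
PREDICTORS = ["log_frequency", "length", "orthographic_n",
              "age_of_acquisition", "imageability",
              "onset_complexity", "consistency"]

def variance_explained(items: pd.DataFrame, outcome: str) -> float:
    """Fit an OLS regression of item-level RTs on the predictor set; return R^2."""
    X = sm.add_constant(items[PREDICTORS])
    model = sm.OLS(items[outcome], X, missing="drop").fit()
    return model.rsquared

# Hypothetical item-level means, one row per word, for each presentation condition.
blocked = pd.read_csv("blocked_condition_item_means.csv")
random_ = pd.read_csv("random_condition_item_means.csv")

r2_blocked = variance_explained(blocked, outcome="mean_rt")
r2_random = variance_explained(random_, outcome="mean_rt")

# The abstract reports roughly 2.8% more RT variance explained in the blocked condition.
print(f"R^2 blocked: {r2_blocked:.3f}")
print(f"R^2 random:  {r2_random:.3f}")
print(f"Difference:  {r2_blocked - r2_random:+.3f}")
```

The same comparison could be run on zRTs by pointing `outcome` at a standardized-RT column, mirroring the abstract's parallel zRT analysis.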

X Demographics

The data shown below were collected from the profile of 1 X user who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 11 Mendeley readers of this research output.

Geographical breakdown

Country     Count   As %
Unknown     11      100%

Demographic breakdown

Readers by professional status    Count   As %
Professor                         3       27%
Lecturer                          2       18%
Student > Bachelor                1       9%
Student > Doctoral Student        1       9%
Researcher                        1       9%
Other                             1       9%
Unknown                           2       18%

Readers by discipline             Count   As %
Psychology                        4       36%
Arts and Humanities               1       9%
Linguistics                       1       9%
Medicine and Dentistry            1       9%
Neuroscience                      1       9%
Other                             1       9%
Unknown                           2       18%
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 17 February 2017.
Comparison group                                  Rank          Total outputs
All research outputs                              #20,403,545   22,953,506
Outputs from Memory & Cognition                   #1,499        1,580
Outputs of similar age                            #267,781      307,002
Outputs of similar age from Memory & Cognition    #17           19
Altmetric has tracked 22,953,506 research outputs across all sources so far. This one is in the 1st percentile – i.e., 1% of other outputs scored the same or lower than it.
So far Altmetric has tracked 1,580 research outputs from this source. They typically receive more attention than average, with a mean Attention Score of 8.6. This one is in the 1st percentile – i.e., 1% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 307,002 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 19 others from the same source and published within six weeks on either side of this one. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.
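
As a hedged illustration of the percentile-in-context idea described above, the sketch below ranks an output's Attention Score against a pool of contemporaries (outputs published within six weeks on either side). The pool, scores, and function name are hypothetical, and the real calculation also has to handle the many tied outputs with very low scores; this is not Altmetric's actual method.

```python
# Hedged sketch of a percentile-in-context calculation: the share of outputs in a
# comparison pool whose Attention Score is the same as or lower than this one's.
# The pool, scores, and function name are hypothetical illustrations.
from typing import Sequence

def percentile_in_context(score: float, pool_scores: Sequence[float]) -> float:
    """Percentage of pool outputs scoring the same as or lower than `score`."""
    if not pool_scores:
        raise ValueError("comparison pool is empty")
    same_or_lower = sum(1 for s in pool_scores if s <= score)
    return 100.0 * same_or_lower / len(pool_scores)

# Toy pool of contemporaries with hypothetical Attention Scores.
contemporaries = [0, 0, 1, 1, 2, 3, 5, 8, 12, 25]
print(f"{percentile_in_context(1, contemporaries):.0f}th percentile")  # -> 40th percentile
```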