
The Interpretation of Scholars' Interpretations of Confidence Intervals: Criticism, Replication, and Extension of Hoekstra et al. (2014)

Overview of attention for article published in Frontiers in Psychology, July 2016

Mentioned by
2 X users

Readers on Mendeley
15 Mendeley readers
Published in
Frontiers in Psychology, July 2016
DOI 10.3389/fpsyg.2016.01042
Authors

Miguel A. García-Pérez, Rocío Alcalá-Quintana

Abstract

Hoekstra et al. (Psychonomic Bulletin & Review, 2014, 21:1157-1164) surveyed the interpretation of confidence intervals (CIs) by first-year students, master students, and researchers with six items expressing misinterpretations of CIs. They asked respondents to answer all items, computed the number of items endorsed, and concluded that misinterpretation of CIs is robust across groups. Their design may have produced this outcome artifactually for reasons that we describe. This paper first discusses the two interpretations of CIs and, hence, why misinterpretation cannot be inferred from endorsement of some of the items. Next, a re-analysis of Hoekstra et al.'s data reveals some puzzling differences between first-year and master students that demand further investigation. For that purpose, we designed a replication study with an extended questionnaire including two additional items that express correct interpretations of CIs (to compare endorsement of correct vs. nominally incorrect interpretations), and we asked master students to indicate which items they would have omitted had they had the option (to distinguish deliberate from uninformed endorsement caused by the forced-response format). Results showed that incognizant first-year students endorsed correct and nominally incorrect items identically, revealing that the two item types are not differentially attractive superficially; in contrast, master students were distinctively more prone to endorsing correct items when their uninformed responses were removed, although they admitted to nescience more often than might have been expected. Implications for teaching practices are discussed.
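The distinction the abstract turns on — a correct versus an incorrect reading of a CI — can be made concrete with a small simulation. Under the frequentist interpretation, "95% confidence" is a property of the interval-building procedure across repeated samples, not a probability statement about any single computed interval. The sketch below (an illustration of that general point, not code from the paper; all names and parameter values are assumptions) draws many samples from a known population and counts how often the resulting intervals cover the true mean:

```python
import random
import statistics

# Illustrative sketch: the frequentist reading of a 95% CI is a claim
# about the procedure. Across repeated samples, roughly 95% of intervals
# built this way should contain the true (here, known) population mean.

random.seed(1)
TRUE_MEAN, TRUE_SD = 10.0, 2.0   # assumed population parameters
N, REPS = 30, 2000               # sample size and number of repetitions
Z = 1.96                         # normal critical value for a 95% interval

covered = 0
for _ in range(REPS):
    sample = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(N)]
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / N ** 0.5   # standard error of the mean
    lo, hi = m - Z * se, m + Z * se
    if lo <= TRUE_MEAN <= hi:
        covered += 1

print(f"Empirical coverage: {covered / REPS:.3f}")
```

The empirical coverage lands near 0.95 (slightly under, since a z rather than a t critical value is used at n = 30). The misinterpretations surveyed by Hoekstra et al. attach the 95% to one specific computed interval, which this procedure-level property does not license.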

X Demographics

The data shown below were collected from the profiles of 2 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 15 Mendeley readers of this research output.

Geographical breakdown

Country    Count   As %
Macao        1      7%
Austria      1      7%
Unknown     13     87%

Demographic breakdown

Readers by professional status   Count   As %
Researcher                         5     33%
Student > Master                   2     13%
Student > Ph.D. Student            2     13%
Other                              1      7%
Lecturer                           1      7%
Other                              2     13%
Unknown                            2     13%
Readers by discipline                 Count   As %
Psychology                              4     27%
Computer Science                        2     13%
Medicine and Dentistry                  2     13%
Social Sciences                         2     13%
Business, Management and Accounting     1      7%
Other                                   3     20%
Unknown                                 1      7%
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 11 August 2016.
All research outputs: #15,866,607 of 23,577,654 outputs
Outputs from Frontiers in Psychology: #19,563 of 31,443 outputs
Outputs of similar age: #228,727 of 357,125 outputs
Outputs of similar age from Frontiers in Psychology: #279 of 390 outputs
Altmetric has tracked 23,577,654 research outputs across all sources so far. This one is in the 22nd percentile – i.e., 22% of other outputs scored the same or lower than it.
So far Altmetric has tracked 31,443 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 12.6. This one is in the 31st percentile – i.e., 31% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 357,125 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 27th percentile – i.e., 27% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 390 others from the same source and published within six weeks on either side of this one. This one is in the 22nd percentile – i.e., 22% of its contemporaries scored the same or lower than it.