
Publication bias and the failure of replication in experimental psychology

Overview of attention for article published in Psychonomic Bulletin & Review, October 2012

Mentioned by

News: 6 news outlets
Blogs: 5 blogs
Policy: 1 policy source
X (Twitter): 25 X users

Citations

Dimensions: 155

Readers on

Mendeley: 361
CiteULike: 2
Title: Publication bias and the failure of replication in experimental psychology
Published in: Psychonomic Bulletin & Review, October 2012
DOI: 10.3758/s13423-012-0322-y
Authors: Gregory Francis

Abstract

Replication of empirical findings plays a fundamental role in science. Among experimental psychologists, successful replication enhances belief in a finding, while a failure to replicate is often interpreted to mean that one of the experiments is flawed. This view is wrong. Because experimental psychology uses statistics, empirical findings should appear with predictable probabilities. In a misguided effort to demonstrate successful replication of empirical findings and avoid failures to replicate, experimental psychologists sometimes report too many positive results. Rather than strengthen confidence in an effect, too much successful replication actually indicates publication bias, which invalidates entire sets of experimental findings. Researchers cannot judge the validity of a set of biased experiments because the experiment set may consist entirely of type I errors. This article shows how an investigation of the effect sizes from reported experiments can test for publication bias by looking for too much successful replication. Simulated experiments demonstrate that the publication bias test is able to discriminate biased experiment sets from unbiased experiment sets, but it is conservative about reporting bias. The test is then applied to several studies of prominent phenomena that highlight how publication bias contaminates some findings in experimental psychology. Additional simulated experiments demonstrate that using Bayesian methods of data analysis can reduce (and in some cases, eliminate) the occurrence of publication bias. Such methods should be part of a systematic process to remove publication bias from experimental psychology and reinstate the important role of replication as a final arbiter of scientific findings.
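The check the abstract describes, flagging a set of experiments as biased when it is "too successful" given the power of its studies, can be sketched roughly as follows. This is a minimal illustration of an excess-significance test under simplifying assumptions (two-sample t-tests, a sample-size-weighted pooled effect size, and hypothetical study numbers), not the article's exact procedure.

```python
# Minimal sketch of an excess-significance check, assuming two-sample t-tests.
# The study data below are hypothetical, not taken from the article.
import numpy as np
from scipy import stats

def two_sample_power(d, n1, n2, alpha=0.05):
    """Power of a two-sided, two-sample t-test when the true effect size is d."""
    df = n1 + n2 - 2
    ncp = d * np.sqrt(n1 * n2 / (n1 + n2))        # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)       # rejection threshold
    return (1 - stats.nct.cdf(t_crit, df, ncp)    # upper rejection tail
            + stats.nct.cdf(-t_crit, df, ncp))    # lower rejection tail

def excess_success_probability(effect_sizes, n1s, n2s, alpha=0.05):
    """Probability that every experiment in the set rejects H0, given a pooled
    effect size; a very small value suggests too much successful replication."""
    weights = np.array(n1s) + np.array(n2s)
    pooled_d = np.average(effect_sizes, weights=weights)   # simple pooling
    powers = [two_sample_power(pooled_d, n1, n2, alpha)
              for n1, n2 in zip(n1s, n2s)]
    return float(np.prod(powers))

# Hypothetical set of five "successful" experiments
p_all = excess_success_probability(
    effect_sizes=[0.55, 0.48, 0.62, 0.50, 0.58],
    n1s=[20, 25, 18, 22, 30],
    n2s=[20, 25, 18, 22, 30])
print(f"P(all experiments significant) = {p_all:.3f}")
# A common (conservative) criterion flags publication bias when this
# probability falls below 0.1.
```

With moderate effect sizes and small samples like these, the joint probability of five significant results is well under 0.1, which is the sense in which uniform success across underpowered studies is itself evidence of bias.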

X Demographics

The data shown here were collected from the profiles of the 25 X users who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for the 361 Mendeley readers of this research output.

Geographical breakdown

Country                               Count   As %
United States                            11     3%
Canada                                    3    <1%
Netherlands                               2    <1%
Switzerland                               2    <1%
Belgium                                   2    <1%
United Kingdom                            2    <1%
France                                    1    <1%
Italy                                     1    <1%
Germany                                   1    <1%
Other                                     6     2%
Unknown                                 330    91%

Demographic breakdown

Readers by professional status        Count   As %
Student > Ph.D. Student                  76    21%
Student > Master                         61    17%
Student > Bachelor                       49    14%
Researcher                               36    10%
Professor > Associate Professor          21     6%
Other                                    69    19%
Unknown                                  49    14%

Readers by discipline                 Count   As %
Psychology                              212    59%
Social Sciences                          16     4%
Neuroscience                             11     3%
Medicine and Dentistry                    8     2%
Agricultural and Biological Sciences      7     2%
Other                                    40    11%
Unknown                                  67    19%