
Examining publication bias—a simulation-based evaluation of statistical tests on publication bias

Overview of attention for article published in PeerJ, November 2017

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (92nd percentile)
  • High Attention Score compared to outputs of the same age and source (83rd percentile)

Mentioned by

  • 2 blogs
  • 18 X users
  • 1 Facebook page

Citations

  • 16 Dimensions

Readers on

  • 33 Mendeley
Title
Examining publication bias—a simulation-based evaluation of statistical tests on publication bias
Published in
PeerJ, November 2017
DOI 10.7717/peerj.4115
Authors

Andreas Schneck

Abstract

Publication bias is a form of scientific misconduct. It threatens the validity of research results and the credibility of science. Although several tests for publication bias exist, no in-depth evaluation is available that examines which test performs best in different research settings. Four tests for publication bias, Egger's funnel plot asymmetry test (FAT), p-uniform, the test of excess significance (TES), and the caliper test (CT), were evaluated in a Monte Carlo simulation. Two different types of publication bias and three degrees of bias (0%, 50%, 100%) were simulated. The type of publication bias was defined either as file-drawer, meaning the repeated analysis of new datasets until a significant result is obtained, or as p-hacking, meaning the inclusion of covariates in order to obtain a significant result. In addition, the underlying effect (β = 0, 0.5, 1, 1.5), effect heterogeneity, the number of observations in the simulated primary studies (N = 100, 500), and the number of primary studies available to the publication bias tests (K = 100, 1,000) were varied. All tests evaluated were able to identify publication bias in both the file-drawer and the p-hacking condition. With the exception of the 15%- and 20%-caliper tests, the false positive rates were unbiased. The FAT had the largest statistical power in the file-drawer conditions, whereas under p-hacking the TES was slightly better, except under effect heterogeneity. The CTs, however, were inferior to the other tests under effect homogeneity and had decent statistical power only in conditions with 1,000 primary studies. The FAT is recommended as a test for publication bias in standard meta-analyses with no or only small effect heterogeneity. If two-sided publication bias is suspected, and under p-hacking, the TES is the first alternative to the FAT. The 5%-caliper test is recommended under effect heterogeneity combined with a large number of primary studies, as may be found when publication bias is examined in a discipline-wide setting in which primary studies cover different research problems.
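The simulation design and two of the evaluated tests are straightforward to sketch. The Python snippet below is a minimal illustration, not the authors' code: it mimics the file-drawer mechanism described in the abstract (drawing new datasets until a significant result appears) and then applies Egger's FAT and a 5%-caliper test to the resulting collection of primary studies. The one-sided selection rule, function names, and parameter defaults are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' code) of file-drawer
# publication bias and two of the evaluated tests: Egger's FAT and a
# 5%-caliper test. One-sided selection (only positive significant slopes
# are "published") is an assumption made here so that funnel asymmetry
# is visible; the paper also considers two-sided bias.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulate_study(beta=0.0, n_obs=100):
    """One 'published' primary study under file-drawer bias: new datasets
    are drawn until the OLS slope is positive and significant at 5%."""
    while True:
        x = rng.normal(size=n_obs)
        y = beta * x + rng.normal(size=n_obs)
        fit = stats.linregress(x, y)
        if fit.slope > 0 and fit.pvalue < 0.05:
            return fit.slope, fit.stderr

def egger_fat(effects, ses):
    """Egger's funnel asymmetry test: regress effect/SE on 1/SE and test
    whether the intercept differs from zero (two-sided p-value)."""
    fit = stats.linregress(1.0 / ses, effects / ses)
    t = fit.intercept / fit.intercept_stderr
    return 2 * stats.t.sf(abs(t), df=len(effects) - 2)

def caliper_test(effects, ses, caliper=0.05, crit=1.96):
    """Caliper test: among |z| statistics within +/- caliper of the critical
    value, bias predicts an excess just above the threshold (binomial test)."""
    z = np.abs(effects / ses)
    over = int(np.sum((z >= crit) & (z < crit * (1 + caliper))))
    under = int(np.sum((z < crit) & (z >= crit * (1 - caliper))))
    return stats.binomtest(over, over + under, 0.5,
                           alternative="greater").pvalue

# K = 100 fully biased primary studies with no underlying effect (beta = 0):
studies = [simulate_study(beta=0.0, n_obs=100) for _ in range(100)]
effects, ses = map(np.array, zip(*studies))
print("FAT p-value:    ", egger_fat(effects, ses))      # small -> asymmetry
print("caliper p-value:", caliper_test(effects, ses))   # small -> excess
```

With β = 0 and 100% bias, both p-values come out small, matching the abstract's finding that the tests detect file-drawer bias; setting the selection condition to always publish would illustrate the false-positive (0% bias) case instead.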

X Demographics

The data shown below were collected from the profiles of 18 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 33 Mendeley readers of this research output.

Geographical breakdown

Country | Count | As %
Unknown | 33 | 100%

Demographic breakdown

Readers by professional status | Count | As %
Student > Ph.D. Student | 6 | 18%
Student > Bachelor | 4 | 12%
Librarian | 3 | 9%
Student > Doctoral Student | 3 | 9%
Researcher | 3 | 9%
Other | 9 | 27%
Unknown | 5 | 15%

Readers by discipline | Count | As %
Psychology | 7 | 21%
Medicine and Dentistry | 6 | 18%
Social Sciences | 3 | 9%
Agricultural and Biological Sciences | 2 | 6%
Neuroscience | 2 | 6%
Other | 6 | 18%
Unknown | 7 | 21%
Attention Score in Context

This research output has an Altmetric Attention Score of 23. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 24 February 2022.
All research outputs: #1,593,054 of 24,904,819 outputs
Outputs from PeerJ: #1,697 of 14,851 outputs
Outputs of similar age: #35,621 of 449,414 outputs
Outputs of similar age from PeerJ: #57 of 337 outputs
Altmetric has tracked 24,904,819 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 93rd percentile: it's in the top 10% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 14,851 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 17.0. This one has done well, scoring higher than 88% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 449,414 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 92% of its contemporaries.
We're also able to compare this research output to 337 others from the same source and published within six weeks on either side of this one. This one has done well, scoring higher than 83% of its contemporaries.