
Testing Hypotheses on Risk Factors for Scientific Misconduct via Matched-Control Analysis of Papers Containing Problematic Image Duplications

Overview of attention for article published in Science and Engineering Ethics, February 2018

About this Attention Score

  • In the top 5% of all research outputs scored by Altmetric
  • Among the highest-scoring outputs from this source (#50 of 975)
  • High Attention Score compared to outputs of the same age (94th percentile)
  • High Attention Score compared to outputs of the same age and source (91st percentile)

Mentioned by

  • 4 blogs
  • 41 X users
  • 2 Facebook pages
  • 4 Wikipedia pages

Citations

  • 33 Dimensions

Readers on

  • 80 Mendeley
Title: Testing Hypotheses on Risk Factors for Scientific Misconduct via Matched-Control Analysis of Papers Containing Problematic Image Duplications
Published in: Science and Engineering Ethics, February 2018
DOI: 10.1007/s11948-018-0023-7
Pubmed ID:
Authors: Daniele Fanelli, Rodrigo Costas, Ferric C. Fang, Arturo Casadevall, Elisabeth M. Bik

Abstract

It is commonly hypothesized that scientists are more likely to engage in data falsification and fabrication when they are subject to pressures to publish, when they are not restrained by forms of social control, when they work in countries lacking policies to tackle scientific misconduct, and when they are male. Evidence to test these hypotheses, however, is inconclusive due to the difficulties of obtaining unbiased data. Here we report a pre-registered test of these four hypotheses, conducted on papers that were identified in a previous study as containing problematic image duplications through a systematic screening of the journal PLoS ONE. Image duplications were classified into three categories based on their complexity, with category 1 being most likely to reflect unintentional error and category 3 being most likely to reflect intentional fabrication. We tested multiple parameters connected to the hypotheses above with a matched-control paradigm, by collecting two controls for each paper containing duplications. Category 1 duplications were mostly not associated with any of the parameters tested, as was predicted based on the assumption that these duplications were mostly not due to misconduct. Categories 2 and 3, however, exhibited numerous statistically significant associations. Results of univariable and multivariable analyses support the hypotheses that academic culture, peer control, cash-based publication incentives and national misconduct policies might affect scientific integrity. No clear support was found for the "pressures to publish" hypothesis. Female authors were found to be as likely as male authors to publish duplicated images. Country-level parameters generally exhibited stronger effects than individual-level parameters, with developing countries being significantly more likely to produce problematic image duplications. This suggests that promoting good research practices in all countries should be a priority for the international research integrity agenda.
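
The abstract describes a matched-control design with two controls per case, analysed with univariable and multivariable models. As a rough illustration of how such a 1:2 matched case-control analysis is commonly fit, conditional logistic regression via statsmodels, and not the authors' actual code, data, or variable set, a minimal sketch with entirely hypothetical variables might look like this:

```python
# Hypothetical sketch of a 1:2 matched case-control analysis, loosely
# mirroring the design described in the abstract (not the authors' pipeline).
# Each matched set holds one paper with duplications (case) and two controls.
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(0)
n_sets = 200  # number of matched triplets (invented for the example)

df = pd.DataFrame({
    "set_id": np.repeat(np.arange(n_sets), 3),         # matching stratum
    "is_case": np.tile([1, 0, 0], n_sets),              # 1 case + 2 controls
    "cash_incentive": rng.integers(0, 2, n_sets * 3),   # hypothetical country-level flag
    "pubs_per_year": rng.poisson(3, n_sets * 3),        # hypothetical productivity proxy
})

# Conditional logistic regression conditions on each matched set, so only
# within-set contrasts between the case and its controls inform the estimates.
model = ConditionalLogit(
    df["is_case"],
    df[["cash_incentive", "pubs_per_year"]],
    groups=df["set_id"],
)
result = model.fit()
print(result.summary())
print("Odds ratios:", np.exp(result.params).round(2))
```

With real data the exponentiated coefficients would be interpreted as odds ratios for producing a problematic duplication, estimated within matched sets rather than across the whole sample.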

X Demographics

The data shown below were collected from the profiles of the 41 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 80 Mendeley readers of this research output.

Geographical breakdown

Country    Count   As %
Unknown    80      100%

Demographic breakdown

Readers by professional status         Count   As %
Student > Ph.D. Student                14      18%
Student > Master                       13      16%
Researcher                             9       11%
Student > Doctoral Student             4       5%
Student > Bachelor                     4       5%
Other                                  11      14%
Unknown                                25      31%

Readers by discipline                  Count   As %
Social Sciences                        10      13%
Psychology                             6       8%
Agricultural and Biological Sciences   4       5%
Arts and Humanities                    4       5%
Computer Science                       3       4%
Other                                  28      35%
Unknown                                25      31%
Attention Score in Context

This research output has an Altmetric Attention Score of 51. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 14 January 2022.
All research outputs: #842,960 of 25,759,158 outputs
Outputs from Science and Engineering Ethics: #50 of 975 outputs
Outputs of similar age: #18,760 of 345,351 outputs
Outputs of similar age from Science and Engineering Ethics: #2 of 23 outputs
Altmetric has tracked 25,759,158 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 96th percentile: it's in the top 5% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 975 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 13.0. This one has done particularly well, scoring higher than 94% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 345,351 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 94% of its contemporaries.
We're also able to compare this research output to 23 others from the same source that were published within six weeks on either side of this one. This one has done particularly well, scoring higher than 91% of its contemporaries.
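
The percentile figures quoted above follow directly from the rank-within-total numbers in the context list; a minimal sketch of that arithmetic:

```python
# Percentile rank implied by a position in a ranked list:
# percentile = (1 - rank / total) * 100, so rank #842,960 of 25,759,158
# corresponds to roughly the 96th-97th percentile quoted above.
contexts = {
    "All research outputs": (842_960, 25_759_158),
    "Outputs from Science and Engineering Ethics": (50, 975),
    "Outputs of similar age": (18_760, 345_351),
    "Outputs of similar age from this source": (2, 23),
}

for name, (rank, total) in contexts.items():
    percentile = (1 - rank / total) * 100
    print(f"{name}: #{rank:,} of {total:,} -> {percentile:.1f}th percentile")
```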