
Variations in Scientific Data Production: What Can We Learn from #Overlyhonestmethods?

Overview of attention for article published in Science and Engineering Ethics, December 2014

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (83rd percentile)
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

  • 13 X users

Citations

  • 6 Dimensions

Readers on

  • 32 Mendeley
  • 2 CiteULike
Title
Variations in Scientific Data Production: What Can We Learn from #Overlyhonestmethods?
Published in
Science and Engineering Ethics, December 2014
DOI 10.1007/s11948-014-9618-9
Pubmed ID
Authors

Louise Bezuidenhout

Abstract

In recent months the hashtag #overlyhonestmethods has steadily been gaining popularity. Posts under this hashtag, presumably by scientists, detail aspects of daily scientific research that differ considerably from the idealized interpretation of scientific experimentation as standardized, objective and reproducible. Over and above its entertainment value, the popularity of this hashtag raises two important points for those who study both science and scientists. Firstly, the posts highlight that the generation of data through experimentation is often far less standardized than is commonly assumed. Secondly, the popularity of the hashtag, together with its relatively blasé reception by the scientific community, reveals that the actions reported in the tweets are far from shocking and indeed may be considered just "part of scientific research". Such observations give considerable pause for thought, and suggest that current conceptions of data might be limited by failing to recognize this "inherent variability" within the actions of generation, and thus within data themselves. Is it possible, we must ask, that epistemic virtues such as standardization, consistency, reportability and reproducibility need to be reevaluated? Such considerations are, of course, of particular importance to data sharing discussions and the Open Data movement. This paper suggests that the notion of a "moral professionalism" for data generation and sharing needs to be considered in more detail if the inherent variability of data is to be addressed in any meaningful manner.

X Demographics

The data shown below were collected from the profiles of 13 X users who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for 32 Mendeley readers of this research output.

Geographical breakdown

Country         Count   As %
United States       2     6%
South Africa        1     3%
Unknown            29    91%

Demographic breakdown

Readers by professional status          Count   As %
Researcher                                  5    16%
Student > Ph.D. Student                     5    16%
Student > Doctoral Student                  4    13%
Student > Master                            3     9%
Professor > Associate Professor             2     6%
Other                                       3     9%
Unknown                                    10    31%

Readers by discipline                   Count   As %
Social Sciences                             6    19%
Agricultural and Biological Sciences        3     9%
Business, Management and Accounting         3     9%
Environmental Science                       2     6%
Philosophy                                  2     6%
Other                                       6    19%
Unknown                                    10    31%

Attention Score in Context

This research output has an Altmetric Attention Score of 8. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 20 September 2017.

  • All research outputs: #4,775,273 of 25,715,849 outputs
  • Outputs from Science and Engineering Ethics: #341 of 974 outputs
  • Outputs of similar age: #60,871 of 362,156 outputs
  • Outputs of similar age from Science and Engineering Ethics: #6 of 10 outputs

Altmetric has tracked 25,715,849 research outputs across all sources so far. Compared to these, this one has done well and is in the 81st percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 974 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 12.7. This one has received more attention than average, scoring higher than 64% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 362,156 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 83% of its contemporaries.
We're also able to compare this research output to 10 others from the same source that were published within six weeks on either side of this one. This one has scored higher than 4 of them.
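
As an aside, the age-adjusted comparison described above amounts to a percentile rank computed among outputs published within a six-week window on either side of the article. The Python sketch below illustrates that idea only; the data, scores and function name are invented for illustration, and this is not Altmetric's actual scoring code.

    from datetime import date, timedelta

    # Hypothetical records: (publication_date, attention_score) pairs.
    # Real data would come from the Altmetric API; these values are invented.
    tracked_outputs = [
        (date(2014, 11, 20), 3.0),
        (date(2014, 12, 5), 15.0),
        (date(2014, 12, 18), 1.0),
        (date(2015, 1, 10), 7.0),
        (date(2015, 3, 1), 22.0),  # outside the six-week window, ignored below
    ]

    def age_adjusted_percentile(score, published, outputs, window_weeks=6):
        """Percentile of `score` among outputs published within
        `window_weeks` on either side of `published`."""
        window = timedelta(weeks=window_weeks)
        peers = [s for d, s in outputs if abs(d - published) <= window]
        if not peers:
            return None
        beaten = sum(1 for s in peers if s < score)
        return 100.0 * beaten / len(peers)

    # A score of 8 for an article published in mid-December 2014.
    print(age_adjusted_percentile(8.0, date(2014, 12, 10), tracked_outputs))

On this invented sample the call returns 75.0, meaning the score of 8 beats three of its four in-window contemporaries.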