
Learning from open source software projects to improve scientific review

Overview of attention for article published in Frontiers in Computational Neuroscience, January 2012

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (87th percentile)
  • Good Attention Score compared to outputs of the same age and source (79th percentile)

Mentioned by

  • 8 X users
  • 4 Google+ users

Citations

  • 26 citations on Dimensions

Readers on

  • 84 Mendeley readers
  • 2 CiteULike readers
Title
Learning from open source software projects to improve scientific review
Published in
Frontiers in Computational Neuroscience, January 2012
DOI 10.3389/fncom.2012.00018
Authors

Satrajit S. Ghosh, Arno Klein, Brian Avants, K. Jarrod Millman

Abstract

Peer-reviewed publications are the primary mechanism for sharing scientific results. The current peer-review process is, however, fraught with many problems that undermine the pace, validity, and credibility of science. We highlight five salient problems: (1) reviewers are expected to have comprehensive expertise; (2) reviewers do not have sufficient access to methods and materials to evaluate a study; (3) reviewers are neither identified nor acknowledged; (4) there is no measure of the quality of a review; and (5) reviews take a lot of time, and once submitted cannot evolve. We propose that these problems can be resolved by making the following changes to the review process. Distributing reviews to many reviewers would allow each reviewer to focus on portions of the article that reflect the reviewer's specialty or area of interest and place less of a burden on any one reviewer. Providing reviewers materials and methods to perform comprehensive evaluation would facilitate transparency, greater scrutiny, and replication of results. Acknowledging reviewers makes it possible to quantitatively assess reviewer contributions, which could be used to establish the impact of the reviewer in the scientific community. Quantifying review quality could help establish the importance of individual reviews and reviewers as well as the submitted article. Finally, we recommend expediting post-publication reviews and allowing for the dialog to continue and flourish in a dynamic and interactive manner. We argue that these solutions can be implemented by adapting existing features from open-source software management and social networking technologies. We propose a model of an open, interactive review system that quantifies the significance of articles, the quality of reviews, and the reputation of reviewers.
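
The abstract describes the proposed review system only in conceptual terms. Purely as an illustration of how its three quantities (article significance, review quality, and reviewer reputation) might relate to one another, here is a minimal Python sketch; the class names, fields, and scoring rules are assumptions made for this example, not anything specified in the paper.

```python
# Hypothetical sketch only: the paper does not prescribe data structures or formulas.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Review:
    reviewer_id: str
    article_id: str
    section: str                 # distributed reviewing: each review covers one portion
    quality_votes: list = field(default_factory=list)  # community ratings of this review

    @property
    def quality(self) -> float:
        return mean(self.quality_votes) if self.quality_votes else 0.0

@dataclass
class Reviewer:
    reviewer_id: str
    reviews: list = field(default_factory=list)

    @property
    def reputation(self) -> float:
        # toy rule: acknowledged reviews accumulate, weighted by how well they were rated
        return sum(r.quality for r in self.reviews)

@dataclass
class Article:
    article_id: str
    reviews: list = field(default_factory=list)

    @property
    def significance(self) -> float:
        # toy rule: an article's standing grows with the quality of the reviews it attracts
        return sum(r.quality for r in self.reviews)

# Example: a reviewer assesses only the section matching their expertise, and the
# community's ratings of that review feed both article significance and reviewer reputation.
methods_review = Review("reviewer-42", "10.3389/fncom.2012.00018",
                        section="methods", quality_votes=[4.0, 5.0])
```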

X Demographics

The data shown below were collected from the profiles of 8 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 84 Mendeley readers of this research output.

Geographical breakdown

Country          Count   As %
United States        6     7%
Germany              2     2%
Norway               2     2%
Brazil               2     2%
Australia            1     1%
Hong Kong            1     1%
Chile                1     1%
Canada               1     1%
Sweden               1     1%
Other                2     2%
Unknown             65    77%

Demographic breakdown

Readers by professional status          Count   As %
Researcher                                 23    27%
Student > Ph. D. Student                   15    18%
Professor > Associate Professor             9    11%
Other                                       7     8%
Student > Master                            7     8%
Other                                      14    17%
Unknown                                     9    11%

Readers by discipline                   Count   As %
Agricultural and Biological Sciences       20    24%
Medicine and Dentistry                     10    12%
Computer Science                            9    11%
Psychology                                  9    11%
Social Sciences                             7     8%
Other                                      17    20%
Unknown                                    12    14%
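
The "As %" columns in both breakdowns are consistent with each count being divided by the article's 84 Mendeley readers and rounded to the nearest whole percent, with the Unknown rows kept in the denominator. A small sketch of that arithmetic, using a few rows from the tables above (the denominator and rounding rule are inferences from the numbers, not documented here):

```python
# Reproduce the "As %" columns: count / total Mendeley readers, shown as a whole percent.
# The 84-reader denominator and the rounding rule are inferences from the data above.
TOTAL_READERS = 84

rows = {
    "United States": 6,                            # geographical breakdown
    "Unknown (country)": 65,
    "Researcher": 23,                              # professional status
    "Agricultural and Biological Sciences": 20,    # discipline
}

for label, count in rows.items():
    print(f"{label}: {count}/{TOTAL_READERS} = {count / TOTAL_READERS:.0%}")
# United States: 6/84 = 7%
# Unknown (country): 65/84 = 77%
# Researcher: 23/84 = 27%
# Agricultural and Biological Sciences: 20/84 = 24%
```
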
Attention Score in Context

This research output has an Altmetric Attention Score of 9. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 18 June 2014.
  • All research outputs: #3,523,262 of 22,675,759 outputs
  • Outputs from Frontiers in Computational Neuroscience: #166 of 1,336 outputs
  • Outputs of similar age: #29,838 of 244,088 outputs
  • Outputs of similar age from Frontiers in Computational Neuroscience: #14 of 69 outputs
Altmetric has tracked 22,675,759 research outputs across all sources so far. Compared to these, this one has done well and is in the 84th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 1,336 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 6.2. This one has done well, scoring higher than 87% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 244,088 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 87% of its contemporaries.
We're also able to compare this research output to 69 others from the same source and published within six weeks on either side of this one. This one has done well, scoring higher than 79% of its contemporaries.
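
The percentiles quoted in this section are consistent with a simple rank-based calculation: the fraction of the comparison set that this output outscores, floored to a whole percentile. A brief sketch using the four rankings listed above (the floor rule is inferred from matching these figures, not taken from Altmetric's documentation):

```python
# Percentile of a ranked output: share of the comparison set it outscores,
# floored to a whole percentile (inferred rule; matches the figures on this page).
import math

def percentile(rank: int, total: int) -> int:
    return math.floor((1 - rank / total) * 100)

comparisons = {
    "All research outputs":                    (3_523_262, 22_675_759),
    "Frontiers in Computational Neuroscience": (166, 1_336),
    "Outputs of similar age":                  (29_838, 244_088),
    "Similar age, same source":                (14, 69),
}

for label, (rank, total) in comparisons.items():
    print(f"{label}: {percentile(rank, total)}th percentile")
# All research outputs: 84th percentile
# Frontiers in Computational Neuroscience: 87th percentile
# Outputs of similar age: 87th percentile
# Similar age, same source: 79th percentile
```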