Open Evaluation: A Vision for Entirely Transparent Post-Publication Peer Review and Rating for Science

Overview of attention for an article published in Frontiers in Computational Neuroscience, January 2012

About this Attention Score

  • In the top 5% of all research outputs scored by Altmetric
  • Among the highest-scoring outputs from this source (#23 of 1,455)
  • High Attention Score compared to outputs of the same age (98th percentile)
  • High Attention Score compared to outputs of the same age and source (92nd percentile)

Mentioned by

  • 6 blogs
  • 64 X users
  • 2 peer review sites
  • 3 Facebook pages
  • 4 Google+ users
  • 1 Redditor

Readers on

  • 128 Mendeley readers
  • 4 CiteULike readers
Title
Open Evaluation: A Vision for Entirely Transparent Post-Publication Peer Review and Rating for Science
Published in
Frontiers in Computational Neuroscience, January 2012
DOI 10.3389/fncom.2012.00079
Pubmed ID
Authors

Nikolaus Kriegeskorte

Abstract

The two major functions of a scientific publishing system are to provide access to and evaluation of scientific papers. While open access (OA) is becoming a reality, open evaluation (OE), the other side of the coin, has received less attention. Evaluation steers the attention of the scientific community and thus the very course of science. It also influences the use of scientific findings in public policy. The current system of scientific publishing provides only journal prestige as an indication of the quality of new papers and relies on a non-transparent and noisy pre-publication peer-review process, which delays publication by many months on average. Here I propose an OE system, in which papers are evaluated post-publication in an ongoing fashion by means of open peer review and rating. Through signed ratings and reviews, scientists steer the attention of their field and build their reputation. Reviewers are motivated to be objective, because low-quality or self-serving signed evaluations will negatively impact their reputation. A core feature of this proposal is a division of powers between the accumulation of evaluative evidence and the analysis of this evidence by paper evaluation functions (PEFs). PEFs can be freely defined by individuals or groups (e.g., scientific societies) and provide a plurality of perspectives on the scientific literature. Simple PEFs will use averages of ratings, weighting reviewers (e.g., by H-index), and rating scales (e.g., by relevance to a decision process) in different ways. Complex PEFs will use advanced statistical techniques to infer the quality of a paper. Papers with initially promising ratings will be more deeply evaluated. The continual refinement of PEFs in response to attempts by individuals to influence evaluations in their own favor will make the system ungameable. OA and OE together have the power to revolutionize scientific publishing and usher in a new culture of transparency, constructive criticism, and collaboration.
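
To make the "simple PEF" idea concrete, here is a minimal sketch of a paper evaluation function that averages signed ratings weighted by each reviewer's H-index, as the abstract suggests. The `Rating` structure, its field names, and this particular weighting are illustrative assumptions for the sketch, not a specification taken from the paper.

```python
from dataclasses import dataclass


@dataclass
class Rating:
    """One signed rating of a paper (hypothetical structure for this sketch)."""
    reviewer: str          # ratings are signed, so the reviewer is identified
    score: float           # e.g., overall quality on a 0-10 scale
    reviewer_h_index: int  # example reviewer weight; other weightings are possible


def simple_pef(ratings: list[Rating]) -> float:
    """A minimal paper evaluation function (PEF): H-index-weighted mean rating.

    This is just one of many possible PEFs; the proposal lets individuals and
    groups (e.g., scientific societies) define their own weightings of
    reviewers and rating scales.
    """
    if not ratings:
        return float("nan")
    total_weight = sum(r.reviewer_h_index for r in ratings)
    if total_weight == 0:
        # fall back to an unweighted mean if no reviewer carries weight
        return sum(r.score for r in ratings) / len(ratings)
    return sum(r.score * r.reviewer_h_index for r in ratings) / total_weight


# Example: three signed ratings of the same paper
ratings = [
    Rating(reviewer="reviewer_a", score=8.0, reviewer_h_index=40),
    Rating(reviewer="reviewer_b", score=6.0, reviewer_h_index=10),
    Rating(reviewer="reviewer_c", score=9.0, reviewer_h_index=25),
]
print(round(simple_pef(ratings), 2))  # 8.07
```

A complex PEF, in the abstract's terms, would replace this weighted mean with a statistical model that infers paper quality from the same accumulated evidence.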

X Demographics

The data shown below were collected from the profiles of 64 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 128 Mendeley readers of this research output.

Geographical breakdown

Country | Count | As %
Germany | 3 | 2%
United States | 2 | 2%
Poland | 2 | 2%
United Kingdom | 1 | <1%
Canada | 1 | <1%
Mexico | 1 | <1%
Netherlands | 1 | <1%
Greece | 1 | <1%
Italy | 1 | <1%
Other | 2 | 2%
Unknown | 113 | 88%

Demographic breakdown

Readers by professional status | Count | As %
Student > Ph.D. Student | 34 | 27%
Student > Master | 16 | 13%
Researcher | 14 | 11%
Student > Bachelor | 14 | 11%
Librarian | 12 | 9%
Other | 26 | 20%
Unknown | 12 | 9%

Readers by discipline | Count | As %
Social Sciences | 19 | 15%
Agricultural and Biological Sciences | 17 | 13%
Computer Science | 15 | 12%
Psychology | 14 | 11%
Business, Management and Accounting | 9 | 7%
Other | 37 | 29%
Unknown | 17 | 13%

Attention Score in Context

This research output has an Altmetric Attention Score of 80. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 25 April 2024.
  • All research outputs: #528,943 of 25,351,219 outputs
  • Outputs from Frontiers in Computational Neuroscience: #23 of 1,455 outputs
  • Outputs of similar age: #2,850 of 256,301 outputs
  • Outputs of similar age from Frontiers in Computational Neuroscience: #6 of 70 outputs
Altmetric has tracked 25,351,219 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 97th percentile: it's in the top 5% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 1,455 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 6.8. This one has done particularly well, scoring higher than 98% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 256,301 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 98% of its contemporaries.
We're also able to compare this research output to 70 others from the same source and published within six weeks on either side of this one. This one has done particularly well, scoring higher than 92% of its contemporaries.
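
As a rough check, the percentiles quoted above can be recovered from the ranks, assuming a percentile here simply means the share of tracked outputs that this output outranks; Altmetric's exact rounding and tie handling may differ.

```python
def percentile_from_rank(rank: int, total: int) -> float:
    """Percentage of outputs ranked below the one at this rank (rank 1 = highest)."""
    return 100.0 * (total - rank) / total


# Figures from this page
print(round(percentile_from_rank(528_943, 25_351_219), 1))  # ~97.9 (all research outputs)
print(round(percentile_from_rank(23, 1_455), 1))            # ~98.4 (this journal)
print(round(percentile_from_rank(2_850, 256_301), 1))       # ~98.9 (outputs of similar age)
print(round(percentile_from_rank(6, 70), 1))                # ~91.4 (similar age, same journal)
```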