
Effect of standardized training on the reliability of the Cochrane risk of bias assessment tool: a prospective study

Overview of attention for article published in Systematic Reviews, March 2017

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • Good Attention Score compared to outputs of the same age (78th percentile)
  • Above-average Attention Score compared to outputs of the same age and source (58th percentile)

Mentioned by

15 X users

Citations

46 Dimensions

Readers on

97 Mendeley
Published in
Systematic Reviews, March 2017
DOI 10.1186/s13643-017-0441-7
Authors

Bruno R. da Costa, Brooke Beckett, Alison Diaz, Nina M. Resta, Bradley C. Johnston, Matthias Egger, Peter Jüni, Susan Armijo-Olivo

Abstract

The Cochrane risk of bias tool is commonly criticized for having low reliability. We aimed to investigate whether training raters, with objective and standardized instructions on how to assess risk of bias, can improve the reliability of the Cochrane risk of bias tool.

In this pilot study, four raters inexperienced in risk of bias assessment were randomly allocated to minimal or intensive standardized training for risk of bias assessment of randomized trials of physical therapy treatments for patients with knee osteoarthritis pain. Two experienced risk of bias assessors served as the reference. The primary outcome of our study was between-group reliability, defined as the agreement of the risk of bias assessments of inexperienced raters with the reference assessments of experienced raters; consensus-based assessments were used for this purpose. The secondary outcome was within-group reliability, defined as the agreement of assessments within pairs of inexperienced raters. We calculated the chance-corrected weighted Kappa to quantify agreement within and between groups of raters for each domain of the risk of bias tool.

A total of 56 trials were included in our analysis. The Kappa for the agreement of inexperienced raters with the reference across items of the risk of bias tool ranged from 0.10 to 0.81 for the minimal training group and from 0.41 to 0.90 for the standardized training group. The Kappa values for agreement within pairs of inexperienced raters across the items of the tool ranged from 0 to 0.38 for the minimal training group and from 0.93 to 1 for the standardized training group. Between-group differences in Kappa for the agreement of inexperienced raters with the reference always favored the standardized training group and were most pronounced for incomplete outcome data (difference in Kappa 0.52, p < 0.001) and allocation concealment (difference in Kappa 0.30, p = 0.004).

Intensive, standardized training on risk of bias assessment may significantly improve the reliability of the Cochrane risk of bias tool.
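The abstract's agreement statistic is the chance-corrected weighted Kappa. The sketch below is not the authors' analysis code, and the ratings in it are invented for illustration; it shows how a weighted Kappa (here with linear disagreement weights, an assumption, since the abstract does not state the weighting scheme) can be computed for ordinal risk-of-bias judgements from two raters.

```python
# Illustrative sketch of chance-corrected weighted Kappa for ordinal
# risk-of-bias judgements (low / unclear / high). Linear weights are an
# assumption; the example ratings below are made up, not study data.

def weighted_kappa(rater_a, rater_b, categories, weights="linear"):
    """Chance-corrected weighted Kappa for two raters over ordinal categories."""
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}
    n = len(rater_a)

    # Observed joint frequencies of (rater A, rater B) judgements
    observed = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater_a, rater_b):
        observed[index[a]][index[b]] += 1.0 / n

    # Marginal frequencies give the agreement expected by chance
    marg_a = [sum(row) for row in observed]
    marg_b = [sum(observed[i][j] for i in range(k)) for j in range(k)]

    # Disagreement weight: 0 on the diagonal, growing with distance off it
    def w(i, j):
        d = abs(i - j) / (k - 1)
        return d if weights == "linear" else d ** 2

    disagree_obs = sum(w(i, j) * observed[i][j]
                       for i in range(k) for j in range(k))
    disagree_exp = sum(w(i, j) * marg_a[i] * marg_b[j]
                       for i in range(k) for j in range(k))
    return 1.0 - disagree_obs / disagree_exp

cats = ["low", "unclear", "high"]
a = ["low", "low", "unclear", "high", "low", "unclear", "high", "low"]
b = ["low", "unclear", "unclear", "high", "low", "high", "high", "low"]
print(round(weighted_kappa(a, b, cats), 2))  # → 0.73
```

A Kappa of 1 means perfect agreement, 0 means agreement no better than chance; the weighting means adjacent disagreements (low vs unclear) are penalized less than extreme ones (low vs high).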

X Demographics

The data shown below were collected from the profiles of 15 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 97 Mendeley readers of this research output.

Geographical breakdown

Country Count As %
Unknown 97 100%

Demographic breakdown

Readers by professional status Count As %
Student > Bachelor 14 14%
Student > Master 11 11%
Researcher 7 7%
Student > Ph. D. Student 6 6%
Student > Postgraduate 5 5%
Other 15 15%
Unknown 39 40%
Readers by discipline Count As %
Medicine and Dentistry 22 23%
Nursing and Health Professions 13 13%
Psychology 4 4%
Pharmacology, Toxicology and Pharmaceutical Science 3 3%
Agricultural and Biological Sciences 2 2%
Other 11 11%
Unknown 42 43%
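The "As %" columns in the tables above appear to be each reader count as a share of the 97 Mendeley readers, rounded to the nearest whole percent; that rounding convention is an inference, but it reproduces the figures shown:

```python
# Sketch: the "As %" column above as each count's share of the 97
# Mendeley readers, rounded to the nearest whole percent (inferred).

readers = 97
for label, count in [("Student > Bachelor", 14),
                     ("Medicine and Dentistry", 22),
                     ("Unknown (status)", 39)]:
    print(f"{label}: {round(100 * count / readers)}%")
```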
Attention Score in Context

This research output has an Altmetric Attention Score of 9. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 19 December 2017.
All research outputs
#4,148,587
of 25,595,500 outputs
Outputs from Systematic Reviews
#728
of 2,242 outputs
Outputs of similar age
#68,527
of 324,219 outputs
Outputs of similar age from Systematic Reviews
#26
of 60 outputs
Altmetric has tracked 25,595,500 research outputs across all sources so far. Compared to these, this one has done well and is in the 83rd percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 2,242 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 13.2. This one has gotten more attention than average, scoring higher than 67% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 324,219 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 78% of its contemporaries.
We're also able to compare this research output to 60 others from the same source and published within six weeks on either side of this one. This one has gotten more attention than average, scoring higher than 58% of its contemporaries.
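The percentiles quoted in the four comparisons above follow directly from the ranks listed. Altmetric's exact rounding convention is not documented on this page; counting outputs ranked at or below this one and truncating to a whole percent is an assumption, but it reproduces all four quoted figures:

```python
# Sketch: deriving the "scoring higher than N%" percentiles above from
# the page's ranks. The at-or-below-rank, truncate-to-integer convention
# is an assumption that happens to match all four quoted values.

def percentile(rank, total):
    # rank is 1-based; count outputs ranked at or below this one
    return int(100 * (total - rank + 1) / total)

for label, rank, total in [
    ("All research outputs",              4_148_587, 25_595_500),
    ("Outputs from Systematic Reviews",         728,      2_242),
    ("Outputs of similar age",               68_527,    324_219),
    ("Similar age, same source",                 26,         60),
]:
    print(f"{label}: {percentile(rank, total)}")  # 83, 67, 78, 58
```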