
Updated clinical guidelines experience major reporting limitations

Overview of attention for article published in Implementation Science, October 2017

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (90th percentile)
  • Above-average Attention Score compared to outputs of the same age and source (52nd percentile)

Mentioned by

43 X users

Citations

10 (Dimensions)

Readers

39 (Mendeley)
Title
Updated clinical guidelines experience major reporting limitations
Published in
Implementation Science, October 2017
DOI 10.1186/s13012-017-0651-3
Authors

Robin W.M. Vernooij, Laura Martínez García, Ivan Dario Florez, Laura Hidalgo Armas, Michiel H.F. Poorthuis, Melissa Brouwers, Pablo Alonso-Coello

Abstract

The Checklist for the Reporting of Updated Guidelines (CheckUp) was recently developed. However, no systematic assessment of the reporting of updated clinical guidelines (CGs) exists so far. We aimed to examine (1) the completeness of reporting of the updating process in CGs and (2) the inter-observer reliability of CheckUp.

We conducted a systematic assessment of the reporting of the updating process in a sample of updated CGs using CheckUp. We performed a systematic search to identify updated CGs published in 2015 that were developed by a professional society, reported a systematic review of the evidence, and contained at least one recommendation. Three reviewers independently assessed the CGs with CheckUp (16 items). We calculated the median score per item, per domain, and overall, converting scores to a 10-point scale. Multiple linear regression analyses were used to identify differences according to country, type of organisation, scope, and health topic of the updated CGs. We calculated the intraclass correlation coefficient (ICC) and 95% confidence interval (95% CI) for domain and overall scores.

We included a total of 60 updated CGs. The median domain score on a 10-point scale was 5.8 for presentation (range 1.7 to 10), 8.3 for editorial independence (range 3.3 to 10), and 5.7 for methodology (range 0 to 10). The median overall score on a 10-point scale was 6.3 (range 3.1 to 10). Presentation and justification items at the recommendation level (reported by 27% and 38% of the CGs, respectively) and the methods used for the external review and for implementing changes in practice (both reported by 38% of the CGs) were particularly poorly reported. CGs developed by European or international institutions obtained a significantly higher overall score than those from North American or Asian institutions (p = 0.014). Finally, agreement among the reviewers on the overall score was excellent (ICC 0.88, 95% CI 0.75 to 0.95).

The reporting of updated CGs varies considerably, with significant room for improvement. We recommend using CheckUp to assess the updating process in updated CGs and as a blueprint to inform methods and reporting strategies in updating.
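The abstract summarises inter-observer reliability as an ICC of 0.88, but does not spell out the computation. The sketch below shows one common variant, the two-way random-effects single-rater ICC, often written ICC(2,1), computed from an ANOVA decomposition of a targets-by-raters score matrix; the function name, data layout, and choice of ICC variant are illustrative assumptions, not necessarily what the authors used.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """Two-way random-effects, single-rater, absolute-agreement ICC.

    scores: (n_targets, k_raters) matrix, e.g. 60 guidelines x 3 reviewers.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-guideline means
    col_means = scores.mean(axis=0)   # per-reviewer means

    # Mean squares from the two-way ANOVA decomposition
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    ss_err = (np.sum((scores - grand) ** 2)
              - k * np.sum((row_means - grand) ** 2)
              - n * np.sum((col_means - grand) ** 2))
    ms_err = ss_err / ((n - 1) * (k - 1))

    # ICC(2,1): absolute agreement of a single rater
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
```

With perfect agreement across reviewers the function returns 1.0; a constant bias between reviewers lowers this absolute-agreement form, which is what makes it suitable for checking whether reviewers score on the same scale rather than merely ranking guidelines consistently.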

X Demographics

The data shown below were collected from the profiles of 43 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 39 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown    39       100%

Demographic breakdown

Readers by professional status                         Count    As %
Researcher                                             7        18%
Student > Master                                       5        13%
Student > Doctoral Student                             4        10%
Lecturer                                               3        8%
Student > Bachelor                                     3        8%
Other                                                  9        23%
Unknown                                                8        21%

Readers by discipline                                  Count    As %
Medicine and Dentistry                                 12       31%
Nursing and Health Professions                         3        8%
Pharmacology, Toxicology and Pharmaceutical Science    2        5%
Psychology                                             2        5%
Economics, Econometrics and Finance                    1        3%
Other                                                  6        15%
Unknown                                                13       33%
Attention Score in Context

This research output has an Altmetric Attention Score of 24. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 01 October 2019.
All research outputs: #1,420,220 of 23,509,982 outputs
Outputs from Implementation Science: #275 of 1,728 outputs
Outputs of similar age: #30,384 of 325,891 outputs
Outputs of similar age from Implementation Science: #16 of 34 outputs
Altmetric has tracked 23,509,982 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 93rd percentile: it's in the top 10% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 1,728 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 14.8. This one has done well, scoring higher than 84% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 325,891 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 90% of its contemporaries.
We're also able to compare this research output to 34 others from the same source that were published within six weeks on either side of this one. This one has received more attention than average, scoring higher than 52% of its contemporaries.