
Evaluating impact of clinical guidelines using a realist evaluation framework

Overview of attention for an article published in the Journal of Evaluation in Clinical Practice, December 2015

Mentioned by

1 Facebook page

Citations

12 Dimensions

Readers on

74 Mendeley
1 CiteULike
Title
Evaluating impact of clinical guidelines using a realist evaluation framework
Published in
Journal of Evaluation in Clinical Practice, December 2015
DOI 10.1111/jep.12482
Authors

Sandeep Reddy, John Wakerman, Gill Westhorp, Sally Herring

Abstract

The Remote Primary Health Care Manuals (RPHCM) project team manages the development and publication of clinical protocols and procedures for primary care clinicians practicing in remote Australia. The Central Australian Rural Practitioners Association Standard Treatment Manual, the flagship manual of the RPHCM suite, has been evaluated for accessibility and acceptability in remote clinics three times in its 20-year history. These evaluations did not draw on a theory-based framework or a programme theory, resulting in some limitations in the evaluation findings. Because the RPHCM aim to enable evidence-based practice in remote clinics, and are anecdotally reported to do so, testing this empirically for the full suite is vital both for stakeholders and for future editions of the RPHCM. The project team utilized a realist evaluation framework to assess how, why and for what the RPHCM were being used by remote practitioners. A theory regarding the circumstances in which the manuals have and have not enabled evidence-based practice in the remote clinical context was tested. The project assessed this theory for all the manuals in the RPHCM suite, across government and Aboriginal community-controlled clinics, in three regions of Australia. Implementing a realist evaluation framework to generate robust findings in this context has required innovation in the evaluation design and adaptation by researchers. This article captures the RPHCM team's experience in designing this evaluation.

Mendeley readers

The data shown below were compiled from readership statistics for 74 Mendeley readers of this research output.

Geographical breakdown

Country           Count   As %
United Kingdom        1     1%
Canada                1     1%
Unknown              72    97%

Demographic breakdown

Readers by professional status   Count   As %
Student > Ph. D. Student            11    15%
Researcher                          11    15%
Student > Master                    11    15%
Student > Bachelor                   6     8%
Student > Postgraduate               5     7%
Other                               14    19%
Unknown                             16    22%

Readers by discipline                                 Count   As %
Medicine and Dentistry                                   19    26%
Social Sciences                                          12    16%
Nursing and Health Professions                            8    11%
Computer Science                                          3     4%
Pharmacology, Toxicology and Pharmaceutical Science       3     4%
Other                                                     9    12%
Unknown                                                  20    27%
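
The "As %" figures above are rounded shares of the 74 Mendeley readers. The snippet below is a minimal sketch of that arithmetic using the discipline counts from the table; the dictionary name and rounding to the nearest whole percent are assumptions made for illustration, not part of Mendeley's or Altmetric's published methodology.

```python
# Minimal sketch: recompute the "As %" column from the reader counts,
# assuming each percentage is the count divided by the 74 total readers,
# rounded to the nearest whole percent.
readers_by_discipline = {
    "Medicine and Dentistry": 19,
    "Social Sciences": 12,
    "Nursing and Health Professions": 8,
    "Computer Science": 3,
    "Pharmacology, Toxicology and Pharmaceutical Science": 3,
    "Other": 9,
    "Unknown": 20,
}

total = sum(readers_by_discipline.values())  # 74 readers in total

for discipline, count in readers_by_discipline.items():
    share = round(100 * count / total)
    print(f"{discipline}: {count} ({share}%)")
```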
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 16 February 2016.
All research outputs: #21,997,751 of 24,542,484 outputs
Outputs from Journal of Evaluation in Clinical Practice: #1,447 of 1,530 outputs
Outputs of similar age: #339,036 of 398,509 outputs
Outputs of similar age from Journal of Evaluation in Clinical Practice: #24 of 26 outputs
Altmetric has tracked 24,542,484 research outputs across all sources so far. This one is in the 1st percentile – i.e., 1% of other outputs scored the same or lower than it.
So far Altmetric has tracked 1,530 research outputs from this source. They typically receive more attention than average, with a mean Attention Score of 10.0. This one is in the 1st percentile – i.e., 1% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 398,509 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 26 others from the same source and published within six weeks on either side of this one. This one is in the 1st percentile – i.e., 1% of its contemporaries scored the same or lower than it.
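
The percentile statements above follow a "scored the same or lower" definition. The sketch below is a generic illustration of that definition only; the sample scores, function name, and tie handling are assumptions, and it does not reproduce Altmetric's actual ranking or the figures quoted above.

```python
# Minimal sketch: percentile rank under the definition used above,
# i.e. the share of outputs that scored the same as or lower than a given score.
# The sample scores are made up for illustration; they are not Altmetric data.
def percentile_rank(score: float, all_scores: list[float]) -> float:
    """Percentage of scores that are <= the given score."""
    same_or_lower = sum(1 for s in all_scores if s <= score)
    return 100 * same_or_lower / len(all_scores)

sample_scores = [0, 0, 1, 1, 2, 5, 10, 25, 50, 120]  # hypothetical attention scores
print(f"{percentile_rank(1, sample_scores):.0f}th percentile")  # -> 40th percentile
```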