
Applying GRADE-CERQual to qualitative evidence synthesis findings–paper 6: how to assess relevance of the data

Overview of attention for article published in Implementation Science, January 2018

About this Attention Score

  • Above-average Attention Score compared to outputs of the same age (64th percentile)

Mentioned by

6 X users

Citations

154 Dimensions

Readers on

236 Mendeley
DOI 10.1186/s13012-017-0693-6
Authors

Jane Noyes, Andrew Booth, Simon Lewin, Benedicte Carlsen, Claire Glenton, Christopher J. Colvin, Ruth Garside, Meghan A. Bohren, Arash Rashidian, Megan Wainwright, Özge Tunçalp, Jacqueline Chandler, Signe Flottorp, Tomas Pantoja, Joseph D. Tucker, Heather Munthe-Kaas

Abstract

The GRADE-CERQual (Confidence in Evidence from Reviews of Qualitative research) approach was developed by the GRADE (Grading of Recommendations Assessment, Development and Evaluation) Working Group to support the use of findings from qualitative evidence syntheses in decision-making, including guideline development and policy formulation. CERQual includes four components for assessing how much confidence to place in findings from reviews of qualitative research (also referred to as qualitative evidence syntheses): (1) methodological limitations, (2) coherence, (3) adequacy of data and (4) relevance. This paper is part of a series providing guidance on how to apply CERQual and focuses on its relevance component. We developed the relevance component by searching the literature for definitions, gathering feedback from relevant research communities and developing consensus through project group meetings. We tested the component within several qualitative evidence syntheses before agreeing on the current definition and principles for application. When applying CERQual, we define relevance as the extent to which the body of data from the primary studies supporting a review finding is applicable to the context (perspective or population, phenomenon of interest, setting) specified in the review question. In this paper, we describe the relevance component and its rationale and offer guidance on how to assess relevance in the context of a review finding. This guidance outlines the information required to assess relevance, the steps that need to be taken and examples of relevance assessments. The paper provides guidance for review authors and others on undertaking an assessment of relevance within the CERQual approach. Assessing relevance requires consideration of potentially important contextual factors at an early stage in the review process. We expect the CERQual approach, and its individual components, to develop further as our experiences with its practical implementation increase.

X Demographics

The data shown below were collected from the profiles of 6 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 236 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown    236      100%

Demographic breakdown

Readers by professional status    Count    As %
Researcher                        38       16%
Student > Ph.D. Student           36       15%
Student > Master                  30       13%
Student > Doctoral Student        17       7%
Professor                         15       6%
Other                             48       20%
Unknown                           52       22%

Readers by discipline                  Count    As %
Medicine and Dentistry                 49       21%
Social Sciences                        36       15%
Nursing and Health Professions         31       13%
Psychology                             22       9%
Business, Management and Accounting    10       4%
Other                                  26       11%
Unknown                                62       26%
Attention Score in Context

This research output has an Altmetric Attention Score of 4. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 01 February 2018.
All research outputs: #8,381,343 of 25,205,864 outputs
Outputs from Implementation Science: #1,303 of 1,791 outputs
Outputs of similar age: #159,550 of 453,475 outputs
Outputs of similar age from Implementation Science: #40 of 43 outputs
Altmetric has tracked 25,205,864 research outputs across all sources so far. This one has received more attention than most of these and is in the 66th percentile.
So far Altmetric has tracked 1,791 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 14.9. This one is in the 26th percentile – i.e., 26% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 453,475 tracked outputs that were published within six weeks on either side of this one in any source. This one has received more attention than average, scoring higher than 64% of its contemporaries.
We're also able to compare this research output to 43 others from the same source and published within six weeks on either side of this one. This one is in the 9th percentile – i.e., 9% of its contemporaries scored the same or lower than it.