
Strategies to improve recruitment to randomised trials

Overview of attention for article published in Cochrane database of systematic reviews, February 2018

About this Attention Score

  • In the top 5% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (97th percentile)
  • High Attention Score compared to outputs of the same age and source (91st percentile)

Mentioned by

2 blogs
149 X users
1 Facebook page

Citations

395 Dimensions

Readers on

554 Mendeley
Title
Strategies to improve recruitment to randomised trials
Published in
Cochrane database of systematic reviews, February 2018
DOI 10.1002/14651858.mr000013.pub6
Authors

Shaun Treweek, Marie Pitkethly, Jonathan Cook, Cynthia Fraser, Elizabeth Mitchell, Frank Sullivan, Catherine Jackson, Tyna K Taskila, Heidi Gardner

Abstract

Recruiting participants to trials can be extremely difficult, and identifying strategies that improve trial recruitment would benefit both trialists and health research. Our objective was to quantify the effects of strategies for improving recruitment of participants to randomised trials; a secondary objective was to assess the evidence for the effect of the research setting (e.g. primary care versus secondary care) on recruitment.

We searched the Cochrane Methodology Review Group Specialised Register (CMR) in the Cochrane Library (July 2012, searched 11 February 2015); MEDLINE and MEDLINE In Process (OVID) (1946 to 10 February 2015); Embase (OVID) (1996 to 2015 Week 06); Science Citation Index & Social Science Citation Index (ISI) (2009 to 11 February 2015); and ERIC (EBSCO) (2009 to 11 February 2015). We included randomised and quasi-randomised trials of methods to increase recruitment to randomised trials, including non-healthcare studies and studies recruiting to hypothetical trials. We excluded studies aiming to increase response rates to questionnaires or trial retention, and those evaluating incentives and disincentives for clinicians to recruit participants.

We extracted data on: the method evaluated; the country in which the study was carried out; the nature of the population; the nature of the study setting; the nature of the study to be recruited into; the randomisation or quasi-randomisation method; and the numbers and proportions in each intervention group. We used a risk difference to estimate the absolute improvement, and the 95% confidence interval (CI) to describe the effect in individual trials. We assessed heterogeneity between trial results, and used GRADE to judge the certainty we had in the evidence coming from each comparison.

We identified 68 eligible trials (24 new to this update) with more than 74,000 participants. Sixty-three studies involved interventions aimed directly at trial participants, while five evaluated interventions aimed at people recruiting participants. All studies were in health care. We found 72 comparisons, but just three are supported by high-certainty evidence according to GRADE:

1. Open trials rather than blinded, placebo-controlled trials. The absolute improvement was 10% (95% CI 7% to 13%).
2. Telephone reminders to people who do not respond to a postal invitation. The absolute improvement was 6% (95% CI 3% to 9%). This result applies to trials with low underlying recruitment; we are less certain for trials that start out with moderately good recruitment (i.e. over 10%).
3. Using a particular, bespoke, user-testing approach to develop participant information leaflets. This method involved spending a lot of time working with the target population for recruitment to decide on the content, format and appearance of the participant information leaflet. It made little or no difference to recruitment: the absolute improvement was 1% (95% CI -1% to 3%).

We had moderate-certainty evidence for eight other comparisons; our confidence was reduced for most of these because the results came from a single study. Three of the methods were changes to trial management, three were changes to how potential participants received information, one was aimed at recruiters, and the last was a test of financial incentives. All of these comparisons would benefit from replication by other researchers. There were no evaluations in paediatric trials. We had much less confidence in the other 61 comparisons because the studies had design flaws, were single studies, had very uncertain results, or were hypothetical (mock) trials rather than real ones.

The literature on interventions to improve recruitment to trials has plenty of variety but little depth. Only 3 of 72 comparisons are supported by high-certainty evidence according to GRADE: having an open trial and using telephone reminders to non-responders to postal invitations both increase recruitment; a specialised way of developing participant information leaflets had little or no effect. The methodology research community should improve the evidence base by replicating evaluations of existing strategies rather than developing and testing new ones.
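The "absolute improvements" quoted in the abstract are risk differences with 95% confidence intervals. A minimal sketch of that calculation, using a Wald-style CI and hypothetical arm counts (the review's actual pooled data are not reproduced here):

```python
import math

def risk_difference(events_a, n_a, events_b, n_b, z=1.96):
    """Absolute risk difference between two arms, with a Wald 95% CI.

    events_a/n_a: recruited/approached in the intervention arm;
    events_b/n_b: the same for the comparison arm.
    """
    p_a = events_a / n_a
    p_b = events_b / n_b
    rd = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return rd, rd - z * se, rd + z * se

# Hypothetical arms: 60/300 recruited with a strategy vs 45/300 without.
rd, lo, hi = risk_difference(60, 300, 45, 300)
print(f"RD = {rd:.3f}, 95% CI ({lo:.3f} to {hi:.3f})")
```

Here the CI crosses zero, so a single trial of this size could not distinguish the strategy from no effect; the review pools such comparisons and grades the certainty of each with GRADE.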

X Demographics


The data shown below were collected from the profiles of 149 X users who shared this research output.
Mendeley readers


The data shown below were compiled from readership statistics for 554 Mendeley readers of this research output.

Geographical breakdown

Country      Count   As %
Australia        1    <1%
Unknown        553   100%

Demographic breakdown

Readers by professional status   Count   As %
Researcher                          71    13%
Student > Master                    71    13%
Student > Ph. D. Student            63    11%
Student > Bachelor                  42     8%
Other                               29     5%
Other                               97    18%
Unknown                            181    33%

Readers by discipline                                 Count   As %
Medicine and Dentistry                                  126    23%
Nursing and Health Professions                           57    10%
Psychology                                               34     6%
Social Sciences                                          27     5%
Pharmacology, Toxicology and Pharmaceutical Science      15     3%
Other                                                    85    15%
Unknown                                                 210    38%
Attention Score in Context


This research output has an Altmetric Attention Score of 105. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 27 March 2023.
All research outputs: #408,491 of 25,759,158 outputs
Outputs from Cochrane database of systematic reviews: #707 of 13,137 outputs
Outputs of similar age: #9,238 of 345,336 outputs
Outputs of similar age from Cochrane database of systematic reviews: #19 of 227 outputs
Altmetric has tracked 25,759,158 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 98th percentile: it's in the top 5% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 13,137 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 35.9. This one has done particularly well, scoring higher than 94% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 345,336 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 97% of its contemporaries.
We're also able to compare this research output to 227 others from the same source and published within six weeks on either side of this one. This one has done particularly well, scoring higher than 91% of its contemporaries.
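The percentile figures above follow directly from the ranks: percentile = 100 × (1 − rank/total). A small sketch, assuming (as the figures on this page suggest) that the result is truncated to a whole number; the exact rounding convention Altmetric uses is not documented here:

```python
def percentile_from_rank(rank, total):
    """Percentage of tracked outputs this one outscores, given its
    rank (1 = highest score), truncated to a whole number."""
    return int(100 * (1 - rank / total))

# Ranks quoted on this page:
print(percentile_from_rank(408_491, 25_759_158))  # all outputs -> 98
print(percentile_from_rank(707, 13_137))          # same source -> 94
print(percentile_from_rank(9_238, 345_336))       # similar age -> 97
print(percentile_from_rank(19, 227))              # similar age, same source -> 91
```

Truncation reproduces all four percentages quoted in this section (98th, 94%, 97%, 91%).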