
Cohort Multiple Randomised Controlled Trials (cmRCT) design: efficient but biased? A simulation study to evaluate the feasibility of the Cluster cmRCT design

Overview of attention for article published in BMC Medical Research Methodology, August 2016

Mentioned by
1 X user

Citations
23 Dimensions

Readers on Mendeley
43
Title
Cohort Multiple Randomised Controlled Trials (cmRCT) design: efficient but biased? A simulation study to evaluate the feasibility of the Cluster cmRCT design
Published in
BMC Medical Research Methodology, August 2016
DOI 10.1186/s12874-016-0208-1
Authors

Alexander Pate, Jane Candlish, Matthew Sperrin, Tjeerd Pieter Van Staa, on behalf of GetReal Work Package 2

Abstract

The Cohort Multiple Randomised Controlled Trial (cmRCT) is a newly proposed pragmatic trial design, and several cmRCTs have recently been initiated. This study addresses the unresolved question of whether differential refusal in the intervention arm leads to bias or loss of statistical power, and how to deal with it. We conduct simulations evaluating a hypothetical cluster cmRCT in patients at risk of cardiovascular disease (CVD). To deal with refusal, we compare the analysis methods intention to treat (ITT), per protocol (PP) and two instrumental variable (IV) methods, two-stage predictor substitution (2SPS) and two-stage residual inclusion (2SRI), with respect to their bias and power. We vary the correlation between the probability of refusing treatment and the probability of experiencing the outcome to create different scenarios. We found ITT to be biased in all scenarios, PP to be the most biased when the correlation is strong, and 2SRI to be the least biased on average. Trials suffer a drop in power unless the refusal rate is factored into the power calculation. The ITT effect in routine practice is likely to lie somewhere between the ITT and IV estimates from the trial, which can differ substantially depending on refusal rates. More research is needed on how refusal rates for experimental interventions correlate with refusal rates in routine practice, to help answer the question of which analysis is more relevant. We also recommend updating the required sample size during the trial as more information about the refusal rate is gained.
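The abstract contrasts ITT, PP and two IV estimators (2SPS and 2SRI) under differential refusal in the intervention arm. As a rough illustration only, and not the authors' simulation code, the hypothetical Python sketch below simulates a single trial with a continuous outcome and linear models (the paper studies a binary CVD outcome in a cluster design), lets the refusal probability depend on prognosis, and computes the four estimates; all parameter values and variable names are assumptions made for the example.

import numpy as np

# Minimal sketch (assumed setup, not the paper's): one trial with refusal in the
# intervention arm, where refusers tend to have worse underlying prognosis.
rng = np.random.default_rng(0)
n = 10_000
true_effect = -1.0          # assumed effect of actually receiving the intervention
rho = 0.8                   # assumed strength of the refusal/prognosis correlation

u = rng.normal(size=n)                      # unobserved prognosis
z = rng.integers(0, 2, size=n)              # randomised offer (the instrument)
p_accept = 1 / (1 + np.exp(-(0.5 - rho * u)))   # worse prognosis -> more refusal
treated = z * rng.binomial(1, p_accept)     # treatment actually received
y = true_effect * treated + u + rng.normal(size=n)

def ols(X, y):
    """Least-squares coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(y)), *np.atleast_2d(X)])
    return np.linalg.lstsq(X, y, rcond=None)[0]

itt = ols(z, y)[1]                                   # intention to treat
pp_mask = (z == 0) | (treated == 1)                  # per protocol: drop refusers
pp = ols(treated[pp_mask], y[pp_mask])[1]

# 2SPS: replace treatment received with its first-stage prediction from the offer.
t_hat = np.column_stack([np.ones(n), z]) @ ols(z, treated)
est_2sps = ols(t_hat, y)[1]

# 2SRI: keep treatment received, add the first-stage residual as a covariate.
est_2sri = ols(np.vstack([treated, treated - t_hat]), y)[1]

print(f"true {true_effect:+.2f}  ITT {itt:+.2f}  PP {pp:+.2f}  "
      f"2SPS {est_2sps:+.2f}  2SRI {est_2sri:+.2f}")

In this toy setup the ITT estimate is diluted towards the null by refusal, the PP estimate is confounded because refusers are selectively excluded from the intervention arm only, and the two IV estimators sit close to the effect of treatment received, which is broadly the pattern the abstract describes.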

X Demographics

The data shown below were collected from the profile of 1 X user who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for 43 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown    43       100%

Demographic breakdown

Readers by professional status     Count    As %
Student > Master                   10       23%
Researcher                         7        16%
Student > Ph. D. Student           5        12%
Other                              4        9%
Professor > Associate Professor    3        7%
Other                              8        19%
Unknown                            6        14%
Readers by discipline                                   Count    As %
Medicine and Dentistry                                  17       40%
Mathematics                                             3        7%
Nursing and Health Professions                          3        7%
Pharmacology, Toxicology and Pharmaceutical Science     2        5%
Psychology                                              2        5%
Other                                                   6        14%
Unknown                                                 10       23%
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 20 September 2016.
All research outputs: #15,384,989 of 22,889,074 outputs
Outputs from BMC Medical Research Methodology: #1,514 of 2,024 outputs
Outputs of similar age: #216,034 of 338,627 outputs
Outputs of similar age from BMC Medical Research Methodology: #33 of 45 outputs
Altmetric has tracked 22,889,074 research outputs across all sources so far. This one is in the 22nd percentile – i.e., 22% of other outputs scored the same or lower than it.
So far Altmetric has tracked 2,024 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 10.1. This one is in the 16th percentile – i.e., 16% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 338,627 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 27th percentile – i.e., 27% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 45 others from the same source and published within six weeks on either side of this one. This one is in the 22nd percentile – i.e., 22% of its contemporaries scored the same or lower than it.
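The percentile figures above follow the definition given in the text: the share of comparison outputs whose Attention Score is the same as or lower than this output's score. A minimal sketch of that calculation, assuming this simple definition and using made-up peer scores, is shown below.

def percentile(score, peer_scores):
    """Percentage of peer scores that are less than or equal to the given score."""
    at_or_below = sum(1 for s in peer_scores if s <= score)
    return 100.0 * at_or_below / len(peer_scores)

# Hypothetical example: a score of 1 among ten peers, five of which scored 0 or 1.
print(percentile(1, [0, 0, 0, 1, 1, 2, 3, 5, 10, 25]))  # -> 50.0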