
A Bayesian comparative effectiveness trial in action: developing a platform for multisite study adaptive randomization

Overview of attention for an article published in Trials, August 2016

About this Attention Score

  • Average Attention Score compared to outputs of the same age

Mentioned by

3 X users

Citations

20 citations (Dimensions)

Readers on Mendeley

46 readers
Title
A Bayesian comparative effectiveness trial in action: developing a platform for multisite study adaptive randomization
Published in
Trials, August 2016
DOI 10.1186/s13063-016-1544-5
Authors

Alexandra R. Brown, Byron J. Gajewski, Lauren S. Aaronson, Dinesh Pal Mudaranthakam, Suzanne L. Hunt, Scott M. Berry, Melanie Quintana, Mamatha Pasnoor, Mazen M. Dimachkie, Omar Jawdat, Laura Herbelin, Richard J. Barohn

Abstract

In the last few decades, the number of trials using Bayesian methods has grown rapidly. Publications prior to 1990 included only three clinical trials that used Bayesian methods, but that number quickly jumped to 19 in the 1990s and to 99 from 2000 to 2012. While this literature provides many examples of Bayesian Adaptive Designs (BAD), none of the available papers walks the reader through the detailed process of conducting a BAD. This paper fills that gap by describing the BAD process used for one comparative effectiveness trial (Patient Assisted Intervention for Neuropathy: Comparison of Treatment in Real Life Situations) that can be generalized for use by others. A BAD was chosen with efficiency in mind. Response-adaptive randomization allows the potential for substantially smaller sample sizes and can provide faster conclusions about which treatment or treatments are most effective. An Internet-based electronic data capture tool, which features a randomization module, facilitated data capture across study sites, and an in-house computation software program was developed to implement the response-adaptive randomization. A process was developed for adapting randomization with minimal interruption to study sites: a new randomization table can be generated quickly and seamlessly integrated into the data capture tool. This manuscript is the first to detail the technical process used to evaluate a multisite comparative effectiveness trial using adaptive randomization. An important opportunity for the application of Bayesian trials is in comparative effectiveness trials. The specific case study presented in this paper can be used as a model for conducting future clinical trials using a combination of statistical software and a web-based application. ClinicalTrials.gov Identifier: NCT02260388, registered on 6 October 2014.
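The core mechanism the abstract describes is response-adaptive randomization: interim outcome data shift allocation probabilities toward better-performing arms, and those probabilities drive the next randomization table loaded into the data capture tool. The following Python sketch is a minimal, hypothetical illustration only, using a Thompson-sampling-style rule with Beta posteriors on binary outcomes; the function names, arm counts, and interim data are invented here and do not reproduce the trial's actual algorithm or software.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def allocation_probabilities(successes, failures, n_draws=10_000):
    """Estimate P(arm has the highest response rate) for each arm,
    using independent Beta(1, 1)-prior posteriors on binary outcomes."""
    draws = np.column_stack([
        rng.beta(1 + s, 1 + f, size=n_draws)
        for s, f in zip(successes, failures)
    ])
    best = np.argmax(draws, axis=1)  # best arm in each posterior draw
    probs = np.bincount(best, minlength=len(successes)) / n_draws
    return probs / probs.sum()       # normalize so probabilities sum to 1

def next_randomization_table(probs, block_size=20):
    """Generate the next block of treatment assignments; in practice,
    such a table would be loaded into the data capture tool's
    randomization module."""
    return rng.choice(len(probs), size=block_size, p=probs)

# Hypothetical interim data for three arms (responders / non-responders):
successes = [12, 8, 15]
failures = [18, 22, 15]
probs = allocation_probabilities(successes, failures)
print("allocation probabilities:", probs.round(3))
print("next randomization block:", next_randomization_table(probs))
```

Each interim analysis would repeat this loop: update the posteriors with accumulated outcomes, recompute the allocation probabilities, and swap in the new table, which is what allows the randomization to adapt with minimal interruption to study sites.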

X Demographics

The data shown below were collected from the profiles of 3 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 46 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown    46       100%

Demographic breakdown

Readers by professional status    Count    As %
Researcher                        13       28%
Student > Ph.D. Student           6        13%
Student > Doctoral Student        3        7%
Student > Master                  3        7%
Student > Bachelor                2        4%
Other                             7        15%
Unknown                           12       26%
Readers by discipline             Count    As %
Medicine and Dentistry            11       24%
Nursing and Health Professions    6        13%
Psychology                        4        9%
Computer Science                  3        7%
Mathematics                       3        7%
Other                             6        13%
Unknown                           13       28%
Attention Score in Context

This research output has an Altmetric Attention Score of 2. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 06 December 2017.
All research outputs: #17,154,245 of 25,986,827
Outputs from Trials: #23 of 45
Outputs of similar age: #221,136 of 350,641
Outputs of similar age from Trials: #71 of 102
Altmetric has tracked 25,986,827 research outputs across all sources so far. This one is in the 31st percentile – i.e., 31% of other outputs scored the same or lower than it.
So far Altmetric has tracked 45 research outputs from this source. They receive a mean Attention Score of 5.0. This one scored the same as or higher than 22 of them.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 350,641 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 34th percentile – i.e., 34% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 102 others from the same source and published within six weeks on either side of this one. This one is in the 24th percentile – i.e., 24% of its contemporaries scored the same or lower than it.
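The percentile statements above all use a "same or lower" convention. As a minimal sketch, assuming the raw Attention Scores of a comparison set were available (Altmetric's actual tie-handling and rounding rules are not documented on this page), the percentile rank could be computed like this:

```python
def percentile_rank(score, comparison_scores):
    """Percent of outputs in the comparison set that scored the same
    as or lower than the given score ("same or lower" convention)."""
    same_or_lower = sum(s <= score for s in comparison_scores)
    return 100.0 * same_or_lower / len(comparison_scores)

# Toy comparison set: a score of 2 ties or beats 6 of 10 outputs.
print(percentile_rank(2, [0, 1, 1, 2, 2, 2, 3, 5, 8, 13]))  # 60.0
```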