
A scoping review of indirect comparison methods and applications using individual patient data

Overview of attention for article published in BMC Medical Research Methodology, April 2016

About this Attention Score

  • Good Attention Score compared to outputs of the same age (65th percentile)
  • Above-average Attention Score compared to outputs of the same age and source (59th percentile)

Mentioned by

  • 7 X users

Citations

  • 36 (Dimensions)

Readers

  • 51 (Mendeley)
Title
A scoping review of indirect comparison methods and applications using individual patient data

Published in
BMC Medical Research Methodology, April 2016

DOI
10.1186/s12874-016-0146-y

Pubmed ID

Authors
Areti Angeliki Veroniki, Sharon E. Straus, Charlene Soobiah, Meghan J. Elliott, Andrea C. Tricco

Abstract

Several indirect comparison methods, including network meta-analyses (NMAs), using individual patient data (IPD) have been developed to synthesize evidence from a network of trials. Although IPD indirect comparisons are published with increasing frequency in the health care literature, there is no guidance on selecting the appropriate methodology or on reporting the methods and results. In this paper we examine the methods and reporting of indirect comparisons using IPD.

We searched MEDLINE, Embase, the Cochrane Library, and CINAHL from inception until October 2014. We included published and unpublished studies reporting a method, application, or review of indirect comparisons using IPD and at least three interventions.

We identified 37 papers, including a total of 33 empirical networks. Of these, only 9 (27 %) IPD-NMAs reported the existence of a study protocol, whereas 3 (9 %) studies mentioned that protocols existed without providing a reference. The 33 empirical networks comprised 24 (73 %) IPD-NMAs and 9 (27 %) matching-adjusted indirect comparisons (MAICs). Of the 21 (64 %) networks with at least one closed loop, 19 (90 %) were IPD-NMAs; 13 (68 %) of these evaluated the prerequisite consistency assumption, and only 5 (38 %) of those 13 IPD-NMAs used statistical approaches to do so. The median number of trials per network was 10 (IQR 4-19) (IPD-NMA: 15 [IQR 8-20]; MAIC: 2 [IQR 3-5]), and the median number of IPD trials per network was 3 (IQR 1-9) (IPD-NMA: 6 [IQR 2-11]; MAIC: 2 [IQR 1-2]). Half of the networks (17; 52 %) applied Bayesian hierarchical models (14 one-stage, 1 two-stage, 1 using IPD as an informative prior, 1 of unclear stage), including either IPD alone or IPD combined with aggregated data (AD). Models for dichotomous and continuous outcomes were available (IPD alone or combined with AD), as were models for time-to-event data (IPD combined with AD). One in three indirect comparison methods modeling IPD adjusted results from different trials to estimate effects as if they had come from the same randomized population.

Key methodological and reporting elements (e.g., evaluation of consistency, existence of a study protocol) were often missing from indirect comparison papers.
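To make the MAIC idea in the abstract concrete, the sketch below shows only the weighting step, under illustrative assumptions: two made-up baseline covariates, toy data, and invented variable names; it is not code from the reviewed paper. The IPD are re-weighted so their weighted covariate means match the aggregate means published by the comparator (AD-only) trial, which is the usual method-of-moments formulation of MAIC; the resulting weights would then feed a weighted comparison of outcomes.

```python
# Minimal MAIC weighting sketch (illustrative only; toy data and names assumed).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Toy IPD: two baseline covariates (e.g. age and a continuous risk score).
X_ipd = rng.normal(loc=[60.0, 0.40], scale=[10.0, 0.25], size=(500, 2))
agg_means = np.array([65.0, 0.50])   # covariate means reported by the AD-only trial

Xc = X_ipd - agg_means               # centre the IPD covariates at the aggregate means

def q(alpha):
    # Minimising sum(exp(Xc @ alpha)) forces the weighted IPD covariate means
    # to equal the aggregate means (method-of-moments / exponential tilting).
    return np.exp(Xc @ alpha).sum()

alpha_hat = minimize(q, x0=np.zeros(2), method="BFGS").x
weights = np.exp(Xc @ alpha_hat)     # one MAIC weight per IPD patient

# Sanity checks: weighted means hit the target; the effective sample size
# shows how much information the re-weighting costs.
print(np.average(X_ipd, axis=0, weights=weights))   # ~ [65.0, 0.50]
print(weights.sum() ** 2 / (weights ** 2).sum())    # effective sample size
```

The effective sample size printed at the end is the standard diagnostic for how far the re-weighted IPD population has been stretched to resemble the comparator trial.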

X Demographics

The data shown below were collected from the profiles of 7 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 51 Mendeley readers of this research output.

Geographical breakdown

Country          Count   As %
Japan                2     4%
United States        1     2%
France               1     2%
Unknown             47    92%

Demographic breakdown

Readers by professional status     Count   As %
Student > Ph. D. Student               9    18%
Researcher                             8    16%
Professor                              3     6%
Other                                  3     6%
Professor > Associate Professor        3     6%
Other                                  8    16%
Unknown                               17    33%

Readers by discipline                                  Count   As %
Medicine and Dentistry                                    13    25%
Agricultural and Biological Sciences                       4     8%
Pharmacology, Toxicology and Pharmaceutical Science        4     8%
Mathematics                                                2     4%
Nursing and Health Professions                             1     2%
Other                                                      9    18%
Unknown                                                   18    35%
Attention Score in Context

This research output has an Altmetric Attention Score of 4. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 07 November 2017.
  • All research outputs: #7,137,760 of 23,312,088 outputs
  • Outputs from BMC Medical Research Methodology: #1,060 of 2,057 outputs
  • Outputs of similar age: #99,560 of 300,179 outputs
  • Outputs of similar age from BMC Medical Research Methodology: #14 of 32 outputs
Altmetric has tracked 23,312,088 research outputs across all sources so far. This one has received more attention than most of these and is in the 68th percentile.
So far Altmetric has tracked 2,057 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 10.3. This one is in the 47th percentile – i.e., 47% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 300,179 tracked outputs that were published within six weeks on either side of this one in any source. This one has gotten more attention than average, scoring higher than 65% of its contemporaries.
We're also able to compare this research output to 32 others from the same source and published within six weeks on either side of this one. This one has gotten more attention than average, scoring higher than 59% of its contemporaries.
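The percentile statements above follow from the rank figures in the list. The sketch below shows the rank-to-percentile arithmetic under the simplifying assumption of a plain rank share; the exact tie handling and rounding behind the reported percentiles are not described here, which is why the results land close to, but not exactly on, the figures quoted above.

```python
# Rough rank-to-percentile arithmetic (assumption: simple rank share;
# the reported percentiles use tie/rounding rules not given on this page).
def approx_percentile(rank, total):
    """Approximate share of tracked outputs scoring lower than this one."""
    return 100 * (total - rank) / total

print(round(approx_percentile(7_137_760, 23_312_088)))  # ~69; page reports 68th
print(round(approx_percentile(99_560, 300_179)))        # ~67; page reports 65th
print(round(approx_percentile(14, 32)))                 # ~56; page reports 59th
```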