
An investigation of the impact of using different methods for network meta-analysis: a protocol for an empirical evaluation

Overview of attention for article published in Systematic Reviews, June 2017

About this Attention Score

  • Average Attention Score compared to outputs of the same age

Mentioned by

  • 2 X users

Citations

  • 7 (Dimensions)

Readers on

  • 31 (Mendeley)
Title
An investigation of the impact of using different methods for network meta-analysis: a protocol for an empirical evaluation
Published in
Systematic Reviews, June 2017
DOI 10.1186/s13643-017-0511-x
Pubmed ID
Authors

Amalia (Emily) Karahalios, Georgia Salanti, Simon L. Turner, G. Peter Herbison, Ian R. White, Areti Angeliki Veroniki, Adriani Nikolakopoulou, Joanne E. McKenzie

Abstract

Network meta-analysis, a method to synthesise evidence from multiple treatments, has increased in popularity in the past decade. Two broad approaches are available to synthesise data across networks, namely, arm- and contrast-synthesis models, with a range of models that can be fitted within each. There has been recent debate about the validity of the arm-synthesis models, but to date, there has been limited empirical evaluation comparing results using the methods applied to a large number of networks. We aim to address this gap through the re-analysis of a large cohort of published networks of interventions using a range of network meta-analysis methods. We will include a subset of networks from a database of network meta-analyses of randomised trials that have been identified and curated from the published literature. The subset of networks will include those where the primary outcome is binary, the number of events and participants are reported for each direct comparison, and there is no evidence of inconsistency in the network. We will re-analyse the networks using three contrast-synthesis methods and two arm-synthesis methods. We will compare the estimated treatment effects, their standard errors, treatment hierarchy based on the surface under the cumulative ranking (SUCRA) curve, the SUCRA value, and the between-trial heterogeneity variance across the network meta-analysis methods. We will investigate whether differences in the results are affected by network characteristics and baseline risk. The results of this study will inform whether, in practice, the choice of network meta-analysis method matters, and if it does, in what situations differences in the results between methods might arise. The results from this research might also inform future simulation studies.
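The protocol compares treatment hierarchies using the surface under the cumulative ranking (SUCRA) curve. As a minimal, hedged illustration of that quantity only (not the authors' analysis code, which will fit several contrast- and arm-synthesis models), the Python sketch below computes SUCRA values from a hypothetical matrix of rank probabilities; the treatment labels and probability values are invented purely for illustration.

```python
import numpy as np

# Hypothetical rank-probability matrix for four treatments (rows) over four
# possible ranks (columns; rank 1 = best). In a real analysis these
# probabilities would come from the fitted network meta-analysis model;
# the numbers here are made up to illustrate the SUCRA calculation only.
rank_probs = np.array([
    [0.60, 0.25, 0.10, 0.05],  # treatment A
    [0.25, 0.40, 0.20, 0.15],  # treatment B
    [0.10, 0.25, 0.45, 0.20],  # treatment C
    [0.05, 0.10, 0.25, 0.60],  # treatment D
])

def sucra(p: np.ndarray) -> np.ndarray:
    """Surface under the cumulative ranking (SUCRA) curve per treatment.

    SUCRA_k = (1 / (a - 1)) * sum of treatment k's cumulative rank
    probabilities over the first a - 1 ranks, where a is the number of
    treatments; 1 means certainly best, 0 means certainly worst.
    """
    a = p.shape[1]
    cumulative = np.cumsum(p, axis=1)[:, :-1]  # drop the final rank (always 1)
    return cumulative.sum(axis=1) / (a - 1)

print(sucra(rank_probs))  # approx. [0.80, 0.58, 0.42, 0.20]
```

Higher SUCRA values correspond to treatments that tend to sit nearer the top of the ranking, which is how the protocol proposes to compare hierarchies across methods.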

X Demographics

The data shown below were collected from the profiles of 2 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 31 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown    31       100%

Demographic breakdown

Readers by professional status                  Count    As %
Researcher                                      8        26%
Student > Ph. D. Student                        5        16%
Other                                           4        13%
Professor                                       3        10%
Student > Doctoral Student                      2        6%
Other                                           4        13%
Unknown                                         5        16%

Readers by discipline                           Count    As %
Medicine and Dentistry                          7        23%
Mathematics                                     3        10%
Biochemistry, Genetics and Molecular Biology    2        6%
Economics, Econometrics and Finance             2        6%
Nursing and Health Professions                  1        3%
Other                                           5        16%
Unknown                                         11       35%
Attention Score in Context

This research output has an Altmetric Attention Score of 2. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 16 April 2018.
  • All research outputs: #14,819,211 of 22,990,068 outputs
  • Outputs from Systematic Reviews: #1,551 of 2,005 outputs
  • Outputs of similar age: #185,891 of 315,941 outputs
  • Outputs of similar age from Systematic Reviews: #42 of 56 outputs

Altmetric has tracked 22,990,068 research outputs across all sources so far. This one is in the 34th percentile – i.e., 34% of other outputs scored the same or lower than it.
So far Altmetric has tracked 2,005 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 12.8. This one is in the 22nd percentile – i.e., 22% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 315,941 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 40th percentile – i.e., 40% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 56 others from the same source and published within six weeks on either side of this one. This one is in the 25th percentile – i.e., 25% of its contemporaries scored the same or lower than it.
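For context on how the percentile statements above relate to the ranks listed earlier, here is a small, hypothetical Python sketch of the underlying arithmetic. It is a deliberate simplification: Altmetric's published percentiles also account for tied scores, so this naive formula does not reproduce every figure exactly.

```python
# Rough sketch of the percentile arithmetic quoted above: the share of
# outputs ranked at or below a given position. This simple version ignores
# ties (outputs with exactly the same Attention Score), so for some
# comparisons it lands a point or so away from the figure Altmetric reports
# (e.g. ~35.5 vs. the reported 34th percentile across all research outputs).
def percentile_at_or_below(rank: int, total: int) -> float:
    return 100 * (total - rank) / total

print(percentile_at_or_below(1_551, 2_005))   # ~22.6 -> "22nd percentile"
print(percentile_at_or_below(42, 56))         # 25.0  -> "25th percentile"
```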