
The alarming problems of confounding equivalence using logistic regression models in the perspective of causal diagrams

Overview of attention for article published in BMC Medical Research Methodology, December 2017

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (84th percentile)
  • Good Attention Score compared to outputs of the same age and source (74th percentile)

Mentioned by

18 X users

Citations

3 Dimensions

Readers on

37 Mendeley
Title
The alarming problems of confounding equivalence using logistic regression models in the perspective of causal diagrams
Published in
BMC Medical Research Methodology, December 2017
DOI 10.1186/s12874-017-0449-7
Authors

Yuanyuan Yu, Hongkai Li, Xiaoru Sun, Ping Su, Tingting Wang, Yi Liu, Zhongshang Yuan, Yanxun Liu, Fuzhong Xue

Abstract

Confounders can produce spurious associations between exposure and outcome in observational studies. For the majority of epidemiologists, adjusting for confounders with a logistic regression model is the habitual approach, although it has problems of accuracy and precision. It is therefore important to highlight the problems of logistic regression and to search for an alternative method.

Four causal diagram models were defined to summarize confounding equivalence. Both theoretical proofs and simulation studies were performed to verify whether conditioning on different sets of confounding equivalence had the same bias-reducing potential and then to select the optimal adjustment strategy; the logistic regression model and the inverse probability weighting based marginal structural model (IPW-based-MSM) were compared. The "do-calculus" was used to calculate the true causal effect of exposure on outcome, and bias and standard error were then used to evaluate the performance of the different strategies.

Adjusting for different sets of confounding equivalence, as judged by identical Markov boundaries, produced different bias-reducing potential in the logistic regression model. For the sets satisfying G-admissibility, adjusting for the set containing all confounders reduced bias to the same extent as adjusting for the set containing the parent nodes of the outcome, whereas the bias after adjusting for the parent nodes of the exposure was not equivalent to either. Moreover, all causal effect estimates from logistic regression were biased, although the estimate obtained after adjusting for the parent nodes of the exposure was closest to the true causal effect. In contrast, conditioning on different sets of confounding equivalence had the same bias-reducing potential under the IPW-based-MSM. Compared with logistic regression, the IPW-based-MSM yielded unbiased causal effect estimates whenever the adjusted confounder set satisfied G-admissibility, and the optimal strategy was to adjust for the parent nodes of the outcome, which achieved the highest precision.

In summary, all adjustment strategies based on logistic regression were biased for causal effect estimation, whereas the IPW-based-MSM always gave unbiased estimates when the adjusted set satisfied G-admissibility. The IPW-based-MSM is therefore recommended for adjusting for the confounder set.
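The contrast the abstract draws, between covariate adjustment in a logistic regression (which targets a conditional odds ratio) and an IPW-based marginal structural model (which targets the marginal causal odds ratio defined by the do-calculus), can be illustrated with a minimal simulation. The sketch below is not the authors' code: the causal diagram (a single confounder C affecting exposure X and outcome Y), the coefficients, the sample size, and the use of statsmodels are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's simulation): compare
# covariate-adjusted logistic regression with an IPW-based MSM on the
# diagram C -> X, C -> Y, X -> Y, where C is the only confounder.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200_000

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

# Data-generating mechanism (illustrative coefficients).
c = rng.binomial(1, 0.4, n)                            # confounder
x = rng.binomial(1, expit(-0.5 + 1.5 * c))             # exposure depends on C
y = rng.binomial(1, expit(-1.0 + 1.0 * x + 1.2 * c))   # outcome depends on X and C

# "do-calculus" truth: P(Y=1 | do(X=x)) = sum_c P(Y=1 | x, c) P(c)
p_c = np.array([0.6, 0.4])
p_do = {xx: np.sum(expit(-1.0 + 1.0 * xx + 1.2 * np.array([0, 1])) * p_c)
        for xx in (0, 1)}
true_marginal_or = (p_do[1] / (1 - p_do[1])) / (p_do[0] / (1 - p_do[0]))

# Strategy 1: logistic regression adjusting for C -> conditional odds ratio.
fit = sm.Logit(y, sm.add_constant(np.column_stack([x, c]))).fit(disp=0)
conditional_or = np.exp(fit.params[1])

# Strategy 2: IPW-based MSM. Estimate the propensity score e(C) = P(X=1 | C),
# weight each subject by 1 / P(X = observed x | C), then estimate the marginal
# risks under do(X=1) and do(X=0) with weighted means.
ps_fit = sm.Logit(x, sm.add_constant(c)).fit(disp=0)
e = ps_fit.predict(sm.add_constant(c))
w = x / e + (1 - x) / (1 - e)
risk1 = np.sum(w * x * y) / np.sum(w * x)
risk0 = np.sum(w * (1 - x) * y) / np.sum(w * (1 - x))
ipw_marginal_or = (risk1 / (1 - risk1)) / (risk0 / (1 - risk0))

print(f"true marginal causal OR (do-calculus): {true_marginal_or:.3f}")
print(f"logistic regression, adjusted for C:   {conditional_or:.3f}")
print(f"IPW-based MSM:                         {ipw_marginal_or:.3f}")
```

With these assumed coefficients the adjusted logistic regression recovers the conditional odds ratio (about exp(1.0) ≈ 2.72), while both the do-calculus truth and the IPW-based MSM estimate sit near 2.51. The gap reflects the non-collapsibility of the odds ratio, one mechanism by which adjusted logistic regression can miss the marginal causal effect that the abstract's IPW-based-MSM targets.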

X Demographics

The data shown below were collected from the profiles of 18 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 37 Mendeley readers of this research output.

Geographical breakdown

Country   Count   As %
Unknown   37      100%

Demographic breakdown

Readers by professional status   Count   As %
Researcher                       6       16%
Student > Ph. D. Student         5       14%
Lecturer                         3       8%
Student > Master                 3       8%
Student > Doctoral Student       2       5%
Other                            8       22%
Unknown                          10      27%
Readers by discipline                                 Count   As %
Medicine and Dentistry                                5       14%
Social Sciences                                       3       8%
Pharmacology, Toxicology and Pharmaceutical Science   3       8%
Engineering                                           2       5%
Decision Sciences                                     2       5%
Other                                                 11      30%
Unknown                                               11      30%
Attention Score in Context

This research output has an Altmetric Attention Score of 11. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 21 October 2021.
All research outputs: #3,305,322 of 25,758,211 outputs
Outputs from BMC Medical Research Methodology: #502 of 2,315 outputs
Outputs of similar age: #69,226 of 451,442 outputs
Outputs of similar age from BMC Medical Research Methodology: #13 of 51 outputs
Altmetric has tracked 25,758,211 research outputs across all sources so far. Compared to these, this one has done well and is in the 87th percentile: it's in the top 25% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 2,315 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 10.6. This one has done well, scoring higher than 78% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 451,442 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 84% of its contemporaries.
We're also able to compare this research output to 51 others from the same source and published within six weeks on either side of this one. This one has gotten more attention than average, scoring higher than 74% of its contemporaries.
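For readers who want to see how the ranks above map to the quoted percentiles, a small sketch is given below. The flooring convention is an assumption (Altmetric's exact rounding is not documented here), but it reproduces every percentile figure quoted on this page.

```python
# Rank-to-percentile sketch: percentile ~= floor((1 - rank / total) * 100),
# applied to the four ranking contexts listed above.
import math

contexts = {
    "All research outputs":                          (3_305_322, 25_758_211),  # quoted: 87th
    "Outputs from BMC Medical Research Methodology": (502, 2_315),             # quoted: 78%
    "Outputs of similar age":                        (69_226, 451_442),        # quoted: 84th
    "Similar age, same source":                      (13, 51),                 # quoted: 74th
}

for name, (rank, total) in contexts.items():
    pct = math.floor((1 - rank / total) * 100)
    print(f"{name}: rank {rank:,} of {total:,} -> {pct}th percentile")
```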