↓ Skip to main content

Using Latent Class Analysis to Model Preference Heterogeneity in Health: A Systematic Review

Overview of attention for an article published in PharmacoEconomics, October 2017

  • Above-average Attention Score compared to outputs of the same age (54th percentile)
  • Average Attention Score compared to outputs of the same age and source

Mentioned by

  • X: 5 users
  • Facebook: 1 page

Citations

  • Dimensions: 102

Readers on

  • Mendeley: 100

Title: Using Latent Class Analysis to Model Preference Heterogeneity in Health: A Systematic Review
Published in: PharmacoEconomics, October 2017
DOI: 10.1007/s40273-017-0575-4
Authors: Mo Zhou, Winter Maxwell Thayer, John F. P. Bridges

Abstract

Latent class analysis (LCA) has been increasingly used to explore preference heterogeneity, but the literature has not been systematically explored and hence best practices are not understood. We sought to document all applications of LCA in the stated-preference literature in health and to inform future studies by identifying current norms in published applications.

We conducted a systematic review of the MEDLINE, EMBASE, EconLit, Web of Science, and PsycINFO databases. We included stated-preference studies that used LCA to explore preference heterogeneity in healthcare or public health. Two co-authors independently evaluated titles, abstracts, and full-text articles. Abstracted key outcomes included segmentation methods, preference elicitation methods, number of attributes and levels, sample size, model selection criteria, number of classes reported, and hypothesis tests. Study data quality and validity were assessed with the Purpose, Respondents, Explanation, Findings, and Significance (PREFS) quality checklist.

We identified 2560 titles, 99 of which met the inclusion criteria for the review. Two-thirds of the studies focused on the preferences of patients and the general population. In total, 80% of the studies used discrete choice experiments. Studies used between three and 20 attributes, most commonly four to six. Sample size in LCAs ranged from 47 to 2068, with one-third between 100 and 300. Over 90% of the studies used latent class logit models for segmentation. The Bayesian information criterion (BIC), Akaike information criterion (AIC), and log-likelihood (LL) were commonly used for model selection, and class size and interpretability were also considered in some studies. About 80% of studies reported two to three classes. The number of classes reported was not correlated with any study characteristics or study population characteristics (p > 0.05). Only 30% of the studies reported using statistical tests to detect significant variations in preferences between classes. Less than half of the studies reported that individual characteristics were included in the segmentation models, and 30% reported that post-estimation analyses were conducted to examine class characteristics. While a higher percentage of studies discussed the clinical implications of the segmentation results, an increasing number of studies have proposed policy recommendations based on these results since 2010.

LCA is increasingly used to study preference heterogeneity in health and to support decision-making. However, there is little consensus on best practices, as its application in health is relatively new. With an increasing demand to study preference heterogeneity, guidance is needed to improve the quality of applications of segmentation methods in health to support policy development and clinical practice.
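The abstract reports that the reviewed studies most commonly chose the number of classes using the BIC, AIC, and log-likelihood, with class size and interpretability also weighed in some studies. The sketch below is purely illustrative and not taken from the paper: it applies the standard AIC and BIC formulas to hypothetical log-likelihoods from latent class logit fits with one to five classes, assuming a simple parameter count (attribute coefficients per class plus K − 1 class-membership constants) and using the number of respondents as the BIC sample size.

    # Illustrative sketch only (not code from the paper): comparing candidate
    # latent class models by AIC and BIC, the criteria the abstract reports as
    # most commonly used. All numbers below are hypothetical placeholders.
    import math

    n_respondents = 300        # hypothetical sample size, used as n in the BIC
    n_attribute_coefs = 5      # hypothetical attribute coefficients per class

    # Hypothetical maximised log-likelihoods from latent class logit models
    # fitted with 1-5 classes; larger (less negative) is better.
    log_likelihoods = {1: -1450.2, 2: -1325.8, 3: -1287.4, 4: -1279.9, 5: -1276.5}

    def n_parameters(n_classes, n_coefs):
        # Assumed parameter count: coefficients in each class plus
        # (K - 1) class-membership constants.
        return n_classes * n_coefs + (n_classes - 1)

    print(f"{'classes':>7} {'params':>6} {'LL':>9} {'AIC':>9} {'BIC':>9}")
    for k, ll in log_likelihoods.items():
        p = n_parameters(k, n_attribute_coefs)
        aic = 2 * p - 2 * ll                          # AIC = 2p - 2*LL
        bic = p * math.log(n_respondents) - 2 * ll    # BIC = p*ln(n) - 2*LL
        print(f"{k:>7} {p:>6} {ll:>9.1f} {aic:>9.1f} {bic:>9.1f}")

With these made-up numbers the AIC favours four classes while the BIC favours three; disagreements of this kind are one reason the reviewed studies also considered class sizes and interpretability rather than relying on a single criterion.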

X Demographics

The data for this section were collected from the profiles of the 5 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 100 Mendeley readers of this research output.

Geographical breakdown

Country     Count   As %
Unknown       100   100%

Demographic breakdown

Readers by professional status           Count   As %
Researcher                                  19    19%
Student > Ph.D. Student                     16    16%
Student > Master                            12    12%
Student > Bachelor                           5     5%
Student > Doctoral Student                   4     4%
Other                                       14    14%
Unknown                                     30    30%

Readers by discipline                    Count   As %
Medicine and Dentistry                      16    16%
Social Sciences                             11    11%
Nursing and Health Professions               9     9%
Economics, Econometrics and Finance          7     7%
Engineering                                  4     4%
Other                                       16    16%
Unknown                                     37    37%
Attention Score in Context

This research output has an Altmetric Attention Score of 3. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 29 December 2017.
All research outputs                             #7,756,393 of 23,576,969 outputs
Outputs from PharmacoEconomics                   #899 of 1,880 outputs
Outputs of similar age                           #123,456 of 323,871 outputs
Outputs of similar age from PharmacoEconomics    #21 of 38 outputs
Altmetric has tracked 23,576,969 research outputs across all sources so far. This one is in the 44th percentile – i.e., 44% of other outputs scored the same or lower than it.
So far Altmetric has tracked 1,880 research outputs from this source. They typically receive a little more attention than average, with a mean Attention Score of 7.0. This one is in the 27th percentile – i.e., 27% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 323,871 tracked outputs that were published within six weeks on either side of this one in any source. This one has gotten more attention than average, scoring higher than 54% of its contemporaries.
We're also able to compare this research output to 38 others from the same source and published within six weeks on either side of this one. This one is in the 47th percentile – i.e., 47% of its contemporaries scored the same or lower than it.
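The percentile figures quoted above are defined on this page as the share of comparison outputs that scored the same as or lower than this one. A minimal sketch of that calculation, using made-up scores rather than Altmetric's actual data or code:

    # Illustrative only: percentile rank as "share of comparison outputs scoring
    # the same or lower", per the definition used on this page. The comparison
    # scores are hypothetical, not Altmetric data.
    def percentile_rank(score, comparison_scores):
        same_or_lower = sum(1 for s in comparison_scores if s <= score)
        return 100.0 * same_or_lower / len(comparison_scores)

    # e.g. an Attention Score of 3 against ten hypothetical contemporaries
    contemporaries = [0, 0, 1, 1, 2, 2, 3, 4, 5, 9]
    print(f"{percentile_rank(3, contemporaries):.0f}th percentile")  # -> 70th percentile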