
Reconsidering Cluster Bias in Multilevel Data: A Monte Carlo Comparison of Free and Constrained Baseline Approaches

Overview of attention for an article published in Frontiers in Psychology, March 2018

Mentioned by

3 X users

Readers on

8 Mendeley
Published in
Frontiers in Psychology, March 2018
DOI 10.3389/fpsyg.2018.00255
Authors

Nigel Guenole

Abstract

The test for item-level cluster bias examines the improvement in model fit that results from freeing an item's between-level residual variance from a baseline model with equal within- and between-level factor loadings and between-level residual variances fixed at zero. A potential problem is that this approach may include a misspecified unrestricted model if any non-invariance is present, yet the log-likelihood difference test requires that the unrestricted model is correctly specified. A free baseline approach, in which the unrestricted model includes only the restrictions needed for model identification, should lead to better decision accuracy, but no studies have examined this yet. We ran a Monte Carlo study to investigate this issue. When the referent item is unbiased, the constrained baseline approach led to true positive (power) rates similar to those of the free baseline approach but much higher false positive (Type I error) rates. The free baseline approach should therefore be preferred when the referent indicator is unbiased. When the referent assumption is violated, the false positive rate was unacceptably high for both free and constrained baseline approaches, and the true positive rate was poor regardless of which baseline was used. Neither the free nor the constrained baseline approach can be recommended when the referent indicator is biased. We recommend paying close attention to ensuring the referent indicator is unbiased in tests of cluster bias. All Mplus input and output files, along with the R and short Python scripts used to execute this simulation study, have been uploaded to an open-access repository.
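
The decision statistic behind both baseline approaches is a standard log-likelihood (chi-square) difference test between a restricted and an unrestricted model. The sketch below is a minimal, hypothetical illustration of that statistic, not the paper's own scripts; the function name, the log-likelihood values, and the model degrees of freedom are all made up for the example.

from scipy.stats import chi2

def loglik_difference_test(ll_restricted, ll_unrestricted,
                           df_restricted, df_unrestricted):
    """Likelihood-ratio test of a restricted model against an unrestricted one.

    The statistic 2 * (LL_unrestricted - LL_restricted) is referred to a
    chi-square distribution with df equal to the number of constraints
    released. The resulting p-value is only trustworthy when the
    unrestricted model is correctly specified, which is the assumption
    the paper probes.
    """
    lr_stat = 2.0 * (ll_unrestricted - ll_restricted)
    df_diff = df_restricted - df_unrestricted  # constraints released
    p_value = chi2.sf(lr_stat, df_diff)
    return lr_stat, df_diff, p_value

# Hypothetical log-likelihoods and model df: freeing one item's
# between-level residual variance releases a single constraint.
stat, df, p = loglik_difference_test(-4520.3, -4516.8, 54, 53)
print(f"LR = {stat:.2f}, df = {df}, p = {p:.4f}")

As the abstract notes, under the constrained baseline approach the "unrestricted" model still embeds the cross-level equality constraints and zero between-level residual variances for the other items, so the p-value from this test can mislead whenever any of those constraints is violated.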

X Demographics

The data shown below were collected from the profiles of the 3 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 8 Mendeley readers of this research output.

Geographical breakdown

Country | Count | As %
Unknown | 8 | 100%

Demographic breakdown

Readers by professional status | Count | As %
Student > Ph.D. Student | 5 | 63%
Researcher | 2 | 25%
Professor > Associate Professor | 1 | 13%

Readers by discipline | Count | As %
Psychology | 3 | 38%
Arts and Humanities | 1 | 13%
Computer Science | 1 | 13%
Nursing and Health Professions | 1 | 13%
Social Sciences | 1 | 13%
Other | 1 | 13%
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 02 March 2018.
All research outputs | #15,492,327 of 23,023,224 outputs
Outputs from Frontiers in Psychology | #18,968 of 30,282 outputs
Outputs of similar age | #211,875 of 331,398 outputs
Outputs of similar age from Frontiers in Psychology | #429 of 568 outputs
Altmetric has tracked 23,023,224 research outputs across all sources so far. This one is in the 22nd percentile – i.e., 22% of other outputs scored the same or lower than it.
So far Altmetric has tracked 30,282 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 12.5. This one is in the 31st percentile – i.e., 31% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 331,398 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 27th percentile – i.e., 27% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 568 others from the same source and published within six weeks on either side of this one. This one is in the 14th percentile – i.e., 14% of its contemporaries scored the same or lower than it.
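
Each percentile above follows the rule stated in these paragraphs: the share of comparison outputs that scored the same as or lower than this one. A minimal illustrative sketch of that calculation follows; the peer scores are made up, since Altmetric's underlying score arrays are not published here.

def attention_percentile(score, peer_scores):
    """Percent of peer outputs scoring the same as or lower than `score`."""
    same_or_lower = sum(1 for s in peer_scores if s <= score)
    return 100.0 * same_or_lower / len(peer_scores)

# Made-up peer Attention Scores for illustration only.
peers = [0, 0, 1, 1, 2, 5, 12, 30, 75, 120]
print(f"Score 1 sits at the {attention_percentile(1, peers):.0f}th percentile")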