
Collapsing factors in multitrait-multimethod models: examining consequences of a mismatch between measurement design and model

Overview of attention for article published in Frontiers in Psychology, August 2015

Mentioned by: 1 X user
Citations: 26 (Dimensions)
Readers: 34 (Mendeley)
Title: Collapsing factors in multitrait-multimethod models: examining consequences of a mismatch between measurement design and model
Published in: Frontiers in Psychology, August 2015
DOI: 10.3389/fpsyg.2015.00946
Authors: Christian Geiser, Jacob Bishop, Ginger Lockhart

Abstract

Models of confirmatory factor analysis (CFA) are frequently applied to examine the convergent validity of scores obtained from multiple raters or methods in so-called multitrait-multimethod (MTMM) investigations. Many applications of CFA-MTMM and similarly structured models result in solutions in which at least one method (or specific) factor shows non-significant loading or variance estimates. Eid et al. (2008) distinguished between MTMM measurement designs with interchangeable (randomly selected) vs. structurally different (fixed) methods and showed that each type of measurement design implies specific CFA-MTMM measurement models. In the current study, we hypothesized that some of the problems that are commonly seen in applications of CFA-MTMM models may be due to a mismatch between the underlying measurement design and fitted models. Using simulations, we found that models with M method factors (where M is the total number of methods) and unconstrained loadings led to a higher proportion of solutions in which at least one method factor became empirically unstable when these models were fit to data generated from structurally different methods. The simulations also revealed that commonly used model goodness-of-fit criteria frequently failed to identify incorrectly specified CFA-MTMM models. We discuss implications of these findings for other complex CFA models in which similar issues occur, including nested (bifactor) and latent state-trait models.

X Demographics

The data shown below were collected from the profile of 1 X user who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for 34 Mendeley readers of this research output.

Geographical breakdown

Country    Count    As %
Unknown    34       100%

Demographic breakdown

Readers by professional status     Count    As %
Student > Ph.D. Student            9        26%
Researcher                         7        21%
Student > Master                   4        12%
Professor                          2        6%
Professor > Associate Professor    2        6%
Other                              5        15%
Unknown                            5        15%

Readers by discipline              Count    As %
Psychology                         21       62%
Social Sciences                    3        9%
Nursing and Health Professions     1        3%
Earth and Planetary Sciences       1        3%
Mathematics                        1        3%
Other                              0        0%
Unknown                            7        21%
Attention Score in Context

This research output has an Altmetric Attention Score of 1. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 03 August 2015.
All research outputs: #18,420,033 of 22,818,766 outputs
Outputs from Frontiers in Psychology: #22,143 of 29,769 outputs
Outputs of similar age: #189,802 of 263,982 outputs
Outputs of similar age from Frontiers in Psychology: #452 of 550 outputs
Altmetric has tracked 22,818,766 research outputs across all sources so far. This one is in the 11th percentile – i.e., 11% of other outputs scored the same or lower than it.
So far Altmetric has tracked 29,769 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 12.5. This one is in the 19th percentile – i.e., 19% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 263,982 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 16th percentile – i.e., 16% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 550 others from the same source and published within six weeks on either side of this one. This one is in the 3rd percentile – i.e., 3% of its contemporaries scored the same or lower than it.
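The percentile figures above come from comparing this output's rank against the total number of tracked outputs in each reference set. As a rough illustration, the rank-to-percentile conversion can be sketched as below; note this is a simplification that ignores tied scores, so it does not reproduce Altmetric's quoted percentiles exactly (their tie handling differs):

```python
def percentile_rank(rank: int, total: int) -> float:
    """Percent of outputs ranked strictly below this one,
    ignoring ties (a simplification of Altmetric's method)."""
    return 100 * (total - rank) / total

# Ranks quoted above: rank within the set of tracked outputs.
all_outputs = percentile_rank(18_420_033, 22_818_766)   # vs. all research outputs
same_source = percentile_rank(22_143, 29_769)           # vs. Frontiers in Psychology

print(f"vs. all outputs: ~{all_outputs:.0f}th percentile (simplified)")
print(f"vs. same source: ~{same_source:.0f}th percentile (simplified)")
```

Because many low-attention outputs share the same score, this tie-blind estimate lands a few points away from the 11th- and 19th-percentile figures the page reports.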