
Testing for measurement invariance and latent mean differences across methods: interesting incremental information from multitrait-multimethod studies

Overview of attention for article published in Frontiers in Psychology, October 2014

Mentioned by

  • 3 X users (Twitter/X)

Readers on

  • 60 Mendeley readers
DOI 10.3389/fpsyg.2014.01216
Authors

Christian Geiser, G. Leonard Burns, Mateu Servera

Abstract

Models of confirmatory factor analysis (CFA) are frequently applied to examine the convergent validity of scores obtained from multiple raters or methods in so-called multitrait-multimethod (MTMM) investigations. We show that interesting incremental information about method effects can be gained from including mean structures and tests of MI across methods in MTMM models. We present a modeling framework for testing MI in the first step of a CFA-MTMM analysis. We also discuss the relevance of MI in the context of four more complex CFA-MTMM models with method factors. We focus on three recently developed multiple-indicator CFA-MTMM models for structurally different methods [the correlated traits-correlated (methods - 1), latent difference, and latent means models; Geiser et al., 2014a; Pohl and Steyer, 2010; Pohl et al., 2008] and one model for interchangeable methods (Eid et al., 2008). We demonstrate that some of these models require or imply MI by definition for a proper interpretation of trait or method factors, whereas others do not, and explain why MI may or may not be required in each model. We show that in the model for interchangeable methods, testing for MI is critical for determining whether methods can truly be seen as interchangeable. We illustrate the theoretical issues in an empirical application to an MTMM study of attention deficit and hyperactivity disorder (ADHD) with mother, father, and teacher ratings as methods.

X Demographics

The data shown below were collected from the profiles of 3 X users who shared this research output. Click here to find out more about how the information was compiled.
Mendeley readers

The data shown below were compiled from readership statistics for 60 Mendeley readers of this research output. Click here to see the associated Mendeley record.

Geographical breakdown

Country    Count   As %
Angola         1     2%
Sweden         1     2%
Unknown       58    97%

Demographic breakdown

Readers by professional status    Count   As %
Student > Ph. D. Student             21    35%
Researcher                            8    13%
Student > Doctoral Student            6    10%
Student > Master                      4     7%
Lecturer                              3     5%
Other                                 8    13%
Unknown                              10    17%
Readers by discipline                 Count   As %
Psychology                               30    50%
Arts and Humanities                       4     7%
Social Sciences                           4     7%
Business, Management and Accounting       3     5%
Computer Science                          2     3%
Other                                     7    12%
Unknown                                  10    17%
Attention Score in Context

This research output has an Altmetric Attention Score of 2. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 31 October 2014.
All research outputs: #14,660,995 of 22,768,097 outputs
Outputs from Frontiers in Psychology: #15,860 of 29,681 outputs
Outputs of similar age: #142,083 of 260,456 outputs
Outputs of similar age from Frontiers in Psychology: #280 of 377 outputs
Altmetric has tracked 22,768,097 research outputs across all sources so far. This one is in the 35th percentile – i.e., 35% of other outputs scored the same or lower than it.
So far Altmetric has tracked 29,681 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 12.5. This one is in the 45th percentile – i.e., 45% of its peers scored the same or lower than it.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age we can compare this Altmetric Attention Score to the 260,456 tracked outputs that were published within six weeks on either side of this one in any source. This one is in the 44th percentile – i.e., 44% of its contemporaries scored the same or lower than it.
We're also able to compare this research output to 377 others from the same source and published within six weeks on either side of this one. This one is in the 24th percentile – i.e., 24% of its contemporaries scored the same or lower than it.
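The percentiles above can be approximated from the ranks shown earlier. The sketch below is a back-of-the-envelope reconstruction, not Altmetric's actual algorithm: the function name `rank_to_percentile` is made up for illustration, and because Altmetric counts outputs that "scored the same or lower", ties can shift its exact figure by a point or two relative to this naive formula.

```python
import math

def rank_to_percentile(rank: int, total: int) -> int:
    """Approximate share of outputs ranked at or below this one,
    as a whole-number percentile (ties ignored)."""
    return math.floor(100 * (1 - rank / total))

# "All research outputs": rank #14,660,995 of 22,768,097
print(rank_to_percentile(14_660_995, 22_768_097))  # 35, matching "35th percentile"
```

For the source- and age-restricted comparisons the same formula lands within a point or two of the quoted percentiles, with the gap attributable to tied scores.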