Collapsing factors in multitrait-multimethod models: Examining consequences of a mismatch between measurement design and model

Frontiers in Psychology: Quantitative Psychology and Measurement
Abstract

Models of confirmatory factor analysis (CFA) are frequently applied to examine the convergent validity of scores obtained from multiple raters or methods in so-called multitrait-multimethod (MTMM) investigations. Many applications of CFA-MTMM and similarly structured models result in solutions in which at least one method (or specific) factor shows non-significant loading or variance estimates. Eid et al. (2008) distinguished between MTMM measurement designs with interchangeable (randomly selected) vs. structurally different (fixed) methods and showed that each type of measurement design implies specific CFA-MTMM measurement models. In the current study, we hypothesized that some of the problems that are commonly seen in applications of CFA-MTMM models may be due to a mismatch between the underlying measurement design and fitted models. Using simulations, we found that models with M method factors (where M is the total number of methods) and unconstrained loadings led to a higher proportion of solutions in which at least one method factor became empirically unstable when these models were fit to data generated from structurally different methods. The simulations also revealed that commonly used model goodness-of-fit criteria frequently failed to identify incorrectly specified CFA-MTMM models. We discuss implications of these findings for other complex CFA models in which similar issues occur, including nested (bifactor) and latent state-trait models.
Citation Information

Christian Geiser, Jacob Bishop, and Ginger Lockhart. "Collapsing factors in multitrait-multimethod models: Examining consequences of a mismatch between measurement design and model." Frontiers in Psychology: Quantitative Psychology and Measurement, Vol. 6 (2015).
Available at: http://works.bepress.com/christian-geiser/20/