The Performance of IRT Model Selection Methods with Mixed-Format Tests
Applied Psychological Measurement (2012)
  • Tiffany A. Whittaker, University of Texas at Austin
  • Wanchen Chang, University of Texas at Austin
  • Barbara G. Dodd, University of Texas at Austin
Abstract
When tests consist of multiple-choice and constructed-response items, researchers are confronted with the question of which item response theory (IRT) model combination will appropriately represent the data collected from these mixed-format tests. This simulation study examined the performance of six model selection criteria, including the likelihood ratio test, Akaike’s information criterion (AIC), corrected AIC, Bayesian information criterion, Hannan and Quinn’s information criterion, and consistent AIC, with respect to correct model selection among a set of three competing mixed-format IRT models (i.e., one-parameter logistic/partial credit [1PL/PC], two-parameter logistic/generalized partial credit [2PL/GPC], and three-parameter logistic/generalized partial credit [3PL/GPC]). The criteria were able to correctly select less parameterized IRT models, including the PC, 1PL, and 1PL/PC models. In contrast, the criteria were less able to correctly select more parameterized IRT models, including the GPC, 3PL, and 3PL/GPC models. Implications of the findings and recommendations are discussed.
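For reference, the six criteria compared in the study have standard textbook definitions that are not restated on this page. The Python sketch below shows how each information criterion penalizes model complexity given a model's maximized log-likelihood (ln L), number of free parameters (k), and sample size (n); the lowest value indicates the preferred model. All numeric values in the demo are hypothetical placeholders, not results from the article.

import math

def aic(loglik, k):
    # Akaike's information criterion: -2 ln L + 2k
    return -2 * loglik + 2 * k

def aicc(loglik, k, n):
    # Corrected AIC: AIC plus a small-sample penalty term
    return aic(loglik, k) + (2 * k * (k + 1)) / (n - k - 1)

def bic(loglik, k, n):
    # Bayesian information criterion: -2 ln L + k ln(n)
    return -2 * loglik + k * math.log(n)

def hqic(loglik, k, n):
    # Hannan and Quinn's information criterion: -2 ln L + 2k ln(ln n)
    return -2 * loglik + 2 * k * math.log(math.log(n))

def caic(loglik, k, n):
    # Consistent AIC: -2 ln L + k(ln(n) + 1)
    return -2 * loglik + k * (math.log(n) + 1)

def lrt_statistic(loglik_reduced, loglik_full):
    # Likelihood ratio test statistic for nested models; compare against
    # a chi-square distribution with df = difference in free parameters
    return -2 * (loglik_reduced - loglik_full)

if __name__ == "__main__":
    n = 1000  # hypothetical sample size
    # Hypothetical (log-likelihood, parameter count) pairs for the three
    # competing mixed-format models; illustrative numbers only.
    models = {
        "1PL/PC":  (-14250.0, 45),
        "2PL/GPC": (-14180.0, 74),
        "3PL/GPC": (-14175.0, 104),
    }
    for name, (ll, k) in models.items():
        print(f"{name}: AIC={aic(ll, k):.1f}  AICC={aicc(ll, k, n):.1f}  "
              f"BIC={bic(ll, k, n):.1f}  HQIC={hqic(ll, k, n):.1f}  "
              f"CAIC={caic(ll, k, n):.1f}")

Note how the penalty terms diverge: AIC charges a flat 2 per parameter, while BIC, HQIC, and CAIC scale their penalties with sample size, which is why sample-size-sensitive criteria tend to favor less parameterized models, consistent with the pattern the abstract reports.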
Keywords
  • IRT model selection
  • mixed-format IRT
  • likelihood ratio test
  • AIC
  • AICC
  • BIC
  • HQIC
  • CAIC
Publication Date
May 2012
DOI
10.1177/0146621612440305
Citation Information
Tiffany A. Whittaker, Wanchen Chang, and Barbara G. Dodd. "The Performance of IRT Model Selection Methods with Mixed-Format Tests." Applied Psychological Measurement, Vol. 36, Iss. 3 (2012), pp. 159–180.
Available at: http://works.bepress.com/wanchen_chang/6/