Article
Adaptive testing for psychological assessment: how many items are enough to run an adaptive testing algorithm?
Journal of Applied Measurement (2013)
  • Michaela M Wagner-Menghin
  • Geoff N Masters, Australian Council for Educational Research (ACER)
Abstract
Although the principles of adaptive testing were established in the psychometric literature many years ago (e.g., Weiss, 1977), and the practice of adaptive testing is well established in educational assessment, it is not yet widespread in psychological assessment. One obstacle to adaptive psychological testing is a lack of clarity about how many items are needed to run an adaptive algorithm. This study explores the relationship between item bank size, test length, and measurement precision. Simulated adaptive test runs (allowing a maximum of 30 items per person) from an item bank with 10 items per ability level (levels spaced .5 logits apart, 150 items total) yield a standard error of measurement (SEM) of .47 (.39) after an average of 20 (29) items for 85-93% (64-82%) of a simulated rectangular sample. Expanding the bank to 20 items per level (300 items total) did not significantly improve the algorithm's performance. Even with a small item bank (5 items per ability level, 75 items total), it is possible to reach the same SEM as a conventional test with fewer items, or a better SEM with the same number of items.
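To make the simulation design concrete, the sketch below implements a generic Rasch-based adaptive run with a 30-item cap and a SEM stopping rule. It is a minimal illustration of the kind of algorithm the abstract describes, not the authors' implementation; the bank layout, the maximum-information item-selection rule, and the Newton-Raphson estimation routine are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bank mirroring the abstract's design: 15 ability levels
# spaced .5 logits apart, 10 Rasch items per level (150 items total).
levels = np.arange(-3.5, 4.0, 0.5)
bank = np.repeat(levels, 10)

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def estimate_theta(responses, difficulties, n_iter=25):
    """Newton-Raphson ML ability estimate; SEM = 1/sqrt(test information).
    The estimate is bounded to [-4, 4] because ML diverges for perfect
    or zero scores."""
    r = np.asarray(responses, dtype=float)
    b = np.asarray(difficulties, dtype=float)
    theta = 0.0
    for _ in range(n_iter):
        p = rasch_p(theta, b)
        info = np.sum(p * (1.0 - p))  # Fisher information at theta
        theta += np.clip(np.sum(r - p) / max(info, 1e-6), -1.0, 1.0)
        theta = float(np.clip(theta, -4.0, 4.0))
    p = rasch_p(theta, b)
    sem = 1.0 / np.sqrt(max(np.sum(p * (1.0 - p)), 1e-6))
    return theta, sem

def run_cat(true_theta, bank, max_items=30, sem_target=0.47):
    """Administer items adaptively until the SEM target or item cap is hit."""
    available = list(range(len(bank)))
    responses, difficulties = [], []
    theta, sem = 0.0, np.inf
    for _ in range(max_items):
        # Rasch item information peaks where difficulty matches ability,
        # so pick the unused item closest to the current estimate.
        idx = min(available, key=lambda i: abs(bank[i] - theta))
        available.remove(idx)
        difficulties.append(bank[idx])
        responses.append(rng.random() < rasch_p(true_theta, bank[idx]))
        theta, sem = estimate_theta(responses, difficulties)
        if sem <= sem_target:
            break
    return theta, sem, len(responses)

theta_hat, sem, n_used = run_cat(true_theta=1.2, bank=bank)
print(f"estimate={theta_hat:.2f}, SEM={sem:.2f}, items={n_used}")
```

Passing `sem_target=0.39` instead runs the stricter of the two stopping conditions reported in the abstract.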
Keywords
  • Testing
  • Psychometric literature
  • Adaptive testing
  • Assessment
  • Education
  • Tests
  • Item bank
  • Psychological testing
  • Algorithm
  • Measurement
Publication Date
2013
Citation Information
Michaela M Wagner-Menghin and Geoff N Masters. "Adaptive testing for psychological assessment: how many items are enough to run an adaptive testing algorithm?" Journal of Applied Measurement Vol. 14 Iss. 2 (2013)
Available at: http://works.bepress.com/geoff_masters/164/