Contribution to Book
Why Large‐Scale Assessments Use Scaling and Item Response Theory
Implementation of Large-Scale Education Assessments (2017)
  • Raymond J Adams, Australian Council for Educational Research (ACER)
  • Alla Berezner, Australian Council for Educational Research (ACER)
Abstract
Because raw scores obtained from assessment instruments are not directly amenable to statistical analysis, nor to valid and reliable comparisons across students, schools, states or countries, or over time, most large-scale assessments (LSAs) use item response models to scale cognitive data. In this chapter, Raymond Adams and Alla Berezner describe and illustrate three reasons for using IRT: IRT models (i) support the processes of test development and construct validation, (ii) facilitate the use of multiple rotated test forms within one assessment to increase content coverage and (iii), when used in conjunction with multiple imputation methodology, enable the maintenance of scales that are comparable across countries and over time.
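To make the abstract's reference to item response models concrete, the following is a minimal sketch of the simplest IRT model, the Rasch (one-parameter logistic) model, which is widely used in LSAs. The function name and parameter values are illustrative, not taken from the chapter; the model itself states that the probability of a correct response depends only on the difference between a person's ability and an item's difficulty, both expressed on a common logit scale.

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """Probability of a correct response under the Rasch (1PL) model,
    given person ability theta and item difficulty b (logit scale)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A person whose ability equals the item's difficulty answers correctly
# with probability 0.5; higher ability raises that probability.
print(rasch_prob(0.0, 0.0))  # 0.5
print(rasch_prob(1.0, 0.0))  # ~0.73
```

Because ability and difficulty sit on the same scale, estimates from different rotated test forms can be linked through common items, which is what makes the comparisons across forms, countries and cycles described in the abstract possible.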
Keywords
  • Large-scale assessment
  • Scaling
  • Item response theory
  • Cognitive measurement
  • Test construction
  • Measurement techniques
  • International surveys
  • Primary and secondary education
Publication Date
2017
Editor
Petra Lietz (Editor), John Cresswell (Editor), Keith F. Rust (Editor), Raymond J. Adams (Editor)
Publisher
Wiley
ISBN
9781118762479 (PDF) 9781118762493 (ebk) 9781118336090 (print)
Citation Information
Raymond J. Adams and Alla Berezner. "Why Large‐Scale Assessments Use Scaling and Item Response Theory." In Implementation of Large-Scale Education Assessments. Chichester, UK: Wiley, 2017.
Available at: http://works.bepress.com/ray_adams/60/