A framework for predicting item difficulty in reading tests
OECD Programme for International Student Assessment (PISA)
  • Tom Lumley, ACER
  • Alla Routitsky, ACER
  • Juliette Mendelovits, ACER
  • Dara Ramalingam, ACER
Publication Date
4-1-2012
Comments

Paper presented at the Annual Meeting of the American Educational Research Association (AERA), Vancouver, 13-17 April 2012.

Abstract
Results on reading tests are typically reported on scales composed of levels, each giving a statement of student achievement or proficiency. The PISA reading scales provide broad descriptions of the skill levels associated with reading items, intended to communicate to policy makers and teachers the reading proficiency of students at different levels. However, the described scales are not explicitly tied to features that predict difficulty. Difficulty is thus treated as an empirical issue, established post hoc, while a priori estimates of item difficulty have tended to be unreliable. Understanding the features that influence the difficulty of reading tasks has the potential to help test developers, teachers and researchers interested in understanding the construct of reading. This paper presents work, conducted over a period of more than a decade, intended to provide a scheme for describing the difficulty of reading items used in PISA. Whereas the mathematics research in earlier papers in this symposium focused on mathematical competencies, the reading research concentrates on describing the reading tasks and the parts of texts that students are required to engage with.
Citation Information
Tom Lumley, Alla Routitsky, Juliette Mendelovits and Dara Ramalingam. "A framework for predicting item difficulty in reading tests" (2012)
Available at: http://works.bepress.com/juliette_mendelovits/6/