Article
Exploring an intelligent tutoring system as a conversation-based assessment tool for reading comprehension
Behaviormetrika
  • Genghu Shi, University of Memphis
  • Anne M. Lippert, University of Memphis
  • Keith Shubeck, University of Memphis
  • Ying Fang, University of Memphis
  • Su Chen, University of Memphis
  • Philip Pavlik, University of Memphis
  • Daphne Greenberg, Georgia State University
  • Arthur C. Graesser, University of Memphis
Abstract

Reading comprehension is often assessed by having students read passages and then administering a test of their understanding of the text. Shorter assessments may fail to give a full picture of comprehension ability, while more thorough ones can be time-consuming and costly. This study used data from a conversational intelligent tutoring system (AutoTutor) to assess reading comprehension ability in 52 low-literacy adults who interacted with the system. We analyzed participants’ accuracy and time spent answering questions during conversations in lessons that targeted four theoretical components of comprehension: Word, Textbase, Situation Model, and Rhetorical Structure. Accuracy and answer response time were analyzed to track adults’ proficiency on the comprehension components, and we examined whether the four components predicted reading grade level. We discuss the results with respect to the advantages that a conversational intelligent tutoring system may provide over traditional assessment tools, and the linking of theory to practice in adult literacy.

Publication Date
10-1-2018
Citation Information
Genghu Shi, Anne M. Lippert, Keith Shubeck, Ying Fang, et al. "Exploring an intelligent tutoring system as a conversation-based assessment tool for reading comprehension." Behaviormetrika Vol. 45 Iss. 2 (2018), pp. 615–633.
Available at: http://works.bepress.com/anne-lippert/4/