Article
Validity Arguments for Diagnostic Assessment Using Automated Writing Evaluation
Language Testing
  • Carol Chapelle, Iowa State University
  • Elena Cotos, Iowa State University
  • Jooyoung Lee, Iowa State University
Document Type
Article
Publication Version
Submitted Manuscript
Publication Date
1-1-2015
DOI
10.1177/0265532214565386
Abstract

Two examples demonstrate an argument-based approach to validation of diagnostic assessment using automated writing evaluation (AWE). Criterion® was developed by Educational Testing Service to analyze students' papers grammatically, providing sentence-level error feedback. An interpretive argument was developed for its use as part of the diagnostic assessment process in undergraduate university English for academic purposes (EAP) classes. The Intelligent Academic Discourse Evaluator (IADE) was developed for use in graduate university EAP classes, where the goal was to help students improve their discipline-specific writing. The validation for each was designed to support claims about the intended purposes of the assessments. We present the interpretive argument for each and show some of the data that have been gathered as backing for the respective validity arguments, which include the range of inferences one would make in claiming validity of the interpretations, uses, and consequences of diagnostic AWE-based assessments.

Comments

This is a manuscript of an article from Language Testing 32 (2015): 385, doi: 10.1177/0265532214565386. Posted with permission.

Copyright Owner
The Authors
Language
en
File Format
application/pdf
Citation Information
Carol Chapelle, Elena Cotos and Jooyoung Lee. "Validity Arguments for Diagnostic Assessment Using Automated Writing Evaluation" Language Testing Vol. 32 Iss. 3 (2015) p. 385 - 405
Available at: http://works.bepress.com/carol_chapelle/22/