Learner Fit in Scaling Up Automated Writing Evaluation
International Journal of Computer-Assisted Language Learning and Teaching
  • Elena Cotos, Iowa State University
  • Sarah R. Huffman, Iowa State University
Document Type
Article
Publication Version
Published Version
Publication Date
1-1-2013
DOI
10.4018/ijcallt.2013070105
Abstract

Valid evaluation of automated writing evaluation (AWE) design, development, and implementation should integrate the learners’ perspective in order to ensure that desired outcomes are attained. This paper explores the learner fit quality of the Research Writing Tutor (RWT), an emerging AWE tool tested with L2 writers at an early stage of its development. Employing a mixed-methods approach, the authors sought to answer questions regarding the nature of learners’ interactional modifications with RWT and their perceptions of the appropriateness of its feedback on the communicative effectiveness of research article Introduction discourse. The findings reveal that RWT’s move-, step-, and sentence-level feedback provides learners with varied opportunities to engage with the revision task at a useful level of difficulty and stimulates interaction appropriate to their individual characteristics. The authors also discuss usefulness, user-friendliness, and trust as important concepts inherent to appropriateness.

Comments

This article is published as Cotos, E., & Huffman, S. (2013). Learner fit in scaling up automated writing evaluation. International Journal of Computer-Assisted Language Learning and Teaching, 3(3), 77-98. DOI: 10.4018/ijcallt.2013070105. Posted with permission.

Copyright Owner
IGI Global
Language
en
File Format
application/pdf
Citation Information
Cotos, E., & Huffman, S. R. (2013). Learner fit in scaling up automated writing evaluation. International Journal of Computer-Assisted Language Learning and Teaching, 3(3), 77-98.
Available at: http://works.bepress.com/elena_cotos/23/