Article
Learner fit in scaling up automated writing evaluation
International Journal of Computer-Assisted Language Learning and Teaching (2013)
  • Elena Cotos, Iowa State University
  • Sarah Huffman, Iowa State University
Abstract
Valid evaluations of automated writing evaluation (AWE) design, development, and implementation should integrate the learners’ perspective in order to ensure the attainment of desired outcomes. This paper explores the learner fit of the Research Writing Tutor (RWT), an emerging AWE tool tested with L2 writers at an early stage of its development. Employing a mixed-methods approach, the authors sought to answer questions regarding the nature of learners’ interactional modifications with RWT and their perceptions of the appropriateness of its feedback on the communicative effectiveness of research article Introduction discourse. The findings reveal that RWT’s move-, step-, and sentence-level feedback provides various opportunities for learners to engage with the revision task at a useful level of difficulty and to stimulate interaction appropriate to their individual characteristics. The authors also discuss insights about usefulness, user-friendliness, and trust as important concepts inherent to appropriateness.
Publication Date
2013
Citation Information
Elena Cotos and Sarah Huffman. "Learner fit in scaling up automated writing evaluation" International Journal of Computer-Assisted Language Learning and Teaching Vol. 3 Iss. 3 (2013)
Available at: http://works.bepress.com/elena_cotos/11/