Article
Human versus Automated Essay Scoring: A Critical Review
Arab World English Journal (2018)
  • Beata Lewis Sevcikova, Arab Society of English Language Studies
Abstract
Over the last 30 years, numerous scholars have described possible changes in the marking of writing assignments. This paper reflects those developments: it charts the paths recently taken in the field, evaluates automated and human essay scoring systems in academic environments, and analyzes the implications of both approaches. In recent years, the ways and opportunities for giving feedback have changed as computer programs have become more widely used to assess students' writing. Numerous researchers have studied computerized feedback and its potential, analyzing problems such as the quality, validity, and reliability of this type of feedback. This critical review examines two major types of academic writing support. The objective of this literature-based study is to examine the potential of human and automated proofreading support for teaching and learning purposes.
Keywords
  • assessment,
  • rubrics,
  • feedback,
  • writing,
  • automated essay scoring,
  • human raters
Publication Date
June 15, 2018 (Summer issue)
DOI
https://dx.doi.org/10.24093/awej/vol9no2.11
Citation Information
Beata Lewis Sevcikova. "Human versus Automated Essay Scoring: A Critical Review." Arab World English Journal, Vol. 9, Iss. 2 (2018), pp. 157-174. ISSN: 2229-9327.
Available at: http://works.bepress.com/arabworldenglishjournal-awej/487/