Article
Machine Scoring of Student Essays: Truth and Consequences (review)
Language Testing
  • Deborah J. Crusan, Wright State University - Main Campus
Document Type
Book Review
Publication Date
1-1-2010
Abstract

For some time, it has been claimed that a divide exists between commercial test developers and the academic community (White, 1990, 1996). Nowhere is this division more apparent than in the machine scoring of essays, also known as automated essay scoring (AES) or automated writing evaluation (AWE). In 2003, Burstein and Shermis published an edited collection on automated essay scoring (Automated essay scoring: A cross-disciplinary perspective), examining psychometric issues and explaining in detail how automated essay scorers such as e-rater®, IntelliMetric™, and Intelligent Essay Assessor (IEA) work. Subsequently, Ericsson and Haswell published Machine scoring of student essays: Truth and consequences in 2006, touted by some as the academy’s answer to the Burstein and Shermis contribution. Both collections argue their positions fervently and serve as important equalizers in the discussion of using technology to assess writing.

DOI
10.1177/0265532210363274
Citation Information
Deborah J. Crusan. "Machine Scoring of Student Essays: Truth and Consequences (review)." Language Testing, Vol. 27, Iss. 3 (2010), pp. 437-440. ISSN: 0265-5322.
Available at: http://works.bepress.com/deborah-crusan/22/