Automated writing evaluation (AWE) entered academic writing pedagogy with the promise of enhancing writing development through individualized formative feedback. However, despite evidence of positive impact (Stevenson, 2016), AWE technologies have been vehemently criticized because, to writing teachers, the engines running in the background are black boxes (Herrington & Moran, 2012) that evaluate writing based on aggregated quantifiable text features (Shermis & Burstein, 2013). Writing, however, is essentially about meaning making and reflects the rhetorical conventions of different social and academic genres (Perelman, 2012). This criticism calls for revisiting a challenging yet foundational question for academic writing teachers: how can we, as stakeholders in AWE-assisted writing support, ensure that AWE technologies help us focus appropriately on the traits of writing that matter most as communicative practice?
Available at: http://works.bepress.com/elena_cotos/33/
This presentation is published as Cotos, E., Advancing writing analytics methodologies: a hybrid approach to analyzing errors in automated rhetorical feedback. Presented at the 2021 Writing Analytics Virtual Symposium: Incubating Writing Analytics Research in the Time of COVID-19, 18–27 May 2021 (online). Posted with permission.