A new approach towards marking large-scale complex assessments: Developing a distributed marking system that uses an automatically scaffolding and rubric-targeted interface for guided peer-review
Assessing Writing (2015)
Currently, complex tasks incur significant costs to mark, and these costs become exorbitant for courses with large numbers of students (e.g., in MOOCs). Large-scale assessments therefore currently depend on automated scoring systems. However, these systems tend to work best in assessments where correct responses can be explicitly defined; scoring becomes considerably more challenging for tasks that require deeper analysis and richer responses.
Structured peer-grading can be reliable, but the diversity inherent in very large classes can be a weakness for peer-grading systems, since it raises the objection that peer-reviewers may lack qualifications matching the level of the task being assessed. Distributed marking offers a way to handle both the volume and the complexity of these assessments.
We propose a solution in which peer scoring is assisted by a guidance system to improve peer review and increase the efficiency of large-scale marking of complex tasks. The system involves developing an engine that automatically scaffolds the target paper based on predefined rubrics, so that relevant content and indicators of higher-level thinking skills are framed and drawn to the marker's attention. Eventually, we aim to establish that the resulting scores are comparable to those produced by expert raters.
- Guided peer review
- Online assessment
- Distributed marking
- Automated scoring
Publication Date: April 2015
Citation Information: Alvin Vista, Esther Care and Patrick Griffin. "A new approach towards marking large-scale complex assessments: Developing a distributed marking system that uses an automatically scaffolding and rubric-targeted interface for guided peer-review." Assessing Writing, Vol. 24 (2015), pp. 1–15. ISSN: 1075-2935.
Available at: http://works.bepress.com/alvin-vista/6/