Web-based student peer review: A research summary
SoTL Commons (2014)
  • Edward F Gehringer, North Carolina State University at Raleigh
Interest in Web-based peer-review systems dates back nearly 20 years. Systems were built to let students give feedback to other students, mainly to help them improve their writing. But students are not necessarily effective peer reviewers. Left to their own devices, they will submit cursory reviews, which are not very helpful to their peers. Techniques have been developed to improve the quality of reviews. Calibration is one such technique: students are asked to assess samples of writing that have previously been assessed by experts, and must submit an evaluation “close enough” to the experts’ before they are allowed to review their peers’ work. Another approach is meta-reviewing: students are graded on their reviewing as well as their writing, either by experts or by other students. Some MOOCs have employed a “crowdsourcing” approach to vetting reviews. A newer area of research is automated meta-reviewing, in which natural-language processing techniques give students formative feedback on reviews they are about to submit. There is also a debate over whether students should rate peers’ work on an absolute scale or rank it against the work of other students. This presentation summarizes findings from a broad range of research in Web-based peer review.
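The calibration gate described above can be sketched as follows. This is a minimal illustration, not the logic of any particular system; the function names and the one-point agreement threshold are hypothetical assumptions:

```python
# Hypothetical calibration gate: a student may review peers only after
# scoring expert-graded samples "close enough" to the expert scores.

def within_tolerance(student_scores, expert_scores, tolerance=1.0):
    """Return True if every student score is within `tolerance`
    points of the corresponding expert score on the same sample."""
    return all(abs(s - e) <= tolerance
               for s, e in zip(student_scores, expert_scores))

def may_review_peers(calibration_attempts, tolerance=1.0):
    """A student passes calibration if any attempt -- a pair of
    (student score list, expert score list) -- meets the tolerance."""
    return any(within_tolerance(s, e, tolerance)
               for s, e in calibration_attempts)
```

For example, under a one-point tolerance a student who rated an expert-scored sample [4, 3, 5] as [4, 4, 5] would pass, while a rating of [1, 3, 5] would not.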
Keywords
  • collaboration
  • assessment
  • peer review
  • active learning
Publication Date
March 28, 2014
Citation Information
Edward F Gehringer. "Web-based student peer review: A research summary" SoTL Commons (2014)