Background: Peer assessment of performance in the objective structured clinical examination (OSCE) is emerging as a learning instrument. While peers can provide reliable scores, there may be a trade-off with students' learning. The purpose of this study is to evaluate a peer-based OSCE as a viable assessment instrument, to assess its potential to promote learning, and to explore the interplay between these two roles.

Methods: A total of 334 medical students completed an 11-station OSCE from 2015 to 2016. Each station had 1–2 peer examiners (PEs) and one faculty examiner (FE). Examinees were rated on a 7-point scale across 5 dimensions: Look, Feel, Move, Special Tests and Global Impression. In 2016, students participated in voluntary focus groups to provide qualitative feedback on the OSCE. The authors analysed the assessment data and transcripts of the focus group discussions.

Results: Overall, PEs awarded higher ratings than FEs; sources of variance were similar across the 2 years, with unique variance consistently the largest source; and reliability (rφ) was generally low. Focus group analysis revealed four themes: Conferring with Faculty Examiners, Difficulty Rating Peers, Insider Knowledge, and Observing and Scoring.

Conclusions: Although peer assessment was not reliable for evaluating OSCE performance, PEs perceived it to be beneficial for their learning. The insight gained into examination technique and self-appraisal of skills allows students to understand expectations in clinical situations and to plan approaches to self-assessment of competence.
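For context, the dependability coefficient rφ reported above is the standard index of absolute reliability from generalizability theory; the abstract does not state the authors' exact design, but for an assumed one-facet persons-by-raters design it is conventionally computed as

$$
\Phi = \frac{\sigma^2_p}{\sigma^2_p + \dfrac{\sigma^2_r + \sigma^2_{pr,e}}{n_r}}
$$

where $\sigma^2_p$ is the person (true-score) variance, $\sigma^2_r$ is the rater variance, $\sigma^2_{pr,e}$ is the person-by-rater interaction confounded with residual ("unique") error, and $n_r$ is the number of raters. A consistently large unique-variance component, as reported in the Results, drives Φ toward zero.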
Available at: http://works.bepress.com/caitlin-cassidy/5/