Article
Rethinking assessment in response to generative artificial intelligence
Medical Education (2023)
  • Jacob Pearce, Australian Council for Educational Research (ACER)
  • Neville Chiavaroli, Australian Council for Educational Research (ACER)
Abstract
The use of decision-making support tools during assessments, such as electronic differential diagnosis in examinations, is just the tip of the iceberg when it comes to how technology is currently changing assessment practice. We have reached a transformative stage in the development of artificial intelligence (AI). We can no longer rely on non-invigilated assessments and submitted ‘artefacts’ to demonstrate student learning and competence. This places significant long-term demands on educators, course coordinators and curriculum designers, forcing us to rethink assessment approaches. Going forward, we see an important distinction between ‘assisted’ assessments and ‘unassisted’ assessments. With the recent increase and facilitation of virtual assessment through convenient online platforms, and the new challenge that AI poses to non-invigilated assessment formats, we think the time has come for the ‘rehabilitation’ and re-acceptance of the oral format as a highly valuable and unique form of assessment in medical education. Nevertheless, generative AI need not threaten the validity or trustworthiness of our assessments in either formative or summative contexts. Rather, it can add fidelity and nuance to assisted assessment while bringing greater focus and purposefulness to unassisted assessment.
Keywords
  • Artificial intelligence
  • Student assessment
  • Medical education
  • Oral tests
Publication Date
2023
DOI
https://doi.org/10.1111/medu.15092
Citation Information
Jacob Pearce and Neville Chiavaroli. "Rethinking assessment in response to generative artificial intelligence." Medical Education (2023). ISSN: 1365-2923
Available at: http://works.bepress.com/jacob_pearce/80/