How Useful Are Educational Questions Generated by Large Language Models?
Communications in Computer and Information Science
  • Sabina Elkins, McGill University
  • Ekaterina Kochmar, Institut de Recherche en Immunologie et en Cancérologie de l’Université de Montréal & Mohamed bin Zayed University of Artificial Intelligence
  • Iulian Serban, Institut de Recherche en Immunologie et en Cancérologie de l’Université de Montréal
  • Jackie C.K. Cheung, McGill University
Document Type
Conference Proceeding
Abstract

Controllable text generation (CTG) by large language models has great potential to transform education for teachers and students alike. In particular, high-quality and diverse question generation can dramatically reduce teachers' workload and improve the quality of their educational content. Recent work in this domain has made progress on generation itself, but has not shown whether real teachers judge the generated questions as sufficiently useful for the classroom, or whether the questions instead contain errors and/or pedagogically unhelpful content. We conduct a human evaluation with teachers to assess the quality and usefulness of outputs produced by combining CTG with question taxonomies (Bloom's taxonomy and a difficulty taxonomy). The results demonstrate that the generated questions are of high quality and sufficiently useful, showing their promise for widespread use in the classroom setting.

DOI
10.1007/978-3-031-36336-8_83
Publication Date
June 30, 2023
Keywords
  • Controllable Text Generation
  • Personalized Learning
  • Prompting
  • Question Generation

Citation Information
S. Elkins, E. Kochmar, I. Serban, and J. C. K. Cheung, “How useful are educational questions generated by large language models?,” in Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners, Doctoral Consortium and Blue Sky (Communications in Computer and Information Science), 2023, pp. 536–542. doi: 10.1007/978-3-031-36336-8_83