Article
Modeling Teacher Ratings of Online Resources: A Human-Machine Approach to Quality
American Educational Research Association
  • Mimi Recker, Utah State University
  • Heather Leary, Utah State University
  • Andrew Walker, Utah State University
  • Anne R. Diekema, Utah State University
  • Philipp Wetzler, University of Colorado at Boulder
  • Tamara Sumner, University of Colorado at Boulder
  • James Martin, University of Colorado at Boulder
Document Type
Conference Paper
Publication Date
4-1-2011
Abstract

In education, the scalable deployment of media-rich online resources supports peer production in ways that promise to radically transform teaching and learning (CRA, 2005; Pea et al., 2008). Online educational repositories such as the Digital Library for Earth System Education (DLESE.org) and the National Science Digital Library (NSDL.org) collect and curate online learning resources created for a wide range of educational audiences and subject areas (McArthur & Zia, 2008). Through a simple, web-based authoring tool called the Instructional Architect (IA.usu.edu), teachers locate and share educational resources and activities in an IA project. These IA projects can then be viewed, copied, and adapted by other IA users, in ways that support innovative teacher peer production. A vexing problem for such initiatives remains the elusive notion of quality: in peer production environments, how does one identify quality online content? Moreover, how does one do so in sustainable, cost-effective, and scalable ways? Previous work (Bethard et al., 2009) presented an innovative approach that uses machine learning models to automatically assess the quality and pedagogic utility of educational digital library resources, and demonstrated the feasibility and accuracy of automatic quality assessments for a single STEM domain and audience level: high school Earth science. This work reports recent efforts to extend these models to a broader range of STEM topics and grade levels by applying them to IA projects and comparing model outputs to quality assessments made by K-12 teachers. Because the resources differ in nature between the IA (peer-produced) and DLESE (expert-curated), the results of this study provide insight into the generalizability of this machine learning approach and its potential for facilitating teacher peer production.
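To make the general approach concrete, the sketch below shows one way a model of this kind could be set up: text features extracted from a resource are used to predict a teacher-assigned quality rating. This is a minimal illustration only, not the authors' actual model; the example resource texts, the ratings, and the TF-IDF-plus-ridge-regression pipeline are all assumptions made for demonstration.

```python
# Illustrative sketch (not the authors' model): predict teacher quality
# ratings of online resources from resource text, using TF-IDF features
# and ridge regression in scikit-learn. All data below is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

resource_texts = [
    "Interactive plate tectonics simulation with a teacher guide",
    "Scanned worksheet with no instructions or learning objectives",
    "Lesson plan on the water cycle with an assessment rubric",
    "Broken applet page containing only placeholder text",
]
teacher_ratings = [4.5, 2.0, 4.0, 1.5]  # hypothetical 1-5 quality ratings

# Pipeline: turn resource text into TF-IDF vectors, then regress ratings.
model = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))

# Cross-validated error against the (hypothetical) teacher ratings; with
# real data this is where model outputs would be compared to human raters.
scores = cross_val_score(model, resource_texts, teacher_ratings,
                         cv=2, scoring="neg_mean_absolute_error")
print("Mean absolute error:", -scores.mean())

# Fit on all data and score a new, unseen resource description.
model.fit(resource_texts, teacher_ratings)
print(model.predict(["Hands-on earthquake lab with a grading rubric"]))
```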

Comments
Paper presented at the American Educational Research Association annual meeting in April 2011, New Orleans, LA.
Citation Information
Recker, M., Leary, H., Walker, A., Diekema, A., Wetzler, P., Sumner, T., & Martin, J. (2011). Modeling Teacher Ratings of Online Resources: A Human-Machine Approach to Quality. Paper presented at the American Educational Research Association annual meeting, New Orleans, LA.