Article
A Standardized Rubric for Evaluating Webquest Design: Reliability Analysis of ZUNAL Webquest Design Rubric
Journal of Information Technology Education: Research
(2012)
Abstract
Current literature provides many examples of rubrics used to evaluate the quality of webquest designs; however, the reliability of these rubrics has not yet been researched. This is the first study to fully characterize and assess the reliability of a webquest evaluation rubric. The ZUNAL rubric was created to draw on the strengths of currently available rubrics and was improved based on comments in the literature and feedback received from educators. The ZUNAL webquest design rubric was developed in three stages. First, a large set of rubric items was generated based on operational definitions and the existing literature on currently available webquest rubrics (version 1). This step included item selections from the three most widely used rubrics, created by Bellofatto, Bohl, Casey, Krill & Dodge (2001), March (2004), and eMints (2006). Second, students (n=15) enrolled in a graduate course titled “Technology and Data” were asked to assess the clarity of each rubric item on a four-point scale ranging from (1) “not at all” to (4) “very well/very clear.” This scale was used only during the construction of the ZUNAL rubric and was therefore not part of the analyses presented in this study. The students were also asked to supply written feedback on items that were unclear or unrelated to the constructs, and items were revised based on this feedback (version 2). Finally, K-12 classroom teachers (n=23) who create and implement webquests in their classrooms were invited to complete a survey rating the rubric elements for their value and clarity. Items were again revised based on the feedback.
At the conclusion of this three-step process, the webquest design rubric comprised nine main indicators with 23 items underlying the proposed rubric constructs: title (4 items), introduction (1 item), task (2 items), process (3 items), resources (3 items), evaluation (2 items), conclusion (2 items), teacher page (2 items), and overall design (4 items). A three-point response scale (“unacceptable”, “acceptable”, and “target”) was utilized. After the rubric was created, twenty-three participants were given a week to evaluate three pre-selected webquests of varying quality using the latest version of the rubric. A month later, the evaluators were asked to re-evaluate the same webquests.
To investigate the internal consistency and intrarater (test-retest) reliability of the ZUNAL webquest design rubric, a series of statistical procedures was employed. The analyses conducted on the ZUNAL rubric pointed to its acceptable reliability. It is reasonable to attribute the consistency observed in the rubric scores to the comprehensiveness of the rubric and the clarity of its items and descriptors. Because no existing studies focus on the reliability of webquest design rubrics, the researchers were unable to compare the merits of the ZUNAL rubric with those of other rubrics at this point.
Keywords
- webquest
- webquest rubric
- rubric reliability analysis
- internal consistency
- test-retest reliability
- interrater reliability
Publication Date
2012
Publisher Statement
Copyright and Open Access: https://v2.sherpa.ac.uk/id/publication/25867
Citation Information
Zafer Unal, Yasar Bodur and Aslihan Unal. "A Standardized Rubric for Evaluating Webquest Design: Reliability Analysis of ZUNAL Webquest Design Rubric." Journal of Information Technology Education: Research, Vol. 11 (2012), pp. 169-183. ISSN: 1539-3585. Available at: http://works.bepress.com/unal/9/
Creative Commons license
This work is licensed under a Creative Commons CC BY-NC International License.