A Standardized Rubric for Evaluating Webquest Design: Reliability Analysis of ZUNAL Webquest Design Rubric
ARTICLE

Unal, Z., USF St. Petersburg, United States; Bodur, Y., Georgia Southern University, United States; Unal, A., Usak University, Turkey

JITE-Research Volume 11, Number 1, ISSN 1539-3585 Publisher: Informing Science Institute

Abstract

Current literature provides many examples of rubrics used to evaluate the quality of webquest designs. However, the reliability of these rubrics has not yet been researched. This is the first study to fully characterize and assess the reliability of a webquest evaluation rubric. The ZUNAL rubric was created to draw on the strengths of the currently available rubrics and was improved based on comments in the literature and feedback from educators. The ZUNAL webquest design rubric was developed in three stages. First, a large set of rubric items was generated from the operational definitions and the existing literature on currently available webquest rubrics (version 1). This step included item selections from the three most widely used rubrics, created by Bellofatto, Bohl, Casey, Krill & Dodge (2001), March (2004), and eMints (2006). Second, students (n=15) enrolled in a graduate course titled “Technology and Data” were asked to assess the clarity of each rubric item on a four-point scale ranging from (1) “not at all” to (4) “very well/very clear.” This scale was used only during the construction of the ZUNAL rubric and was therefore not part of the analyses presented in this study. The students were also asked to supply written feedback on items that were unclear or unrelated to the constructs, and items were revised based on this feedback (version 2). Finally, K-12 classroom teachers (n=23) involved with webquest creation and implementation in their classrooms were invited to complete a survey asking them to rate the rubric elements for value and clarity; items were again revised based on the feedback. At the conclusion of this three-step process, the webquest design rubric consisted of nine main indicators with 23 items underlying the proposed webquest rubric constructs: title (4 items), introduction (1 item), task (2 items), process (3 items), resources (3 items), evaluation (2 items), conclusion (2 items), teacher page (2 items), and overall design (4 items). A three-point response scale of “unacceptable,” “acceptable,” and “target” was utilized. After the rubric was created, twenty-three participants were given a week to evaluate three pre-selected webquests of varying quality using the latest version of the rubric. A month later, the evaluators were asked to re-evaluate the same webquests. To investigate the internal consistency and intrarater (test-retest) reliability of the ZUNAL webquest design rubric, a series of statistical procedures was employed. The statistical analyses conducted on the ZUNAL webquest rubric pointed to its acceptable reliability. It is reasonable to expect that the consistency observed in the rubric scores was due to the comprehensiveness of the rubric and the clarity of its items and descriptors. Because there are no existing studies focusing on the reliability of webquest design rubrics, the researchers were unable to make comparisons to discuss the merits of the ZUNAL rubric in relation to others at this point.
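
The abstract names the analyses (internal consistency and intrarater, test-retest reliability) without detailing how they were computed. The sketch below is a minimal, illustrative way such estimates could be obtained for rubric data of this shape, assuming a raters-by-items score matrix, Cronbach's alpha for internal consistency, and a Pearson correlation of each rater's total scores across the two rating occasions; the data, function names, and choice of statistics are hypothetical and do not reproduce the authors' reported analysis.

    import numpy as np

    def cronbach_alpha(scores):
        # scores: 2-D array, rows = completed evaluations, columns = rubric items
        # (e.g., 23 items each scored 1-3: unacceptable / acceptable / target).
        k = scores.shape[1]
        item_variances = scores.var(axis=0, ddof=1)       # variance of each item
        total_variance = scores.sum(axis=1).var(ddof=1)   # variance of total scores
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    def test_retest_r(totals_time1, totals_time2):
        # Pearson correlation between each rater's total score on the two occasions.
        return np.corrcoef(totals_time1, totals_time2)[0, 1]

    # Hypothetical data: 23 raters x 23 items on a 1-3 scale, rated on two occasions.
    rng = np.random.default_rng(42)
    time1 = rng.integers(1, 4, size=(23, 23)).astype(float)
    time2 = np.clip(time1 + rng.integers(-1, 2, size=(23, 23)), 1, 3)

    print(f"Cronbach's alpha (occasion 1): {cronbach_alpha(time1):.2f}")
    print(f"Intrarater test-retest r: {test_retest_r(time1.sum(axis=1), time2.sum(axis=1)):.2f}")

Depending on the rating design, consensus estimates or intraclass correlations (see Stemler, 2004, and Cortina, 1993, in the reference list) may be preferable to these simple estimates.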

Citation

Unal, Z., Bodur, Y., & Unal, A. (2012). A Standardized Rubric for Evaluating Webquest Design: Reliability Analysis of ZUNAL Webquest Design Rubric. Journal of Information Technology Education: Research, 11(1), 169-183. Informing Science Institute.

References

  1. Abbit, J., & Ophus, J. (2008). What we know about the impacts of webquests: A Review of Research. AACE Journal, 16(4), 441-456.
  2. Abu-Elwan, R. (2007). The use of webquest to enhance the mathematical problem-posing skills of preservice teachers. International Journal for Technology in Mathematics Education, 14(1), 31-39.
  3. Allen, J., & Street, M. (2007). The quest for deeper learning: An investigation into the impact of a knowledge pooling webquest in primary initial teacher training. British Journal of Educational Technology, 38(6), 1102-1112.
  4. Barrett, P. (2001). Assessing the reliability of rating data. Retrieved July 11, 2011, from http://www.liv.ac.uk/~pbarrett/rater.pdf
  5. Barroso, M., & Coutinho, C. (2010). A webquest for adult learners: A report on a biology course. In J. Sanchez & K. Zhang (Eds.), Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education (pp. 1566-1569). Chesapeake, VA: AACE.
  6. Bartoshesky, A., & Kortecamp, K. (2003). Webquest: An instructional tool that engages adult learners, promotes higher level thinking and deepens content knowledge. In C. Crawford et al. (Eds.), Proceedings of Society for Information Technology & Teacher Education International Conference (pp. 1951-1954).
  7. Bresciani, M.J., Zelna, C.L., & Anderson, J.A. (2004). Techniques for assessing student learning and development: A handbook for practitioners. Washington, DC: NASPA.
  8. Colton, D.A., Gao, X., Harris, D.J., Kolen, M.J., Martinovich-Barhite, D., Wang, T., & Welch, L. (1997). Reliability issues with performance assessments: A collection of papers. ACT Research Report Series 97-3.
  9. Cortina, J.M. (1993). What is coefficient alpha? An examination of theory and applications. Journal of Applied Psychology, 78, 98-104.
  10. Crocker, L., & Algina, J. (1986). Introduction to classical and modern test theory. Orlando, FL: Harcourt Brace Jovanovich.
  11. Dodge, B. (1995). Some thoughts about webquests. Retrieved July 11, 2011, from http://edweb.sdsu.edu/courses/edtec596/aboutwebquests.html
  12. Dodge, B. (1997). A rubric for evaluating webquests. Retrieved July 11, 2011, from http://webquest.sdsu.edu/webquestrubric.html
  13. Dodge, B. (1999). Webquest taxonomy: A taxonomy of tasks. Retrieved July 11, 2011, from http://edweb.sdsu.edu/WebQuest/taskonomy.html
  14. Dodge, B. (2001). FOCUS: Five rules for writing a great webquest. Learning and Leading with Technology, 28(8), 6-9, 58.
  15. Glass, G.V., & Hopkins, K.H. (1996). Statistical methods in education and psychology. Boston: Allyn and Bacon.
  16. Gorrow, T., Bing, J., & Royer, R. (2004). Going in circles: The effects of a webquest on the achievement and attitudes of prospective teacher candidates in education foundations. Paper presented at the Society for Information Technology and Teacher Education International Conference 2004, Atlanta, GA.
  17. Johnson, R.L., Penny, J., & Gordon, B. (2000). The relation between score resolution methods and interrater reliability: An empirical study of an analytic scoring rubric. Applied Measurement in Education, 13, 121-138.
  18. Laborda, J.G. (2009). Using webquests for oral communication in English as a foreign language for tourism studies. Educational Technology & Society, 12(1), 258-270.
  19. Lim, S., & Hernandez, P. (2007). The webquest: An illustration of instructional technology implementation in MFT training. Contemporary Family Therapy, 29, 163-175.
  20. Maddux, C.D., & Cummings, R. (2007). Webquests: Are they developmentally appropriate? The Educational Forum, 71(2), 117-127.
  21. March, T. (2003). The learning power of webquests. Educational Leadership, 61(4), 42-47.
  22. March, T. (2004). Criteria for assessing best webquests. Retrieved July 11, 2011, from http://bestwebquests.com/bwq/matrix.asp
  23. Moskal, B.M. (2000). Scoring rubrics: What, when, and how? Practical Assessment, Research & Evaluation, 7(3). Retrieved July 11, 2011, from http://pareonline.net/getvn.asp?v=7&n=3
  24. Moskal, B., & Leydens, J.A. (2000). Scoring rubric development: Validity and reliability. Practical Assessment, Research & Evaluation, 7(10). Retrieved July 11, 2011, from http://pareonline.net/getvn.asp?v=7&n=10
  25. Oliver, D. (2010). The effect and value of a WebQuest activity on weather in a 5th grade classroom (Doctoral dissertation, Idaho State University). Dissertation Abstracts International, 71-04, Section A, p. 1274. (Publication No. AAI3405042; ISBN 9781109700404)
  26. Perkins, R., & McKnight, M.L. (2005). Teachers’ attitudes toward webquests as a method of teaching. Computers in the Schools, 22(1/2), 123-133.
  27. Peterson, C.L., & Koeck, D.C. (2001). When students create their own webquests. Learning and Leading with Technology, 29(1), 10-15.
  28. Stemler, S.E. (2004). A comparison of consensus, consistency, and measurement approaches to estimating interrater reliability. Practical Assessment, Research & Evaluation, 9.
  29. Tran, T. (2010). Using webquest in teaching environmental education in Vietnam. In D. Gibson & B. Dodge (Eds.), Proceedings of Society for Information Technology & Teacher Education International Conference 2010 (pp. 3740-3744). Chesapeake, VA: AACE.
  30. Tsai, S. (2006). Students' perceptions of English learning through EFL webquest. Paper presented at The World Conference on Educational Multimedia, Hypermedia and Telecommunications 2006, Orlando, FL.
  31. Unal, Z., & Leung, C. (2010). Identifying preservice and inservice teachers’ conceptions of using webquests for classroom instruction via ZUNAL WebQuest Maker. In J. Sanchez & K. Zhang (Eds.), Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2010 (pp. 2748-2757). Chesapeake, VA: AACE.