Do Rubrics Help? An Exploratory Study of Teachers’ Perception and Use of Rubrics for Evaluating Open Educational Resource Quality
PROCEEDINGS

M. Yuan, M. Recker & A.R. Diekema, Utah State University, United States

Society for Information Technology & Teacher Education International Conference (SITE 2015), Las Vegas, NV, United States. ISBN 978-1-939797-13-1. Publisher: Association for the Advancement of Computing in Education (AACE), Chesapeake, VA

Abstract

Many rubrics have been developed to help teachers evaluate the quality of Open Educational Resources (OER). This study examines the utility of different rubrics based on teachers’ perception and use. In particular, five pre-service teachers evaluated twenty OER using the quality indicators contained in four published rubrics. The teachers were also asked to rate their perception of the utility of these indicators and rubrics. Results showed wide disparity in teachers’ levels of agreement when applying different quality indicators in the rubrics, suggesting that certain quality indicators and rubrics better facilitated evaluation than others. In contrast, teachers uniformly reported positive attitudes about the utility of quality indicators in all four rubrics, expressing the belief that these rubrics could help them in evaluating resources for classroom use.
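The abstract's central quantitative result concerns inter-rater agreement among the five teachers when applying each rubric's quality indicators. The paper does not include code, but the sketch below illustrates one way such agreement could be quantified for a single indicator using Fleiss' kappa; the indicator, rating scale, and data here are hypothetical, and the study itself may have relied on a different statistic (for example, weighted kappa or an intraclass correlation).

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an (items x categories) matrix of rating counts.

    Each row gives, for one resource, how many raters chose each rating
    category; every row must sum to the same number of raters.
    """
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts[0].sum()

    # Per-item agreement: proportion of rater pairs that agree on that item.
    p_item = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_item.mean()

    # Chance agreement from the overall category proportions.
    p_cat = counts.sum(axis=0) / (n_items * n_raters)
    p_e = np.sum(p_cat ** 2)

    return (p_bar - p_e) / (1 - p_e)

# Hypothetical data: 5 teachers rate 20 resources on one quality indicator
# using a 3-point scale (each row of counts sums to 5 raters).
rng = np.random.default_rng(0)
ratings = rng.multinomial(5, [0.6, 0.3, 0.1], size=20)
print(f"Fleiss' kappa for this indicator: {fleiss_kappa(ratings):.2f}")
```

Computed per indicator, a statistic like this would make the "wide disparity in teachers' levels of agreement" directly comparable across indicators and rubrics.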

Citation

Yuan, M., Recker, M. & Diekema, A.R. (2015). Do Rubrics Help? An Exploratory Study of Teachers’ Perception and Use of Rubrics for Evaluating Open Educational Resource Quality. In D. Rutledge & D. Slykhuis (Eds.), Proceedings of SITE 2015--Society for Information Technology & Teacher Education International Conference (pp. 1424-1431). Las Vegas, NV, United States: Association for the Advancement of Computing in Education (AACE). Retrieved February 21, 2019 from LearnTechLib.
