Observer ratings of instructional quality: Do they fulfill what they promise?
ARTICLE

Praetorius, A.K., Lenske, G. & Helmke, A.

Learning and Instruction, Volume 22, Number 6, ISSN 0959-4752. Publisher: Elsevier Ltd.

Abstract

Despite considerable interest in the topic of instructional quality in research as well as practice, little is known about the quality of its assessment. Using generalizability analysis as well as content analysis, the present study investigates how reliably and validly instructional quality is measured by observer ratings. Twelve trained raters judged 57 videotaped lesson sequences with regard to aspects of domain-independent instructional quality. Additionally, 3 of these sequences were judged by 390 untrained raters (i.e., student teachers and teachers). Depending on scale level and dimension, 16–44% of the variance in ratings could be attributed to instructional quality, whereas rater bias accounted for 12–40% of the variance. Although the trained raters referred more often to aspects considered essential for instructional quality, this was not reflected in the reliability of their ratings. The results indicate that observer ratings should be treated in a more differentiated manner in the future.

Citation

Praetorius, A.K., Lenske, G. & Helmke, A. (2012). Observer ratings of instructional quality: Do they fulfill what they promise? Learning and Instruction, 22(6), 387-400. Elsevier Ltd. Retrieved August 12, 2024.

This record was imported from Learning and Instruction on January 29, 2019. Learning and Instruction is a publication of Elsevier.

Full text is available on ScienceDirect: http://dx.doi.org/10.1016/j.learninstruc.2012.03.002

Keywords