A comparison of human and computer marking of short free-text student responses
Computers & Education, Volume 55, Number 2. ISSN 0360-1315. Publisher: Elsevier Ltd
The computer marking of short-answer free-text responses of around a sentence in length has been found to be at least as good as that of six human markers. The marking accuracy of three separate computerised systems has been compared: one system (Intelligent Assessment Technologies' FreeText Author) is based on computational linguistics, whilst the other two (Regular Expressions and OpenMark) are based on the algorithmic manipulation of keywords. In all three cases, high-quality response matching was developed by using real student responses to developmental versions of the questions, and FreeText Author and OpenMark have been found to produce marking of broadly similar accuracy. Reasons for inaccuracy in human marking and in each of the computer systems are discussed.
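To illustrate the keyword-based approach the abstract describes, the sketch below shows a minimal regular-expression marker in Python. The question, pattern, and helper function are hypothetical examples for illustration only; they are not the patterns used in the Regular Expressions or OpenMark systems discussed in the paper.

```python
import re

# Hypothetical example (not from the paper): mark responses to a question
# such as "Why does a metal spoon feel colder to the touch than a wooden one?"
# The pattern accepts answers saying that metal conducts or transfers heat.
PATTERN = re.compile(
    r"\b(metal|spoon)\b.*\b(conduct\w*|transfer\w*)\b.*\bheat\b",
    re.IGNORECASE,
)

def mark(response: str) -> bool:
    """Return True if the response contains the required keywords in order."""
    return bool(PATTERN.search(response))

print(mark("The metal conducts heat away from your hand faster"))  # True
print(mark("Metal is shinier than wood"))                          # False
```

In practice, as the abstract notes, such patterns are refined iteratively against real student responses gathered from developmental versions of the questions, since hand-written keyword rules rarely anticipate the full range of phrasings students produce.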
Butcher, P.G. & Jordan, S.E. (2010). A comparison of human and computer marking of short free-text student responses. Computers & Education, 55(2), 489-499. Elsevier Ltd.
Cited By
An empirically-based, tutorial dialogue system: design, implementation and evaluation in a first year health sciences course.
Jenny McDonald, Alistair Knott, Sarah Stein & Richard Zeng, University of Otago
ASCILITE - Australian Society for Computers in Learning in Tertiary Education Annual Conference 2013 (2013) pp. 562–572