A comparison of human and computer marking of short free-text student responses
ARTICLE

Computers & Education, Volume 55, Number 2. ISSN 0360-1315. Publisher: Elsevier Ltd.

Abstract

The computer marking of short-answer free-text responses of around a sentence in length has been found to be at least as good as that of six human markers. The marking accuracy of three separate computerised systems has been compared: one system (Intelligent Assessment Technologies FreeText Author) is based on computational linguistics, whilst two (Regular Expressions and OpenMark) are based on the algorithmic manipulation of keywords. In all three cases, the development of high-quality response matching has been achieved by the use of real student responses to developmental versions of the questions, and FreeText Author and OpenMark have been found to produce marking of broadly similar accuracy. Reasons for lack of accuracy in human marking and in each of the computer systems are discussed.
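To illustrate the kind of keyword-based response matching the abstract refers to, the minimal Python sketch below shows accept/reject matching of a short free-text answer with regular expressions. The question, mark scheme, and patterns are hypothetical examples, not the rules used in the study:

```python
import re

# Hypothetical question: "Why does a metal spoon feel colder than a wooden
# spoon at the same temperature?"  The patterns are illustrative only and
# are not taken from the paper's actual mark schemes.

ACCEPT_PATTERNS = [
    # Credit responses saying that metal conducts/transfers heat (better/faster).
    re.compile(r"\bmetal\b.*\b(conduct\w*|transfer\w*)\b.*\bheat\b", re.IGNORECASE),
    re.compile(r"\b(better|good|faster)\b.*\bconductor\b", re.IGNORECASE),
]

REJECT_PATTERNS = [
    # Refuse credit if the response asserts that the metal really is colder.
    re.compile(r"\bmetal\b.*\b(is|are)\b.*\bcolder\b", re.IGNORECASE),
]

def mark_response(response: str) -> bool:
    """Return True if the response matches an accepting pattern and no
    rejecting pattern -- the basic accept/reject step of keyword matching."""
    if any(p.search(response) for p in REJECT_PATTERNS):
        return False
    return any(p.search(response) for p in ACCEPT_PATTERNS)

if __name__ == "__main__":
    print(mark_response("Metal conducts heat away from your hand faster."))  # True
    print(mark_response("The metal spoon is colder than the wooden one."))   # False
```

In the study described by the abstract, answer-matching rules of this general kind were developed and refined using real student responses to developmental versions of the questions; the sketch shows only the basic matching step.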

Citation

Butcher, P. G., & Jordan, S. E. (2010). A comparison of human and computer marking of short free-text student responses. Computers & Education, 55(2), 489-499. Elsevier Ltd.

This record was imported from Computers & Education on January 29, 2019. Computers & Education is a publication of Elsevier.

Full text is available on ScienceDirect: http://dx.doi.org/10.1016/j.compedu.2010.02.012
