Measures of Partial Knowledge and Unexpected Responses in Multiple-Choice Tests
ARTICLE

S.H. Chang, P.C. Lin & Z.C. Lin

Journal of Educational Technology & Society, Volume 10, Number 4, ISSN 1176-3647, e-ISSN 1436-4522

Abstract

This study investigates differences in examinee performance between elimination testing with partial scoring and conventional dichotomous scoring of multiple-choice tests implemented on a computer-based system. Elimination testing, which uses the same set of multiple-choice items, rewards examinees with partial knowledge over those who are simply guessing. This study provides a computer-based test and item analysis system to ease the grading and item analysis that follow elimination tests. The Rasch model, based on item response theory for dichotomous scoring, and the partial credit model, based on graded responses for elimination testing, form the kernel of the test-diagnosis subsystem, which estimates examinee ability and item-difficulty parameters. This study draws the following conclusions: (1) examinees taking computer-based tests (CBTs) perform the same as those taking paper-and-pencil tests (PPTs); (2) conventional scoring does not measure the same knowledge as partial scoring; (3) partial scoring of multiple-choice items lowers the number of unexpected examinee responses; and (4) question topic and type do not influence examinee performance on either PPTs or CBTs. (Contains 2 figures and 9 tables.)
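For readers unfamiliar with the two models named in the abstract, the following is a minimal sketch of their response-probability functions, not the authors' implementation: `rasch_prob` gives the probability of a correct response under the Rasch model, and `pcm_probs` gives the category probabilities under Masters' partial credit model. The function names and the example parameter values are illustrative assumptions.

```python
import numpy as np

def rasch_prob(theta, b):
    """P(correct) under the Rasch model: logistic in (ability - difficulty)."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def pcm_probs(theta, deltas):
    """Category probabilities under the partial credit model.

    theta: examinee ability; deltas: step difficulties delta_1..delta_m
    for an item scored in categories 0..m.
    """
    # Cumulative sums of (theta - delta_k); the score-0 category has an
    # empty sum, contributing exp(0) to the normalizing denominator.
    steps = np.concatenate(([0.0], np.cumsum(theta - np.asarray(deltas))))
    expnum = np.exp(steps - steps.max())  # subtract max for numerical stability
    return expnum / expnum.sum()

# Illustrative values: ability 0.5, item difficulty -0.2 (dichotomous item),
# and three step difficulties for a four-category elimination-scored item.
print(rasch_prob(0.5, -0.2))
print(pcm_probs(0.5, [-1.0, 0.0, 1.2]))
```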

Citation

Chang, S.H., Lin, P.C. & Lin, Z.C. (2007). Measures of Partial Knowledge and Unexpected Responses in Multiple-Choice Tests. Journal of Educational Technology & Society, 10(4), 95-109. Retrieved August 23, 2019.

This record was imported from ERIC on April 18, 2013.

ERIC is sponsored by the Institute of Education Sciences (IES) of the U.S. Department of Education.

Copyright for this record is held by the content creator. For more details see ERIC's copyright policy.
