The Effect of Different Scoring Rules on Responding Multiple-Choice Items
Guessing and the associated scoring rules are potential sources of construct-irrelevant variance in multiple-choice tests, because they introduce random variation and factors unrelated to the trait the test intends to measure. In this work, we present a new theoretical model, within the framework of Classical Test Theory, that combines these elements. To test the model, we manipulated the scoring rule associated with multiple-choice items, applying different penalty levels for wrong responses, while omissions always received a score of zero. We then investigated how the scoring rule affects both the respondent's uncertainty about whether to answer or omit an item and the final decision made. We used a mixed-effects logistic regression model for the data analysis. As a major finding, the proportion of items answered declined significantly when the scoring rule penalized errors, even with low penalties and even when respondents expressed low levels of uncertainty.
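The penalty manipulation described above corresponds to classical formula scoring. As a brief illustration of why any nonzero penalty can shift the answer-or-omit decision (the specific penalty values used in the study are not stated in this abstract, so the symbols below are generic), the expected score from blind guessing on a $k$-option item can be written as:

```latex
% Expected score S from blind guessing on a k-option item,
% with +1 for a correct answer, -w for a wrong answer, and 0 for omission:
\[
  \mathbb{E}[S] \;=\; \frac{1}{k}\cdot 1 \;+\; \frac{k-1}{k}\cdot(-w)
\]
% Setting w = 1/(k-1) yields E[S] = 0, so a risk-neutral examinee is
% indifferent between guessing blindly and omitting; any w > 0 makes
% omission (score 0) more attractive than a pure guess.
```

Under this framing, even a small penalty $w$ lowers the expected value of answering relative to the guaranteed zero for omission, which is consistent with the reported decline in response rates under penalized scoring.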
Published in: Canada International Conference on Education, 2017
- Date of Conference: 26-29 June, 2017
- DOI: 10.2053/CICE.2017.0160
- Electronic ISBN: 978-1-908320-83-4
- Conference Location: University of Toronto Mississauga, Canada