Author
Simon French
Abstract
Multiple choice tests are widely, though not exclusively, used in the British public examinations system. The analysis of results from such tests has been the subject of much debate, particularly concerning the appropriateness of latent trait models. In this paper I adopt an entirely subjectivist approach. I believe the purpose of a public examination is not to measure in some objective sense the performances of candidates, but rather to report the judgements of examiners as to those performances. It is the examiners’ judgements that are modelled by marks and grades, not something directly about the candidates themselves. Adopting this viewpoint, I make two groups of comments pertinent to multiple choice tests. First, if one is to use latent trait models to analyse candidate responses, then one must be clear as to the meaning of the parameters within those models. I argue that latent trait variables are technical devices which encode certain expectations about the data; beyond that, they have no physical meaning. On this view, latent trait models are appropriate for critically evaluating assumptions about examination data, but inappropriate for ranking candidates’ work in order to report and grade individual performances. Second, one should consider in what form to elicit responses from the candidates. De Finetti suggested that candidates should respond with their probability of the correctness of each possible answer to an item and that these responses should be assessed by means of a scoring rule. However, such schemes face several problems: the difficulty of getting candidates, still at school, to accept the inevitability of uncertainty in their lives, and the problem of calibration, since candidates are unlikely to be equally good probability assessors. Perhaps more serious is the difficulty that a scoring rule which encourages a candidate to reveal his beliefs honestly may not reflect the manner in which the examiners wish to judge the candidate.
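The scoring-rule idea in the abstract can be made concrete with a small numerical sketch. The Python snippet below is purely illustrative and is not taken from the chapter: it uses a quadratic (Brier-type) scoring rule, one of the standard proper scoring rules in the de Finetti tradition, to show why a candidate maximises their own expected score by reporting the probabilities they actually believe rather than exaggerating towards a single answer. All function names and the example probabilities are invented for illustration.

```python
# Illustrative sketch only (not from French, 1987): a quadratic scoring rule
# for one multiple choice item, and a check that honest reporting of beliefs
# gives a higher expected score than exaggerating towards one answer.

import numpy as np

def quadratic_score(reported, correct_index):
    """Quadratic (Brier-type) score for one item.

    reported: probabilities assigned by the candidate to each possible answer.
    correct_index: index of the answer that is in fact correct.
    Higher is better; the realised maximum of 1.0 is obtained only by
    placing probability 1 on the answer that turns out to be correct.
    """
    outcome = np.zeros_like(reported, dtype=float)
    outcome[correct_index] = 1.0
    return 1.0 - np.sum((reported - outcome) ** 2)

def expected_score(reported, believed):
    """The candidate's own expected score, taken over their beliefs."""
    return sum(p * quadratic_score(reported, k) for k, p in enumerate(believed))

# A candidate who believes the four answers are correct with probabilities
# 0.6, 0.2, 0.1, 0.1 does best, by their own lights, reporting exactly that.
beliefs = np.array([0.6, 0.2, 0.1, 0.1])
honest = expected_score(beliefs, beliefs)                       # 0.42
exaggerated = expected_score(np.array([1.0, 0.0, 0.0, 0.0]), beliefs)  # 0.20
print(f"expected score, honest report:      {honest:.3f}")
print(f"expected score, exaggerated report: {exaggerated:.3f}")
```

Note that this properness argument only guarantees honest revelation of belief; as the abstract points out, it says nothing about whether the resulting scores reflect the manner in which the examiners wish to judge the candidate's work.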
Suggested Citation
Simon French, 1987. "The Analysis of Multiple Choice Tests in Educational Assessment," Springer Books, in: R. Viertl (ed.), Probability and Bayesian Statistics, pages 175-182, Springer.
Handle: RePEc:spr:sprchp:978-1-4613-1885-9_18
DOI: 10.1007/978-1-4613-1885-9_18
Download full text from publisher
To our knowledge, this item is not available for download. To find whether it is available, there are three options:
1. Check below whether another version of this item is available online.
2. Check on the provider's web page whether it is in fact available.
3. Perform a search for a similarly titled item that would be available.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:sprchp:978-1-4613-1885-9_18. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help by adding them using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.