IDEAS home Printed from https://ideas.repec.org/p/hhs/oruesi/2025_009.html

Cutoff Point in Multiple Choice Examinations using Negative Marking or Number of Correct Scoring - An Analysis of Statistical Power

Author

Listed:

  • Karlsson, Niklas
  • Lunander, Anders
Abstract

Given the presence of a cutoff score in a multiple-choice test, a challenge for the test maker is to choose a scoring method that maximizes the probability of a passing score for those with adequate knowledge, given a prescribed risk of passing those with insufficient knowledge. Within the setting of a true-false test, we compare the statistical power of the standard number-correct method (one point if the correct answer is marked and zero otherwise) with that of the negative marking method (no answer yields zero points, a correct answer yields one point, and an incorrect answer is penalized by one point). Our comparison indicates that the two methods have about equal power when the test taker exhibits a small variance in her degree of confidence across the questions. For larger variance, the negative marking method is superior to the standard method. However, the more the test taker fails to calibrate her level of confidence, the lower the statistical power of negative marking. Which method has the higher power thus depends on the magnitude of mis-calibration; underrating affects the power of negative marking less than overrating does.
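The comparison described in the abstract can be illustrated with a small Monte Carlo sketch. All names and parameters below are hypothetical, not taken from the paper: each question carries a confidence level `p_i` (the probability the test taker's preferred answer is correct), and under negative marking a risk-neutral taker is assumed to answer only when `p_i > 0.5`, since the expected gain `2*p_i - 1` is otherwise non-positive.

```python
import random

def simulate_scores(p, method, n_sim=10000, seed=1):
    """Monte Carlo scores for a true-false test.

    p      : per-question probabilities of answering correctly
             (the test taker's calibrated confidence levels)
    method : 'NC' (number correct: +1 / 0) or
             'NM' (negative marking: +1 / -1, question skipped
             when p_i <= 0.5, i.e. answering has no expected gain)
    """
    rng = random.Random(seed)
    scores = []
    for _ in range(n_sim):
        s = 0
        for pi in p:
            if method == 'NM' and pi <= 0.5:
                continue  # risk-neutral taker omits the question
            if rng.random() < pi:
                s += 1       # correct answer: one point under both rules
            elif method == 'NM':
                s -= 1       # incorrect answer: one-point penalty under NM
        scores.append(s)
    return scores

def power(p, cutoff, method):
    """Estimated probability of reaching the cutoff score."""
    scores = simulate_scores(p, method)
    return sum(s >= cutoff for s in scores) / len(scores)
```

With this sketch one can contrast a low-variance confidence profile (e.g. all `p_i = 0.7`) against a high-variance profile with the same mean (e.g. half the questions at `p_i = 0.95`, half at `p_i = 0.45`) at a common cutoff, mirroring the variance effect the abstract describes.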

Suggested Citation

  • Karlsson, Niklas & Lunander, Anders, 2025. "Cutoff Point in Multiple Choice Examinations using Negative Marking or Number of Correct Scoring - An Analysis of Statistical Power," Working Papers 2025:9, Örebro University, School of Business.
  • Handle: RePEc:hhs:oruesi:2025_009

    Download full text from publisher

    File URL: https://www.oru.se/globalassets/oru-sv/institutioner/hh/workingpapers/workingpapers2025/wp-9-2025.pdf
    File Function: Full text
    Download Restriction: no

    References listed on IDEAS

    1. Hong, Yili, 2013. "On computing the distribution function for the Poisson binomial distribution," Computational Statistics & Data Analysis, Elsevier, vol. 59(C), pages 41-51.
    2. David Budescu & Yuanchao Bo, 2015. "Analyzing Test-Taking Behavior: Decision Theory Meets Psychometric Theory," Psychometrika, Springer;The Psychometric Society, vol. 80(4), pages 1105-1122, December.
    3. Pelin Akyol & James Key & Kala Krishna, 2022. "Hit or Miss? Test Taking Behavior in Multiple Choice Exams," Annals of Economics and Statistics, GENES, issue 147, pages 3-50.
    4. Zapechelnyuk, Andriy, 2015. "An axiomatization of multiple-choice test scoring," Economics Letters, Elsevier, vol. 132(C), pages 24-27.
    5. Frederic Lord, 1953. "An application of confidence intervals and of maximum likelihood to the estimation of an examinee's ability," Psychometrika, Springer;The Psychometric Society, vol. 18(1), pages 57-76, March.
    6. Maya Bar-Hillel & David Budescu & Yigal Attali, 2005. "Scoring and keying multiple choice tests: A case study in irrationality," Mind & Society: Cognitive Studies in Economics and Social Sciences, Springer;Fondazione Rosselli, vol. 4(1), pages 3-12, June.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Fu, Jingcheng & Zhang, Xing & Zhong, Songfa, 2025. "Hedging-based scoring rules for multiple-choice questions," Journal of Economic Behavior & Organization, Elsevier, vol. 237(C).
    2. Jef Vanderoost & Rianne Janssen & Jan Eggermont & Riet Callens & Tinne De Laet, 2018. "Elimination testing with adapted scoring reduces guessing and anxiety in multiple-choice assessments, but does not increase grade average in comparison with negative marking," PLOS ONE, Public Library of Science, vol. 13(10), pages 1-27, October.
    3. Qian Wu & Monique Vanerum & Anouk Agten & Andrés Christiansen & Frank Vandenabeele & Jean-Michel Rigo & Rianne Janssen, 2021. "Certainty-Based Marking on Multiple-Choice Items: Psychometrics Meets Decision Theory," Psychometrika, Springer;The Psychometric Society, vol. 86(2), pages 518-543, June.
    4. Pau Balart & Lara Ezquerra & Iñigo Hernandez-Arenaz, 2022. "Framing effects on risk-taking behavior: evidence from a field experiment in multiple-choice tests," Experimental Economics, Springer;Economic Science Association, vol. 25(4), pages 1268-1297, September.
    5. Espinosa Maria Paz & Gardeazabal Javier, 2020. "The Gender-bias Effect of Test Scoring and Framing: A Concern for Personnel Selection and College Admission," The B.E. Journal of Economic Analysis & Policy, De Gruyter, vol. 20(3), pages 1-23, July.
    6. Alexis Direr, 2025. "Efficient Scoring of Multiple-Choice Tests [Notation efficace des questions à choix multiples]," Post-Print hal-05384180, HAL.
    7. Alexis DIRER, 2020. "Efficient scoring of multiple-choice tests," LEO Working Papers / DR LEO 2752, Orleans Economics Laboratory / Laboratoire d'Economie d'Orleans (LEO), University of Orleans.
    8. Merkle, Edgar C. & Steyvers, Mark & Mellers, Barbara & Tetlock, Philip E., 2017. "A neglected dimension of good forecasting judgment: The questions we choose also matter," International Journal of Forecasting, Elsevier, vol. 33(4), pages 817-832.
    9. J. Ignacio Conde-Ruiz & Juan José Ganuza & Manuel García, 2020. "Gender Gap and Multiple Choice Exams in Public Selection Processes," Hacienda Pública Española / Review of Public Economics, IEF, vol. 235(4), pages 11-28, December.
    10. Arun Chandrasekhar & Robert Townsend & Juan Pablo Xandri, 2019. "Financial Centrality and the Value of Key Players," Working Papers 2019-26, Princeton University, Economics Department.
    11. Bos, Hayo & Baas, Stef & Boucherie, Richard J. & Hans, Erwin W. & Leeftink, Gréanne, 2025. "Bed census prediction combining expert opinion and patient statistics," Omega, Elsevier, vol. 133(C).
    12. Mauricio Romero & Álvaro Riascos & Diego Jara, 2015. "On the Optimality of Answer-Copying Indices," Journal of Educational and Behavioral Statistics, vol. 40(5), pages 435-453, October.
    13. Musa Çağlar & Sinan Gürel, 2024. "Public R&D project portfolio selection under expenditure uncertainty," Annals of Operations Research, Springer, vol. 341(1), pages 375-399, October.
    14. Anaya, Lina & Iriberri, Nagore & Rey-Biel, Pedro & Zamarro, Gema, 2022. "Understanding performance in test taking: The role of question difficulty order," Economics of Education Review, Elsevier, vol. 90(C).
    15. Arun G. Chandrasekhar & Robert Townsend & Juan Pablo Xandri, 2018. "Financial Centrality and Liquidity Provision," NBER Working Papers 24406, National Bureau of Economic Research, Inc.
    16. Deligiannis, Michalis & Liberopoulos, George, 2023. "Dynamic ordering and buyer selection policies when service affects future demand," Omega, Elsevier, vol. 118(C).
    17. Heiko Karle & Dirk Engelmann & Martin Peitz, 2022. "Student performance and loss aversion," Scandinavian Journal of Economics, Wiley Blackwell, vol. 124(2), pages 420-456, April.
    18. Iriberri, Nagore & Rey-Biel, Pedro, 2021. "Brave boys and play-it-safe girls: Gender differences in willingness to guess in a large scale natural field experiment," European Economic Review, Elsevier, vol. 131(C).
    19. Neal, Zachary & Domagalski, Rachel & Yan, Xiaoqin, 2020. "Party Control as a Context for Homophily in Collaborations among US House Representatives, 1981 -- 2015," OSF Preprints qwdxs, Center for Open Science.
    20. Rasmus A. X. Persson, 2023. "Theoretical evaluation of partial credit scoring of the multiple-choice test item," METRON, Springer;Sapienza Università di Roma, vol. 81(2), pages 143-161, August.

    More about this item


    JEL classification:

    • A22 - General Economics and Teaching - - Economic Education and Teaching of Economics - - - Undergraduate
    • C12 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General - - - Hypothesis Testing: General


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:hhs:oruesi:2025_009. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact the person in charge (email available below). General contact details of provider: https://edirc.repec.org/data/ieoruse.html.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.