IDEAS home Printed from https://ideas.repec.org/p/hhs/oruesi/2025_009.html

Cutoff Point in Multiple Choice Examinations using Negative Marking or Number of Correct Scoring - An Analysis of Statistical Power

Author

Listed:

  • Karlsson, Niklas (Örebro University School of Business)
  • Lunander, Anders (Örebro University School of Business)

Abstract

Given a cutoff score in a multiple-choice test, a challenge for the test maker is to choose the scoring method that maximizes the probability of a passing score for test takers with adequate knowledge, given a prescribed risk of passing those with insufficient knowledge. Within the setting of a true-false test, we compare the statistical power of the standard number-correct method - one point for a correct answer and zero otherwise - with that of the negative marking method - one point for a correct answer, minus one point for an incorrect answer, and zero points for no answer. Our comparison indicates that the two methods have about equal power when the test taker exhibits a small variance in her degree of confidence across questions; for larger variance, negative marking is superior to the standard method. However, the more the test taker mis-calibrates her level of confidence, the lower the statistical power of negative marking, so which method has the higher power depends on the magnitude of mis-calibration. Underrating one's knowledge reduces the power of negative marking less than overrating does.
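The power comparison described in the abstract can be illustrated with a small Monte Carlo sketch. This is not the paper's model (which involves confidence-based answering behavior and mis-calibration); it is a simplified illustration in which every question is answered, per-question probabilities of a correct answer are drawn from an assumed distribution, and the cutoff is set from the prescribed risk of passing an insufficiently knowledgeable taker. All parameter values (0.5, 0.75, the 0.15 spread, 50 questions) are hypothetical choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_scores(p_correct, n_questions, n_sims, negative_marking, rng):
    # Draw per-question outcomes; under negative marking an incorrect
    # answer costs one point, under number-correct it costs nothing.
    correct = rng.random((n_sims, n_questions)) < p_correct
    if negative_marking:
        return correct.sum(axis=1) - (~correct).sum(axis=1)
    return correct.sum(axis=1)

def estimate_power(p_insufficient, p_adequate, n_questions=50,
                   n_sims=100_000, alpha=0.05, negative_marking=False,
                   rng=rng):
    # Cutoff: the score exceeded by at most a fraction alpha of
    # insufficient-knowledge takers (the prescribed risk).
    null_scores = simulate_scores(p_insufficient, n_questions, n_sims,
                                  negative_marking, rng)
    cutoff = np.quantile(null_scores, 1 - alpha)
    # Power: fraction of adequate-knowledge takers scoring above the cutoff.
    alt_scores = simulate_scores(p_adequate, n_questions, n_sims,
                                 negative_marking, rng)
    return (alt_scores > cutoff).mean()

# Adequate taker: per-question confidence varies across the 50 questions.
p_adequate = np.clip(rng.normal(0.75, 0.15, size=50), 0.05, 0.95)
power_nc = estimate_power(0.5, p_adequate)                        # number-correct
power_nm = estimate_power(0.5, p_adequate, negative_marking=True) # negative marking
print(f"number-correct power: {power_nc:.3f}")
print(f"negative-marking power: {power_nm:.3f}")
```

Because this sketch forces an answer to every question, the negative-marking score is an affine transform of the number-correct score and the two powers roughly coincide; the differences the paper analyzes arise once omitting answers and (mis-)calibrated confidence enter the model.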

Suggested Citation

  • Karlsson, Niklas & Lunander, Anders, 2025. "Cutoff Point in Multiple Choice Examinations using Negative Marking or Number of Correct Scoring - An Analysis of Statistical Power," Working Papers 2025:9, Örebro University, School of Business.
  • Handle: RePEc:hhs:oruesi:2025_009

    Download full text from publisher

    File URL: https://www.oru.se/globalassets/oru-sv/institutioner/hh/workingpapers/workingpapers2025/wp-9-2025.pdf
    File Function: Full text
    Download Restriction: no

    References listed on IDEAS

    1. Pelin Akyol & James Key & Kala Krishna, 2022. "Hit or Miss? Test Taking Behavior in Multiple Choice Exams," Annals of Economics and Statistics, GENES, issue 147, pages 3-50.
    2. Zapechelnyuk, Andriy, 2015. "An axiomatization of multiple-choice test scoring," Economics Letters, Elsevier, vol. 132(C), pages 24-27.
    3. Hong, Yili, 2013. "On computing the distribution function for the Poisson binomial distribution," Computational Statistics & Data Analysis, Elsevier, vol. 59(C), pages 41-51.
    4. Frederic Lord, 1953. "An application of confidence intervals and of maximum likelihood to the estimation of an examinee's ability," Psychometrika, Springer;The Psychometric Society, vol. 18(1), pages 57-76, March.
    5. Maya Bar-Hillel & David Budescu & Yigal Attali, 2005. "Scoring and keying multiple choice tests: A case study in irrationality," Mind & Society: Cognitive Studies in Economics and Social Sciences, Springer;Fondazione Rosselli, vol. 4(1), pages 3-12, June.
    6. David Budescu & Yuanchao Bo, 2015. "Analyzing Test-Taking Behavior: Decision Theory Meets Psychometric Theory," Psychometrika, Springer;The Psychometric Society, vol. 80(4), pages 1105-1122, December.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Pau Balart & Lara Ezquerra & Iñigo Hernandez-Arenaz, 2022. "Framing effects on risk-taking behavior: evidence from a field experiment in multiple-choice tests," Experimental Economics, Springer;Economic Science Association, vol. 25(4), pages 1268-1297, September.
    2. Espinosa Maria Paz & Gardeazabal Javier, 2020. "The Gender-bias Effect of Test Scoring and Framing: A Concern for Personnel Selection and College Admission," The B.E. Journal of Economic Analysis & Policy, De Gruyter, vol. 20(3), pages 1-23, July.
    3. Alexis DIRER, 2020. "Efficient scoring of multiple-choice tests," LEO Working Papers / DR LEO 2752, Orleans Economics Laboratory / Laboratoire d'Economie d'Orleans (LEO), University of Orleans.
    4. Jef Vanderoost & Rianne Janssen & Jan Eggermont & Riet Callens & Tinne De Laet, 2018. "Elimination testing with adapted scoring reduces guessing and anxiety in multiple-choice assessments, but does not increase grade average in comparison with negative marking," PLOS ONE, Public Library of Science, vol. 13(10), pages 1-27, October.
    5. Qian Wu & Monique Vanerum & Anouk Agten & Andrés Christiansen & Frank Vandenabeele & Jean-Michel Rigo & Rianne Janssen, 2021. "Certainty-Based Marking on Multiple-Choice Items: Psychometrics Meets Decision Theory," Psychometrika, Springer;The Psychometric Society, vol. 86(2), pages 518-543, June.
    6. Merkle, Edgar C. & Steyvers, Mark & Mellers, Barbara & Tetlock, Philip E., 2017. "A neglected dimension of good forecasting judgment: The questions we choose also matter," International Journal of Forecasting, Elsevier, vol. 33(4), pages 817-832.
    7. Bos, Hayo & Baas, Stef & Boucherie, Richard J. & Hans, Erwin W. & Leeftink, Gréanne, 2025. "Bed census prediction combining expert opinion and patient statistics," Omega, Elsevier, vol. 133(C).
    8. Anaya, Lina & Iriberri, Nagore & Rey-Biel, Pedro & Zamarro, Gema, 2022. "Understanding performance in test taking: The role of question difficulty order," Economics of Education Review, Elsevier, vol. 90(C).
    9. Arun G. Chandrasekhar & Robert Townsend & Juan Pablo Xandri, 2018. "Financial Centrality and Liquidity Provision," NBER Working Papers 24406, National Bureau of Economic Research, Inc.
    10. Deligiannis, Michalis & Liberopoulos, George, 2023. "Dynamic ordering and buyer selection policies when service affects future demand," Omega, Elsevier, vol. 118(C).
    11. Iriberri, Nagore & Rey-Biel, Pedro, 2021. "Brave boys and play-it-safe girls: Gender differences in willingness to guess in a large scale natural field experiment," European Economic Review, Elsevier, vol. 131(C).
    12. Neal, Zachary & Domagalski, Rachel & Yan, Xiaoqin, 2020. "Party Control as a Context for Homophily in Collaborations among US House Representatives, 1981 -- 2015," OSF Preprints qwdxs, Center for Open Science.
    13. Róbert Pethes & Levente Kovács, 2023. "An Exact and an Approximation Method to Compute the Degree Distribution of Inhomogeneous Random Graph Using Poisson Binomial Distribution," Mathematics, MDPI, vol. 11(6), pages 1-24, March.
    14. Van der Auweraer, Sarah & Boute, Robert, 2019. "Forecasting spare part demand using service maintenance information," International Journal of Production Economics, Elsevier, vol. 213(C), pages 138-149.
    15. Ellen Sewell, 2017. "Should I guess?," Applied Economics Letters, Taylor & Francis Journals, vol. 24(17), pages 1214-1217, October.
    16. Bahar Cennet Okumuşoğlu & Beste Basciftci & Burak Kocuk, 2024. "An Integrated Predictive Maintenance and Operations Scheduling Framework for Power Systems Under Failure Uncertainty," INFORMS Journal on Computing, INFORMS, vol. 36(5), pages 1335-1358, September.
    17. Sylvain Béal & Sylvain Ferrières, 2019. "Examination design : an axiomatic approach," Working Papers hal-04771390, HAL.
    18. Frederic Lord, 1971. "A theoretical study of two-stage testing," Psychometrika, Springer;The Psychometric Society, vol. 36(3), pages 227-242, September.
    19. Dandan Chen & Jinming Zhang, 2020. "A Review of “A Course in Item Response Theory and Modeling with Stata” by Raykov and Marcoulides," Psychometrika, Springer;The Psychometric Society, vol. 85(3), pages 837-840, September.
    21. Ogasawara, Haruhiko, 2013. "Asymptotic cumulants of ability estimators using fallible item parameters," Journal of Multivariate Analysis, Elsevier, vol. 119(C), pages 144-162.

    More about this item


    JEL classification:

    • A22 - General Economics and Teaching - - Economic Education and Teaching of Economics - - - Undergraduate
    • C12 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General - - - Hypothesis Testing: General

