
Do individual differences in test-takers' appraisal of admission testing compromise measurement fairness?

Authors

Listed:
  • Sommer, Markus
  • Arendasy, Martin E.
  • Punter, Joachim Fritz
  • Feldhammer-Kahr, Martina
  • Rieder, Anita

Abstract

The increased use of cognitive tests in admission testing has renewed concerns that individual differences in test anxiety induce measurement bias in cognitive tests and therefore call their valid and fair use in admission testing into question. Prior studies examining measurement invariance across individual differences in test anxiety yielded mixed results: while some studies indicated measurement bias due to test anxiety, others failed to confirm this hypothesis. Some researchers hypothesized that the extent to which test anxiety induces measurement bias in cognitive tests, by reducing the attentional resources available to test-takers while solving the test items, depends on their cognitive appraisal of the test-taking situation in terms of their confidence in their ability to handle the task at hand (testing problem efficacy), the relevance of the goal of doing well in the admission test (goal relevance), and their perceived level of control (agency). The present study was conducted to test this hypothesis. A large sample (N = 1628) of medical school applicants was tested in a real-life admission testing situation. Using latent profile analysis, we identified four groups of test-takers differing in their appraisal of goal relevance, testing problem efficacy, worry, task-irrelevant thinking, and agency. Contrary to our predictions, item response theory analyses indicated measurement invariance across the four latent profiles for all four cognitive ability tests and all four knowledge tests administered in the present study. This finding contradicts theoretical models that postulate that individual differences in test-takers' appraisal of the admission testing situation, and their emotional reactions to it, compromise the measurement fairness of cognitive admission tests.
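The measurement-invariance question the abstract describes, whether items behave the same way across groups of test-takers, is commonly operationalized as a test for differential item functioning (DIF). The paper itself uses IRT-based invariance analyses, which are not reproduced here; as a self-contained illustration of the general idea, the sketch below implements the classical Mantel-Haenszel DIF chi-square on synthetic Rasch-type data. All names, parameters, and data in this example are hypothetical and chosen for illustration only.

```python
import numpy as np

def mantel_haenszel_dif(responses, item, group, strata):
    """Continuity-corrected Mantel-Haenszel DIF chi-square for one item.

    responses: (N, J) 0/1 response matrix; item: column index under test;
    group: (N,) with 0 = reference, 1 = focal; strata: (N,) matching
    scores (the rest score on the remaining items is the usual choice).
    """
    A = E = V = 0.0
    for s in np.unique(strata):
        mask = strata == s
        ref = responses[mask & (group == 0), item]
        foc = responses[mask & (group == 1), item]
        nR, nF = len(ref), len(foc)
        T = nR + nF
        if T < 2 or nR == 0 or nF == 0:
            continue                      # stratum carries no between-group information
        m1 = ref.sum() + foc.sum()        # total correct in this stratum
        m0 = T - m1                       # total incorrect
        A += ref.sum()                    # observed correct, reference group
        E += nR * m1 / T                  # expected correct under the no-DIF null
        V += nR * nF * m1 * m0 / (T ** 2 * (T - 1))
    if V == 0:
        return 0.0
    return (abs(A - E) - 0.5) ** 2 / V    # compare against chi-square(1)

# Synthetic check: identical item parameters in both groups, i.e. no DIF.
rng = np.random.default_rng(0)
N, J = 2000, 8
theta = rng.normal(size=N)                    # person abilities
b = np.linspace(-1.5, 1.5, J)                 # item difficulties
p = 1 / (1 + np.exp(-(theta[:, None] - b)))   # Rasch response probabilities
X = (rng.random((N, J)) < p).astype(int)
grp = (rng.random(N) < 0.5).astype(int)       # arbitrary grouping
score = X[:, 1:].sum(axis=1)                  # rest score for item 0
chi2 = mantel_haenszel_dif(X, 0, grp, score)  # small under the no-DIF null
```

Because the two groups are drawn from the same item parameters, the statistic should typically fall below the chi-square(1) critical value of 3.84; an item with genuine DIF would inflate it sharply.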

Suggested Citation

  • Sommer, Markus & Arendasy, Martin E. & Punter, Joachim Fritz & Feldhammer-Kahr, Martina & Rieder, Anita, 2019. "Do individual differences in test-takers' appraisal of admission testing compromise measurement fairness?," Intelligence, Elsevier, vol. 73(C), pages 16-29.
  • Handle: RePEc:eee:intell:v:73:y:2019:i:c:p:16-29
    DOI: 10.1016/j.intell.2019.01.006

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0160289618302125
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.intell.2019.01.006?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Erling Andersen, 1973. "A goodness of fit test for the Rasch model," Psychometrika, Springer;The Psychometric Society, vol. 38(1), pages 123-140, March.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Alberto Maydeu-Olivares & Rosa Montaño, 2013. "How Should We Assess the Fit of Rasch-Type Models? Approximating the Power of Goodness-of-Fit Statistics in Categorical Data Analysis," Psychometrika, Springer;The Psychometric Society, vol. 78(1), pages 116-133, January.
    2. Lionel WILNER, 2019. "The Dynamics of Individual Happiness," Working Papers 2019-18, Center for Research in Economics and Statistics.
    3. C. Glas & Anna Dagohoy, 2007. "A Person Fit Test for IRT Models for Polytomous Items," Psychometrika, Springer;The Psychometric Society, vol. 72(2), pages 159-180, June.
    4. Nina Singer & Ludwig Kreuzpointner & Monika Sommer & Stefan Wüst & Brigitte M Kudielka, 2019. "Decision-making in everyday moral conflict situations: Development and validation of a new measure," PLOS ONE, Public Library of Science, vol. 14(4), pages 1-19, April.
    5. Haruhiko Ogasawara, 2013. "Asymptotic properties of the Bayes modal estimators of item parameters in item response theory," Computational Statistics, Springer, vol. 28(6), pages 2559-2583, December.
    6. Timo Bechger & Gunter Maris, 2015. "A Statistical Test for Differential Item Pair Functioning," Psychometrika, Springer;The Psychometric Society, vol. 80(2), pages 317-340, June.
    7. Clemens Draxler, 2018. "Bayesian conditional inference for Rasch models," AStA Advances in Statistical Analysis, Springer;German Statistical Society, vol. 102(2), pages 245-262, April.
    8. Gerhard Tutz & Gunther Schauberger, 2015. "A Penalty Approach to Differential Item Functioning in Rasch Models," Psychometrika, Springer;The Psychometric Society, vol. 80(1), pages 21-43, March.
    9. Maiken Pontoppidan & Tine Nielsen & Ingeborg Hedegaard Kristensen, 2018. "Psychometric properties of the Danish Parental Stress Scale: Rasch analysis in a sample of mothers with infants," PLOS ONE, Public Library of Science, vol. 13(11), pages 1-20, November.
    10. Ivo Ponocny, 2000. "Exact person fit indexes for the Rasch model for arbitrary alternatives," Psychometrika, Springer;The Psychometric Society, vol. 65(1), pages 29-42, March.
    11. Mair, Patrick & Hatzinger, Reinhold, 2007. "Extended Rasch Modeling: The eRm Package for the Application of IRT Models in R," Journal of Statistical Software, Foundation for Open Access Statistics, vol. 20(i09).
    12. Luz Dary Upegui-Arango & Thomas Forkmann & Tine Nielsen & Nina Hallensleben & Heide Glaesmer & Lena Spangenberg & Tobias Teismann & Georg Juckel & Maren Boecker, 2020. "Psychometric evaluation of the Interpersonal Needs Questionnaire (INQ) using item analysis according to the Rasch model," PLOS ONE, Public Library of Science, vol. 15(8), pages 1-21, August.
    13. Herbert Hoijtink & Ivo Molenaar, 1992. "Testing for DIF in a model with single peaked item characteristic curves: The parella model," Psychometrika, Springer;The Psychometric Society, vol. 57(3), pages 383-397, September.
    14. Georg Gittler & Gerhard Fischer, 2011. "IRT-Based Measurement of Short-Term Changes of Ability, With an Application to Assessing the “Mozart Effect”," Journal of Educational and Behavioral Statistics, vol. 36(1), pages 33-75, February.
    15. Robert Zwitser & Gunter Maris, 2015. "Conditional Statistical Inference with Multistage Testing Designs," Psychometrika, Springer;The Psychometric Society, vol. 80(1), pages 65-84, March.
    16. Mark Appelbaum, 1986. "Statistics, data analysis and Psychometrika: Major developments," Psychometrika, Springer;The Psychometric Society, vol. 51(1), pages 53-56, March.
    17. Svend Kreiner & Karl Christensen, 2014. "Analyses of Model Fit and Robustness. A New Look at the PISA Scaling Model Underlying Ranking of Countries According to Reading Literacy," Psychometrika, Springer;The Psychometric Society, vol. 79(2), pages 210-231, April.
    18. Betina Ristorp Andersen & Maria Birkvad Rasmussen & Karl Bang Christensen & Kirsten G Engel & Charlotte Ringsted & Ellen Løkkegaard & Martin G Tolsgaard, 2020. "Making the best of the worst: Care quality during emergency cesarean sections," PLOS ONE, Public Library of Science, vol. 15(2), pages 1-13, February.
    19. Debora Chiusole & Luca Stefanutti & Pasquale Anselmi & Egidio Robusto, 2013. "Assessing Parameter Invariance in the BLIM: Bipartition Models," Psychometrika, Springer;The Psychometric Society, vol. 78(4), pages 710-724, October.
    20. Herbert Hoijtink & Ivo Molenaar, 1994. "An item response model with single peaked item characteristic curves: The PARELLA model," Quality & Quantity: International Journal of Methodology, Springer, vol. 28(1), pages 99-116, February.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:intell:v:73:y:2019:i:c:p:16-29. See general information about how to correct material in RePEc.


    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form .

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: https://www.journals.elsevier.com/intelligence .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service hosted by the Research Division of the Federal Reserve Bank of St. Louis . RePEc uses bibliographic data supplied by the respective publishers.