
Optimizing the Use of Response Times for Item Selection in Computerized Adaptive Testing

Author

Listed:
  • Edison M. Choe

    (Graduate Management Admission Council)

  • Justin L. Kern

    (University of California, Merced)

  • Hua-Hua Chang

    (University of Illinois at Urbana-Champaign)

Abstract

Although the measurement efficiency of computerized adaptive testing is commonly operationalized as the number of items administered, it should also be assessed in terms of the time it takes to complete the test. To this end, a recent study introduced a novel item selection criterion that maximizes Fisher information per unit of expected response time (RT), which was shown to effectively reduce the average completion time of a fixed-length test with minimal loss in the accuracy of ability estimation. However, because this method also resulted in extremely unbalanced item exposure, a-stratification with b-blocking was recommended as a counterbalance. Although exceptionally effective in this regard, it comes at the substantial costs of attenuating the reduction in average testing time, increasing the variance of testing times, and further decreasing estimation accuracy. This article therefore investigated several alternative methods of item exposure control, of which the most promising was a simple modification: maximizing Fisher information per unit of centered expected RT. The key advantage of the proposed method is the flexibility to choose a centering value according to a desired distribution of testing times and level of exposure control. Moreover, the centered expected RT can be exponentially weighted to calibrate the degree of measurement precision. The results of extensive simulations, with both simulated and real item pools and examinees, demonstrate that optimally chosen centering and weighting values can markedly reduce the mean and variance of both testing times and test overlap, all without much compromise in estimation accuracy.
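The selection rule sketched in the abstract can be illustrated in code. The exact functional form used by the authors is not given here, so the criterion below — information divided by the centered expected RT raised to a weight — is an illustrative assumption; the names `c` (centering value) and `w` (weight), the 2PL information formula, and the nonpositive-denominator guard are all choices of this sketch, not details from the paper.

```python
import numpy as np

def fisher_info_2pl(theta, a, b):
    """Fisher information of 2PL items at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def select_item(theta, a, b, expected_rt, c=0.0, w=1.0, administered=()):
    """Pick the unadministered item maximizing Fisher information per unit
    of (centered, exponentially weighted) expected response time.

    c=0, w=1 recovers plain information-per-time; c and w stand in for
    the paper's centering and weighting values (the exact functional
    form here is an assumption made for illustration)."""
    denom = np.maximum(expected_rt - c, 1e-6) ** w  # guard nonpositive values
    score = fisher_info_2pl(theta, a, b) / denom
    score = score.copy()
    score[list(administered)] = -np.inf  # never re-administer an item
    return int(np.argmax(score))
```

With `c` set near the pool's mean expected RT, fast and slow items are scored relative to that center rather than in absolute time, which is one plausible way a centering value could loosen the criterion's preference for very fast items and spread exposure more evenly.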

Suggested Citation

  • Edison M. Choe & Justin L. Kern & Hua-Hua Chang, 2018. "Optimizing the Use of Response Times for Item Selection in Computerized Adaptive Testing," Journal of Educational and Behavioral Statistics, vol. 43(2), pages 135-158, April.
  • Handle: RePEc:sae:jedbes:v:43:y:2018:i:2:p:135-158
    DOI: 10.3102/1076998617723642

    Download full text from publisher

    File URL: https://journals.sagepub.com/doi/10.3102/1076998617723642
    Download Restriction: no

    File URL: https://libkey.io/10.3102/1076998617723642?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription

    References listed on IDEAS

    1. Hua-Hua Chang & Zhiliang Ying, 2008. "To Weight or Not to Weight? Balancing Influence of Initial Items in Adaptive Testing," Psychometrika, Springer;The Psychometric Society, vol. 73(3), pages 441-450, September.
    2. Chun Wang & Yi Zheng & Hua-Hua Chang, 2014. "Does Standard Deviation Matter? Using “Standard Deviation” to Quantify Security of Multistage Testing," Psychometrika, Springer;The Psychometric Society, vol. 79(1), pages 154-174, January.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Edison M. Choe & Hua-Hua Chang, 2019. "The Asymptotic Distribution of Average Test Overlap Rate in Computerized Adaptive Testing," Psychometrika, Springer;The Psychometric Society, vol. 84(4), pages 1129-1151, December.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Chun Wang & David J. Weiss & Zhuoran Shang, 2019. "Variable-Length Stopping Rules for Multidimensional Computerized Adaptive Testing," Psychometrika, Springer;The Psychometric Society, vol. 84(3), pages 749-771, September.
    2. Edison M. Choe & Jinming Zhang & Hua-Hua Chang, 2018. "Sequential Detection of Compromised Items Using Response Times in Computerized Adaptive Testing," Psychometrika, Springer;The Psychometric Society, vol. 83(3), pages 650-673, September.
    3. Chun Wang & Hua-Hua Chang & Keith Boughton, 2011. "Kullback–Leibler Information and Its Applications in Multi-Dimensional Adaptive Testing," Psychometrika, Springer;The Psychometric Society, vol. 76(1), pages 13-39, January.
    4. Chun Wang & Hua-Hua Chang, 2011. "Item Selection in Multidimensional Computerized Adaptive Testing—Gaining Information from Different Angles," Psychometrika, Springer;The Psychometric Society, vol. 76(3), pages 363-384, July.
    5. Onur Demirkaya & Ummugul Bezirhan & Jinming Zhang, 2023. "Detecting Item Preknowledge Using Revisits With Speed and Accuracy," Journal of Educational and Behavioral Statistics, vol. 48(4), pages 521-542, August.
    6. Hua-Hua Chang, 2015. "Psychometrics Behind Computerized Adaptive Testing," Psychometrika, Springer;The Psychometric Society, vol. 80(1), pages 1-20, March.
    7. Steven Andrew Culpepper, 2017. "The Prevalence and Implications of Slipping on Low-Stakes, Large-Scale Assessments," Journal of Educational and Behavioral Statistics, vol. 42(6), pages 706-725, December.
    8. Steven Andrew Culpepper, 2016. "Revisiting the 4-Parameter Item Response Model: Bayesian Estimation and Application," Psychometrika, Springer;The Psychometric Society, vol. 81(4), pages 1142-1163, December.
    9. Chun Wang & Gongjun Xu & Zhuoran Shang & Nathan Kuncel, 2018. "Detecting Aberrant Behavior and Item Preknowledge: A Comparison of Mixture Modeling Method and Residual Method," Journal of Educational and Behavioral Statistics, vol. 43(4), pages 469-501, August.
    10. Edison M. Choe & Hua-Hua Chang, 2019. "The Asymptotic Distribution of Average Test Overlap Rate in Computerized Adaptive Testing," Psychometrika, Springer;The Psychometric Society, vol. 84(4), pages 1129-1151, December.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:sae:jedbes:v:43:y:2018:i:2:p:135-158. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form .

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: SAGE Publications (email available below).

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.