Printed from https://ideas.repec.org/a/jss/jstsof/v056i10.html

Classification Accuracy and Consistency under Item Response Theory Models Using the Package classify

Author

Listed:
  • Wheadon, Chris

Abstract

The R package classify provides functions for estimating the classification accuracy and consistency of assessments. Classification accuracy is the probability that an examinee's achieved grade classification on an assessment reflects their true grade. Classification consistency is the probability that an examinee would be classified into the same grade under repeated administrations of an assessment. Understanding the classification accuracy and consistency of assessments matters wherever key decisions are taken on the basis of grades or classifications. The study of classification accuracy can help to improve the design of assessments and aid public understanding of, and confidence in, those assessments.
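The two quantities defined in the abstract can be illustrated with a short Monte Carlo sketch under a Rasch model. This is a hypothetical illustration, not the classify package's own API or algorithm: the item difficulties, grade boundaries, and sample sizes below are invented for the example, and the "true" grade is taken to be the grade of the expected raw score at each examinee's true ability.

```python
import math
import random

random.seed(42)

N_ITEMS, N_EXAMINEES = 40, 5000
difficulty = [random.gauss(0, 1) for _ in range(N_ITEMS)]  # item difficulties b_i (invented)
CUTS = [14, 22, 30]  # hypothetical raw-score grade boundaries

def p_correct(theta, b):
    # Rasch model: P(correct) = 1 / (1 + exp(-(theta - b)))
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def grade(score):
    # Map a raw score to a grade index 0..len(CUTS)
    return sum(score >= c for c in CUTS)

def administer(theta):
    # One simulated administration: sum of Bernoulli item responses
    return sum(random.random() < p_correct(theta, b) for b in difficulty)

hits = same = 0
for _ in range(N_EXAMINEES):
    theta = random.gauss(0, 1)
    # "True" grade: grade of the expected raw score at the true ability
    true_grade = grade(sum(p_correct(theta, b) for b in difficulty))
    g1, g2 = grade(administer(theta)), grade(administer(theta))
    hits += (g1 == true_grade)  # accuracy: achieved grade equals true grade
    same += (g1 == g2)          # consistency: same grade on two administrations

accuracy = hits / N_EXAMINEES
consistency = same / N_EXAMINEES
print(f"accuracy = {accuracy:.3f}, consistency = {consistency:.3f}")
```

Both estimates fall between 0 and 1; sharper grade boundaries relative to measurement error drive both figures down, which is why these statistics can inform assessment design.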

Suggested Citation

  • Wheadon, Chris, 2014. "Classification Accuracy and Consistency under Item Response Theory Models Using the Package classify," Journal of Statistical Software, Foundation for Open Access Statistics, vol. 56(10).
  • Handle: RePEc:jss:jstsof:v:056:i10
    DOI: 10.18637/jss.v056.i10

    Download full text from publisher

    File URL: https://www.jstatsoft.org/index.php/jss/article/view/v056i10/v56i10.pdf
    Download Restriction: no

    File URL: https://www.jstatsoft.org/index.php/jss/article/downloadSuppFile/v056i10/classify_1.2.tar.gz
    Download Restriction: no

    File URL: https://www.jstatsoft.org/index.php/jss/article/downloadSuppFile/v056i10/v56i10.R
    Download Restriction: no


    References listed on IDEAS

    1. Karen M. Douglas & Robert J. Mislevy, 2010. "Estimating Classification Accuracy for Complex Decision Rules Based on Multiple Scores," Journal of Educational and Behavioral Statistics, , vol. 35(3), pages 280-306, June.
    2. Curtis, S. McKay, 2010. "BUGS Code for Item Response Theory," Journal of Statistical Software, Foundation for Open Access Statistics, vol. 36(c01).

    Citations

    Cited by:

    1. Songul Cinaroglu, 2017. "A Fresh Look at Out-of-Pocket Health Expenditures after More than a Decade Health Reform Experience in Turkey: A Data Mining Application," International Journal of Economics and Financial Issues, Econjournals, vol. 7(5), pages 33-40.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Chun Wang & Steven W. Nydick, 2020. "On Longitudinal Item Response Theory Models: A Didactic," Journal of Educational and Behavioral Statistics, , vol. 45(3), pages 339-368, June.
    2. Martin Hernani Merino & Enver Gerald Tarazona Vargas & Antonieta Hamann Pastorino & José Afonso Mazzon, 2014. "Validation of Sustainable Development Practices Scale Using the Bayesian Approach to Item Response Theory," Tržište/Market, Faculty of Economics and Business, University of Zagreb, vol. 26(2), pages 147-162.
    3. Yang Liu & Jan Hannig, 2017. "Generalized Fiducial Inference for Logistic Graded Response Models," Psychometrika, Springer;The Psychometric Society, vol. 82(4), pages 1097-1125, December.
    4. Peida Zhan & Hong Jiao & Kaiwen Man & Lijun Wang, 2019. "Using JAGS for Bayesian Cognitive Diagnosis Modeling: A Tutorial," Journal of Educational and Behavioral Statistics, , vol. 44(4), pages 473-503, August.
    5. Pascal Jordan & Meike C Shedden-Mora & Bernd Löwe, 2017. "Psychometric analysis of the Generalized Anxiety Disorder scale (GAD-7) in primary care using modern item response theory," PLOS ONE, Public Library of Science, vol. 12(8), pages 1-14, August.
    6. Lei Guo & Wenjie Zhou & Xiao Li, 2024. "Cognitive Diagnosis Testlet Model for Multiple-Choice Items," Journal of Educational and Behavioral Statistics, , vol. 49(1), pages 32-60, February.


    Corrections

    All material on this site has been provided by the respective publishers and authors. When requesting a correction, please mention this item's handle: RePEc:jss:jstsof:v:056:i10. For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Christopher F. Baum. General contact details of provider: http://www.jstatsoft.org/ .

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.