
py-irt: A Scalable Item Response Theory Library for Python

Authors

  • John Patrick Lalor

    (IT, Analytics, and Operations, University of Notre Dame, Notre Dame, Indiana 46556)

  • Pedro Rodriguez

    (Computer Science, University of Maryland, College Park, Maryland 20742)

Abstract

py-irt is a Python library for fitting Bayesian item response theory (IRT) models. At present, there is no Python package for fitting large-scale IRT models. py-irt estimates latent traits of subjects and items, making it appropriate for use in IRT tasks as well as in ideal point models. py-irt is built on top of the Pyro and PyTorch frameworks and uses GPU-accelerated training to scale to large data sets. It is the first Python package for large-scale IRT model fitting. py-irt is easy to use for practitioners and also allows researchers to build and fit custom IRT models. py-irt is available as open-source software and can be installed from GitHub or the Python Package Index.
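
As a rough illustration of the kind of model py-irt fits, the sketch below writes a one-parameter logistic (1PL, or Rasch) IRT model directly in Pyro, the framework the abstract says py-irt builds on, and fits it with stochastic variational inference. This is not py-irt's own API; the toy response data, variable names, and hyperparameters are illustrative assumptions (the package itself, per the abstract, is installed from GitHub or the Python Package Index, presumably via pip install py-irt).

    # Minimal sketch of a Bayesian 1PL (Rasch) IRT model in Pyro.
    # This is NOT py-irt's code; the data and settings are made up for illustration.
    import torch
    import pyro
    import pyro.distributions as dist
    from pyro.infer import SVI, Trace_ELBO
    from pyro.infer.autoguide import AutoNormal
    from pyro.optim import Adam

    def model_1pl(subject_idx, item_idx, responses, num_subjects, num_items):
        # Latent ability per subject and difficulty per item, standard normal priors.
        with pyro.plate("subjects", num_subjects):
            theta = pyro.sample("theta", dist.Normal(0.0, 1.0))
        with pyro.plate("items", num_items):
            b = pyro.sample("b", dist.Normal(0.0, 1.0))
        # Bernoulli likelihood: P(correct response) = sigmoid(theta_subject - b_item).
        with pyro.plate("observations", len(responses)):
            pyro.sample("obs",
                        dist.Bernoulli(logits=theta[subject_idx] - b[item_idx]),
                        obs=responses)

    # Toy data: 3 subjects each answering 4 items (1 = correct, 0 = incorrect).
    subject_idx = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2])
    item_idx = torch.tensor([0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3])
    responses = torch.tensor([1., 1., 0., 0., 1., 0., 0., 0., 1., 1., 1., 0.])

    pyro.clear_param_store()
    guide = AutoNormal(model_1pl)  # mean-field variational approximation
    svi = SVI(model_1pl, guide, Adam({"lr": 0.1}), loss=Trace_ELBO())

    for step in range(500):
        svi.step(subject_idx, item_idx, responses, 3, 4)

    # Approximate posterior medians for the abilities (theta) and difficulties (b).
    print(guide.median())

On real data one would run many more optimization steps and move the tensors and computation to a GPU, which is the scaling route the abstract describes for large data sets.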

Suggested Citation

  • John Patrick Lalor & Pedro Rodriguez, 2023. "py-irt: A Scalable Item Response Theory Library for Python," INFORMS Journal on Computing, INFORMS, vol. 35(1), pages 5-13, January.
  • Handle: RePEc:inm:orijoc:v:35:y:2023:i:1:p:5-13
    DOI: 10.1287/ijoc.2022.1250

    Download full text from publisher

    File URL: http://dx.doi.org/10.1287/ijoc.2022.1250
    Download Restriction: no

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Yoav Bergner & Peter Halpin & Jill-Jênn Vie, 2022. "Multidimensional Item Response Theory in the Style of Collaborative Filtering," Psychometrika, Springer;The Psychometric Society, vol. 87(1), pages 266-288, March.
    2. Alexander Robitzsch, 2021. "A Comprehensive Simulation Study of Estimation Methods for the Rasch Model," Stats, MDPI, vol. 4(4), pages 1-23, October.
    3. Yang Liu & Ji Seung Yang, 2018. "Bootstrap-Calibrated Interval Estimates for Latent Variable Scores in Item Response Theory," Psychometrika, Springer;The Psychometric Society, vol. 83(2), pages 333-354, June.
    4. Björn Andersson & Marie Wiberg, 2017. "Item Response Theory Observed-Score Kernel Equating," Psychometrika, Springer;The Psychometric Society, vol. 82(1), pages 48-66, March.
    5. Christopher J. Urban & Daniel J. Bauer, 2021. "A Deep Learning Algorithm for High-Dimensional Exploratory Item Factor Analysis," Psychometrika, Springer;The Psychometric Society, vol. 86(1), pages 1-29, March.
    6. Michela Battauz, 2023. "Testing for differences in chain equating," Statistica Neerlandica, Netherlands Society for Statistics and Operations Research, vol. 77(2), pages 134-145, May.
    7. Ping Chen & Chun Wang, 2021. "Using EM Algorithm for Finite Mixtures and Reformed Supplemented EM for MIRT Calibration," Psychometrika, Springer;The Psychometric Society, vol. 86(1), pages 299-326, March.
    8. Michela Battauz, 2019. "On Wald tests for differential item functioning detection," Statistical Methods & Applications, Springer;Società Italiana di Statistica, vol. 28(1), pages 103-118, March.
    9. Melissa Gladstone & Gillian Lancaster & Gareth McCray & Vanessa Cavallera & Claudia R. L. Alves & Limbika Maliwichi & Muneera A. Rasheed & Tarun Dua & Magdalena Janus & Patricia Kariger, 2021. "Validation of the Infant and Young Child Development (IYCD) Indicators in Three Countries: Brazil, Malawi and Pakistan," IJERPH, MDPI, vol. 18(11), pages 1-19, June.
    10. Alexander Robitzsch, 2023. "Linking Error in the 2PL Model," J, MDPI, vol. 6(1), pages 1-27, January.
    11. Björn Andersson & Tao Xin, 2021. "Estimation of Latent Regression Item Response Theory Models Using a Second-Order Laplace Approximation," Journal of Educational and Behavioral Statistics, vol. 46(2), pages 244-265, April.
    12. Daniel L. Oberski, 2016. "A Review of Latent Variable Modeling With R," Journal of Educational and Behavioral Statistics, vol. 41(2), pages 226-233, April.
    13. Cervantes, Víctor H., 2017. "DFIT: An R Package for Raju's Differential Functioning of Items and Tests Framework," Journal of Statistical Software, Foundation for Open Access Statistics, vol. 76(i05).
    14. Jochen Ranger & Kay Brauer, 2022. "On the Generalized S−X² Test of Item Fit: Some Variants, Residuals, and a Graphical Visualization," Journal of Educational and Behavioral Statistics, vol. 47(2), pages 202-230, April.
    15. Chanjin Zheng & Shaoyang Guo & Justin L Kern, 2021. "Fast Bayesian Estimation for the Four-Parameter Logistic Model (4PLM)," SAGE Open, vol. 11(4), pages 21582440211, October.
    16. Yang Liu & Jan Hannig, 2017. "Generalized Fiducial Inference for Logistic Graded Response Models," Psychometrika, Springer;The Psychometric Society, vol. 82(4), pages 1097-1125, December.
    17. Salim Moussa, 2016. "A two-step item response theory procedure for a better measurement of marketing constructs," Journal of Marketing Analytics, Palgrave Macmillan, vol. 4(1), pages 28-50, March.
    18. Isabel Gallego‐Alvarez & Eduardo Ortas & José Luis Vicente‐Villardón & Igor Álvarez Etxeberria, 2017. "Institutional Constraints, Stakeholder Pressure and Corporate Environmental Reporting Policies," Business Strategy and the Environment, Wiley Blackwell, vol. 26(6), pages 807-825, September.
    19. Zhang, Haoran & Chen, Yunxiao & Li, Xiaoou, 2020. "A note on exploratory item factor analysis by singular value decomposition," LSE Research Online Documents on Economics 104166, London School of Economics and Political Science, LSE Library.
    20. Francisco José Eiroa-Orosa & Laura Limiñana-Bravo, 2019. "An Instrument to Measure Mental Health Professionals’ Beliefs and Attitudes towards Service Users’ Rights," IJERPH, MDPI, vol. 16(2), pages 1-16, January.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.