Authors:
- Peter Tran
- Wanyong Feng
- Stephen G. Sireci
- Hunter McNichols
- Andrew Lan
Abstract
Educational test items are typically calibrated onto a score scale using item response theory (IRT). This approach requires administering the items to hundreds of test takers to characterize their difficulty. For educational tests designed for criterion-referenced purposes, characterizing item difficulty in this way presents two problems: one theoretical, the other practical. Theoretically, tests designed to provide criterion-referenced information should report test takers’ performance with respect to the knowledge and skills they have mastered, rather than how well they performed relative to others. The traditional IRT calibration approach expresses item difficulty on a scale determined solely by test takers’ performance on the items. Practically, the traditional IRT approach requires large numbers of test takers, who are not always available and who are not always motivated to do well. In this study, we use the construct-relevant features of test items to characterize their difficulty. In one approach, we code the item features; two other approaches are based on artificial intelligence (chain-of-thought prompting and LLM finetuning). The results indicate that the coding and LLM-finetuning approaches reflect the difficulty parameters calibrated using IRT, accounting for approximately 60% of the variation. These results suggest that educational test items can be calibrated using construct-relevant features of the items, rather than only by administering them to samples of test takers. Implications for future research and practice in this area are discussed.
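The comparison the abstract describes, predicting IRT difficulty from coded item features and checking how much variation the predictions account for, can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the feature matrix, the IRT difficulty (b) values, and the use of ordinary least squares are all assumptions made for the example.

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Proportion of variance in y_true explained by y_pred."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical coded item features (rows = items; columns stand in for
# construct-relevant codes such as cognitive complexity, vocabulary load,
# and number of solution steps).
X = np.array([
    [1, 2, 0],
    [2, 1, 1],
    [3, 3, 1],
    [1, 1, 0],
    [4, 2, 2],
    [2, 3, 1],
], dtype=float)

# Synthetic IRT-calibrated difficulty (b) parameters for the same items,
# standing in for estimates from an operational calibration.
b = np.array([-1.2, -0.3, 0.8, -1.5, 1.4, 0.2])

# Ordinary least squares: predict b from the item features, then report
# the proportion of variation in the calibrated difficulties accounted for.
X1 = np.column_stack([np.ones(len(X)), X])   # add an intercept column
coef, *_ = np.linalg.lstsq(X1, b, rcond=None)
b_hat = X1 @ coef

print(f"R^2 = {r_squared(b, b_hat):.2f}")
```

With real data, an R² near 0.60 would correspond to the roughly 60% of variation reported in the abstract; the synthetic values above merely show the mechanics of the comparison.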
Suggested Citation
Peter Tran & Wanyong Feng & Stephen G. Sireci & Hunter McNichols & Andrew Lan, 2025.
"Using item features to calibrate educational test items: Comparing artificial intelligence and classical approaches,"
American Journal of Education and Learning, Online Science Publishing, vol. 10(2), pages 178-189.
Handle:
RePEc:onl:ajoeal:v:10:y:2025:i:2:p:178-189:id:1543
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:onl:ajoeal:v:10:y:2025:i:2:p:178-189:id:1543. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows your profile to be linked to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Pacharapa Naka. The email address of this maintainer no longer appears to be valid. Please ask Pacharapa Naka to update the entry or send us the correct address (email available below). General contact details of provider: https://www.onlinesciencepublishing.com/index.php/ajel/ .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.