Test Scaling and Value-Added Measurement
Conventional value-added assessment requires that achievement be reported on an interval scale. While many metrics do not have this property, application of item response theory (IRT) is said to produce interval scales. However, it is difficult to confirm that the requisite conditions are met. Even when they are, the properties of the data that make a test IRT scalable may not be the properties we seek to represent in an achievement scale, as shown by the lack of surface plausibility of many scales resulting from the application of IRT. An alternative, ordinal data analysis, is presented. It is shown that value-added estimates are sensitive to the choice of ordinal methods over conventional techniques. Value-added practitioners should ask themselves whether they are so confident of the metric properties of conventional scales that they are willing to attribute such differences to the superiority of conventional methods. © 2009 American Education Finance Association
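The abstract's central point, that value-added comparisons depend on the metric properties of the test scale, can be illustrated with a minimal sketch (not from the article; the data and the squaring transformation are hypothetical). Mean gains computed on a raw score scale can reverse under a strictly monotone rescaling of that scale, even though every ordinal (rank) comparison among students is preserved:

```python
# Hypothetical pre- and post-test scores for students of two teachers.
pre  = {"teacher_A": [10, 12], "teacher_B": [40, 42]}
post = {"teacher_A": [20, 22], "teacher_B": [45, 47]}

def mean_gain(transform):
    """Mean post-minus-pre gain per teacher after rescaling scores with `transform`."""
    return {
        t: sum(transform(b) - transform(a) for a, b in zip(pre[t], post[t])) / len(pre[t])
        for t in pre
    }

identity = lambda x: x
square = lambda x: x * x  # strictly increasing on positive scores, so ranks are unchanged

raw = mean_gain(identity)      # teacher_A shows the larger mean gain on the raw scale
rescaled = mean_gain(square)   # teacher_B shows the larger mean gain after rescaling
print(raw, rescaled)
```

Because the squaring transformation is order-preserving, nothing ordinal about the data has changed; only the (arbitrary) interval structure of the scale differs, yet the value-added ranking of the two teachers flips.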
Volume 4, Issue 4 (October 2009)
Handle: RePEc:tpr:edfpol:v:4:y:2009:i:4:p:351-383