Test Scaling and Value-Added Measurement

Author

Listed:
  • Dale Ballou

    (Department of Leadership, Policy and Organizations, Peabody College, Vanderbilt University)

Abstract

Conventional value-added assessment requires that achievement be reported on an interval scale. While many metrics do not have this property, application of item response theory (IRT) is said to produce interval scales. However, it is difficult to confirm that the requisite conditions are met. Even when they are, the properties of the data that make a test IRT scalable may not be the properties we seek to represent in an achievement scale, as shown by the lack of surface plausibility of many scales resulting from the application of IRT. An alternative, ordinal data analysis, is presented. It is shown that value-added estimates are sensitive to the choice of ordinal methods over conventional techniques. Value-added practitioners should ask themselves whether they are so confident of the metric properties of these scales that they are willing to attribute differences to the superiority of the latter. © 2009 American Education Finance Association
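
The abstract's central claim, that value-added estimates depend on treating the test metric as an interval scale, can be made concrete with a small simulation. The Python sketch below is not drawn from the article: the classroom data, the rescaling g, and the percentile-shift summary are invented for illustration and stand in for, rather than reproduce, the ordinal methods the article develops.

    # Minimal illustrative sketch (all numbers and the rescaling g are invented;
    # the "ordinal" summary is a generic rank-based statistic, not the article's
    # specific method). Point: mean gain scores depend on the assumed interval
    # scale, while rank-based summaries do not.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical pre/post scores for two classrooms on the reported scale.
    pre_a, post_a = rng.normal(30, 5, 50), rng.normal(45, 5, 50)  # low-scoring, big gains
    pre_b, post_b = rng.normal(70, 5, 50), rng.normal(82, 5, 50)  # high-scoring, smaller gains

    def mean_gain(pre, post):
        """Average pre-to-post gain: a simple, scale-dependent 'value-added' summary."""
        return (post - pre).mean()

    def pct_rank(scores, pool):
        """Fraction of a pooled score distribution each score exceeds (uses only order)."""
        return (scores[:, None] > pool[None, :]).mean(axis=1)

    g = lambda s: s ** 2 / 100.0  # strictly increasing (order-preserving) rescaling on positive scores

    # Scale-dependent comparison: which classroom shows the larger mean gain?
    print("mean gains, original scale: A=%.1f  B=%.1f"
          % (mean_gain(pre_a, post_a), mean_gain(pre_b, post_b)))
    print("mean gains, rescaled by g:  A=%.1f  B=%.1f"
          % (mean_gain(g(pre_a), g(post_a)), mean_gain(g(pre_b), g(post_b))))
    # With these invented numbers, A has the larger gain on the original scale and
    # B has the larger gain after rescaling, even though g changes no student's rank order.

    # Ordinal comparison: change in each classroom's average percentile position.
    pre_all, post_all = np.concatenate([pre_a, pre_b]), np.concatenate([post_a, post_b])
    for label, (pre, post) in {"A": (pre_a, post_a), "B": (pre_b, post_b)}.items():
        shift = pct_rank(post, post_all).mean() - pct_rank(pre, pre_all).mean()
        shift_g = pct_rank(g(post), g(post_all)).mean() - pct_rank(g(pre), g(pre_all)).mean()
        print("percentile shift for %s: original=%.3f  rescaled=%.3f" % (label, shift, shift_g))
    # The rank-based summary is identical under g, because it uses only the ordering
    # of scores, which any monotone rescaling preserves.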

Suggested Citation

  • Dale Ballou, 2009. "Test Scaling and Value-Added Measurement," Education Finance and Policy, MIT Press, vol. 4(4), pages 351-383, October.
  • Handle: RePEc:tpr:edfpol:v:4:y:2009:i:4:p:351-383

    Download full text from publisher

    File URL: http://www.mitpressjournals.org/doi/pdf/10.1162/edfp.2009.4.4.351
    Download Restriction: Access to PDF is restricted to subscribers.

    As the access to this document is restricted, you may want to search for a different version of it.

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Seth Gershenson, 2016. "Performance Standards and Employee Effort: Evidence From Teacher Absences," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 35(3), pages 615-638, June.
    2. Barrett, Nathan & Toma, Eugenia F., 2013. "Reward or punishment? Class size and teacher quality," Economics of Education Review, Elsevier, vol. 35(C), pages 41-52.
    3. Cory Koedel & Rebecca Leatherman & Eric Parsons, 2012. "Test Measurement Error and Inference from Value-Added Models," The B.E. Journal of Economic Analysis & Policy, De Gruyter, vol. 12(1), pages 1-37, November.
    4. Cory Koedel & Mark Ehlert & Eric Parsons & Michael Podgursky, 2012. "Selecting Growth Measures for School and Teacher Evaluations," Working Papers 1210, Department of Economics, University of Missouri.
    5. Seth Gershenson & Diane Whitmore Schanzenbach, 2016. "Linking Teacher Quality, Student Attendance, and Student Achievement," Education Finance and Policy, MIT Press, vol. 11(2), pages 125-149, Spring.
    6. Alexander Robitzsch, 2021. "About the Equivalence of the Latent D-Scoring Model and the Two-Parameter Logistic Item Response Model," Mathematics, MDPI, vol. 9(13), pages 1-17, June.
    7. Seth Gershenson & Stephen B. Holt & Nicholas Papageorge, 2015. "Who Believes in Me? The Effect of Student-Teacher Demographic Match on Teacher Expectations," Upjohn Working Papers 15-231, W.E. Upjohn Institute for Employment Research.
    8. Benjamin R. Shear & Sean F. Reardon, 2021. "Using Pooled Heteroskedastic Ordered Probit Models to Improve Small-Sample Estimates of Latent Test Score Distributions," Journal of Educational and Behavioral Statistics, vol. 46(1), pages 3-33, February.
    9. Gadi Barlevy & Derek Neal, 2012. "Pay for Percentile," American Economic Review, American Economic Association, vol. 102(5), pages 1805-1831, August.
    10. Daniel M. Bolt & Xiangyi Liao, 2022. "Item Complexity: A Neglected Psychometric Feature of Test Items?," Psychometrika, Springer; The Psychometric Society, vol. 87(4), pages 1195-1213, December.
    11. Derek C. Briggs & Ben Domingue, 2013. "The Gains From Vertical Scaling," Journal of Educational and Behavioral Statistics, vol. 38(6), pages 551-576, December.
    12. Donald Boyd & Hamilton Lankford & Susanna Loeb & James Wyckoff, 2013. "Measuring Test Measurement Error," Journal of Educational and Behavioral Statistics, vol. 38(6), pages 629-663, December.
    13. Brendan Houng & Moshe Justman, 2013. "Comparing Least-Squares Value-Added Analysis and Student Growth Percentile Analysis for Evaluating Student Progress and Estimating School Effects," Melbourne Institute Working Paper Series wp2013n07, Melbourne Institute of Applied Economic and Social Research, The University of Melbourne.
    14. David M. Quinn & Andrew D. Ho, 2021. "Ordinal Approaches to Decomposing Between-Group Test Score Disparities," Journal of Educational and Behavioral Statistics, vol. 46(4), pages 466-500, August.
    15. Moshe Justman & Brendan Houng, 2013. "A Comparison Of Two Methods For Estimating School Effects And Tracking Student Progress From Standardized Test Scores," Working Papers 1316, Ben-Gurion University of the Negev, Department of Economics.
    16. Wiswall, Matthew, 2013. "The dynamics of teacher quality," Journal of Public Economics, Elsevier, vol. 100(C), pages 61-78.
    17. J. R. Lockwood & Daniel F. McCaffrey, 2014. "Correcting for Test Score Measurement Error in ANCOVA Models for Estimating Treatment Effects," Journal of Educational and Behavioral Statistics, vol. 39(1), pages 22-52, February.

    More about this item

    Keywords

    value-added assessment; test scaling; item response theory

    JEL classification:

    • I20 - Health, Education, and Welfare - - Education - - - General
    • I21 - Health, Education, and Welfare - - Education - - - Analysis of Education
