Printed from https://ideas.repec.org/a/sae/jedbes/v40y2015i1p35-68.html

Practical Differences Among Aggregate-Level Conditional Status Metrics

Author

Listed:
  • Katherine E. Castellano

    (University of California, Berkeley)

  • Andrew D. Ho

    (Harvard Graduate School of Education)

Abstract

Aggregate-level conditional status metrics (ACSMs) describe the status of a group by referencing current performance to expectations given past scores. This article provides a framework for these metrics, classifying them by aggregation function (mean or median), regression approach (linear mean or nonlinear quantile), and the scale that supports interpretations (percentile rank or score scale), among other factors. This study addresses the question "how different are these ACSMs?" in three ways. First, using simulated data, it evaluates how well each model recovers its respective parameters. Second, using both simulated and empirical data, it illustrates practical differences among ACSMs in terms of pairwise rank differences incurred by switching between metrics. Third, it ranks ACSMs in terms of their robustness under scale transformations. The results consistently show that choices between mean- and median-based metrics lead to more substantial differences than choices between fixed- and random-effects or linear mean and nonlinear quantile regression. The findings set expectations for cross-metric comparability in realistic data scenarios.
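The contrast the abstract draws between mean- and median-based aggregation can be illustrated with a minimal sketch. The example below is a simplification, not the paper's actual models: it uses a single common-slope OLS regression of current on prior scores (the paper also considers fixed/random effects and nonlinear quantile regression), and all data, group counts, and effect sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: prior-year and current-year scores for students in 5 schools
n_schools, n_per = 5, 200
school = np.repeat(np.arange(n_schools), n_per)
prior = rng.normal(0.0, 1.0, n_schools * n_per)
school_effect = rng.normal(0.0, 0.3, n_schools)[school]
current = 0.7 * prior + school_effect + rng.normal(0.0, 0.5, prior.size)

# Linear mean regression of current on prior scores (OLS via least squares)
X = np.column_stack([np.ones_like(prior), prior])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
resid = current - X @ beta  # conditional status: performance beyond expectation

# Aggregate conditional status per school: mean- vs. median-based metric
mean_acsm = np.array([resid[school == s].mean() for s in range(n_schools)])
median_acsm = np.array([np.median(resid[school == s]) for s in range(n_schools)])

# Rank schools under each metric; disagreements between the two orderings are
# the kind of pairwise rank differences the article quantifies
print("mean-based ranking:  ", np.argsort(-mean_acsm))
print("median-based ranking:", np.argsort(-median_acsm))
```

With symmetric simulated residuals the two rankings will usually agree; the article's point is that with realistic, skewed score distributions the mean/median choice moves group rankings more than the choice of regression machinery does.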

Suggested Citation

  • Katherine E. Castellano & Andrew D. Ho, 2015. "Practical Differences Among Aggregate-Level Conditional Status Metrics," Journal of Educational and Behavioral Statistics, vol. 40(1), pages 35-68, February.
  • Handle: RePEc:sae:jedbes:v:40:y:2015:i:1:p:35-68
    DOI: 10.3102/1076998614548485

    Download full text from publisher

    File URL: https://journals.sagepub.com/doi/10.3102/1076998614548485
    Download Restriction: no

    File URL: https://libkey.io/10.3102/1076998614548485?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription.

    References listed on IDEAS

    1. Daniel F. McCaffrey & Tim R. Sass & J. R. Lockwood & Kata Mihaly, 2009. "The Intertemporal Variability of Teacher Effect Estimates," Education Finance and Policy, MIT Press, vol. 4(4), pages 572-606, October.
    2. David Rogosa & John Willett, 1985. "Understanding correlates of change by modeling individual differences in growth," Psychometrika, Springer;The Psychometric Society, vol. 50(2), pages 203-228, June.
    3. Cassandra M. Guarino & Mark D. Reckase & Jeffrey M. Wooldridge, 2014. "Can Value-Added Measures of Teacher Performance Be Trusted?," Education Finance and Policy, MIT Press, vol. 10(1), pages 117-156, November.
    4. Brendan Houng & Moshe Justman, 2013. "Comparing Least-Squares Value-Added Analysis and Student Growth Percentile Analysis for Evaluating Student Progress and Estimating School Effects," Melbourne Institute Working Paper Series wp2013n07, Melbourne Institute of Applied Economic and Social Research, The University of Melbourne.
    5. Guarino, Cassandra M. & Reckase, Mark D. & Stacy, Brian & Wooldridge, Jeffrey M., 2014. "A Comparison of Growth Percentile and Value-Added Models of Teacher Performance," IZA Discussion Papers 7973, Institute of Labor Economics (IZA).
    6. Sophia Rabe-Hesketh & Anders Skrondal, 2012. "Multilevel and Longitudinal Modeling Using Stata, 3rd Edition," Stata Press books, StataCorp LP, edition 3, number mimus2, March.
    7. Cory Koedel & Mark Ehlert & Eric Parsons & Michael Podgursky, 2012. "Selecting Growth Measures for School and Teacher Evaluations," Working Papers 1210, Department of Economics, University of Missouri.
    8. Sean F. Reardon & Stephen W. Raudenbush, 2009. "Assumptions of Value-Added Models for Estimating School Effects," Education Finance and Policy, MIT Press, vol. 4(4), pages 492-519, October.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Andrew McEachin & Allison Atteberry, 2017. "The Impact of Summer Learning Loss on Measures of School Performance," Education Finance and Policy, MIT Press, vol. 12(4), pages 468-491, Fall.
    2. Stacy, Brian & Guarino, Cassandra & Wooldridge, Jeffrey, 2018. "Does the precision and stability of value-added estimates of teacher performance depend on the types of students they serve?," Economics of Education Review, Elsevier, vol. 64(C), pages 50-74.
    3. Eric Parsons & Cory Koedel & Li Tan, 2019. "Accounting for Student Disadvantage in Value-Added Models," Journal of Educational and Behavioral Statistics, vol. 44(2), pages 144-179, April.
    4. Condie, Scott & Lefgren, Lars & Sims, David, 2014. "Teacher heterogeneity, value-added and education policy," Economics of Education Review, Elsevier, vol. 40(C), pages 76-92.
    5. Canales, Andrea & Maldonado, Luis, 2018. "Teacher quality and student achievement in Chile: Linking teachers' contribution and observable characteristics," International Journal of Educational Development, Elsevier, vol. 60(C), pages 33-50.
    6. Nirav Mehta, 2019. "Measuring quality for use in incentive schemes: The case of “shrinkage” estimators," Quantitative Economics, Econometric Society, vol. 10(4), pages 1537-1577, November.
    7. Cory Koedel & Jiaxi Li, 2016. "The Efficiency Implications Of Using Proportional Evaluations To Shape The Teaching Workforce," Contemporary Economic Policy, Western Economic Association International, vol. 34(1), pages 47-62, January.
    8. Koedel, Cory & Mihaly, Kata & Rockoff, Jonah E., 2015. "Value-added modeling: A review," Economics of Education Review, Elsevier, vol. 47(C), pages 180-195.
    9. Gary Henry & Roderick Rose & Doug Lauen, 2014. "Are value-added models good enough for teacher evaluations? Assessing commonly used models with simulated and actual data," Investigaciones de Economía de la Educación volume 9, in: Adela García Aracil & Isabel Neira Gómez (ed.), Investigaciones de Economía de la Educación 9, edition 1, volume 9, chapter 20, pages 383-405, Asociación de Economía de la Educación.
    11. P. Givord & M. Suarez Castillo, 2019. "Excellence for all? Heterogeneity in high-schools’ value-added," Documents de Travail de l'Insee - INSEE Working Papers g2019-14, Institut National de la Statistique et des Etudes Economiques.
    12. Matthew Johnson & Stephen Lipscomb & Brian Gill, 2013. "Sensitivity of Teacher Value-Added Estimates to Student and Peer Control Variables," Mathematica Policy Research Reports 3f875df699534c72b9e57c39d, Mathematica Policy Research.
    13. Braun, Martin & Verdier, Valentin, 2023. "Estimation of spillover effects with matched data or longitudinal network data," Journal of Econometrics, Elsevier, vol. 233(2), pages 689-714.
    14. Susanna Loeb & Michael S. Christian & Heather Hough & Robert H. Meyer & Andrew B. Rice & Martin R. West, 2019. "School Differences in Social–Emotional Learning Gains: Findings From the First Large-Scale Panel Survey of Students," Journal of Educational and Behavioral Statistics, vol. 44(5), pages 507-542, October.
    15. Brendan Houng & Moshe Justman, 2013. "Comparing Least-Squares Value-Added Analysis and Student Growth Percentile Analysis for Evaluating Student Progress and Estimating School Effects," Melbourne Institute Working Paper Series wp2013n07, Melbourne Institute of Applied Economic and Social Research, The University of Melbourne.
    16. Moshe Justman & Brendan Houng, 2013. "A Comparison Of Two Methods For Estimating School Effects And Tracking Student Progress From Standardized Test Scores," Working Papers 1316, Ben-Gurion University of the Negev, Department of Economics.
    17. Nirav Mehta, 2014. "Targeting the Wrong Teachers: Estimating Teacher Quality for Use in Accountability Regimes," University of Western Ontario, Centre for Human Capital and Productivity (CHCP) Working Papers 20143, University of Western Ontario, Centre for Human Capital and Productivity (CHCP).
    18. Harris, Douglas N. & Sass, Tim R., 2014. "Skills, productivity and the evaluation of teacher performance," Economics of Education Review, Elsevier, vol. 40(C), pages 183-204.
    19. Li Feng & Tim R. Sass, 2017. "Teacher Quality and Teacher Mobility," Education Finance and Policy, MIT Press, vol. 12(3), pages 396-418, Summer.
    20. Cassandra M. Guarino & Michelle Maxfield & Mark D. Reckase & Paul N. Thompson & Jeffrey M. Wooldridge, 2015. "An Evaluation of Empirical Bayes’s Estimation of Value-Added Teacher Performance Measures," Journal of Educational and Behavioral Statistics, vol. 40(2), pages 190-222, April.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:sae:jedbes:v:40:y:2015:i:1:p:35-68. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: SAGE Publications (email available below).

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.