
Can Nonexperimental Methods Provide Unbiased Estimates of a Breastfeeding Intervention? A Within-Study Comparison of Peer Counseling in Oregon

Author

Listed:
  • Onur Altindag
  • Theodore J. Joyce
  • Julie A. Reeder

Abstract

Between July 2005 and July 2007, the Oregon Supplemental Nutrition Program for Women, Infants, and Children (WIC) conducted the largest randomized field experiment (RFE) of its kind in the United States to assess the effectiveness of a low-cost peer counseling intervention to promote exclusive breastfeeding. We undertook a within-study comparison of the intervention using unique administrative data from July 2005 through July 2010. We found no difference between the experimental and nonexperimental estimates, but we could not establish their correspondence under more stringent criteria. We show that tests for nonconsent bias in the benchmark RFE might provide an important signal of confounding in the nonexperimental estimates.

Suggested Citation

  • Onur Altindag & Theodore J. Joyce & Julie A. Reeder, 2019. "Can Nonexperimental Methods Provide Unbiased Estimates of a Breastfeeding Intervention? A Within-Study Comparison of Peer Counseling in Oregon," Evaluation Review, vol. 43(3-4), pages 152-188, June.
  • Handle: RePEc:sae:evarev:v:43:y:2019:i:3-4:p:152-188
    DOI: 10.1177/0193841X19865963

    Download full text from publisher

    File URL: https://journals.sagepub.com/doi/10.1177/0193841X19865963
    Download Restriction: no

    File URL: https://libkey.io/10.1177/0193841X19865963?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. LaLonde, Robert J, 1986. "Evaluating the Econometric Evaluations of Training Programs with Experimental Data," American Economic Review, American Economic Association, vol. 76(4), pages 604-620, September.
    2. Smith, Jeffrey A. & Todd, Petra E., 2005. "Does matching overcome LaLonde's critique of nonexperimental estimators?," Journal of Econometrics, Elsevier, vol. 125(1-2), pages 305-353.
    3. James J. Heckman & Hidehiko Ichimura & Petra E. Todd, 1997. "Matching As An Econometric Evaluation Estimator: Evidence from Evaluating a Job Training Programme," The Review of Economic Studies, Review of Economic Studies Ltd, vol. 64(4), pages 605-654.
    4. Long, Qi & Little, Roderick J. & Lin, Xihong, 2008. "Causal Inference in Hybrid Intervention Trials Involving Treatment Choice," Journal of the American Statistical Association, American Statistical Association, vol. 103, pages 474-484, June.
    5. James J. Heckman & Jeffrey A. Smith, 1995. "Assessing the Case for Social Experiments," Journal of Economic Perspectives, American Economic Association, vol. 9(2), pages 85-110, Spring.
    6. Shadish, William R. & Clark, M. H. & Steiner, Peter M., 2008. "Can Nonrandomized Experiments Yield Accurate Answers? A Randomized Experiment Comparing Random and Nonrandom Assignments," Journal of the American Statistical Association, American Statistical Association, vol. 103(484), pages 1334-1344.
    7. Burt S. Barnow & Coady Wing & M. H. Clark, 2017. "What Can We Learn From A Doubly Randomized Preference Trial?—An Instrumental Variables Perspective," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 36(2), pages 418-437, March.
    8. Thomas D. Cook & William R. Shadish & Vivian C. Wong, 2008. "Three conditions under which experiments and observational studies produce comparable causal estimates: New findings from within-study comparisons," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 27(4), pages 724-750.
    9. Dehejia, Rajeev, 2005. "Practical propensity score matching: a reply to Smith and Todd," Journal of Econometrics, Elsevier, vol. 125(1-2), pages 355-364.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Guido W. Imbens & Jeffrey M. Wooldridge, 2009. "Recent Developments in the Econometrics of Program Evaluation," Journal of Economic Literature, American Economic Association, vol. 47(1), pages 5-86, March.
    2. Lechner, Michael & Wunsch, Conny, 2013. "Sensitivity of matching-based program evaluations to the availability of control variables," Labour Economics, Elsevier, vol. 21(C), pages 111-121.
    3. Yonatan Eyal, 2020. "Self-Assessment Variables as a Source of Information in the Evaluation of Intervention Programs: A Theoretical and Methodological Framework," SAGE Open, vol. 10(1), January.
    4. Wichman, Casey J. & Ferraro, Paul J., 2017. "A cautionary tale on using panel data estimators to measure program impacts," Economics Letters, Elsevier, vol. 151(C), pages 82-90.
    5. Ravallion, Martin, 2008. "Evaluating Anti-Poverty Programs," Handbook of Development Economics, in: T. Paul Schultz & John A. Strauss (ed.), Handbook of Development Economics, edition 1, volume 4, chapter 59, pages 3787-3846, Elsevier.
    6. Vivian C. Wong & Peter M. Steiner & Kylie L. Anglin, 2018. "What Can Be Learned From Empirical Evaluations of Nonexperimental Methods?," Evaluation Review, vol. 42(2), pages 147-175, April.
    7. Andrew P. Jaciw, 2016. "Assessing the Accuracy of Generalized Inferences From Comparison Group Studies Using a Within-Study Comparison Approach," Evaluation Review, vol. 40(3), pages 199-240, June.
    8. Iacus, Stefano M. & Porro, Giuseppe, 2007. "Missing data imputation, matching and other applications of random recursive partitioning," Computational Statistics & Data Analysis, Elsevier, vol. 52(2), pages 773-789, October.
    9. David McKenzie & John Gibson & Steven Stillman, 2010. "How Important Is Selection? Experimental vs. Non-Experimental Measures of the Income Gains from Migration," Journal of the European Economic Association, MIT Press, vol. 8(4), pages 913-945, June.
    10. Katherine Baicker & Theodore Svoronos, 2019. "Testing the Validity of the Single Interrupted Time Series Design," NBER Working Papers 26080, National Bureau of Economic Research, Inc.
    11. McKenzie, David & Gibson, John & Stillman, Steven, 2006. "How important is selection ? Experimental versus non-experimental measures of the income gains from migration," Policy Research Working Paper Series 3906, The World Bank.
    12. Flores, Carlos A. & Mitnik, Oscar A., 2009. "Evaluating Nonexperimental Estimators for Multiple Treatments: Evidence from Experimental Data," IZA Discussion Papers 4451, Institute of Labor Economics (IZA).
    13. Jochen Kluve & Boris Augurzky, 2007. "Assessing the performance of matching algorithms when selection into treatment is strong," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 22(3), pages 533-557.
    14. Peter R. Mueser & Kenneth R. Troske & Alexey Gorislavsky, 2007. "Using State Administrative Data to Measure Program Performance," The Review of Economics and Statistics, MIT Press, vol. 89(4), pages 761-783, November.
    15. Hugo Ñopo, 2008. "Matching as a Tool to Decompose Wage Gaps," The Review of Economics and Statistics, MIT Press, vol. 90(2), pages 290-299, May.
    16. Giuseppe Porro & Stefano Maria Iacus, 2009. "Random Recursive Partitioning: a matching method for the estimation of the average treatment effect," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 24(1), pages 163-185.
    17. Ferraro, Paul J. & Miranda, Juan José, 2014. "The performance of non-experimental designs in the evaluation of environmental programs: A design-replication study using a large-scale randomized experiment as a benchmark," Journal of Economic Behavior & Organization, Elsevier, vol. 107(PA), pages 344-365.
    18. Iacus, Stefano & Porro, Giuseppe, 2008. "Invariant and Metric Free Proximities for Data Matching: An R Package," Journal of Statistical Software, Foundation for Open Access Statistics, vol. 25(i11).
    19. David McKenzie & John Gibson & Steven Stillman, 2006. "How Important is Selection? Experimental vs Non-experimental Measures of the Income Gains of Migration," Working Papers 06_02, Motu Economic and Public Policy Research.
    20. Ben Weidmann & Luke Miratrix, 2021. "Lurking Inferential Monsters? Quantifying Selection Bias In Evaluations Of School Programs," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 40(3), pages 964-986, June.

    More about this item

    Keywords

    breastfeeding; RCT; WIC; WSC;

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:sae:evarev:v:43:y:2019:i:3-4:p:152-188. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: SAGE Publications (email available below).

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.