Printed from https://ideas.repec.org/p/nbr/nberwo/22595.html

Understanding and Misunderstanding Randomized Controlled Trials

Authors
  • Angus Deaton
  • Nancy Cartwright

Abstract

RCTs would be more useful if expectations of them were more realistic and their pitfalls better recognized. For example, and contrary to many claims in the applied literature, randomization does not equalize everything except the treatment across treatment and control groups; it does not automatically deliver a precise estimate of the average treatment effect (ATE); and it does not relieve us of the need to think about (observed or unobserved) confounders. Estimates apply only to the trial sample, which is sometimes a convenience sample and usually selected; justification is required to extend them to other groups, including any population to which the trial sample belongs. Demanding "external validity" is unhelpful because it expects too much of an RCT while undervaluing its contribution. Statistical inference on ATEs involves hazards that are not always recognized. RCTs do indeed require minimal assumptions and can operate with little prior knowledge. This is an advantage when persuading distrustful audiences, but a disadvantage for cumulative scientific progress, where prior knowledge should be built upon, not discarded. RCTs can play a role in building scientific knowledge and useful predictions, but only as part of a cumulative program that combines them with other methods, including conceptual and theoretical development, to discover not "what works," but "why things work."

Suggested Citation

  • Angus Deaton & Nancy Cartwright, 2016. "Understanding and Misunderstanding Randomized Controlled Trials," NBER Working Papers 22595, National Bureau of Economic Research, Inc.
  • Handle: RePEc:nbr:nberwo:22595
    Note: AG DEV HC HE LS

    Download full text from publisher

    File URL: http://www.nber.org/papers/w22595.pdf
    Download Restriction: no


    References listed on IDEAS

    1. Charles F. Manski, 2013. "Response to the Review of ‘Public Policy in an Uncertain World’," Economic Journal, Royal Economic Society, vol. 0, pages 412-415, August.
    2. Abhijit Banerjee & Sylvain Chassang & Sergio Montero & Erik Snowberg, 2017. "A Theory of Experimenters," NBER Working Papers 23867, National Bureau of Economic Research, Inc.
    3. LaLonde, Robert J, 1986. "Evaluating the Econometric Evaluations of Training Programs with Experimental Data," American Economic Review, American Economic Association, vol. 76(4), pages 604-620, September.
    4. Angus Deaton, 2010. "Instruments, Randomization, and Learning about Development," Journal of Economic Literature, American Economic Association, vol. 48(2), pages 424-455, June.
    5. Marianne P. Bitler & Jonah B. Gelbach & Hilary W. Hoynes, 2006. "What Mean Impacts Miss: Distributional Effects of Welfare Reform Experiments," American Economic Review, American Economic Association, vol. 96(4), pages 988-1012, September.
    6. Guido W. Imbens & Michal Kolesár, 2016. "Robust Standard Errors in Small Samples: Some Practical Advice," The Review of Economics and Statistics, MIT Press, vol. 98(4), pages 701-712, October.
    7. Orazio Attanasio & Sarah Cattan & Emla Fitzsimons & Costas Meghir & Marta Rubio-Codina, 2015. "Estimating the Production Function for Human Capital: Results from a Randomized Control Trial in Colombia," Cowles Foundation Discussion Papers 1987, Cowles Foundation for Research in Economics, Yale University.
    8. James J. Heckman & Hidehiko Ichimura & Petra E. Todd, 1997. "Matching As An Econometric Evaluation Estimator: Evidence from Evaluating a Job Training Programme," Review of Economic Studies, Oxford University Press, vol. 64(4), pages 605-654.
    9. Aviva Aron-Dine & Liran Einav & Amy Finkelstein, 2013. "The RAND Health Insurance Experiment, Three Decades Later," Journal of Economic Perspectives, American Economic Association, vol. 27(1), pages 197-222, Winter.
    10. Manning, Willard G., et al., 1987. "Health Insurance and the Demand for Medical Care: Evidence from a Randomized Experiment," American Economic Review, American Economic Association, vol. 77(3), pages 251-277, June.
    11. Orazio Attanasio & Sarah Cattan & Emla Fitzsimons & Costas Meghir & Marta Rubio-Codina, 2020. "Estimating the Production Function for Human Capital: Results from a Randomized Controlled Trial in Colombia," American Economic Review, American Economic Association, vol. 110(1), pages 48-85, January.
    12. Esther Duflo & Rema Hanna & Stephen P. Ryan, 2012. "Incentives Work: Getting Teachers to Come to School," American Economic Review, American Economic Association, vol. 102(4), pages 1241-1278, June.
    13. Thomas D. Cook, 2014. "Generalizing Causal Knowledge In The Policy Sciences: External Validity As A Task Of Both Multiattribute Representation And Multiattribute Extrapolation," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 33(2), pages 527-536, March.
    14. Robert A. Moffitt, 1979. "The Labor Supply Response in the Gary Experiment," Journal of Human Resources, University of Wisconsin Press, vol. 14(4), pages 477-487.
    15. Kasy, Maximilian, 2016. "Why Experimenters Might Not Always Want to Randomize, and What They Could Do Instead," Political Analysis, Cambridge University Press, vol. 24(3), pages 324-338, July.
    16. Paul J. Gertler & Sebastian Martinez & Patrick Premand & Laura B. Rawlings & Christel M. J. Vermeersch, 2016. "Impact Evaluation in Practice, Second Edition," World Bank Publications, The World Bank, number 25030, July.
    17. Orazio P. Attanasio & Costas Meghir & Ana Santiago, 2012. "Education Choices in Mexico: Using a Structural Model and a Randomized Experiment to Evaluate PROGRESA," Review of Economic Studies, Oxford University Press, vol. 79(1), pages 37-66.
    18. Joshua D. Angrist, 2004. "Treatment effect heterogeneity in theory and practice," Economic Journal, Royal Economic Society, vol. 114(494), pages 52-83, March.
    19. Michael Kremer & Alaka Holla, 2009. "Improving Education in the Developing World: What Have We Learned from Randomized Evaluations?," Annual Review of Economics, Annual Reviews, vol. 1(1), pages 513-545, May.
    20. James Heckman & Neil Hohmann & Jeffrey Smith & Michael Khoo, 2000. "Substitution and Dropout Bias in Social Experiments: A Study of an Influential Social Experiment," The Quarterly Journal of Economics, Oxford University Press, vol. 115(2), pages 651-694.
    21. Kenneth I. Wolpin & Petra E. Todd, 2006. "Assessing the Impact of a School Subsidy Program in Mexico: Using a Social Experiment to Validate a Dynamic Behavioral Model of Child Schooling and Fertility," American Economic Review, American Economic Association, vol. 96(5), pages 1384-1417, December.
    22. Wolpin, Kenneth I., 2013. "The Limits of Inference without Theory," MIT Press Books, The MIT Press, edition 1, volume 1, number 0262019086.
    23. Joshua Angrist & Eric Bettinger & Erik Bloom & Elizabeth King & Michael Kremer, 2002. "Vouchers for Private Schooling in Colombia: Evidence from a Randomized Natural Experiment," American Economic Review, American Economic Association, vol. 92(5), pages 1535-1558, December.
    24. James J. Heckman & Edward J. Vytlacil, 2007. "Econometric Evaluation of Social Programs, Part I: Causal Models, Structural Models and Econometric Policy Evaluation," Handbook of Econometrics, in: J.J. Heckman & E.E. Leamer (ed.), Handbook of Econometrics, edition 1, volume 6, chapter 70, Elsevier.
    25. Heckman, James J. & Lalonde, Robert J. & Smith, Jeffrey A., 1999. "The economics and econometrics of active labor market programs," Handbook of Labor Economics, in: O. Ashenfelter & D. Card (ed.), Handbook of Labor Economics, edition 1, volume 3, chapter 31, pages 1865-2097, Elsevier.
    26. James Heckman & Rodrigo Pinto & Peter Savelyev, 2013. "Understanding the Mechanisms through Which an Influential Early Childhood Program Boosted Adult Outcomes," American Economic Review, American Economic Association, vol. 103(6), pages 2052-2086, October.
    27. Guido W. Imbens & Jeffrey M. Wooldridge, 2009. "Recent Developments in the Econometrics of Program Evaluation," Journal of Economic Literature, American Economic Association, vol. 47(1), pages 5-86, March.
    28. Donald B. Rubin, 2005. "Causal Inference Using Potential Outcomes: Design, Modeling, Decisions," Journal of the American Statistical Association, American Statistical Association, vol. 100, pages 322-331, March.
    29. Abhijit V. Banerjee & Esther Duflo, 2009. "The Experimental Approach to Development Economics," Annual Review of Economics, Annual Reviews, vol. 1(1), pages 151-178, May.
    30. Deaton, Angus & Muellbauer, John, 1980. "Economics and Consumer Behavior," Cambridge Books, Cambridge University Press, number 9780521296762, October.
    31. Elizabeth A. Stuart & Stephen R. Cole & Catherine P. Bradshaw & Philip J. Leaf, 2011. "The use of propensity scores to assess the generalizability of results from randomized trials," Journal of the Royal Statistical Society Series A, Royal Statistical Society, vol. 174(2), pages 369-386, April.
    32. Joseph Hotz, V. & Imbens, Guido W. & Mortimer, Julie H., 2005. "Predicting the efficacy of future training programs using past experiences at other locations," Journal of Econometrics, Elsevier, vol. 125(1-2), pages 241-270.
    33. Guido W. Imbens, 2004. "Nonparametric Estimation of Average Treatment Effects Under Exogeneity: A Review," The Review of Economics and Statistics, MIT Press, vol. 86(1), pages 4-29, February.
    34. Petra E. Todd & Kenneth I. Wolpin, 2008. "Ex Ante Evaluation of Social Programs," Annals of Economics and Statistics, GENES, issue 91-92, pages 263-291.
    35. David Greenberg & Mark Shroder & Matthew Onstott, 1999. "The Social Experiment Market," Journal of Economic Perspectives, American Economic Association, vol. 13(3), pages 157-172, Summer.
    36. Glenn W. Harrison, 2013. "Field experiments and methodological intolerance," Journal of Economic Methodology, Taylor & Francis Journals, vol. 20(2), pages 103-117, June.
    37. Cartwright, Nancy, 1994. "Nature's Capacities and Their Measurement," OUP Catalogue, Oxford University Press, number 9780198235071.
    38. Bauchet, Jonathan & Morduch, Jonathan & Ravi, Shamika, 2015. "Failure vs. displacement: Why an innovative anti-poverty program showed no net impact in South India," Journal of Development Economics, Elsevier, vol. 116(C), pages 1-16.
    39. Manski, Charles F., 2013. "Public Policy in an Uncertain World: Analysis and Decisions," Economics Books, Harvard University Press, number 9780674066892, Spring.
    41. Bhattacharya, Debopam & Dupas, Pascaline, 2012. "Inferring welfare maximizing treatment assignment under budget constraints," Journal of Econometrics, Elsevier, vol. 167(1), pages 168-196.
    42. Dennis Aigner, 1985. "The Residential Electricity Time-of-Use Pricing Experiments: What Have We Learned?," NBER Chapters, in: Social Experimentation, pages 11-54, National Bureau of Economic Research, Inc.
    43. Seán M. Muller, 2015. "Causal Interaction and External Validity: Obstacles to the Policy Relevance of Randomized Evaluations," World Bank Economic Review, World Bank Group, vol. 29(suppl_1), pages 217-225.
    44. Charles F. Manski, 2004. "Statistical Treatment Rules for Heterogeneous Populations," Econometrica, Econometric Society, vol. 72(4), pages 1221-1246, July.
    45. Glenn W Harrison, 2014. "Impact Evaluation and Welfare Evaluation," The European Journal of Development Research, Palgrave Macmillan;European Association of Development Research and Training Institutes (EADI), vol. 26(1), pages 39-45, January.
    46. Hsieh, Chang-Tai & Urquiola, Miguel, 2006. "The effects of generalized school choice on achievement and stratification: Evidence from Chile's voucher program," Journal of Public Economics, Elsevier, vol. 90(8-9), pages 1477-1503, September.
    47. Abhijit Banerjee & Dean Karlan & Jonathan Zinman, 2015. "Six Randomized Evaluations of Microcredit: Introduction and Further Steps," American Economic Journal: Applied Economics, American Economic Association, vol. 7(1), pages 1-21, January.
    48. Conlisk, John, 1973. "Choice of Response Functional Form in Designing Subsidy Experiments," Econometrica, Econometric Society, vol. 41(4), pages 643-656, July.
    49. Metcalf, Charles E, 1973. "Making Inferences from Controlled Income Maintenance Experiments," American Economic Review, American Economic Association, vol. 63(3), pages 478-483, June.
    50. Glenn W. Harrison, 2014. "Cautionary notes on the use of field experiments to address policy issues," Oxford Review of Economic Policy, Oxford University Press, vol. 30(4), pages 753-763.
    51. Ziliak, Stephen T., 2014. "Balanced versus Randomized Field Experiments in Economics: Why W. S. Gosset aka "Student" Matters," Review of Behavioral Economics, now publishers, vol. 1(1-2), pages 167-208, January.
    52. Abhijit Banerjee & Sylvain Chassang & Erik Snowberg, 2016. "Decision Theoretic Approaches to Experiment Design and External Validity," NBER Working Papers 22167, National Bureau of Economic Research, Inc.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Jeffrey Smith & Arthur Sweetman, 2016. "Viewpoint: Estimating the causal effects of policies and programs," Canadian Journal of Economics, Canadian Economics Association, vol. 49(3), pages 871-905, August.
    2. Susan Athey & Guido W. Imbens, 2017. "The State of Applied Econometrics: Causality and Policy Evaluation," Journal of Economic Perspectives, American Economic Association, vol. 31(2), pages 3-32, Spring.
    3. Guido W. Imbens & Jeffrey M. Wooldridge, 2009. "Recent Developments in the Econometrics of Program Evaluation," Journal of Economic Literature, American Economic Association, vol. 47(1), pages 5-86, March.
    4. Justman, Moshe, 2018. "Randomized controlled trials informing public policy: Lessons from project STAR and class size reduction," European Journal of Political Economy, Elsevier, vol. 54(C), pages 167-174.
    5. Hunt Allcott, 2012. "Site Selection Bias in Program Evaluation," NBER Working Papers 18373, National Bureau of Economic Research, Inc.
    6. Sebastian Galiani & Juan Pantano, 2021. "Structural Models: Inception and Frontier," NBER Working Papers 28698, National Bureau of Economic Research, Inc.
    7. Burt S. Barnow & Jeffrey Smith, 2015. "Employment and Training Programs," NBER Chapters, in: Economics of Means-Tested Transfer Programs in the United States, Volume 2, pages 127-234, National Bureau of Economic Research, Inc.
    8. Karthik Muralidharan & Paul Niehaus, 2017. "Experimentation at Scale," Journal of Economic Perspectives, American Economic Association, vol. 31(4), pages 103-124, Fall.
    9. Rothstein, Jesse & von Wachter, Till, 2016. "Social Experiments in the Labor Market," Institute for Research on Labor and Employment, Working Paper Series qt6605k20b, Institute of Industrial Relations, UC Berkeley.
    10. Hao Bo & Sebastian Galiani, 2019. "Assessing External Validity," NBER Working Papers 26422, National Bureau of Economic Research, Inc.
    13. Ferraro, Paul J. & Miranda, Juan José, 2013. "Heterogeneous treatment effects and mechanisms in information-based environmental policies: Evidence from a large-scale field experiment," Resource and Energy Economics, Elsevier, vol. 35(3), pages 356-379.
    14. Dettmann, E. & Becker, C. & Schmeißer, C., 2011. "Distance functions for matching in small samples," Computational Statistics & Data Analysis, Elsevier, vol. 55(5), pages 1942-1960, May.
    15. James J. Heckman, 1991. "Randomization and Social Policy Evaluation Revisited," NBER Technical Working Papers 0107, National Bureau of Economic Research, Inc.
    16. Pedro Carneiro & Sokbae Lee & Daniel Wilhelm, 2020. "Optimal data collection for randomized control trials [Microcredit impacts: Evidence from a randomized microcredit program placement experiment by Compartamos Banco]," Econometrics Journal, Royal Economic Society, vol. 23(1), pages 1-31.
    18. Susan Athey & Raj Chetty & Guido Imbens, 2020. "Combining Experimental and Observational Data to Estimate Treatment Effects on Long Term Outcomes," Papers 2006.09676, arXiv.org.
    19. David McKenzie & John Gibson & Steven Stillman, 2010. "How Important Is Selection? Experimental vs. Non-Experimental Measures of the Income Gains from Migration," Journal of the European Economic Association, MIT Press, vol. 8(4), pages 913-945, June.
    20. McKenzie, David & Gibson, John & Stillman, Steven, 2006. "How important is selection ? Experimental versus non-experimental measures of the income gains from migration," Policy Research Working Paper Series 3906, The World Bank.

    More about this item

    JEL classification:

    • C10 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General - - - General
    • C26 - Mathematical and Quantitative Methods - - Single Equation Models; Single Variables - - - Instrumental Variables (IV) Estimation
    • C93 - Mathematical and Quantitative Methods - - Design of Experiments - - - Field Experiments
    • O22 - Economic Development, Innovation, Technological Change, and Growth - - Development Planning and Policy - - - Project Analysis



    IDEAS is a RePEc service hosted by the Research Division of the Federal Reserve Bank of St. Louis . RePEc uses bibliographic data supplied by the respective publishers.