
Understanding and Misunderstanding Randomized Controlled Trials

Author

Listed:
  • Angus Deaton (Princeton University)
  • Nancy Cartwright (Durham University and University of California San Diego)

Abstract

RCTs are valuable tools whose use is spreading in economics and in other social sciences. They are seen as desirable aids in scientific discovery and for generating evidence for policy. Yet some of the enthusiasm for RCTs appears to be based on misunderstandings: that randomization provides a fair test by equalizing everything but the treatment and so allows a precise estimate of the treatment alone; that randomization is required to solve selection problems; that lack of blinding does little to compromise inference; and that statistical inference in RCTs is straightforward, because it requires only the comparison of two means. None of these statements is true. RCTs do indeed require minimal assumptions and can operate with little prior knowledge, an advantage when persuading distrustful audiences, but a crucial disadvantage for cumulative scientific progress, where randomization adds noise and undermines precision. The lack of connection between RCTs and other scientific knowledge makes it hard to use them outside of the exact context in which they are conducted. Yet, once they are seen as part of a cumulative program, they can play a role in building general knowledge and useful predictions, provided they are combined with other methods, including conceptual and theoretical development, to discover not "what works," but why things work. Unless we are prepared to make assumptions, and to stand on what we know, making statements that will be incredible to some, all the credibility of RCTs is for naught.
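
The abstract's central statistical point can be made concrete with a small simulation. The sketch below is illustrative only and is not drawn from the paper: it uses synthetic data with a constant treatment effect and a skewed covariate (both assumptions of this example) to show that randomization balances covariates only on average across repeated randomizations, so any single trial's difference-in-means estimate is unbiased but noisy.

```python
# Illustrative sketch (not from the paper): in a single randomized trial,
# covariates are balanced only in expectation, and the difference-in-means
# estimator is unbiased across repeated randomizations even though any one
# draw can be far from the true average treatment effect.
import numpy as np

rng = np.random.default_rng(0)

n = 50                                            # small trial, as is common in practice
x = rng.lognormal(mean=0.0, sigma=1.0, size=n)    # skewed covariate (e.g., income)
true_ate = 1.0
y0 = 2.0 + 0.5 * x + rng.normal(size=n)           # potential outcome without treatment
y1 = y0 + true_ate                                # constant treatment effect, for simplicity

estimates, imbalance = [], []
for _ in range(2000):
    # assign exactly half of the sample to treatment at random
    treat = rng.permutation(np.r_[np.ones(n // 2), np.zeros(n - n // 2)]).astype(bool)
    y_obs = np.where(treat, y1, y0)
    estimates.append(y_obs[treat].mean() - y_obs[~treat].mean())
    imbalance.append(x[treat].mean() - x[~treat].mean())

print(f"mean estimate over randomizations: {np.mean(estimates):.3f} (true ATE = {true_ate})")
print(f"std. dev. of a single-trial estimate: {np.std(estimates):.3f}")
print(f"typical covariate imbalance |diff in mean x|: {np.mean(np.abs(imbalance)):.3f}")
```

On a typical run, the average of the estimates is close to the true effect of 1.0, while individual trials scatter around it and leave visible covariate imbalance: this is the sense in which randomization guarantees balance only in expectation rather than equalizing everything in a given trial.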

Suggested Citation

  • Angus Deaton & Nancy Cartwright, 2016. "Understanding and Misunderstanding Randomized Controlled Trials," Working Papers august_25.pdf, Princeton University, Woodrow Wilson School of Public and International Affairs, Research Program in Development Studies.
  • Handle: RePEc:pri:rpdevs:august_25.pdf

    Download full text from publisher

    File URL: http://www.princeton.edu/~deaton/downloads/Deaton_Cartwright_RCTs_with_ABSTRACT_August_25.pdf
    Download Restriction: no


    References listed on IDEAS

    1. Petra E. Todd & Kenneth I. Wolpin, 2008. "Ex Ante Evaluation of Social Programs," Annals of Economics and Statistics, GENES, issue 91-92, pages 263-291.
    2. Charles F. Manski, 2013. "Response to the Review of ‘Public Policy in an Uncertain World’," Economic Journal, Royal Economic Society, vol. 0, pages 412-415, August.
    3. Abhijit Banerjee & Sylvain Chassang & Sergio Montero & Erik Snowberg, 2017. "A Theory of Experimenters," CESifo Working Paper Series 6678, CESifo Group Munich.
    4. LaLonde, Robert J, 1986. "Evaluating the Econometric Evaluations of Training Programs with Experimental Data," American Economic Review, American Economic Association, vol. 76(4), pages 604-620, September.
    5. Angus Deaton, 2010. "Instruments, Randomization, and Learning about Development," Journal of Economic Literature, American Economic Association, vol. 48(2), pages 424-455, June.
    6. James Heckman & Rodrigo Pinto & Peter Savelyev, 2013. "Understanding the Mechanisms through Which an Influential Early Childhood Program Boosted Adult Outcomes," American Economic Review, American Economic Association, vol. 103(6), pages 2052-2086, October.
    7. David Greenberg & Mark Shroder & Matthew Onstott, 1999. "The Social Experiment Market," Journal of Economic Perspectives, American Economic Association, vol. 13(3), pages 157-172, Summer.
    8. Marianne P. Bitler & Jonah B. Gelbach & Hilary W. Hoynes, 2006. "What Mean Impacts Miss: Distributional Effects of Welfare Reform Experiments," American Economic Review, American Economic Association, vol. 96(4), pages 988-1012, September.
    9. Guido W. Imbens & Michal Kolesár, 2016. "Robust Standard Errors in Small Samples: Some Practical Advice," The Review of Economics and Statistics, MIT Press, vol. 98(4), pages 701-712, October.
    10. Aviva Aron-Dine & Liran Einav & Amy Finkelstein, 2013. "The RAND Health Insurance Experiment, Three Decades Later," Journal of Economic Perspectives, American Economic Association, vol. 27(1), pages 197-222, Winter.
    11. Orazio Attanasio & Sarah Cattan & Emla Fitzsimons & Costas Meghir & Marta Rubio-Codina, 2015. "Estimating the Production Function for Human Capital: Results from a Randomized Control Trial in Colombia," Cowles Foundation Discussion Papers 1987, Cowles Foundation for Research in Economics, Yale University.
    12. Glenn W. Harrison, 2013. "Field experiments and methodological intolerance," Journal of Economic Methodology, Taylor & Francis Journals, vol. 20(2), pages 103-117, June.
    13. James J. Heckman & Hidehiko Ichimura & Petra E. Todd, 1997. "Matching As An Econometric Evaluation Estimator: Evidence from Evaluating a Job Training Programme," Review of Economic Studies, Oxford University Press, vol. 64(4), pages 605-654.
    14. Cartwright, Nancy, 1994. "Nature's Capacities and Their Measurement," OUP Catalogue, Oxford University Press, number 9780198235071.
    15. Manning, Willard G., et al., 1987. "Health Insurance and the Demand for Medical Care: Evidence from a Randomized Experiment," American Economic Review, American Economic Association, vol. 77(3), pages 251-277, June.
    16. Bauchet, Jonathan & Morduch, Jonathan & Ravi, Shamika, 2015. "Failure vs. displacement: Why an innovative anti-poverty program showed no net impact in South India," Journal of Development Economics, Elsevier, vol. 116(C), pages 1-16.
    17. Orazio Attanasio & Sarah Cattan & Emla Fitzsimons & Costas Meghir & Marta Rubio Codina, 2015. "Estimating the production function for human capital: results from a randomized controlled trial in Colombia," IFS Working Papers W15/06, Institute for Fiscal Studies.
    18. Esther Duflo & Rema Hanna & Stephen P. Ryan, 2012. "Incentives Work: Getting Teachers to Come to School," American Economic Review, American Economic Association, vol. 102(4), pages 1241-1278, June.
    19. Thomas D. Cook, 2014. "Generalizing Causal Knowledge In The Policy Sciences: External Validity As A Task Of Both Multiattribute Representation And Multiattribute Extrapolation," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 33(2), pages 527-536, March.
    20. Joshua Angrist & Eric Bettinger & Erik Bloom & Elizabeth King & Michael Kremer, 2002. "Vouchers for Private Schooling in Colombia: Evidence from a Randomized Natural Experiment," American Economic Review, American Economic Association, vol. 92(5), pages 1535-1558, December.
    21. Orazio P. Attanasio & Costas Meghir & Ana Santiago, 2012. "Education Choices in Mexico: Using a Structural Model and a Randomized Experiment to Evaluate PROGRESA," Review of Economic Studies, Oxford University Press, vol. 79(1), pages 37-66.
    22. Robert A. Moffitt, 1979. "The Labor Supply Response in the Gary Experiment," Journal of Human Resources, University of Wisconsin Press, vol. 14(4), pages 477-487.
    23. Manski, Charles F., 2013. "Public Policy in an Uncertain World: Analysis and Decisions," Economics Books, Harvard University Press, number 9780674066892, December.
    24. Tessa Bold & Mwangi Kimenyi & Germano Mwabu & Alice Ng'ang'a & Justin Sandefur, 2013. "Scaling-up What Works: Experimental Evidence on External Validity in Kenyan Education," CSAE Working Paper Series 2013-04, Centre for the Study of African Economies, University of Oxford.
    25. repec:adr:anecst:y:2008:i:91-92:p:13 is not listed on IDEAS
    26. Abhijit V. Banerjee & Esther Duflo, 2009. "The Experimental Approach to Development Economics," Annual Review of Economics, Annual Reviews, vol. 1(1), pages 151-178, May.
    27. Bhattacharya, Debopam & Dupas, Pascaline, 2012. "Inferring welfare maximizing treatment assignment under budget constraints," Journal of Econometrics, Elsevier, vol. 167(1), pages 168-196.
    28. Dennis Aigner, 1985. "The Residential Electricity Time-of-Use Pricing Experiments: What Have We Learned?," NBER Chapters, in: Social Experimentation, pages 11-54, National Bureau of Economic Research, Inc.
    29. Guido W. Imbens & Jeffrey M. Wooldridge, 2009. "Recent Developments in the Econometrics of Program Evaluation," Journal of Economic Literature, American Economic Association, vol. 47(1), pages 5-86, March.
    30. Seán M. Muller, 2015. "Causal Interaction and External Validity: Obstacles to the Policy Relevance of Randomized Evaluations," World Bank Economic Review, World Bank Group, vol. 29(suppl_1), pages 217-225.
    31. Charles F. Manski, 2004. "Statistical Treatment Rules for Heterogeneous Populations," Econometrica, Econometric Society, vol. 72(4), pages 1221-1246, July.
    32. Kasy, Maximilian, 2016. "Why Experimenters Might Not Always Want to Randomize, and What They Could Do Instead," Political Analysis, Cambridge University Press, vol. 24(03), pages 324-338, June.
    33. Paul J. Gertler & Sebastian Martinez & Patrick Premand & Laura B. Rawlings & Christel M. J. Vermeersch, 2016. "Impact Evaluation in Practice, Second Edition," World Bank Publications, The World Bank, number 25030.
    34. Glenn W. Harrison, 2014. "Impact Evaluation and Welfare Evaluation," The European Journal of Development Research, Palgrave Macmillan; European Association of Development Research and Training Institutes (EADI), vol. 26(1), pages 39-45, January.
    36. Joshua D. Angrist, 2004. "Treatment effect heterogeneity in theory and practice," Economic Journal, Royal Economic Society, vol. 114(494), pages 52-83, March.
    37. Michael Kremer & Alaka Holla, 2009. "Improving Education in the Developing World: What Have We Learned from Randomized Evaluations?," Annual Review of Economics, Annual Reviews, vol. 1(1), pages 513-545, May.
    38. James Heckman & Neil Hohmann & Jeffrey Smith & Michael Khoo, 2000. "Substitution and Dropout Bias in Social Experiments: A Study of an Influential Social Experiment," The Quarterly Journal of Economics, Oxford University Press, vol. 115(2), pages 651-694.
    39. Hsieh, Chang-Tai & Urquiola, Miguel, 2006. "The effects of generalized school choice on achievement and stratification: Evidence from Chile's voucher program," Journal of Public Economics, Elsevier, vol. 90(8-9), pages 1477-1503, September.
    40. Kenneth I. Wolpin & Petra E. Todd, 2006. "Assessing the Impact of a School Subsidy Program in Mexico: Using a Social Experiment to Validate a Dynamic Behavioral Model of Child Schooling and Fertility," American Economic Review, American Economic Association, vol. 96(5), pages 1384-1417, December.
    41. Abhijit Banerjee & Dean Karlan & Jonathan Zinman, 2015. "Six Randomized Evaluations of Microcredit: Introduction and Further Steps," American Economic Journal: Applied Economics, American Economic Association, vol. 7(1), pages 1-21, January.
    42. Wolpin, Kenneth I., 2013. "The Limits of Inference without Theory," MIT Press Books, The MIT Press, edition 1, volume 1, number 0262019086, January.
    43. Conlisk, John, 1973. "Choice of Response Functional Form in Designing Subsidy Experiments," Econometrica, Econometric Society, vol. 41(4), pages 643-656, July.
    44. James J. Heckman & Edward J. Vytlacil, 2007. "Econometric Evaluation of Social Programs, Part I: Causal Models, Structural Models and Econometric Policy Evaluation," Handbook of Econometrics, in: J.J. Heckman & E.E. Leamer (ed.), Handbook of Econometrics, edition 1, volume 6, chapter 70, Elsevier.
    45. Heckman, James J. & Lalonde, Robert J. & Smith, Jeffrey A., 1999. "The economics and econometrics of active labor market programs," Handbook of Labor Economics, in: O. Ashenfelter & D. Card (ed.), Handbook of Labor Economics, edition 1, volume 3, chapter 31, pages 1865-2097, Elsevier.
    46. Guido W. Imbens, 2004. "Nonparametric Estimation of Average Treatment Effects Under Exogeneity: A Review," The Review of Economics and Statistics, MIT Press, vol. 86(1), pages 4-29, February.
    47. Metcalf, Charles E, 1973. "Making Inferences from Controlled Income Maintenance Experiments," American Economic Review, American Economic Association, vol. 63(3), pages 478-483, June.
    48. Glenn W. Harrison, 2014. "Cautionary notes on the use of field experiments to address policy issues," Oxford Review of Economic Policy, Oxford University Press, vol. 30(4), pages 753-763.
    49. Ziliak, Stephen T., 2014. "Balanced versus Randomized Field Experiments in Economics: Why W. S. Gosset aka "Student" Matters," Review of Behavioral Economics, now publishers, vol. 1(1-2), pages 167-208, January.
    50. Donald B. Rubin, 2005. "Causal Inference Using Potential Outcomes: Design, Modeling, Decisions," Journal of the American Statistical Association, American Statistical Association, vol. 100, pages 322-331, March.
    51. Abhijit Banerjee & Sylvain Chassang & Sergio Montero & Erik Snowberg, 2017. "A Theory of Experimenters," NBER Working Papers 23867, National Bureau of Economic Research, Inc.
    52. Abhijit Banerjee & Sylvain Chassang & Erik Snowberg, 2016. "Decision Theoretic Approaches to Experiment Design and External Validity," NBER Working Papers 22167, National Bureau of Economic Research, Inc.
    53. Deaton, Angus & Muellbauer, John, 1980. "Economics and Consumer Behavior," Cambridge Books, Cambridge University Press, number 9780521296762, April.
    54. Elizabeth A. Stuart & Stephen R. Cole & Catherine P. Bradshaw & Philip J. Leaf, 2011. "The use of propensity scores to assess the generalizability of results from randomized trials," Journal of the Royal Statistical Society Series A, Royal Statistical Society, vol. 174(2), pages 369-386, April.
    55. Joseph Hotz, V. & Imbens, Guido W. & Mortimer, Julie H., 2005. "Predicting the efficacy of future training programs using past experiences at other locations," Journal of Econometrics, Elsevier, vol. 125(1-2), pages 241-270.
    56. repec:adr:anecst:y:2008:i:91-92 is not listed on IDEAS

    More about this item

    JEL classification:

    • C53 - Mathematical and Quantitative Methods - - Econometric Modeling - - - Forecasting and Prediction Models; Simulation Methods

