
Understanding and misunderstanding randomized controlled trials

Authors

  • Angus Deaton (Princeton University)
  • Nancy Cartwright (Durham University and University of California San Diego)

Abstract

RCTs are valuable tools whose use is spreading in economics and in other social sciences. They are seen as desirable aids in scientific discovery and for generating evidence for policy. Yet some of the enthusiasm for RCTs appears to be based on misunderstandings: that randomization provides a fair test by equalizing everything but the treatment and so allows a precise estimate of the treatment alone; that randomization is required to solve selection problems; that lack of blinding does little to compromise inference; and that statistical inference in RCTs is straightforward, because it requires only the comparison of two means. None of these statements is true. RCTs do indeed require minimal assumptions and can operate with little prior knowledge, an advantage when persuading distrustful audiences, but a crucial disadvantage for cumulative scientific progress, where randomization adds noise and undermines precision. The lack of connection between RCTs and other scientific knowledge makes it hard to use them outside of the exact context in which they are conducted. Yet, once they are seen as part of a cumulative program, they can play a role in building general knowledge and useful predictions, provided they are combined with other methods, including conceptual and theoretical development, to discover not "what works," but why things work. Unless we are prepared to make assumptions, and to stand on what we know, making statements that will be incredible to some, all the credibility of RCTs is for naught.
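
The paper's central statistical point, that randomization balances covariates only in expectation across repetitions and not within any single trial, is easy to see in a small simulation. The sketch below is illustrative only and is not from the paper; the sample sizes, covariate distribution, and variable names are all invented for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50           # subjects per arm (small, as in many field experiments)
    n_trials = 2000  # hypothetical re-randomizations of the same study

    gaps = []
    for _ in range(n_trials):
        # A baseline covariate that also drives the outcome (e.g., prior income).
        x = rng.normal(0.0, 1.0, size=2 * n)
        # Pure random assignment: half treated, half control.
        idx = rng.permutation(2 * n)
        treated, control = idx[:n], idx[n:]
        # Record the realized covariate imbalance in this one trial.
        gaps.append(x[treated].mean() - x[control].mean())

    gaps = np.array(gaps)
    print(f"mean gap across trials: {gaps.mean():+.3f}")  # near 0: unbiased on average
    print(f"typical gap in one trial: {gaps.std():.3f}")  # about 0.2 sd: not 'equalized'

The mean imbalance across many re-randomizations is close to zero, but in any one experiment the realized imbalance is substantial and enters the error of the estimated treatment effect. This is the sense in which, as the abstract puts it, randomization "adds noise and undermines precision" in a single trial.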

Suggested Citation

  • Angus Deaton & Nancy Cartwright, 2017. "Understanding and misunderstanding randomized controlled trials," Working Papers 2017-10, Princeton University, Woodrow Wilson School of Public and International Affairs, Center for Health and Wellbeing.
  • Handle: RePEc:pri:cheawb:2017-10

    Download full text from publisher

    File URL: https://drive.google.com/file/d/0BwjFN4HbBrDBZVZqbGltR1ZISEk/view
    Download Restriction: no


    References listed on IDEAS

    1. Guido W. Imbens & Michal Kolesár, 2016. "Robust Standard Errors in Small Samples: Some Practical Advice," The Review of Economics and Statistics, MIT Press, vol. 98(4), pages 701-712, October.
    2. Attanasio, Orazio & Cattan, Sarah & Fitzsimons, Emla & Meghir, Costas & Rubio-Codina, Marta, 2015. "Estimating the Production Function for Human Capital: Results from a Randomized Control Trial in Colombia," IZA Discussion Papers 8856, Institute of Labor Economics (IZA).
    3. Robert A. Moffitt, 1979. "The Labor Supply Response in the Gary Experiment," Journal of Human Resources, University of Wisconsin Press, vol. 14(4), pages 477-487.
    4. Orazio Attanasio & Sarah Cattan & Emla Fitzsimons & Costas Meghir & Marta Rubio-Codina, 2020. "Estimating the Production Function for Human Capital: Results from a Randomized Controlled Trial in Colombia," American Economic Review, American Economic Association, vol. 110(1), pages 48-85, January.
    5. Joshua D. Angrist, 2004. "Treatment effect heterogeneity in theory and practice," Economic Journal, Royal Economic Society, vol. 114(494), pages 52-83, March.
    6. Wolpin, Kenneth I., 2013. "The Limits of Inference without Theory," MIT Press Books, The MIT Press, edition 1, volume 1, number 0262019086, December.
    7. Joshua Angrist & Eric Bettinger & Erik Bloom & Elizabeth King & Michael Kremer, 2002. "Vouchers for Private Schooling in Colombia: Evidence from a Randomized Natural Experiment," American Economic Review, American Economic Association, vol. 92(5), pages 1535-1558, December.
    8. Petra E. Todd & Kenneth I. Wolpin, 2008. "Ex Ante Evaluation of Social Programs," Annals of Economics and Statistics, GENES, issue 91-92, pages 263-291.
    9. James Heckman & Rodrigo Pinto & Peter Savelyev, 2013. "Understanding the Mechanisms through Which an Influential Early Childhood Program Boosted Adult Outcomes," American Economic Review, American Economic Association, vol. 103(6), pages 2052-2086, October.
    10. Glenn W. Harrison, 2013. "Field experiments and methodological intolerance," Journal of Economic Methodology, Taylor & Francis Journals, vol. 20(2), pages 103-117, June.
    11. Cartwright, Nancy, 1994. "Nature's Capacities and Their Measurement," OUP Catalogue, Oxford University Press, number 9780198235071.
    12. Orazio P. Attanasio & Costas Meghir & Ana Santiago, 2012. "Education Choices in Mexico: Using a Structural Model and a Randomized Experiment to Evaluate PROGRESA," The Review of Economic Studies, Review of Economic Studies Ltd, vol. 79(1), pages 37-66.
    13. Dennis Aigner, 1985. "The Residential Electricity Time-of-Use Pricing Experiments: What Have We Learned?," NBER Chapters, in: Social Experimentation, pages 11-54, National Bureau of Economic Research, Inc.
    14. Aviva Aron-Dine & Liran Einav & Amy Finkelstein, 2013. "The RAND Health Insurance Experiment, Three Decades Later," Journal of Economic Perspectives, American Economic Association, vol. 27(1), pages 197-222, Winter.
    15. Marianne P. Bitler & Jonah B. Gelbach & Hilary W. Hoynes, 2006. "What Mean Impacts Miss: Distributional Effects of Welfare Reform Experiments," American Economic Review, American Economic Association, vol. 96(4), pages 988-1012, September.
    16. Abhijit Banerjee & Dean Karlan & Jonathan Zinman, 2015. "Six Randomized Evaluations of Microcredit: Introduction and Further Steps," American Economic Journal: Applied Economics, American Economic Association, vol. 7(1), pages 1-21, January.
    17. Conlisk, John, 1973. "Choice of Response Functional Form in Designing Subsidy Experiments," Econometrica, Econometric Society, vol. 41(4), pages 643-656, July.
    18. Glenn W. Harrison, 2014. "Cautionary notes on the use of field experiments to address policy issues," Oxford Review of Economic Policy, Oxford University Press and Oxford Review of Economic Policy Limited, vol. 30(4), pages 753-763.
    19. Kasy, Maximilian, 2016. "Why Experimenters Might Not Always Want to Randomize, and What They Could Do Instead," Political Analysis, Cambridge University Press, vol. 24(3), pages 324-338, July.
    20. Abhijit Banerjee & Sylvain Chassang & Erik Snowberg, 2016. "Decision Theoretic Approaches to Experiment Design and External Validity," NBER Working Papers 22167, National Bureau of Economic Research, Inc.
    21. LaLonde, Robert J, 1986. "Evaluating the Econometric Evaluations of Training Programs with Experimental Data," American Economic Review, American Economic Association, vol. 76(4), pages 604-620, September.
    22. James J. Heckman & Hidehiko Ichimura & Petra E. Todd, 1997. "Matching As An Econometric Evaluation Estimator: Evidence from Evaluating a Job Training Programme," The Review of Economic Studies, Review of Economic Studies Ltd, vol. 64(4), pages 605-654.
    23. Esther Duflo & Rema Hanna & Stephen P. Ryan, 2012. "Incentives Work: Getting Teachers to Come to School," American Economic Review, American Economic Association, vol. 102(4), pages 1241-1278, June.
    24. Eva Vivalt, 2020. "How Much Can We Generalize From Impact Evaluations?," Journal of the European Economic Association, European Economic Association, vol. 18(6), pages 3045-3089.
    25. Michael Kremer & Alaka Holla, 2009. "Improving Education in the Developing World: What Have We Learned from Randomized Evaluations?," Annual Review of Economics, Annual Reviews, vol. 1(1), pages 513-545, May.
    26. James Heckman & Neil Hohmann & Jeffrey Smith & Michael Khoo, 2000. "Substitution and Dropout Bias in Social Experiments: A Study of an Influential Social Experiment," The Quarterly Journal of Economics, President and Fellows of Harvard College, vol. 115(2), pages 651-694.
    27. Kenneth I. Wolpin & Petra E. Todd, 2006. "Assessing the Impact of a School Subsidy Program in Mexico: Using a Social Experiment to Validate a Dynamic Behavioral Model of Child Schooling and Fertility," American Economic Review, American Economic Association, vol. 96(5), pages 1384-1417, December.
    28. Charles F. Manski, 2004. "Statistical Treatment Rules for Heterogeneous Populations," Econometrica, Econometric Society, vol. 72(4), pages 1221-1246, July.
    29. James J. Heckman & Edward J. Vytlacil, 2007. "Econometric Evaluation of Social Programs, Part I: Causal Models, Structural Models and Econometric Policy Evaluation," Handbook of Econometrics, in: J.J. Heckman & E.E. Leamer (ed.), Handbook of Econometrics, edition 1, volume 6, chapter 70, Elsevier.
    30. Heckman, James J. & Lalonde, Robert J. & Smith, Jeffrey A., 1999. "The economics and econometrics of active labor market programs," Handbook of Labor Economics, in: O. Ashenfelter & D. Card (ed.), Handbook of Labor Economics, edition 1, volume 3, chapter 31, pages 1865-2097, Elsevier.
    31. Guido W. Imbens & Jeffrey M. Wooldridge, 2009. "Recent Developments in the Econometrics of Program Evaluation," Journal of Economic Literature, American Economic Association, vol. 47(1), pages 5-86, March.
    32. Donald B. Rubin, 2005. "Causal Inference Using Potential Outcomes: Design, Modeling, Decisions," Journal of the American Statistical Association, American Statistical Association, vol. 100, pages 322-331, March.
    33. Abhijit V. Banerjee & Esther Duflo, 2009. "The Experimental Approach to Development Economics," Annual Review of Economics, Annual Reviews, vol. 1(1), pages 151-178, May.
    34. Elizabeth A. Stuart & Stephen R. Cole & Catherine P. Bradshaw & Philip J. Leaf, 2011. "The use of propensity scores to assess the generalizability of results from randomized trials," Journal of the Royal Statistical Society Series A, Royal Statistical Society, vol. 174(2), pages 369-386, April.
    35. Hotz, V. Joseph & Imbens, Guido W. & Mortimer, Julie H., 2005. "Predicting the efficacy of future training programs using past experiences at other locations," Journal of Econometrics, Elsevier, vol. 125(1-2), pages 241-270.
    36. Bauchet, Jonathan & Morduch, Jonathan & Ravi, Shamika, 2015. "Failure vs. displacement: Why an innovative anti-poverty program showed no net impact in South India," Journal of Development Economics, Elsevier, vol. 116(C), pages 1-16.
    37. Manski, Charles F., 2013. "Public Policy in an Uncertain World: Analysis and Decisions," Economics Books, Harvard University Press, number 9780674066892, Spring.
    39. Bhattacharya, Debopam & Dupas, Pascaline, 2012. "Inferring welfare maximizing treatment assignment under budget constraints," Journal of Econometrics, Elsevier, vol. 167(1), pages 168-196.
    40. Guido W. Imbens, 2004. "Nonparametric Estimation of Average Treatment Effects Under Exogeneity: A Review," The Review of Economics and Statistics, MIT Press, vol. 86(1), pages 4-29, February.
    41. Ziliak, Stephen T., 2014. "Balanced versus Randomized Field Experiments in Economics: Why W. S. Gosset aka 'Student' Matters," Review of Behavioral Economics, now publishers, vol. 1(1-2), pages 167-208, January.
    42. Abhijit Banerjee & Sylvain Chassang & Sergio Montero & Erik Snowberg, 2017. "A Theory of Experimenters," CESifo Working Paper Series 6678, CESifo.
    43. Charles F. Manski, 2013. "Response to the Review of ‘Public Policy in an Uncertain World’," Economic Journal, Royal Economic Society, pages 412-415, August.
    44. Angus Deaton, 2010. "Instruments, Randomization, and Learning about Development," Journal of Economic Literature, American Economic Association, vol. 48(2), pages 424-455, June.
    45. Paul J. Gertler & Sebastian Martinez & Patrick Premand & Laura B. Rawlings & Christel M. J. Vermeersch, 2016. "Impact Evaluation in Practice, Second Edition," World Bank Publications - Books, The World Bank Group, number 25030, December.
    46. Deaton, Angus & Muellbauer, John, 1980. "Economics and Consumer Behavior," Cambridge Books, Cambridge University Press, number 9780521296762.
    47. David Greenberg & Mark Shroder & Matthew Onstott, 1999. "The Social Experiment Market," Journal of Economic Perspectives, American Economic Association, vol. 13(3), pages 157-172, Summer.
    48. Seán M. Muller, 2015. "Causal Interaction and External Validity: Obstacles to the Policy Relevance of Randomized Evaluations," The World Bank Economic Review, World Bank, vol. 29(suppl_1), pages 217-225.
    49. Metcalf, Charles E., 1973. "Making Inferences from Controlled Income Maintenance Experiments," American Economic Review, American Economic Association, vol. 63(3), pages 478-483, June.
    50. Manning, Willard G., et al., 1987. "Health Insurance and the Demand for Medical Care: Evidence from a Randomized Experiment," American Economic Review, American Economic Association, vol. 77(3), pages 251-277, June.
    51. Thomas D. Cook, 2014. "Generalizing Causal Knowledge In The Policy Sciences: External Validity As A Task Of Both Multiattribute Representation And Multiattribute Extrapolation," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 33(2), pages 527-536, March.
    52. Glenn W. Harrison, 2014. "Impact Evaluation and Welfare Evaluation," The European Journal of Development Research, Palgrave Macmillan; European Association of Development Research and Training Institutes (EADI), vol. 26(1), pages 39-45, January.
    53. Hsieh, Chang-Tai & Urquiola, Miguel, 2006. "The effects of generalized school choice on achievement and stratification: Evidence from Chile's voucher program," Journal of Public Economics, Elsevier, vol. 90(8-9), pages 1477-1503, September.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Jeffrey Smith & Arthur Sweetman, 2016. "Viewpoint: Estimating the causal effects of policies and programs," Canadian Journal of Economics, Canadian Economics Association, vol. 49(3), pages 871-905, August.
    2. Susan Athey & Guido W. Imbens, 2017. "The State of Applied Econometrics: Causality and Policy Evaluation," Journal of Economic Perspectives, American Economic Association, vol. 31(2), pages 3-32, Spring.
    3. Guido W. Imbens & Jeffrey M. Wooldridge, 2009. "Recent Developments in the Econometrics of Program Evaluation," Journal of Economic Literature, American Economic Association, vol. 47(1), pages 5-86, March.
    4. Susan Athey & Guido Imbens, 2016. "The Econometrics of Randomized Experiments," Papers 1607.00698, arXiv.org.
    5. Karthik Muralidharan & Paul Niehaus, 2017. "Experimentation at Scale," Journal of Economic Perspectives, American Economic Association, vol. 31(4), pages 103-124, Fall.
    6. Burt S. Barnow & Jeffrey Smith, 2015. "Employment and Training Programs," NBER Chapters, in: Economics of Means-Tested Transfer Programs in the United States, Volume 2, pages 127-234, National Bureau of Economic Research, Inc.
    7. James J. Heckman, 1991. "Randomization and Social Policy Evaluation Revisited," NBER Technical Working Papers 0107, National Bureau of Economic Research, Inc.
    8. Denis Fougère & Nicolas Jacquemet, 2020. "Policy Evaluation Using Causal Inference Methods," SciencePo Working papers Main hal-03455978, HAL.
    9. Black, Dan A. & Joo, Joonhwi & LaLonde, Robert & Smith, Jeffrey A. & Taylor, Evan J., 2022. "Simple Tests for Selection: Learning More from Instrumental Variables," Labour Economics, Elsevier, vol. 79(C).
    10. Justman, Moshe, 2018. "Randomized controlled trials informing public policy: Lessons from project STAR and class size reduction," European Journal of Political Economy, Elsevier, vol. 54(C), pages 167-174.
    11. Moshe Justman, 2016. "Economic Research and Education Policy: Project STAR and Class Size Reduction," Melbourne Institute Working Paper Series wp2016n37, Melbourne Institute of Applied Economic and Social Research, The University of Melbourne.
    12. Hunt Allcott, 2012. "Site Selection Bias in Program Evaluation," NBER Working Papers 18373, National Bureau of Economic Research, Inc.
    13. Sebastian Galiani & Juan Pantano, 2021. "Structural Models: Inception and Frontier," NBER Working Papers 28698, National Bureau of Economic Research, Inc.
    15. Maibom, Jonas, 2021. "The Danish Labor Market Experiments: Methods and Findings," Nationaløkonomisk tidsskrift, Nationaløkonomisk Forening, vol. 2021(1), pages 1-21.
    16. Jörg Peters & Jörg Langbein & Gareth Roberts, 2018. "Generalization in the Tropics – Development Policy, Randomized Controlled Trials, and External Validity," The World Bank Research Observer, World Bank, vol. 33(1), pages 34-64.
    17. Andrew M. Jones & Nigel Rice, 2009. "Econometric Evaluation of Health Policies," Health, Econometrics and Data Group (HEDG) Working Papers 09/09, HEDG, c/o Department of Economics, University of York.
    18. Grossman, Guy & Humphreys, Macartan & Sacramone-Lutz, Gabriella, 2020. "Information Technology and Political Engagement: Mixed Evidence from Uganda," EconStor Open Access Articles and Book Chapters, ZBW - Leibniz Information Centre for Economics, vol. 82(4), pages 1321-1336.
    19. Carlos A. Flores & Oscar A. Mitnik, 2009. "Evaluating Nonexperimental Estimators for Multiple Treatments: Evidence from Experimental Data," Working Papers 2010-10, University of Miami, Department of Economics.
    20. Bo, Hao & Galiani, Sebastian, 2021. "Assessing external validity," Research in Economics, Elsevier, vol. 75(3), pages 274-285.

    More about this item

    JEL classification:

    • C53 - Mathematical and Quantitative Methods - - Econometric Modeling - - - Forecasting and Prediction Models; Simulation Methods


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:pri:cheawb:2017-10. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Bobray Bordelon (email available below). General contact details of provider: https://edirc.repec.org/data/chprius.html .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.