Printed from https://ideas.repec.org/p/arx/papers/1603.03675.html

Optimal Data Collection for Randomized Control Trials

Authors
  • Pedro Carneiro
  • Sokbae Lee
  • Daniel Wilhelm

Abstract

In a randomized control trial, the precision of an average treatment effect estimator can be improved either by collecting data on additional individuals, or by collecting additional covariates that predict the outcome variable. We propose the use of pre-experimental data, such as a census or a household survey, to inform the choice of both the sample size and the covariates to be collected. Our procedure seeks to minimize the resulting average treatment effect estimator's mean squared error, subject to the researcher's budget constraint. We rely on a modification of an orthogonal greedy algorithm that is conceptually simple, easy to implement in the presence of a large number of potential covariates, and does not require any tuning parameters. In two empirical applications, we show that our procedure can lead to substantial gains of up to 58%, measured either in terms of reductions in data collection costs or in terms of improvements in the precision of the treatment effect estimator.
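The trade-off the abstract describes can be sketched in code. The following is a minimal, hypothetical illustration of an orthogonal greedy selection under a budget constraint, not the authors' actual procedure (which uses pre-experimental data and a modified orthogonal greedy algorithm with formal MSE guarantees). All function and variable names here are assumptions for illustration: each covariate has a per-unit collection cost, the residual variance of an OLS fit proxies for the estimator's variance, and the affordable sample size shrinks as more covariates are purchased.

```python
import numpy as np

def greedy_covariate_selection(X, y, cov_costs, unit_cost, budget):
    """Illustrative orthogonal greedy selection under a budget (hypothetical sketch).

    At each step, refit OLS on the candidate covariate set (the 'orthogonal'
    part: coefficients are re-estimated jointly, not one at a time) and add
    the covariate that most reduces an MSE proxy:
        residual variance of the fit / sample size affordable under the budget.
    Stop when no addition improves the proxy.
    """
    n, p = X.shape

    def mse_proxy(sel):
        # Budget left after buying covariates determines how many units we can sample.
        n_units = int((budget - sum(cov_costs[j] for j in sel)) // unit_cost)
        if n_units <= 0:
            return np.inf
        # OLS fit on intercept + selected covariates, using pre-experimental data.
        Z = np.column_stack([np.ones(n)] + [X[:, j] for j in sel])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        return (resid @ resid) / n / n_units

    selected = []
    best = mse_proxy(selected)
    while len(selected) < p:
        candidates = [j for j in range(p) if j not in selected]
        scores = {j: mse_proxy(selected + [j]) for j in candidates}
        j_star = min(scores, key=scores.get)
        if scores[j_star] >= best:
            break  # buying another covariate no longer beats buying sample size
        selected.append(j_star)
        best = scores[j_star]

    n_units = int((budget - sum(cov_costs[j] for j in selected)) // unit_cost)
    return selected, n_units, best
```

On synthetic data where only the first covariate predicts the outcome, the sketch buys that covariate and spends the remaining budget on sample size; uninformative covariates are skipped because their small in-sample fit gains do not offset the lost observations.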

Suggested Citation

  • Pedro Carneiro & Sokbae Lee & Daniel Wilhelm, 2016. "Optimal Data Collection for Randomized Control Trials," Papers 1603.03675, arXiv.org, revised Aug 2016.
  • Handle: RePEc:arx:papers:1603.03675

    Download full text from publisher

    File URL: http://arxiv.org/pdf/1603.03675
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Sylvie Moulin & Michael Kremer & Paul Glewwe, 2009. "Many Children Left Behind? Textbooks and Test Scores in Kenya," American Economic Journal: Applied Economics, American Economic Association, vol. 1(1), pages 112-135, January.
    2. Jinyong Hahn & Keisuke Hirano & Dean Karlan, 2011. "Adaptive Experimental Design Using the Propensity Score," Journal of Business & Economic Statistics, Taylor & Francis Journals, vol. 29(1), pages 96-108, January.
    3. Miriam Bruhn & David McKenzie, 2009. "In Pursuit of Balance: Randomization in Practice in Development Field Experiments," American Economic Journal: Applied Economics, American Economic Association, vol. 1(4), pages 200-232, October.
    4. Abhijit Banerjee & Sylvain Chassang & Sergio Montero & Erik Snowberg, 2017. "A Theory of Experimenters," NBER Working Papers 23867, National Bureau of Economic Research, Inc.
    5. Bruno Crépon & Florencia Devoto & Esther Duflo & William Parienté, 2015. "Estimating the Impact of Microcredit on Those Who Take It Up: Evidence from a Randomized Experiment in Morocco," American Economic Journal: Applied Economics, American Economic Association, vol. 7(1), pages 123-150, January.
    6. Duflo, Esther & Glennerster, Rachel & Kremer, Michael, 2008. "Using Randomization in Development Economics Research: A Toolkit," Handbook of Development Economics, in: T. Paul Schultz & John A. Strauss (ed.),Handbook of Development Economics, edition 1, volume 4, chapter 61, pages 3895-3962, Elsevier.
    7. Jon Kleinberg & Jens Ludwig & Sendhil Mullainathan & Ziad Obermeyer, 2015. "Prediction Policy Problems," American Economic Review, American Economic Association, vol. 105(5), pages 491-495, May.
    8. McKenzie, David, 2012. "Beyond baseline and follow-up: The case for more T in experiments," Journal of Development Economics, Elsevier, vol. 99(2), pages 210-221.
    9. Alessandro Tarozzi & Jaikishan Desai & Kristin Johnson, 2015. "The Impacts of Microcredit: Evidence from Ethiopia," American Economic Journal: Applied Economics, American Economic Association, vol. 7(1), pages 54-89, January.
    10. Victor Chernozhukov & Denis Chetverikov & Mert Demirer & Esther Duflo & Christian Hansen & Whitney Newey & James Robins, 2018. "Double/debiased machine learning for treatment and structural parameters," Econometrics Journal, Royal Economic Society, vol. 21(1), pages 1-68, February.
    11. Abhijit Banerjee & Dean Karlan & Jonathan Zinman, 2015. "Six Randomized Evaluations of Microcredit: Introduction and Further Steps," American Economic Journal: Applied Economics, American Economic Association, vol. 7(1), pages 1-21, January.
    12. Lucas C. Coffman & Muriel Niederle, 2015. "Pre-analysis Plans Have Limited Upside, Especially Where Replications Are Feasible," Journal of Economic Perspectives, American Economic Association, vol. 29(3), pages 81-98, Summer.
    13. Britta Augsburg & Ralph De Haas & Heike Harmgart & Costas Meghir, 2015. "The Impacts of Microcredit: Evidence from Bosnia and Herzegovina," American Economic Journal: Applied Economics, American Economic Association, vol. 7(1), pages 183-203, January.
    14. RePEc:feb:artefa:0110 (reference not matched to an item listed on IDEAS).
    15. John A. List, 2011. "Why Economists Should Conduct Field Experiments and 14 Tips for Pulling One Off," Journal of Economic Perspectives, American Economic Association, vol. 25(3), pages 3-16, Summer.
    16. Abhijit Banerjee & Esther Duflo & Rachel Glennerster & Cynthia Kinnan, 2015. "The Miracle of Microfinance? Evidence from a Randomized Evaluation," American Economic Journal: Applied Economics, American Economic Association, vol. 7(1), pages 22-53, January.
    17. List, John A. & Rasul, Imran, 2011. "Field Experiments in Labor Economics," Handbook of Labor Economics, in: O. Ashenfelter & D. Card (ed.),Handbook of Labor Economics, edition 1, volume 4, chapter 2, pages 103-228, Elsevier.
    18. Aleksey Tetenov, 2016. "An economic theory of statistical testing," CeMMAP working papers CWP50/16, Centre for Microdata Methods and Practice, Institute for Fiscal Studies.
    19. Pedro Carneiro & Oswald Koussihouèdé & Nathalie Lahire & Costas Meghir & Corina Mommaerts, 2015. "Decentralizing education resources: school grants in Senegal," CeMMAP working papers CWP15/15, Centre for Microdata Methods and Practice, Institute for Fiscal Studies.
    20. Jeff Dominitz & Charles F. Manski, 2017. "More Data or Better Data? A Statistical Decision Problem," Review of Economic Studies, Oxford University Press, vol. 84(4), pages 1583-1605.
    21. Amy Finkelstein & Sarah Taubman & Bill Wright & Mira Bernstein & Jonathan Gruber & Joseph P. Newhouse & Heidi Allen & Katherine Baicker, 2012. "The Oregon Health Insurance Experiment: Evidence from the First Year," The Quarterly Journal of Economics, Oxford University Press, vol. 127(3), pages 1057-1106.
    22. Alexandre Belloni & Victor Chernozhukov & Christian Hansen, 2014. "High-Dimensional Methods and Inference on Structural and Treatment Effects," Journal of Economic Perspectives, American Economic Association, vol. 28(2), pages 29-50, Spring.
    23. John List & Sally Sadoff & Mathis Wagner, 2011. "So you want to run an experiment, now what? Some simple rules of thumb for optimal experimental design," Experimental Economics, Springer;Economic Science Association, vol. 14(4), pages 439-457, November.
    24. Daniel S. Hamermesh, 2013. "Six Decades of Top Economics Publishing: Who and How?," Journal of Economic Literature, American Economic Association, vol. 51(1), pages 162-172, March.
    25. Abhijit V. Banerjee & Esther Duflo, 2009. "The Experimental Approach to Development Economics," Annual Review of Economics, Annual Reviews, vol. 1(1), pages 151-178, May.
    26. Bhattacharya, Debopam & Dupas, Pascaline, 2012. "Inferring welfare maximizing treatment assignment under budget constraints," Journal of Econometrics, Elsevier, vol. 167(1), pages 168-196.
    27. Imbens,Guido W. & Rubin,Donald B., 2015. "Causal Inference for Statistics, Social, and Biomedical Sciences," Cambridge Books, Cambridge University Press, number 9780521885881, April.
    28. Oriana Bandiera & Iwan Barankay & Imran Rasul, 2011. "Field Experiments with Firms," Journal of Economic Perspectives, American Economic Association, vol. 25(3), pages 63-82, Summer.
    29. Manuela Angelucci & Dean Karlan & Jonathan Zinman, 2015. "Microcredit Impacts: Evidence from a Randomized Microcredit Program Placement Experiment by Compartamos Banco," American Economic Journal: Applied Economics, American Economic Association, vol. 7(1), pages 151-182, January.
    30. Edward Miguel & Michael Kremer, 2004. "Worms: Identifying Impacts on Education and Health in the Presence of Treatment Externalities," Econometrica, Econometric Society, vol. 72(1), pages 159-217, January.
    31. Kasy, Maximilian, 2016. "Why Experimenters Might Not Always Want to Randomize, and What They Could Do Instead," Political Analysis, Cambridge University Press, vol. 24(3), pages 324-338, July.
    32. Brendon McConnell & Marcos Vera-Hernandez, 2015. "Going beyond simple sample size calculations: a practitioner's guide," IFS Working Papers W15/17, Institute for Fiscal Studies.
    33. Orazio Attanasio & Britta Augsburg & Ralph De Haas & Emla Fitzsimons & Heike Harmgart, 2015. "The Impacts of Microfinance: Evidence from Joint-Liability Lending in Mongolia," American Economic Journal: Applied Economics, American Economic Association, vol. 7(1), pages 90-122, January.
    34. Benjamin A. Olken, 2015. "Promises and Perils of Pre-analysis Plans," Journal of Economic Perspectives, American Economic Association, vol. 29(3), pages 61-80, Summer.

    Citations

    Citations are extracted by the CitEc Project.

    Cited by:

    1. Pons Rotger, Gabriel & Rosholm, Michael, 2020. "The Role of Beliefs in Long Sickness Absence: Experimental Evidence from a Psychological Intervention," IZA Discussion Papers 13582, Institute of Labor Economics (IZA).
    2. Aufenanger, Tobias, 2018. "Treatment allocation for linear models," FAU Discussion Papers in Economics 14/2017, Friedrich-Alexander University Erlangen-Nuremberg, Institute for Economics.
    3. Eszter Czibor & David Jimenez‐Gomez & John A. List, 2019. "The Dozen Things Experimental Economists Should Do (More of)," Southern Economic Journal, John Wiley & Sons, vol. 86(2), pages 371-432, October.
    4. Max Tabord-Meehan, 2018. "Stratification Trees for Adaptive Randomization in Randomized Controlled Trials," Papers 1806.05127, arXiv.org, revised Jun 2020.

    More about this item

    JEL classification:

    • C55 - Mathematical and Quantitative Methods - - Econometric Modeling - - - Large Data Sets: Modeling and Analysis
    • C81 - Mathematical and Quantitative Methods - - Data Collection and Data Estimation Methodology; Computer Programs - - - Methodology for Collecting, Estimating, and Organizing Microeconomic Data; Data Access



    IDEAS is a RePEc service hosted by the Research Division of the Federal Reserve Bank of St. Louis. RePEc uses bibliographic data supplied by the respective publishers.