Printed from https://ideas.repec.org/a/taf/jnlasa/v109y2014i505p133-144.html

Clustered Treatment Assignments and Sensitivity to Unmeasured Biases in Observational Studies

Authors

  • Ben B. Hansen
  • Paul R. Rosenbaum
  • Dylan S. Small

Abstract

Clustered treatment assignment occurs when individuals are grouped into clusters prior to treatment and whole clusters, not individuals, are assigned to treatment or control. In randomized trials, clustered assignments may be required because the treatment must be applied to all children in a classroom, or to all patients at a clinic, or to all radio listeners in the same media market. The most common cluster randomized design pairs 2S clusters into S pairs based on similar pretreatment covariates, then picks one cluster in each pair at random for treatment, the other cluster being assigned to control. Typically, group randomization increases sampling variability and so is less efficient and less powerful than randomization at the individual level, but it may be unavoidable when it is impractical to treat just a few people within each cluster. Related issues arise in nonrandomized, observational studies of treatment effects, but in this case one must examine the sensitivity of conclusions to bias from nonrandom selection of clusters for treatment. Although clustered assignment increases sampling variability in observational studies, as it does in randomized experiments, it also tends to decrease sensitivity to unmeasured biases, and as the number of cluster pairs increases the latter effect overtakes the former, dominating it when allowance is made for nontrivial biases in treatment assignment. Intuitively, a given magnitude of departure from random assignment can do more harm if it acts on individual students than if it is restricted to act on whole classes, because in the latter case the bias is unable to pick the strongest individual students for treatment; this is especially true if a serious effort is made to pair clusters that appeared similar prior to treatment. We examine this issue using an asymptotic measure, the design sensitivity, some inequalities that exploit convexity, simulation, and an application concerned with the flooding of villages in Bangladesh.
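To make the sensitivity-analysis idea in the abstract concrete, the following is a minimal illustrative sketch (not the authors' actual method, which uses design sensitivity and convexity inequalities). It implements a simple Rosenbaum-style worst-case sign test for S matched cluster pairs: an unmeasured bias of magnitude gamma ≥ 1 can raise the chance that the treated cluster in a pair shows the larger response to at most gamma/(1 + gamma), so the worst-case p-value is a binomial upper tail. The function name and example numbers are hypothetical.

```python
from math import comb


def sensitivity_pvalue(positive_pairs, total_pairs, gamma):
    """Worst-case one-sided sign-test p-value for matched cluster pairs.

    Under a sensitivity model with bias parameter gamma >= 1, the
    probability that the treated cluster in any pair exhibits the larger
    response is at most p = gamma / (1 + gamma).  The worst-case p-value
    is then the upper tail of a Binomial(total_pairs, p) distribution at
    the observed count of pairs favoring treatment.
    """
    p = gamma / (1.0 + gamma)
    return sum(
        comb(total_pairs, k) * p**k * (1.0 - p) ** (total_pairs - k)
        for k in range(positive_pairs, total_pairs + 1)
    )


# Hypothetical example: 15 of S = 20 cluster pairs favor treatment.
# gamma = 1 recovers the usual randomization p-value; larger gamma
# shows how quickly significance erodes under unmeasured bias.
print(sensitivity_pvalue(15, 20, 1.0))  # ~0.0207
print(sensitivity_pvalue(15, 20, 2.0))  # ~0.2972
```

The abstract's point can be seen by varying `total_pairs` at a fixed fraction of positive pairs: with more cluster pairs, the same gamma does less damage to the worst-case p-value.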

Suggested Citation

  • Ben B. Hansen & Paul R. Rosenbaum & Dylan S. Small, 2014. "Clustered Treatment Assignments and Sensitivity to Unmeasured Biases in Observational Studies," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 109(505), pages 133-144, March.
  • Handle: RePEc:taf:jnlasa:v:109:y:2014:i:505:p:133-144
    DOI: 10.1080/01621459.2013.863157

    Download full text from publisher

    File URL: http://hdl.handle.net/10.1080/01621459.2013.863157
    Download Restriction: Access to full text is restricted to subscribers.

    File URL: https://libkey.io/10.1080/01621459.2013.863157?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Guido W. Imbens, 2003. "Sensitivity to Exogeneity Assumptions in Program Evaluation," American Economic Review, American Economic Association, vol. 93(2), pages 126-132, May.
    2. Hansen, Ben B. & Bowers, Jake, 2009. "Attributing Effects to a Cluster-Randomized Get-Out-the-Vote Campaign," Journal of the American Statistical Association, American Statistical Association, vol. 104(487), pages 873-885.
    3. Small, Dylan S., 2007. "Sensitivity Analysis for Instrumental Variables Regression With Overidentifying Restrictions," Journal of the American Statistical Association, American Statistical Association, vol. 102, pages 1049-1058, September.
    4. John Copas & Shinto Eguchi, 2001. "Local sensitivity approximations for selectivity bias," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 63(4), pages 871-895.
    5. Rosenbaum, Paul R. & Silber, Jeffrey H., 2009. "Amplification of Sensitivity Analysis in Matched Observational Studies," Journal of the American Statistical Association, American Statistical Association, vol. 104(488), pages 1398-1405.
    6. Sue M. Marcus, 1997. "Using Omitted Variable Bias to Assess Uncertainty in the Estimation of an AIDS Education Treatment Effect," Journal of Educational and Behavioral Statistics, vol. 22(2), pages 193-201, June.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Siyu Heng & Hyunseung Kang & Dylan S. Small & Colin B. Fogarty, 2021. "Increasing power for observational studies of aberrant response: An adaptive approach," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 83(3), pages 482-504, July.
    2. Davide Proserpio & Georgios Zervas, 2017. "Online Reputation Management: Estimating the Impact of Management Responses on Consumer Reviews," Marketing Science, INFORMS, vol. 36(5), pages 645-665, September.
    3. Rotar Laura Južnik, 2018. "The Effects of Expenditures for Labour Market Policy on Unemployment Rate," Business Systems Research, Sciendo, vol. 9(1), pages 55-64, March.
    4. Bo Zhang & Dylan S. Small, 2020. "A calibrated sensitivity analysis for matched observational studies with application to the effect of second‐hand smoke exposure on blood lead levels in children," Journal of the Royal Statistical Society Series C, Royal Statistical Society, vol. 69(5), pages 1285-1305, November.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Paul R. Rosenbaum, 2011. "A New u-Statistic with Superior Design Sensitivity in Matched Observational Studies," Biometrics, The International Biometric Society, vol. 67(3), pages 1017-1027, September.
    2. Jesse Y. Hsu & Dylan S. Small, 2013. "Calibrating Sensitivity Analyses to Observed Covariates in Observational Studies," Biometrics, The International Biometric Society, vol. 69(4), pages 803-811, December.
    3. Paul R. Rosenbaum, 2015. "Bahadur Efficiency of Sensitivity Analyses in Observational Studies," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 110(509), pages 205-217, March.
    4. Doko Tchatoka, Firmin Sabro, 2012. "Specification Tests with Weak and Invalid Instruments," MPRA Paper 40185, University Library of Munich, Germany.
    5. Matthew A. Masten & Alexandre Poirier, 2020. "Inference on breakdown frontiers," Quantitative Economics, Econometric Society, vol. 11(1), pages 41-111, January.
    6. Vikström, Johan, 2009. "Cluster sample inference using sensitivity analysis: the case with few groups," Working Paper Series 2009:15, IFAU - Institute for Evaluation of Labour Market and Education Policy.
    7. de Luna, Xavier & Lundin, Mathias, 2009. "Sensitivity analysis of the unconfoundedness assumption in observational studies," Working Paper Series 2009:12, IFAU - Institute for Evaluation of Labour Market and Education Policy.
    8. Richard A. Ashley & Guo Li, 2013. "Re-Examining the Impact of Housing Wealth and Stock Wealth on Household Spending: Does Persistence in Wealth Changes Matter?," Working Papers e07-39, Virginia Polytechnic Institute and State University, Department of Economics.
    9. Paul R. Rosenbaum, 2007. "Sensitivity Analysis for m-Estimates, Tests, and Confidence Intervals in Matched Observational Studies," Biometrics, The International Biometric Society, vol. 63(2), pages 456-464, June.
    10. Xavier de Luna & Mathias Lundin, 2014. "Sensitivity analysis of the unconfoundedness assumption with an application to an evaluation of college choice effects on earnings," Journal of Applied Statistics, Taylor & Francis Journals, vol. 41(8), pages 1767-1784, August.
    11. Paul R. Rosenbaum, 2007. "Confidence Intervals for Uncommon but Dramatic Responses to Treatment," Biometrics, The International Biometric Society, vol. 63(4), pages 1164-1171, December.
    12. Matthew A. Masten & Alexandre Poirier, 2021. "Salvaging Falsified Instrumental Variable Models," Econometrica, Econometric Society, vol. 89(3), pages 1449-1469, May.
    13. Paul R. Rosenbaum, 2013. "Impact of Multiple Matched Controls on Design Sensitivity in Observational Studies," Biometrics, The International Biometric Society, vol. 69(1), pages 118-127, March.
    14. Bo Zhang & Eric J. Tchetgen Tchetgen, 2022. "A semi‐parametric approach to model‐based sensitivity analysis in observational studies," Journal of the Royal Statistical Society Series A, Royal Statistical Society, vol. 185(S2), pages 668-691, December.
    15. Paolo Naticchioni & Silvia Loriga, 2011. "Short and Long Term Evaluations of Public Employment Services in Italy," Applied Economics Quarterly (formerly: Konjunkturpolitik), Duncker & Humblot, Berlin, vol. 57(3), pages 201-229.
    16. Leonardo Becchetti & Pierluigi Conzo & Alessandro Romeo, 2014. "Violence, trust, and trustworthiness: evidence from a Nairobi slum," Oxford Economic Papers, Oxford University Press, vol. 66(1), pages 283-305, January.
    17. Becchetti, Leonardo & Ciciretti, Rocco & Hasan, Iftekhar, 2015. "Corporate social responsibility, stakeholder risk, and idiosyncratic volatility," Journal of Corporate Finance, Elsevier, vol. 35(C), pages 297-309.
    18. Michael A. Clemens & Claudio Montenegro & Lant Pritchett, 2016. "Bounding the Price Equivalent of Migration Barriers," CID Working Papers 316, Center for International Development at Harvard University.
    19. Timothy B. Armstrong & Michal Kolesár, 2021. "Sensitivity analysis using approximate moment condition models," Quantitative Economics, Econometric Society, vol. 12(1), pages 77-108, January.
    20. Adam C. Sales & Ben B. Hansen, 2020. "Limitless Regression Discontinuity," Journal of Educational and Behavioral Statistics, vol. 45(2), pages 143-174, April.

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:taf:jnlasa:v:109:y:2014:i:505:p:133-144. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Longhurst (email available below). General contact details of provider: http://www.tandfonline.com/UASA20 .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.