
Comparing the Performance of Statistical Adjustment Methods by Recovering the Experimental Benchmark from the REFLUX Trial

Author

Listed:
  • Luke Keele

    (University of Pennsylvania, Philadelphia, PA, USA)

  • Stephen O’Neill

    (Department of Health Services Research and Policy, London School of Hygiene and Tropical Medicine, London, UK)

  • Richard Grieve

    (Department of Health Services Research and Policy, London School of Hygiene and Tropical Medicine, London, UK)

Abstract

Much evidence in comparative effectiveness research is based on observational studies. Researchers who conduct observational studies typically assume that there are no unobservable differences between the treatment groups under comparison, and treatment effectiveness is estimated after adjusting for observed differences between the groups. However, estimates of treatment effectiveness may be biased by misspecification of the statistical model: if the estimation method imposes unduly strong functional form assumptions, treatment effect estimates may be inaccurate, leading to inappropriate recommendations about treatment decisions. We compare the performance of a wide variety of methods for estimating the average treatment effect, within the context of the REFLUX study from the United Kingdom. In REFLUX, participants were enrolled in either a randomized controlled trial (RCT) arm or an observational (patient preference) arm. In the RCT, patients were randomly assigned to either surgery or medical management; in the patient preference arm, participants chose between surgery and medical management. We attempt to recover the RCT treatment effect estimate using data from the patient preference arm of the study, varying the method of treatment effect estimation and recording which methods succeed and which do not. We apply more than 20 different methods, including standard regression models as well as advanced machine learning methods. We find that simple propensity score matching methods provide the least accurate estimates relative to the RCT benchmark. Performance varies across the other methods, with some, but not all, recovering the experimental benchmark. We conclude that future studies should use multiple methods of estimation to fully represent the uncertainty arising from the choice of estimation approach.
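
As an illustration of the kind of adjustment methods being compared, the sketch below contrasts two of them on simulated data: nearest-neighbor matching on an estimated propensity score and a linear regression adjustment. This is a minimal, hypothetical example (the data, variable names, and true effect are invented; it is not the authors' code and does not use the REFLUX data), intended only to show how such estimators differ in the assumptions they impose.

# Illustrative sketch (not the authors' code): estimating the average treatment
# effect (ATE) with two of the adjustment strategies the paper compares,
# using simulated data in place of the REFLUX patient-preference arm.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                       # observed covariates
p = 1 / (1 + np.exp(-(0.5 * X[:, 0] - X[:, 1])))  # true selection probability
t = rng.binomial(1, p)                            # treatment is chosen, not randomized
y = 1.0 * t + X @ np.array([0.3, 0.2, -0.1]) + rng.normal(size=n)  # true effect = 1

# 1. Propensity score matching: match each unit to the nearest unit in the
#    opposite treatment group on the estimated propensity score.
ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
treated, control = np.where(t == 1)[0], np.where(t == 0)[0]

def match(ids, pool):
    # for each unit in ids, find the pool unit with the closest propensity score
    return pool[np.abs(ps[pool][None, :] - ps[ids][:, None]).argmin(axis=1)]

ate_match = np.mean(
    np.concatenate([y[treated] - y[match(treated, control)],
                    y[match(control, treated)] - y[control]]))

# 2. Regression adjustment: linear outcome model with treatment and covariates;
#    the treatment coefficient is the ATE estimate under the linearity assumption.
ate_reg = LinearRegression().fit(np.column_stack([t, X]), y).coef_[0]

print(f"matching ATE ~ {ate_match:.2f}, regression ATE ~ {ate_reg:.2f}")

In this toy setup the regression estimate relies on the outcome model being correctly specified, whereas the matching estimate relies on the propensity score model; the paper's within-study comparison asks which such modeling choices recover the experimental benchmark.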

Suggested Citation

  • Luke Keele & Stephen O’Neill & Richard Grieve, 2021. "Comparing the Performance of Statistical Adjustment Methods by Recovering the Experimental Benchmark from the REFLUX Trial," Medical Decision Making, vol. 41(3), pages 340-353, April.
  • Handle: RePEc:sae:medema:v:41:y:2021:i:3:p:340-353
    DOI: 10.1177/0272989X20986545

    Download full text from publisher

    File URL: https://journals.sagepub.com/doi/10.1177/0272989X20986545
    Download Restriction: no

    File URL: https://libkey.io/10.1177/0272989X20986545?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Rosenbaum, Paul R., 2010. "Design Sensitivity and Efficiency in Observational Studies," Journal of the American Statistical Association, American Statistical Association, vol. 105(490), pages 692-702.
    2. Guido W. Imbens, 2004. "Nonparametric Estimation of Average Treatment Effects Under Exogeneity: A Review," The Review of Economics and Statistics, MIT Press, vol. 86(1), pages 4-29, February.
    3. King, Gary & Nielsen, Richard, 2019. "Why Propensity Scores Should Not Be Used for Matching," Political Analysis, Cambridge University Press, vol. 27(4), pages 435-454, October.
    4. Imbens, Guido W. & Rubin, Donald B., 2015. "Causal Inference for Statistics, Social, and Biomedical Sciences," Cambridge Books, Cambridge University Press, number 9780521885881.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Florian Gunsilius & Yuliang Xu, 2021. "Matching for causal effects via multimarginal unbalanced optimal transport," Papers 2112.04398, arXiv.org, revised Jul 2022.
    2. Guido W. Imbens, 2020. "Potential Outcome and Directed Acyclic Graph Approaches to Causality: Relevance for Empirical Practice in Economics," Journal of Economic Literature, American Economic Association, vol. 58(4), pages 1129-1179, December.
    3. Zhexiao Lin & Peng Ding & Fang Han, 2023. "Estimation Based on Nearest Neighbor Matching: From Density Ratio to Average Treatment Effect," Econometrica, Econometric Society, vol. 91(6), pages 2187-2217, November.
    4. Tenglong Li & Kenneth A. Frank & Mingming Chen, 2024. "A Conceptual Framework for Quantifying the Robustness of a Regression-Based Causal Inference in Observational Study," Mathematics, MDPI, vol. 12(3), pages 1-14, January.
    5. Tenglong Li & Jordan Lawson, 2021. "A generalized bootstrap procedure of the standard error and confidence interval estimation for inverse probability of treatment weighting," Papers 2109.00171, arXiv.org.
    6. Dridi, Ichrak & Boughrara, Adel, 2023. "Flexible inflation targeting and stock market volatility: Evidence from emerging market economies," Economic Modelling, Elsevier, vol. 126(C).
    7. Brian G. Vegetabile & Daniel L. Gillen & Hal S. Stern, 2020. "Optimally balanced Gaussian process propensity scores for estimating treatment effects," Journal of the Royal Statistical Society Series A, Royal Statistical Society, vol. 183(1), pages 355-377, January.
    8. Fukui Hideki, 2023. "Evaluating Different Covariate Balancing Methods: A Monte Carlo Simulation," Statistics, Politics and Policy, De Gruyter, vol. 14(2), pages 205-326, June.
    9. Alberto Caron & Gianluca Baio & Ioanna Manolopoulou, 2022. "Estimating individual treatment effects using non‐parametric regression models: A review," Journal of the Royal Statistical Society Series A, Royal Statistical Society, vol. 185(3), pages 1115-1149, July.
    10. Franz R. Hahn & Werner Hölzl & Claudia Kwapil, 2016. "The Credit Channel and the Role of Monetary Policy Before, During and After the Global Financial Crisis. A Micro Data Approach to the Analysis of Bank-firm Relationships," WIFO Studies, WIFO, number 59233, April.
    11. Tenglong Li & Kenneth A. Frank, 2020. "The probability of a robust inference for internal validity and its applications in regression models," Papers 2005.12784, arXiv.org.
    12. Shu Yang & Yunshu Zhang, 2023. "Multiply robust matching estimators of average and quantile treatment effects," Scandinavian Journal of Statistics, Danish Society for Theoretical Statistics; Finnish Statistical Society; Norwegian Statistical Association; Swedish Statistical Association, vol. 50(1), pages 235-265, March.
    13. Tenglong Li & Ken Frank, 2022. "The probability of a robust inference for internal validity," Sociological Methods & Research, vol. 51(4), pages 1947-1968, November.
    14. Tenglong Li & Kenneth A. Frank, 2019. "On the probability of a causal inference is robust for internal validity," Papers 1906.08726, arXiv.org.
    15. Zhexiao Lin & Peng Ding & Fang Han, 2021. "Estimation based on nearest neighbor matching: from density ratio to average treatment effect," Papers 2112.13506, arXiv.org.
    16. Alexandre Belloni & Victor Chernozhukov & Denis Chetverikov & Christian Hansen & Kengo Kato, 2018. "High-dimensional econometrics and regularized GMM," CeMMAP working papers CWP35/18, Centre for Microdata Methods and Practice, Institute for Fiscal Studies.
    17. Caloffi, Annalisa & Freo, Marzia & Ghinoi, Stefano & Mariani, Marco & Rossi, Federica, 2022. "Assessing the effects of a deliberate policy mix: The case of technology and innovation advisory services and innovation vouchers," Research Policy, Elsevier, vol. 51(6).
    18. Shanike J. Smart & Solomon W. Polachek, 2024. "COVID-19 vaccine and risk-taking," Journal of Risk and Uncertainty, Springer, vol. 68(1), pages 25-49, February.
    19. Zhengyuan Zhou & Susan Athey & Stefan Wager, 2023. "Offline Multi-Action Policy Learning: Generalization and Optimization," Operations Research, INFORMS, vol. 71(1), pages 148-183, January.
    20. Plamen Nikolov & Hongjian Wang & Kevin Acker, 2020. "Wage premium of Communist Party membership: Evidence from China," Pacific Economic Review, Wiley Blackwell, vol. 25(3), pages 309-338, August.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:sae:medema:v:41:y:2021:i:3:p:340-353. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic, or download information, contact: SAGE Publications.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.