Printed from https://ideas.repec.org/p/ehl/lserod/118233.html

Optimizing pessimism in dynamic treatment regimes: a Bayesian learning approach

Author

Listed:
  • Zhou, Yunzhe
  • Qi, Zhengling
  • Shi, Chengchun
  • Li, Lexin

Abstract

In this article, we propose a novel pessimism-based Bayesian learning method for optimal dynamic treatment regimes in the offline setting. When the coverage condition does not hold, which is common for offline data, existing solutions can produce sub-optimal policies. The pessimism principle addresses this issue by discouraging the recommendation of actions that are less explored conditional on the state. However, nearly all pessimism-based methods rely on a key hyper-parameter that quantifies the degree of pessimism, and their performance can be highly sensitive to the choice of this parameter. We propose to integrate the pessimism principle with Thompson sampling and Bayesian machine learning to optimize the degree of pessimism. We derive a credible set whose boundary uniformly lower bounds the optimal Q-function, so no additional tuning of the degree of pessimism is required. We develop a general Bayesian learning method that works with a range of models, from Bayesian linear basis models to Bayesian neural networks. We develop a computational algorithm based on variational inference, which is highly efficient and scalable. We establish theoretical guarantees for the proposed method and show empirically that it outperforms existing state-of-the-art solutions through both simulations and a real data example.
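The core idea in the abstract, that acting on a lower credible bound of the Q-function penalizes poorly covered actions without a hand-tuned pessimism parameter, can be illustrated with a small sketch. This is not the authors' algorithm: it is a toy single-stage Bayesian linear model with a conjugate normal prior and known noise, and the data-generating process, function names (`posterior_q_samples`, `pessimistic_action`), and the fixed credible level are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy offline data: one state feature, two actions.
# Action 1 is rarely taken (poor coverage), mimicking the offline setting.
n = 200
states = rng.normal(size=n)
actions = rng.choice([0, 1], size=n, p=[0.9, 0.1])
# True Q: Q(s, 0) = s, Q(s, 1) = 0.5, observed with noise.
rewards = states * (1 - actions) + 0.5 * actions + rng.normal(scale=0.5, size=n)

def posterior_q_samples(s, a, n_draws=500):
    """Draw posterior samples of Q(s, a) from a Bayesian linear model
    fit separately per action (conjugate normal prior, known noise)."""
    mask = actions == a
    X = np.column_stack([np.ones(mask.sum()), states[mask]])
    y = rewards[mask]
    noise_var, prior_var = 0.25, 10.0
    prec = X.T @ X / noise_var + np.eye(2) / prior_var   # posterior precision
    cov = np.linalg.inv(prec)
    mean = cov @ X.T @ y / noise_var                     # posterior mean
    betas = rng.multivariate_normal(mean, cov, size=n_draws)
    return betas @ np.array([1.0, s])

def pessimistic_action(s, alpha=0.05):
    """Recommend the action maximizing a lower credible bound of Q(s, a).
    The bound is automatically wider for under-explored actions, so
    pessimism scales with the data coverage rather than a tuned knob."""
    lower_bounds = [np.quantile(posterior_q_samples(s, a), alpha) for a in (0, 1)]
    return int(np.argmax(lower_bounds))
```

Because action 0 dominates the behavior policy, its posterior is tight and its lower bound sits close to its mean, while the rarely observed action 1 carries a wider posterior and is penalized accordingly; the paper's credible set plays an analogous role but holds uniformly over the Q-function.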

Suggested Citation

  • Zhou, Yunzhe & Qi, Zhengling & Shi, Chengchun & Li, Lexin, 2023. "Optimizing pessimism in dynamic treatment regimes: a Bayesian learning approach," LSE Research Online Documents on Economics 118233, London School of Economics and Political Science, LSE Library.
  • Handle: RePEc:ehl:lserod:118233

    Download full text from publisher

    File URL: http://eprints.lse.ac.uk/118233/
    File Function: Open access version.
    Download Restriction: no
    ---><---

    References listed on IDEAS

    1. Guanhua Chen & Donglin Zeng & Michael R. Kosorok, 2016. "Personalized Dose Finding Using Outcome Weighted Learning," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 111(516), pages 1509-1521, October.
    2. Hongxiang Qiu & Marco Carone & Ekaterina Sadikova & Maria Petukhova & Ronald C. Kessler & Alex Luedtke, 2021. "Optimal Individualized Decision Rules Using Instrumental Variable Methods," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 116(533), pages 174-191, March.
    3. Yixin Wang & David M. Blei, 2019. "Frequentist Consistency of Variational Bayes," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 114(527), pages 1147-1161, July.
    4. Zhengling Qi & Dacheng Liu & Haoda Fu & Yufeng Liu, 2020. "Multi-Armed Angle-Based Direct Learning for Estimating Optimal Individualized Treatment Rules With Various Outcomes," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 115(530), pages 678-691, April.
    5. S. A. Murphy, 2003. "Optimal dynamic treatment regimes," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 65(2), pages 331-355, May.
    6. Baqun Zhang & Anastasios A. Tsiatis & Eric B. Laber & Marie Davidian, 2013. "Robust estimation of optimal dynamic treatment regimes for sequential treatment decisions," Biometrika, Biometrika Trust, vol. 100(3), pages 681-694.
    7. Chengchun Shi & Rui Song & Wenbin Lu & Bo Fu, 2018. "Maximin projection learning for optimal treatment decision with heterogeneous individualized treatment effects," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 80(4), pages 681-702, September.
    8. Hongxiang Qiu & Marco Carone & Ekaterina Sadikova & Maria Petukhova & Ronald C. Kessler & Alex Luedtke, 2021. "Rejoinder: Optimal Individualized Decision Rules Using Instrumental Variable Methods," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 116(533), pages 207-209, March.
    9. Ying-Qi Zhao & Donglin Zeng & Eric B. Laber & Michael R. Kosorok, 2015. "New Statistical Learning Methods for Estimating Optimal Dynamic Treatment Regimes," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 110(510), pages 583-598, June.
    10. Shi, Chengchun & Song, Rui & Lu, Wenbin & Fu, Bo, 2018. "Maximin projection learning for optimal treatment decision with heterogeneous individualized treatment effects," LSE Research Online Documents on Economics 102112, London School of Economics and Political Science, LSE Library.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Shi, Chengchun & Luo, Shikai & Le, Yuan & Zhu, Hongtu & Song, Rui, 2022. "Statistically efficient advantage learning for offline reinforcement learning in infinite horizons," LSE Research Online Documents on Economics 115598, London School of Economics and Political Science, LSE Library.
    2. Chengchun Shi & Sheng Zhang & Wenbin Lu & Rui Song, 2022. "Statistical inference of the value function for reinforcement learning in infinite‐horizon settings," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 84(3), pages 765-793, July.
    3. Shi, Chengchun & Wan, Runzhe & Song, Ge & Luo, Shikai & Zhu, Hongtu & Song, Rui, 2023. "A multiagent reinforcement learning framework for off-policy evaluation in two-sided markets," LSE Research Online Documents on Economics 117174, London School of Economics and Political Science, LSE Library.
    4. Gao, Yuhe & Shi, Chengchun & Song, Rui, 2023. "Deep spectral Q-learning with application to mobile health," LSE Research Online Documents on Economics 119445, London School of Economics and Political Science, LSE Library.
    5. Shi, Chengchun & Zhang, Shengxing & Lu, Wenbin & Song, Rui, 2022. "Statistical inference of the value function for reinforcement learning in infinite-horizon settings," LSE Research Online Documents on Economics 110882, London School of Economics and Political Science, LSE Library.
    6. Zhen Li & Jie Chen & Eric Laber & Fang Liu & Richard Baumgartner, 2023. "Optimal Treatment Regimes: A Review and Empirical Comparison," International Statistical Review, International Statistical Institute, vol. 91(3), pages 427-463, December.
    7. Q. Clairon & R. Henderson & N. J. Young & E. D. Wilson & C. J. Taylor, 2021. "Adaptive treatment and robust control," Biometrics, The International Biometric Society, vol. 77(1), pages 223-236, March.
    8. Jingxiang Chen & Yufeng Liu & Donglin Zeng & Rui Song & Yingqi Zhao & Michael R. Kosorok, 2016. "Comment," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 111(515), pages 942-947, July.
    9. Xin Qiu & Donglin Zeng & Yuanjia Wang, 2018. "Estimation and evaluation of linear individualized treatment rules to guarantee performance," Biometrics, The International Biometric Society, vol. 74(2), pages 517-528, June.
    10. Baqun Zhang & Min Zhang, 2018. "C‐learning: A new classification framework to estimate optimal dynamic treatment regimes," Biometrics, The International Biometric Society, vol. 74(3), pages 891-899, September.
    11. Kristin A. Linn & Eric B. Laber & Leonard A. Stefanski, 2017. "Interactive Q-Learning for Quantiles," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 112(518), pages 638-649, April.
    12. Cui, Yifan & Tchetgen Tchetgen, Eric, 2021. "On a necessary and sufficient identification condition of optimal treatment regimes with an instrumental variable," Statistics & Probability Letters, Elsevier, vol. 178(C).
    13. Rebecca Hager & Anastasios A. Tsiatis & Marie Davidian, 2018. "Optimal two‐stage dynamic treatment regimes from a classification perspective with censored survival data," Biometrics, The International Biometric Society, vol. 74(4), pages 1180-1192, December.
    14. Jin Wang & Donglin Zeng & D. Y. Lin, 2022. "Semiparametric single-index models for optimal treatment regimens with censored outcomes," Lifetime Data Analysis: An International Journal Devoted to Statistical Methods and Applications for Time-to-Event Data, Springer, vol. 28(4), pages 744-763, October.
    15. Michael Lechner, 2023. "Causal Machine Learning and its use for public policy," Swiss Journal of Economics and Statistics, Springer;Swiss Society of Economics and Statistics, vol. 159(1), pages 1-15, December.
    16. Ruoqing Zhu & Ying-Qi Zhao & Guanhua Chen & Shuangge Ma & Hongyu Zhao, 2017. "Greedy outcome weighted tree learning of optimal personalized treatment rules," Biometrics, The International Biometric Society, vol. 73(2), pages 391-400, June.
    17. Susan Athey & Stefan Wager, 2021. "Policy Learning With Observational Data," Econometrica, Econometric Society, vol. 89(1), pages 133-161, January.
    18. Shosei Sakaguchi, 2021. "Estimation of Optimal Dynamic Treatment Assignment Rules under Policy Constraints," Papers 2106.05031, arXiv.org, revised Apr 2024.
    19. Shosei Sakaguchi, 2024. "Robust Learning for Optimal Dynamic Treatment Regimes with Observational Data," Papers 2404.00221, arXiv.org.
    20. Qingxia Chen & Fan Zhang & Ming-Hui Chen & Xiuyu Julie Cong, 2020. "Estimation of treatment effects and model diagnostics with two-way time-varying treatment switching: an application to a head and neck study," Lifetime Data Analysis: An International Journal Devoted to Statistical Methods and Applications for Time-to-Event Data, Springer, vol. 26(4), pages 685-707, October.

    More about this item

    JEL classification:

    • C1 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General

    NEP fields

    This paper has been announced in the following NEP Reports:

    Statistics


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.