
A multiagent reinforcement learning framework for off-policy evaluation in two-sided markets

Author

Listed:
  • Shi, Chengchun
  • Wan, Runzhe
  • Song, Ge
  • Luo, Shikai
  • Zhu, Hongtu
  • Song, Rui

Abstract

Two-sided markets, such as those operated by ride-sharing companies, often involve a group of subjects making sequential decisions across time and/or location. With the rapid development of smartphones and the internet of things, these platforms have substantially transformed the transportation landscape. In this paper we consider large-scale fleet management in ride-sharing companies, where multiple units in different areas receive sequences of products (or treatments) over time. Major technical challenges, such as policy evaluation, arise in these studies because (i) spatial and temporal proximities induce interference between locations and times, and (ii) the large number of locations results in the curse of dimensionality. To address both challenges simultaneously, we introduce a multiagent reinforcement learning (MARL) framework for carrying out policy evaluation in these studies. We propose novel estimators for mean outcomes under different products that are consistent despite the high dimensionality of the state-action space. The proposed estimators perform favorably in simulation experiments. We further illustrate our method using a real dataset obtained from a two-sided marketplace company to evaluate the effects of applying different subsidizing policies. A Python implementation of the proposed method is available in the Supplementary Material and at https://github.com/RunzheStat/CausalMARL.
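
The abstract describes the approach only at a high level. As a rough, illustrative sketch of the mean-field idea that keeps policy evaluation tractable when many interfering regions are involved, the Python snippet below runs a simple linear fitted-Q evaluation in which each region's value depends on its own state and action together with neighborhood averages. All names and data shapes here (mean_field_features, fitted_q_evaluation, binary treatments, a deterministic target policy) are assumptions made for illustration; this is a minimal sketch, not the authors' estimator, whose actual implementation is in the CausalMARL repository linked in the abstract.

    import numpy as np

    def mean_field_features(states, actions, adj):
        """Per-region mean-field features at one time point.
        states: (N, d) region states; actions: (N,) binary actions;
        adj: (N, N) 0/1 adjacency matrix of the spatial graph.
        Returns (N, 2d + 3): own state, neighbor-average state,
        own action, neighbor-average action, intercept."""
        deg = adj.sum(axis=1, keepdims=True).clip(min=1)
        nb_state = adj @ states / deg                 # neighbor-average state
        nb_action = (adj @ actions) / deg.ravel()     # neighbor-average action
        ones = np.ones((states.shape[0], 1))
        return np.hstack([states, nb_state,
                          actions[:, None], nb_action[:, None], ones])

    def fitted_q_evaluation(states, actions, rewards, adj, target_policy,
                            gamma=0.9, n_iter=50, ridge=1e-3):
        """Linear fitted-Q evaluation of a deterministic target policy.
        states: (T, N, d); actions, rewards: (T, N); adj: (N, N).
        target_policy maps an (N, d) state array to an (N,) action array.
        Returns the estimated long-run value averaged over regions."""
        T, N, d = states.shape
        # Behavior-policy transitions t -> t+1, stacked over time and regions.
        X = np.vstack([mean_field_features(states[t], actions[t], adj)
                       for t in range(T - 1)])
        R = rewards[:-1].reshape(-1)
        # Next-state features with the target policy's actions plugged in.
        X_next = np.vstack([mean_field_features(states[t + 1],
                                                target_policy(states[t + 1]), adj)
                            for t in range(T - 1)])
        p = X.shape[1]
        theta = np.zeros(p)
        for _ in range(n_iter):                       # iterative Bellman backups
            y = R + gamma * (X_next @ theta)          # temporal-difference target
            theta = np.linalg.solve(X.T @ X + ridge * np.eye(p), X.T @ y)
        # Evaluate the target policy at the initial states, averaged over regions.
        X0 = mean_field_features(states[0], target_policy(states[0]), adj)
        return float((X0 @ theta).mean())

    # Toy usage with simulated data, purely for illustration.
    rng = np.random.default_rng(0)
    T, N, d = 200, 25, 3
    adj = (rng.random((N, N)) < 0.2).astype(float)
    np.fill_diagonal(adj, 0.0)
    states = rng.normal(size=(T, N, d))
    actions = rng.integers(0, 2, size=(T, N)).astype(float)
    rewards = states[..., 0] + 0.5 * actions + rng.normal(scale=0.1, size=(T, N))
    always_subsidize = lambda s: np.ones(s.shape[0])  # hypothetical target policy
    print(fitted_q_evaluation(states, actions, rewards, adj, always_subsidize))

The sketch only illustrates how pooling each region's own information with neighborhood averages keeps the feature dimension fixed as the number of regions grows; the consistency guarantees claimed in the abstract require the paper's actual estimators and assumptions.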

Suggested Citation

  • Shi, Chengchun & Wan, Runzhe & Song, Ge & Luo, Shikai & Zhu, Hongtu & Song, Rui, 2023. "A multiagent reinforcement learning framework for off-policy evaluation in two-sided markets," LSE Research Online Documents on Economics 117174, London School of Economics and Political Science, LSE Library.
  • Handle: RePEc:ehl:lserod:117174

    Download full text from publisher

    File URL: http://eprints.lse.ac.uk/117174/
    File Function: Open access version.
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Zhen Li & Jie Chen & Eric Laber & Fang Liu & Richard Baumgartner, 2023. "Optimal Treatment Regimes: A Review and Empirical Comparison," International Statistical Review, International Statistical Institute, vol. 91(3), pages 427-463, December.
    2. Shi, Chengchun & Luo, Shikai & Le, Yuan & Zhu, Hongtu & Song, Rui, 2022. "Statistically efficient advantage learning for offline reinforcement learning in infinite horizons," LSE Research Online Documents on Economics 115598, London School of Economics and Political Science, LSE Library.
    3. Gao, Yuhe & Shi, Chengchun & Song, Rui, 2023. "Deep spectral Q-learning with application to mobile health," LSE Research Online Documents on Economics 119445, London School of Economics and Political Science, LSE Library.
    4. Yunan Wu & Lan Wang, 2021. "Resampling‐based confidence intervals for model‐free robust inference on optimal treatment regimes," Biometrics, The International Biometric Society, vol. 77(2), pages 465-476, June.
    5. Davide Viviano & Jelena Bradic, 2019. "Synthetic learner: model-free inference on treatments over time," Papers 1904.01490, arXiv.org, revised Aug 2022.
    6. Xin Qiu & Donglin Zeng & Yuanjia Wang, 2018. "Estimation and evaluation of linear individualized treatment rules to guarantee performance," Biometrics, The International Biometric Society, vol. 74(2), pages 517-528, June.
    7. Ruoqing Zhu & Ying-Qi Zhao & Guanhua Chen & Shuangge Ma & Hongyu Zhao, 2017. "Greedy outcome weighted tree learning of optimal personalized treatment rules," Biometrics, The International Biometric Society, vol. 73(2), pages 391-400, June.
    8. Michael C Knaus & Michael Lechner & Anthony Strittmatter, 2021. "Machine learning estimation of heterogeneous causal effects: Empirical Monte Carlo evidence," The Econometrics Journal, Royal Economic Society, vol. 24(1), pages 134-161.
    9. Michael C. Knaus & Michael Lechner & Anthony Strittmatter, 2022. "Heterogeneous Employment Effects of Job Search Programs: A Machine Learning Approach," Journal of Human Resources, University of Wisconsin Press, vol. 57(2), pages 597-636.
    10. Ashesh Rambachan & Neil Shephard, 2019. "Econometric analysis of potential outcomes time series: instruments, shocks, linearity and the causal response function," Papers 1903.01637, arXiv.org, revised Feb 2020.
    11. Zhou, Yunzhe & Qi, Zhengling & Shi, Chengchun & Li, Lexin, 2023. "Optimizing pessimism in dynamic treatment regimes: a Bayesian learning approach," LSE Research Online Documents on Economics 118233, London School of Economics and Political Science, LSE Library.
    12. Hyung G. Park & Danni Wu & Eva Petkova & Thaddeus Tarpey & R. Todd Ogden, 2023. "Bayesian Index Models for Heterogeneous Treatment Effects on a Binary Outcome," Statistics in Biosciences, Springer;International Chinese Statistical Association, vol. 15(2), pages 397-418, July.
    13. Davide Viviano & Jelena Bradic, 2021. "Dynamic covariate balancing: estimating treatment effects over time with potential local projections," Papers 2103.01280, arXiv.org, revised Jan 2024.
    14. Qian Guan & Eric B. Laber & Brian J. Reich, 2016. "Comment," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 111(515), pages 936-942, July.
    15. Chengchun Shi & Sheng Zhang & Wenbin Lu & Rui Song, 2022. "Statistical inference of the value function for reinforcement learning in infinite‐horizon settings," Journal of the Royal Statistical Society Series B, Royal Statistical Society, vol. 84(3), pages 765-793, July.
    16. Cai, Hengrui & Shi, Chengchun & Song, Rui & Lu, Wenbin, 2023. "Jump interval-learning for individualized decision making with continuous treatments," LSE Research Online Documents on Economics 118231, London School of Economics and Political Science, LSE Library.
17. Kristin A. Linn & Eric B. Laber & Leonard A. Stefanski, 2017. "Interactive Q-Learning for Quantiles," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 112(518), pages 638-649, April.
    18. Rebecca Hager & Anastasios A. Tsiatis & Marie Davidian, 2018. "Optimal two‐stage dynamic treatment regimes from a classification perspective with censored survival data," Biometrics, The International Biometric Society, vol. 74(4), pages 1180-1192, December.
    19. Baojiang Chen & Ao Yuan & Jing Qin, 2022. "Pool adjacent violators algorithm–assisted learning with application on estimating optimal individualized treatment regimes," Biometrics, The International Biometric Society, vol. 78(4), pages 1475-1488, December.
    20. Denis Fougère & Nicolas Jacquemet, 2020. "Policy Evaluation Using Causal Inference Methods," SciencePo Working papers Main hal-03455978, HAL.

    More about this item

    Keywords

    reinforcement learning; policy evaluation; multiagent system; spatiotemporal studies; DMS-2003637; EP/W014971/1;

    JEL classification:

    • C1 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General

