Printed from https://ideas.repec.org/p/arx/papers/2505.19058.html

Distributionally Robust Deep Q-Learning

Author

Listed:
  • Chung I Lu
  • Julian Sester
  • Aijia Zhang

Abstract

We propose a novel distributionally robust $Q$-learning algorithm for the non-tabular case, accommodating continuous state spaces, in which the state transition of the underlying Markov decision process is subject to model uncertainty. The uncertainty is taken into account by considering the worst-case transition from a ball around a reference probability measure. To determine the optimal policy under the worst-case state transition, we solve the associated non-linear Bellman equation by dualising and regularising the Bellman operator with the Sinkhorn distance, which is then parameterised with deep neural networks. This approach allows us to modify the Deep Q-Network algorithm to optimise for the worst-case state transition. We illustrate the tractability and effectiveness of our approach through several applications, including a portfolio optimisation task based on S&P 500 data.
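The worst-case Bellman backup described in the abstract can be sketched as follows. Note that this is a simplified illustration, not the authors' method: it replaces the paper's Sinkhorn-ball dual with the closely related KL-ball (Donsker-Varadhan) dual, which admits a one-dimensional dual problem in closed form, and it optimises the dual multiplier by grid search rather than by a neural parameterisation. All names and parameters are illustrative assumptions.

```python
import numpy as np

def robust_bellman_target(reward, next_values, gamma, delta, lams=None):
    """Worst-case Bellman target over a KL ball of radius `delta` around
    the empirical next-state distribution (a simplified stand-in for the
    Sinkhorn-ball dual used in the paper).

    Dual identity used:
        inf_{KL(P||P0) <= delta} E_P[V]
          = sup_{lam > 0} -lam * log E_{P0}[exp(-V/lam)] - lam * delta
    """
    if lams is None:
        lams = np.geomspace(1e-2, 1e2, 50)  # grid over the dual multiplier
    v = np.asarray(next_values, dtype=float)
    best = -np.inf
    for lam in lams:
        # numerically stable log-mean-exp of -V/lam
        x = -v / lam
        m = x.max()
        log_mean_exp = m + np.log(np.mean(np.exp(x - m)))
        best = max(best, -lam * log_mean_exp - lam * delta)
    return reward + gamma * best
```

In a DQN-style loop, `next_values` would be the target network's values at sampled next states, and the robust target above replaces the usual `reward + gamma * mean(next_values)`; as `delta` shrinks to zero the robust target recovers the non-robust one, and larger `delta` pushes it towards the worst sampled next-state value.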

Suggested Citation

  • Chung I Lu & Julian Sester & Aijia Zhang, 2025. "Distributionally Robust Deep Q-Learning," Papers 2505.19058, arXiv.org.
  • Handle: RePEc:arx:papers:2505.19058

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2505.19058
    File Function: Latest version
    Download Restriction: no
    ---><---

    References listed on IDEAS

    1. Ariel Neufeld & Julian Sester & Mario Šikić, 2023. "Markov decision processes under model uncertainty," Mathematical Finance, Wiley Blackwell, vol. 33(3), pages 618-665, July.
    2. Ariel Neufeld & Julian Sester, 2023. "Neural networks can detect model-free static arbitrage strategies," Papers 2306.16422, arXiv.org, revised Aug 2024.
    3. Wolfram Wiesemann & Daniel Kuhn & Berç Rustem, 2013. "Robust Markov Decision Processes," Mathematics of Operations Research, INFORMS, vol. 38(1), pages 153-183, February.
    4. R. Cont, 2001. "Empirical properties of asset returns: stylized facts and statistical issues," Quantitative Finance, Taylor & Francis Journals, vol. 1(2), pages 223-236.
    5. Daniel Bartl & Samuel Drapeau & Ludovic Tangpi, 2020. "Computational aspects of robust optimized certainty equivalents and option pricing," Mathematical Finance, Wiley Blackwell, vol. 30(1), pages 287-309, January.
    6. Garud N. Iyengar, 2005. "Robust Dynamic Programming," Mathematics of Operations Research, INFORMS, vol. 30(2), pages 257-280, May.
    7. Chung I Lu & Julian Sester, 2024. "Generative model for financial time series trained with MMD using a signature kernel," Papers 2407.19848, arXiv.org, revised Dec 2024.
8. Shie Mannor & Ofir Mebel & Huan Xu, 2016. "Robust MDPs with k-Rectangular Uncertainty," Mathematics of Operations Research, INFORMS, vol. 41(4), pages 1484-1509, November.
    9. Nicole Bäuerle & Alexander Glauner, 2022. "Distributionally Robust Markov Decision Processes and Their Connection to Risk Measures," Mathematics of Operations Research, INFORMS, vol. 47(3), pages 1757-1780, August.
    10. Arnab Nilim & Laurent El Ghaoui, 2005. "Robust Control of Markov Decision Processes with Uncertain Transition Matrices," Operations Research, INFORMS, vol. 53(5), pages 780-798, October.
11. Ariel Neufeld & Julian Sester & Mario Šikić, 2022. "Markov Decision Processes under Model Uncertainty," Papers 2206.06109, arXiv.org, revised Jan 2023.
    12. Vineet Goyal & Julien Grand-Clément, 2023. "Robust Markov Decision Processes: Beyond Rectangularity," Mathematics of Operations Research, INFORMS, vol. 48(1), pages 203-226, February.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Maximilian Blesch & Philipp Eisenhauer, 2021. "Robust decision-making under risk and ambiguity," Papers 2104.12573, arXiv.org, revised Oct 2021.
    2. Andrew J. Keith & Darryl K. Ahner, 2021. "A survey of decision making and optimization under uncertainty," Annals of Operations Research, Springer, vol. 300(2), pages 319-353, May.
    3. Bakker, Hannah & Dunke, Fabian & Nickel, Stefan, 2020. "A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice," Omega, Elsevier, vol. 96(C).
    4. Maximilian Blesch & Philipp Eisenhauer, 2023. "Robust Decision-Making under Risk and Ambiguity," Rationality and Competition Discussion Paper Series 463, CRC TRR 190 Rationality and Competition.
    5. Zhu, Jin & Wan, Runzhe & Qi, Zhengling & Luo, Shikai & Shi, Chengchun, 2024. "Robust offline reinforcement learning with heavy-tailed rewards," LSE Research Online Documents on Economics 122740, London School of Economics and Political Science, LSE Library.
    6. Maximilian Blesch & Philipp Eisenhauer, 2021. "Robust Decision-Making Under Risk and Ambiguity," ECONtribute Discussion Papers Series 104, University of Bonn and University of Cologne, Germany.
    7. Varagapriya, V & Singh, Vikas Vikram & Lisser, Abdel, 2024. "Rank-1 transition uncertainties in constrained Markov decision processes," European Journal of Operational Research, Elsevier, vol. 318(1), pages 167-178.
8. Shie Mannor & Ofir Mebel & Huan Xu, 2016. "Robust MDPs with k-Rectangular Uncertainty," Mathematics of Operations Research, INFORMS, vol. 41(4), pages 1484-1509, November.
    9. Arthur Flajolet & Sébastien Blandin & Patrick Jaillet, 2018. "Robust Adaptive Routing Under Uncertainty," Operations Research, INFORMS, vol. 66(1), pages 210-229, January.
    10. Saghafian, Soroush, 2018. "Ambiguous partially observable Markov decision processes: Structural results and applications," Journal of Economic Theory, Elsevier, vol. 178(C), pages 1-35.
    11. Bren, Austin & Saghafian, Soroush, 2018. "Data-Driven Percentile Optimization for Multi-Class Queueing Systems with Model Ambiguity: Theory and Application," Working Paper Series rwp18-008, Harvard University, John F. Kennedy School of Government.
    12. Michael Jong Kim, 2016. "Robust Control of Partially Observable Failing Systems," Operations Research, INFORMS, vol. 64(4), pages 999-1014, August.
    13. Nicole Bauerle & Alexander Glauner, 2020. "Distributionally Robust Markov Decision Processes and their Connection to Risk Measures," Papers 2007.13103, arXiv.org.
    14. Eli Gutin & Daniel Kuhn & Wolfram Wiesemann, 2015. "Interdiction Games on Markovian PERT Networks," Management Science, INFORMS, vol. 61(5), pages 999-1017, May.
    15. Xin, Linwei & Goldberg, David A., 2021. "Time (in)consistency of multistage distributionally robust inventory models with moment constraints," European Journal of Operational Research, Elsevier, vol. 289(3), pages 1127-1141.
    16. Boloori, Alireza & Saghafian, Soroush & Chakkera, Harini A. A. & Cook, Curtiss B., 2017. "Data-Driven Management of Post-transplant Medications: An APOMDP Approach," Working Paper Series rwp17-036, Harvard University, John F. Kennedy School of Government.
    17. V Varagapriya & Vikas Vikram Singh & Abdel Lisser, 2023. "Joint chance-constrained Markov decision processes," Annals of Operations Research, Springer, vol. 322(2), pages 1013-1035, March.
    18. Zhu, Zhicheng & Xiang, Yisha & Zhao, Ming & Shi, Yue, 2023. "Data-driven remanufacturing planning with parameter uncertainty," European Journal of Operational Research, Elsevier, vol. 309(1), pages 102-116.
    19. Peter Buchholz & Dimitri Scheftelowitsch, 2019. "Computation of weighted sums of rewards for concurrent MDPs," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 89(1), pages 1-42, February.
    20. Michael Jong Kim & Andrew E.B. Lim, 2016. "Robust Multiarmed Bandit Problems," Management Science, INFORMS, vol. 62(1), pages 264-285, January.

