IDEAS home Printed from https://ideas.repec.org/p/arx/papers/2006.12686.html

Risk-Sensitive Reinforcement Learning: a Martingale Approach to Reward Uncertainty

Authors
  • Nelson Vadori
  • Sumitra Ganesh
  • Prashant Reddy
  • Manuela Veloso

Abstract

We introduce a novel framework to account for sensitivity to reward uncertainty in sequential decision-making problems. While risk-sensitive formulations for Markov decision processes studied so far focus on the distribution of the cumulative reward as a whole, we aim at learning policies sensitive to the uncertain/stochastic nature of the rewards, which has the advantage of being conceptually more meaningful in some cases. To this end, we present a new decomposition of the randomness contained in the cumulative reward, based on the Doob decomposition of a stochastic process, and introduce a new conceptual tool, the "chaotic variation", which can rigorously be interpreted as the risk measure of the martingale component associated with the cumulative reward process. We innovate on the reinforcement learning side by incorporating this new risk-sensitive approach into model-free algorithms, both policy-gradient and value-function based, and illustrate its relevance on grid-world and portfolio optimization problems.
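To make the central idea concrete, here is a minimal numerical sketch of the Doob decomposition the abstract refers to: the cumulative reward C_t splits, exactly and uniquely, into a predictable part A_t (the running sum of conditional expectations of the rewards) and a martingale part M_t (the running sum of reward "surprises"). The setup below is a hypothetical illustration, not the paper's implementation: rewards are i.i.d. with known mean, so the conditional expectation E[R_t | F_{t-1}] reduces to the constant `mu`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): i.i.d. rewards with known mean mu,
# so E[R_t | F_{t-1}] = mu for every t.
mu, sigma, T = 1.0, 0.5, 1000
rewards = rng.normal(mu, sigma, size=T)

cumulative = np.cumsum(rewards)            # C_t = sum_{s<=t} R_s
predictable = np.cumsum(np.full(T, mu))    # A_t = sum_{s<=t} E[R_s | F_{s-1}]
martingale = cumulative - predictable      # M_t = C_t - A_t, a martingale

# The decomposition is exact by construction: C_t = A_t + M_t,
# and the martingale increments R_t - mu have (conditional) mean zero.
assert np.allclose(cumulative, predictable + martingale)
print(abs(np.mean(np.diff(martingale))))   # close to 0 for large T
```

A risk measure (e.g., a deviation measure such as the standard deviation) applied to the martingale component M rather than to C itself is, roughly, the role the paper's chaotic variation plays; in a learned policy setting the conditional expectation would of course have to be estimated rather than known.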

Suggested Citation

  • Nelson Vadori & Sumitra Ganesh & Prashant Reddy & Manuela Veloso, 2020. "Risk-Sensitive Reinforcement Learning: a Martingale Approach to Reward Uncertainty," Papers 2006.12686, arXiv.org, revised Sep 2020.
  • Handle: RePEc:arx:papers:2006.12686
    Download full text from publisher

    File URL: http://arxiv.org/pdf/2006.12686
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Olivier Guéant & Iuliia Manziuk, 2019. "Deep Reinforcement Learning for Market Making in Corporate Bonds: Beating the Curse of Dimensionality," Applied Mathematical Finance, Taylor & Francis Journals, vol. 26(5), pages 387-452, September.
    2. V. S. Borkar, 2002. "Q-Learning for Risk-Sensitive Control," Mathematics of Operations Research, INFORMS, vol. 27(2), pages 294-311, May.
3. Olivier Guéant & Iuliia Manziuk, 2019. "Deep reinforcement learning for market making in corporate bonds: beating the curse of dimensionality," Papers 1910.13205, arXiv.org.
    4. Kai Detlefsen & Giacomo Scandolo, 2005. "Conditional and dynamic convex risk measures," Finance and Stochastics, Springer, vol. 9(4), pages 539-561, October.
    5. Sumitra Ganesh & Nelson Vadori & Mengda Xu & Hua Zheng & Prashant Reddy & Manuela Veloso, 2019. "Reinforcement Learning for Market Making in a Multi-agent Dealer Market," Papers 1911.05892, arXiv.org.

    Citations


    Cited by:

    1. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
2. Bruno Gašperov & Zvonko Kostanjčar, 2022. "Deep Reinforcement Learning for Market Making Under a Hawkes Process-Based Limit Order Book Model," Papers 2207.09951, arXiv.org.
    3. Pankaj Kumar, 2021. "Deep Hawkes Process for High-Frequency Market Making," Papers 2109.15110, arXiv.org.
    4. Ben Hambly & Renyuan Xu & Huining Yang, 2023. "Recent advances in reinforcement learning in finance," Mathematical Finance, Wiley Blackwell, vol. 33(3), pages 437-503, July.
    5. Bruno Gašperov & Stjepan Begušić & Petra Posedel Šimović & Zvonko Kostanjčar, 2021. "Reinforcement Learning Approaches to Optimal Market Making," Mathematics, MDPI, vol. 9(21), pages 1-22, October.
    6. Thomas Spooner & Rahul Savani, 2020. "Robust Market Making via Adversarial Reinforcement Learning," Papers 2003.01820, arXiv.org, revised Jul 2020.
    7. Adel Javanmard & Jingwei Ji & Renyuan Xu, 2024. "Multi-Task Dynamic Pricing in Credit Market with Contextual Information," Papers 2410.14839, arXiv.org, revised Dec 2025.
    8. Bastien Baldacci & Joffrey Derchu & Iuliia Manziuk, 2020. "An approximate solution for options market-making in high dimension," Papers 2009.00907, arXiv.org.
9. Philippe Bergault & Louis Bertucci & David Bouba & Olivier Guéant & Julien Guilbert, 2024. "Automated Market Making: the case of Pegged Assets," Papers 2411.08145, arXiv.org.
    10. Bastien Baldacci & Jerome Benveniste & Gordon Ritter, 2020. "Optimal trading without optimal control," Papers 2012.12945, arXiv.org.
    11. Hui Niu & Siyuan Li & Jiahao Zheng & Zhouchi Lin & Jian Li & Jian Guo & Bo An, 2023. "IMM: An Imitative Reinforcement Learning Approach with Predictive Representation Learning for Automatic Market Making," Papers 2308.08918, arXiv.org.
    12. Alexander Barzykin & Philippe Bergault & Olivier Guéant, 2023. "Algorithmic market making in dealer markets with hedging and market impact," Mathematical Finance, Wiley Blackwell, vol. 33(1), pages 41-79, January.
    13. Bastien Baude & Damien Challet & Ioane Muni Toke, 2025. "Optimal risk-aware interest rates for decentralized lending protocols," Working Papers hal-04971758, HAL.
14. Philippe Bergault & Louis Bertucci & David Bouba & Olivier Guéant, 2022. "Automated Market Makers: Mean-Variance Analysis of LPs Payoffs and Design of Pricing Functions," Papers 2212.00336, arXiv.org, revised Nov 2023.
    15. Olivier Guéant, 2022. "Computational methods for market making algorithms," Post-Print hal-04590381, HAL.
    16. Philippe Bergault & Louis Bertucci & David Bouba & Olivier Guéant, 2024. "Automated market makers: mean-variance analysis of LPs payoffs and design of pricing functions," Digital Finance, Springer, vol. 6(2), pages 225-247, June.
17. Laura Leal & Mathieu Laurière & Charles-Albert Lehalle, 2020. "Learning a functional control for high-frequency finance," Papers 2006.09611, arXiv.org, revised Feb 2021.
    18. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
19. Philippe Bergault & Louis Bertucci & David Bouba & Olivier Guéant & Julien Guilbert, 2024. "Price-Aware Automated Market Makers: Models Beyond Brownian Prices and Static Liquidity," Papers 2405.03496, arXiv.org, revised May 2024.
    20. Bastien Baldacci & Iuliia Manziuk, 2020. "Adaptive trading strategies across liquidity pools," Papers 2008.07807, arXiv.org.

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2006.12686. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.