
Multi-Objective reward generalization: Improving performance of Deep Reinforcement Learning for applications in single-asset trading

Author

Listed:
  • Federico Cornalba
  • Constantin Disselkamp
  • Davide Scassola
  • Christopher Helf

Abstract

We investigate the potential of Multi-Objective, Deep Reinforcement Learning for stock and cryptocurrency single-asset trading. In particular, we consider a Multi-Objective algorithm which generalizes the reward functions and discount factor (i.e., these components are not specified a priori, but are instead incorporated in the learning process). First, using several important assets (the cryptocurrency pairs BTCUSD, ETHUSDT, and XRPUSDT, and the stock/index tickers AAPL, SPY, and NIFTY50), we verify the reward-generalization property of the proposed Multi-Objective algorithm and provide preliminary statistical evidence of increased predictive stability over the corresponding Single-Objective strategy. Second, we show that the Multi-Objective algorithm has a clear edge over the corresponding Single-Objective strategy when the reward mechanism is sparse (i.e., when non-null feedback is infrequent over time). Finally, we discuss the generalization properties with respect to the discount factor. The entirety of our code is provided in open-source format.
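The abstract describes the approach only at a high level. As a rough illustration of what a vector-valued (Multi-Objective) value function with a scalarising preference weight looks like, the sketch below implements tabular Q-learning with a two-component reward on a synthetic price series. Everything in it (the state discretisation, the two reward components, the preference weights, the toy data) is a hypothetical simplification for exposition, not the authors' implementation; their actual code is available in the open-source repository referenced in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic random-walk price series (toy data, for illustration only).
prices = np.cumsum(rng.normal(0.0, 1.0, 200)) + 100.0

N_OBJECTIVES = 2          # e.g. profit and a risk penalty
ACTIONS = (-1, 0, 1)      # short, flat, long
N_STATES = 2              # market state: sign of the last return

# Vector-valued Q-table: one entry per (state, action, objective).
Q = np.zeros((N_STATES, len(ACTIONS), N_OBJECTIVES))

alpha, gamma, epsilon = 0.1, 0.9, 0.1

def state_of(t):
    """Discretise the market state by the sign of the last return."""
    return int(prices[t] >= prices[t - 1])

def vector_reward(position, ret):
    """Hypothetical two-component reward: P&L plus an exposure penalty."""
    pnl = position * ret
    risk = -abs(ret)          # penalise being exposed to large moves
    return np.array([pnl, risk])

def scalarise(q_vecs, weights):
    """Linear scalarisation: collapse objective vectors to one score each."""
    return q_vecs @ weights

weights = np.array([0.8, 0.2])   # assumed preference over the objectives

for t in range(1, len(prices) - 1):
    s = state_of(t)
    # Epsilon-greedy action selection on the scalarised Q-values.
    if rng.random() < epsilon:
        a = int(rng.integers(len(ACTIONS)))
    else:
        a = int(np.argmax(scalarise(Q[s], weights)))
    ret = prices[t + 1] - prices[t]
    r = vector_reward(ACTIONS[a], ret)
    s_next = state_of(t + 1)
    a_next = int(np.argmax(scalarise(Q[s_next], weights)))
    # Vector-valued Bellman backup: one Q-learning update per objective.
    Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])
```

Because the update is performed componentwise, the learned Q-table retains one value per objective; changing `weights` at decision time re-ranks the same learned values without retraining, which is the basic appeal of the Multi-Objective formulation over fixing a single scalar reward a priori.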

Suggested Citation

  • Federico Cornalba & Constantin Disselkamp & Davide Scassola & Christopher Helf, 2022. "Multi-Objective reward generalization: Improving performance of Deep Reinforcement Learning for applications in single-asset trading," Papers 2203.04579, arXiv.org, revised Feb 2023.
  • Handle: RePEc:arx:papers:2203.04579

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2203.04579
    File Function: Latest version
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Weiguang Han & Boyi Zhang & Qianqian Xie & Min Peng & Yanzhao Lai & Jimin Huang, 2023. "Select and Trade: Towards Unified Pair Trading with Hierarchical Reinforcement Learning," Papers 2301.10724, arXiv.org, revised Feb 2023.
    2. Maximilian Wehrmann & Nico Zengeler & Uwe Handmann, 2021. "Observation Time Effects in Reinforcement Learning on Contracts for Difference," JRFM, MDPI, vol. 14(2), pages 1-15, January.
    3. Charl Maree & Christian W. Omlin, 2022. "Balancing Profit, Risk, and Sustainability for Portfolio Management," Papers 2207.02134, arXiv.org.
    4. Tidor-Vlad Pricope, 2021. "Deep Reinforcement Learning in Quantitative Algorithmic Trading: A Review," Papers 2106.00123, arXiv.org.
    5. Longbing Cao, 2021. "AI in Finance: Challenges, Techniques and Opportunities," Papers 2107.09051, arXiv.org.
    6. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    7. Xiao-Yang Liu & Hongyang Yang & Jiechao Gao & Christina Dan Wang, 2021. "FinRL: Deep Reinforcement Learning Framework to Automate Trading in Quantitative Finance," Papers 2111.09395, arXiv.org.
    8. Schnaubelt, Matthias, 2022. "Deep reinforcement learning for the optimal placement of cryptocurrency limit orders," European Journal of Operational Research, Elsevier, vol. 296(3), pages 993-1006.
    9. Jiwon Kim & Moon-Ju Kang & KangHun Lee & HyungJun Moon & Bo-Kwan Jeon, 2023. "Deep Reinforcement Learning for Asset Allocation: Reward Clipping," Papers 2301.05300, arXiv.org.
    10. MohammadAmin Fazli & Mahdi Lashkari & Hamed Taherkhani & Jafar Habibi, 2022. "A Novel Experts Advice Aggregation Framework Using Deep Reinforcement Learning for Portfolio Management," Papers 2212.14477, arXiv.org.
    11. Zihao Zhang & Stefan Zohren & Stephen Roberts, 2019. "Deep Reinforcement Learning for Trading," Papers 1911.10107, arXiv.org.
    12. Weiguang Han & Jimin Huang & Qianqian Xie & Boyi Zhang & Yanzhao Lai & Min Peng, 2023. "Mastering Pair Trading with Risk-Aware Recurrent Reinforcement Learning," Papers 2304.00364, arXiv.org.
    13. Jonas Hanetho, 2023. "Commodities Trading through Deep Policy Gradient Methods," Papers 2309.00630, arXiv.org.
    14. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    15. Adrian Millea, 2021. "Deep Reinforcement Learning for Trading—A Critical Survey," Data, MDPI, vol. 6(11), pages 1-25, November.
    16. Jingyuan Wang & Yang Zhang & Ke Tang & Junjie Wu & Zhang Xiong, 2019. "AlphaStock: A Buying-Winners-and-Selling-Losers Investment Strategy using Interpretable Deep Reinforcement Attention Networks," Papers 1908.02646, arXiv.org.
    17. Xiao-Yang Liu & Hongyang Yang & Qian Chen & Runjia Zhang & Liuqing Yang & Bowen Xiao & Christina Dan Wang, 2020. "FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance," Papers 2011.09607, arXiv.org, revised Mar 2022.
    18. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay, 2020. "Bridging the gap between Markowitz planning and deep reinforcement learning," Papers 2010.09108, arXiv.org.


