
Deep Reinforcement Learning and Convex Mean-Variance Optimisation for Portfolio Management

Authors
  • Ruan Pretorius
  • Terence van Zyl

Abstract

Traditional portfolio management methods can incorporate specific investor preferences but rely on accurate forecasts of asset returns and covariances. Reinforcement learning (RL) methods do not rely on these explicit forecasts and are better suited to multi-stage decision processes. To address the limitations of prior research, experiments were conducted on three markets in different economies with different overall trends. Incorporating specific investor preferences into the reward functions of our RL models enabled a more comprehensive comparison with traditional methods in risk-return space. Transaction costs were also modelled more realistically by including the nonlinear changes introduced by market volatility and trading volume. The results suggest that RL methods can hold an advantage over traditional convex mean-variance optimisation methods under certain market conditions. Our RL models significantly outperformed traditional single-period optimisation (SPO) and multi-period optimisation (MPO) models in upward-trending markets, but only up to specific risk limits. In sideways-trending markets, our RL models closely matched the performance of the SPO and MPO models over the majority of the excess-risk range tested. The specific market conditions under which these models outperform one another highlight the importance of comparing full Pareto-optimal frontiers in risk-return space: these frontiers give investors a more granular view of which models might perform better for their specific risk tolerance or return targets.
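
To make the convex baseline concrete, here is a minimal sketch of a single-period mean-variance (SPO) allocation with a transaction-cost penalty, written with NumPy and CVXPY. It illustrates the general technique the abstract names, not the authors' code: the forecast returns, covariance, cost coefficient and risk limit below are hypothetical placeholders, and the proportional cost term is a deliberate simplification of the nonlinear volatility- and volume-dependent costs the paper models.

    import numpy as np
    import cvxpy as cp

    n = 5                                 # number of assets
    rng = np.random.default_rng(0)

    mu = rng.normal(0.05, 0.02, n)        # forecast returns (placeholder)
    A = rng.normal(size=(n, n))
    sigma = 0.04 * (A @ A.T) / n + 0.01 * np.eye(n)  # forecast covariance (placeholder)
    sigma = (sigma + sigma.T) / 2         # enforce exact symmetry for quad_form
    w_prev = np.full(n, 1.0 / n)          # current holdings

    gamma = 5.0                           # risk-aversion weight (the investor-preference knob)
    c_lin = 0.001                         # proportional cost coefficient (stand-in for
                                          # the paper's nonlinear volatility/volume costs)
    risk_limit = 0.10                     # cap on portfolio variance, i.e. a risk limit

    w = cp.Variable(n)
    turnover = cp.norm1(w - w_prev)       # total fraction of the portfolio traded
    risk = cp.quad_form(w, sigma)         # portfolio variance w' Sigma w

    problem = cp.Problem(
        cp.Maximize(mu @ w - gamma * risk - c_lin * turnover),
        [cp.sum(w) == 1, w >= 0, risk <= risk_limit],
    )
    problem.solve()
    print("SPO weights:", np.round(w.value, 3))

An MPO model, as in Boyd et al. (2017) in the references below, chains several such stages into one optimisation over a planned sequence of trades, while an RL agent learns the trading policy directly; in the RL setting the risk-return trade-off typically enters through a risk-adjusted reward, for example r_t - gamma * sigma_t^2, rather than through the gamma term in the objective above.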

Suggested Citation

  • Ruan Pretorius & Terence van Zyl, 2022. "Deep Reinforcement Learning and Convex Mean-Variance Optimisation for Portfolio Management," Papers 2203.11318, arXiv.org.
  • Handle: RePEc:arx:papers:2203.11318

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2203.11318
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Zhengyao Jiang & Dixing Xu & Jinjun Liang, 2017. "A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem," Papers 1706.10059, arXiv.org, revised Jul 2017.
2. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charles Beattie & Amir Sadik & Ioannis Antonoglou & Helen King & Dharshan Kumaran & Daan Wierstra & Shane Legg & Demis Hassabis, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    3. Stephen Boyd & Enzo Busseti & Steven Diamond & Ronald N. Kahn & Kwangmoo Koh & Peter Nystrup & Jan Speth, 2017. "Multi-Period Trading via Convex Optimization," Papers 1705.00109, arXiv.org.
    4. Angelos Filos, 2019. "Reinforcement Learning for Portfolio Management," Papers 1909.09571, arXiv.org.

    Most related items

These are the items that most often cite the same works as this one and are cited by the same works as this one; a toy sketch of this matching heuristic follows the list.
    1. Adrian Millea, 2021. "Deep Reinforcement Learning for Trading—A Critical Survey," Data, MDPI, vol. 6(11), pages 1-25, November.
    2. Gang Huang & Xiaohua Zhou & Qingyang Song, 2020. "Deep reinforcement learning for portfolio management," Papers 2012.13773, arXiv.org, revised Apr 2022.
    3. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    4. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    5. Alessio Brini & Daniele Tantari, 2021. "Deep Reinforcement Trading with Predictable Returns," Papers 2104.14683, arXiv.org, revised May 2023.
    6. Tian, Yuan & Han, Minghao & Kulkarni, Chetan & Fink, Olga, 2022. "A prescriptive Dirichlet power allocation policy with deep reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 224(C).
    7. Brini, Alessio & Tedeschi, Gabriele & Tantari, Daniele, 2023. "Reinforcement learning policy recommendation for interbank network stability," Journal of Financial Stability, Elsevier, vol. 67(C).
    8. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
9. Ariel Neufeld & Julian Sester & Mario Šikić, 2022. "Markov Decision Processes under Model Uncertainty," Papers 2206.06109, arXiv.org, revised Jan 2023.
    10. Iwao Maeda & David deGraw & Michiharu Kitano & Hiroyasu Matsushima & Hiroki Sakaji & Kiyoshi Izumi & Atsuo Kato, 2020. "Deep Reinforcement Learning in Agent Based Financial Market Simulation," JRFM, MDPI, vol. 13(4), pages 1-17, April.
11. Ayman Chaouki & Stephen Hardiman & Christian Schmidt & Emmanuel Sérié & Joachim de Lataillade, 2020. "Deep Deterministic Portfolio Optimization," Papers 2003.06497, arXiv.org, revised Apr 2020.
    12. Huanming Zhang & Zhengyong Jiang & Jionglong Su, 2021. "A Deep Deterministic Policy Gradient-based Strategy for Stocks Portfolio Management," Papers 2103.11455, arXiv.org.
    13. Hyungjun Park & Min Kyu Sim & Dong Gu Choi, 2019. "An intelligent financial portfolio trading strategy using deep Q-learning," Papers 1907.03665, arXiv.org, revised Nov 2019.
    14. Brini, Alessio & Tantari, Daniele, 2023. "Deep reinforcement trading with predictable returns," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 622(C).
    15. Wonsup Shin & Seok-Jun Bu & Sung-Bae Cho, 2019. "Automatic Financial Trading Agent for Low-risk Portfolio Management using Deep Reinforcement Learning," Papers 1909.03278, arXiv.org.
    16. Yinheng Li & Junhao Wang & Yijie Cao, 2019. "A General Framework on Enhancing Portfolio Management with Reinforcement Learning," Papers 1911.11880, arXiv.org, revised Oct 2023.
    17. Zechu Li & Xiao-Yang Liu & Jiahao Zheng & Zhaoran Wang & Anwar Walid & Jian Guo, 2021. "FinRL-Podracer: High Performance and Scalable Deep Reinforcement Learning for Quantitative Finance," Papers 2111.05188, arXiv.org.
    18. MohammadAmin Fazli & Mahdi Lashkari & Hamed Taherkhani & Jafar Habibi, 2022. "A Novel Experts Advice Aggregation Framework Using Deep Reinforcement Learning for Portfolio Management," Papers 2212.14477, arXiv.org.
    19. Xiao-Yang Liu & Hongyang Yang & Qian Chen & Runjia Zhang & Liuqing Yang & Bowen Xiao & Christina Dan Wang, 2020. "FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance," Papers 2011.09607, arXiv.org, revised Mar 2022.
    20. Tulika Saha & Sriparna Saha & Pushpak Bhattacharyya, 2020. "Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning," PLOS ONE, Public Library of Science, vol. 15(7), pages 1-28, July.
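
    As a toy illustration of the matching heuristic described above, the sketch below scores candidate items by how many references and how many citing works they share with the target paper. All keys and sets are hypothetical stand-ins, not RePEc's actual data or code.

        # Hypothetical data: reference sets and citing-work sets per item.
        refs = {
            "target": {"Jiang2017", "Mnih2015", "Boyd2017"},
            "itemA":  {"Jiang2017", "Mnih2015"},
            "itemB":  {"Boyd2017"},
        }
        cited_by = {
            "target": {"Millea2021", "Hambly2021"},
            "itemA":  {"Millea2021"},
            "itemB":  {"Hambly2021"},
        }

        def relatedness(item, target="target"):
            # Count shared references plus shared citers.
            return (len(refs[item] & refs[target])
                    + len(cited_by[item] & cited_by[target]))

        ranked = sorted((k for k in refs if k != "target"),
                        key=relatedness, reverse=True)
        print(ranked)  # most related candidates first -> ['itemA', 'itemB']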

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.