
Bridging the gap between Markowitz planning and deep reinforcement learning

Author

Listed:
  • Eric Benhamou
  • David Saltiel
  • Sandrine Ungari
  • Abhishek Mukhopadhyay

Abstract

While researchers in the asset management industry have mostly focused on financial and risk planning techniques such as the Markowitz efficient frontier, minimum variance, maximum diversification, or equal risk parity, another community in machine learning has, in parallel, been working on reinforcement learning, and more particularly deep reinforcement learning, to solve other decision-making problems for challenging tasks such as autonomous driving, robot learning and, on a more conceptual side, game solving such as Go. This paper aims to bridge the gap between these two approaches by showing that Deep Reinforcement Learning (DRL) techniques can shed new light on portfolio allocation thanks to a more general optimization setting that casts portfolio allocation as an optimal control problem: not just a one-step optimization, but rather a continuous control optimization with a delayed reward. The advantages are numerous: (i) DRL maps market conditions directly to actions by design and hence should adapt to a changing environment; (ii) DRL does not rely on traditional financial risk assumptions, such as risk being represented by variance; (iii) DRL can incorporate additional data and act as a multi-input method, as opposed to more traditional optimization methods. We present some encouraging experimental results using convolutional networks.
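The one-step optimization the abstract contrasts with DRL can be sketched concretely. Below is a minimal illustration (not the authors' implementation; the expected returns and covariance are made-up toy numbers) of the unconstrained Markowitz mean-variance allocation, whose solution is proportional to the inverse covariance matrix applied to expected returns:

```python
import numpy as np

# One-step Markowitz allocation: maximize w' mu - (lambda/2) w' Sigma w
# with weights summing to one. Without further constraints the optimal
# weights are proportional to Sigma^{-1} mu.
mu = np.array([0.05, 0.03, 0.04])          # illustrative expected returns
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.02, 0.00],
                  [0.00, 0.00, 0.03]])     # illustrative return covariance

raw = np.linalg.solve(sigma, mu)           # Sigma^{-1} mu
w = raw / raw.sum()                        # normalize weights to sum to one

print(np.round(w, 3))                      # -> [0.3 0.3 0.4]
```

A DRL agent, by contrast, would observe a state (e.g. recent market data), emit the portfolio weights as an action at each period, and be trained against a delayed portfolio reward over many periods rather than this single-step objective.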

Suggested Citation

  • Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay, 2020. "Bridging the gap between Markowitz planning and deep reinforcement learning," Papers 2010.09108, arXiv.org.
  • Handle: RePEc:arx:papers:2010.09108

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2010.09108
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Zihao Zhang & Stefan Zohren & Stephen Roberts, 2019. "Deep Reinforcement Learning for Trading," Papers 1911.10107, arXiv.org.
    2. Xinyi Li & Yinchuan Li & Yuancheng Zhan & Xiao-Yang Liu, 2019. "Optimistic Bull or Pessimistic Bear: Adaptive Deep Reinforcement Learning for Stock Portfolio Allocation," Papers 1907.01503, arXiv.org.
    3. T. Roncalli & G. Weisang, 2016. "Risk parity portfolios with risk factors," Quantitative Finance, Taylor & Francis Journals, vol. 16(3), pages 377-388, March.
    4. Souradeep Chakraborty, 2019. "Capturing Financial markets to apply Deep Reinforcement Learning," Papers 1907.04373, arXiv.org, revised Dec 2019.
    5. Christoffersen, Peter & Errunza, Vihang & Jacobs, Kris & Jin, Xisong, 2010. "Is the Potential for International Diversification Disappearing?," Working Papers 11-20, University of Pennsylvania, Wharton School, Weiss Center.
    6. Haoran Wang & Xun Yu Zhou, 2019. "Continuous-Time Mean-Variance Portfolio Selection: A Reinforcement Learning Framework," Papers 1904.11392, arXiv.org, revised May 2019.
    7. Yunan Ye & Hengzhi Pei & Boxin Wang & Pin-Yu Chen & Yada Zhu & Jun Xiao & Bo Li, 2020. "Reinforcement-Learning based Portfolio Management with Augmented Asset Movement Prediction States," Papers 2002.05780, arXiv.org.
    8. Wenhang Bao & Xiao-yang Liu, 2019. "Multi-Agent Deep Reinforcement Learning for Liquidation Strategy Analysis," Papers 1906.11046, arXiv.org.
    9. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay, 2020. "Time your hedge with Deep Reinforcement Learning," Papers 2009.14136, arXiv.org, revised Nov 2020.
    10. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay & Jamal Atif, 2020. "AAMDRL: Augmented Asset Management with Deep Reinforcement Learning," Papers 2010.08497, arXiv.org.
    11. Fischer, Thomas G., 2018. "Reinforcement learning in financial markets - a survey," FAU Discussion Papers in Economics 12/2018, Friedrich-Alexander University Erlangen-Nuremberg, Institute for Economics.
    12. Zhipeng Liang & Hao Chen & Junhao Zhu & Kangkang Jiang & Yanran Li, 2018. "Adversarial Deep Reinforcement Learning in Portfolio Management," Papers 1808.09940, arXiv.org, revised Nov 2018.
    13. Thibaut Théate & Damien Ernst, 2020. "An Application of Deep Reinforcement Learning to Algorithmic Trading," Papers 2004.06627, arXiv.org, revised Oct 2020.

    Citations

    Cited by:

    1. Eric Benhamou & David Saltiel & Serge Tabachnik & Sui Kai Wong & François Chareyron, 2021. "Distinguish the indistinguishable: a Deep Reinforcement Learning approach for volatility targeting models," Working Papers hal-03202431, HAL.
    2. Eric Benhamou & David Saltiel & Serge Tabachnik & Sui Kai Wong & François Chareyron, 2021. "Adaptive learning for financial markets mixing model-based and model-free RL for volatility targeting," Papers 2104.10483, arXiv.org, revised Apr 2021.
    3. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay & Jamal Atif, 2020. "AAMDRL: Augmented Asset Management with Deep Reinforcement Learning," Papers 2010.08497, arXiv.org.
    2. Eric Benhamou & David Saltiel & Serge Tabachnik & Sui Kai Wong & François Chareyron, 2021. "Distinguish the indistinguishable: a Deep Reinforcement Learning approach for volatility targeting models," Working Papers hal-03202431, HAL.
    3. Eric Benhamou & David Saltiel & Serge Tabachnik & Sui Kai Wong & François Chareyron, 2021. "Adaptive learning for financial markets mixing model-based and model-free RL for volatility targeting," Papers 2104.10483, arXiv.org, revised Apr 2021.
    4. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay, 2020. "Time your hedge with Deep Reinforcement Learning," Papers 2009.14136, arXiv.org, revised Nov 2020.
    5. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    6. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    7. Kumar Yashaswi, 2021. "Deep Reinforcement Learning for Portfolio Optimization using Latent Feature State Space (LFSS) Module," Papers 2102.06233, arXiv.org.
    8. Ricard Durall, 2022. "Asset Allocation: From Markowitz to Deep Reinforcement Learning," Papers 2208.07158, arXiv.org.
    9. Xiao-Yang Liu & Hongyang Yang & Qian Chen & Runjia Zhang & Liuqing Yang & Bowen Xiao & Christina Dan Wang, 2020. "FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance," Papers 2011.09607, arXiv.org, revised Mar 2022.
    10. Mei-Li Shen & Cheng-Feng Lee & Hsiou-Hsiang Liu & Po-Yin Chang & Cheng-Hong Yang, 2021. "An Effective Hybrid Approach for Forecasting Currency Exchange Rates," Sustainability, MDPI, vol. 13(5), pages 1-29, March.
    11. Longbing Cao, 2021. "AI in Finance: Challenges, Techniques and Opportunities," Papers 2107.09051, arXiv.org.
    12. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    13. Schnaubelt, Matthias, 2022. "Deep reinforcement learning for the optimal placement of cryptocurrency limit orders," European Journal of Operational Research, Elsevier, vol. 296(3), pages 993-1006.
    14. Jiwon Kim & Moon-Ju Kang & KangHun Lee & HyungJun Moon & Bo-Kwan Jeon, 2023. "Deep Reinforcement Learning for Asset Allocation: Reward Clipping," Papers 2301.05300, arXiv.org.
    15. Adrian Millea, 2021. "Deep Reinforcement Learning for Trading—A Critical Survey," Data, MDPI, vol. 6(11), pages 1-25, November.
    16. Tidor-Vlad Pricope, 2021. "Deep Reinforcement Learning in Quantitative Algorithmic Trading: A Review," Papers 2106.00123, arXiv.org.
    17. Zhenhan Huang & Fumihide Tanaka, 2021. "MSPM: A Modularized and Scalable Multi-Agent Reinforcement Learning-based System for Financial Portfolio Management," Papers 2102.03502, arXiv.org, revised Feb 2022.
    18. Xiao-Yang Liu & Hongyang Yang & Jiechao Gao & Christina Dan Wang, 2021. "FinRL: Deep Reinforcement Learning Framework to Automate Trading in Quantitative Finance," Papers 2111.09395, arXiv.org.
    19. Ben Hambly & Renyuan Xu & Huining Yang, 2023. "Recent advances in reinforcement learning in finance," Mathematical Finance, Wiley Blackwell, vol. 33(3), pages 437-503, July.
    20. MohammadAmin Fazli & Mahdi Lashkari & Hamed Taherkhani & Jafar Habibi, 2022. "A Novel Experts Advice Aggregation Framework Using Deep Reinforcement Learning for Portfolio Management," Papers 2212.14477, arXiv.org.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2010.09108. See general information about how to correct material in RePEc.


    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.