Printed from https://ideas.repec.org/p/arx/papers/2012.06325.html

Deep Reinforcement Learning for Stock Portfolio Optimization

Author

  • Le Trung Hieu

Abstract

Stock portfolio optimization is the process of continually redistributing capital across a pool of stocks. In this paper, we formulate the problem so that Reinforcement Learning can be applied to it properly. To keep the market assumptions realistic, we incorporate transaction costs and a risk factor into the state as well. On top of that, we apply several state-of-the-art Deep Reinforcement Learning algorithms for comparison. Since the action space is continuous, the formulation was tested under a family of state-of-the-art continuous policy gradient algorithms: Deep Deterministic Policy Gradient (DDPG), Generalized Deterministic Policy Gradient (GDPG), and Proximal Policy Optimization (PPO), of which the former two perform much better than the last. Next, we present an end-to-end solution for the task, with Minimum Variance Portfolio Theory for stock subset selection and the Wavelet Transform for extracting multi-frequency data patterns. Observations and hypotheses about the results are discussed, as well as possible future research directions.
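Two of the ingredients the abstract mentions — a transaction-cost-adjusted rebalancing step and minimum variance stock selection — can be sketched in a few lines of Python. This is a minimal illustration under common textbook assumptions (closed-form minimum variance weights, proportional costs on turnover); the function names and the 0.25% cost rate are illustrative, not the paper's exact formulation.

```python
import numpy as np

def min_variance_weights(cov: np.ndarray) -> np.ndarray:
    """Closed-form minimum variance portfolio: w = Σ⁻¹1 / (1ᵀΣ⁻¹1).

    Assumes cov is invertible; short positions are allowed here,
    which the paper's actual selection procedure may restrict.
    """
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # Σ⁻¹1 without forming the inverse
    return w / w.sum()               # normalize so weights sum to one

def step_return(w_new, w_old, price_relatives, cost=0.0025):
    """One rebalancing step: gross portfolio return shrunk by a
    proportional transaction cost on the traded fraction (turnover)."""
    gross = float(w_new @ price_relatives)
    turnover = np.abs(np.asarray(w_new) - np.asarray(w_old)).sum()
    return gross * (1.0 - cost * turnover)

# Toy data: 250 daily return observations for 4 stocks.
rng = np.random.default_rng(0)
rets = rng.normal(0.0005, 0.01, size=(250, 4))
w = min_variance_weights(np.cov(rets, rowvar=False))
print(np.isclose(w.sum(), 1.0))  # prints True
```

In an RL setting, `step_return` (or its logarithm) would serve as the per-step reward, so the agent is penalized for churning the portfolio — the realistic market assumption the abstract refers to.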

Suggested Citation

  • Le Trung Hieu, 2020. "Deep Reinforcement Learning for Stock Portfolio Optimization," Papers 2012.06325, arXiv.org.
  • Handle: RePEc:arx:papers:2012.06325

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2012.06325
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Zhengyao Jiang & Dixing Xu & Jinjun Liang, 2017. "A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem," Papers 1706.10059, arXiv.org, revised Jul 2017.

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Charl Maree & Christian W. Omlin, 2022. "Balancing Profit, Risk, and Sustainability for Portfolio Management," Papers 2207.02134, arXiv.org.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Jiahua Xu & Daniel Perez & Yebo Feng & Benjamin Livshits, 2023. "Auto.gov: Learning-based On-chain Governance for Decentralized Finance (DeFi)," Papers 2302.09551, arXiv.org, revised May 2023.
    2. Amir Mosavi & Pedram Ghamisi & Yaser Faghan & Puhong Duan, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Papers 2004.01509, arXiv.org.
    3. Alexandre Carbonneau & Frédéric Godin, 2021. "Deep equal risk pricing of financial derivatives with non-translation invariant risk measures," Papers 2107.11340, arXiv.org.
    4. Fischer, Thomas G., 2018. "Reinforcement learning in financial markets - a survey," FAU Discussion Papers in Economics 12/2018, Friedrich-Alexander University Erlangen-Nuremberg, Institute for Economics.
    5. Charl Maree & Christian W. Omlin, 2022. "Balancing Profit, Risk, and Sustainability for Portfolio Management," Papers 2207.02134, arXiv.org.
    6. Mei-Li Shen & Cheng-Feng Lee & Hsiou-Hsiang Liu & Po-Yin Chang & Cheng-Hong Yang, 2021. "An Effective Hybrid Approach for Forecasting Currency Exchange Rates," Sustainability, MDPI, vol. 13(5), pages 1-29, March.
    7. Martino Banchio & Giacomo Mantegazza, 2022. "Artificial Intelligence and Spontaneous Collusion," Papers 2202.05946, arXiv.org, revised Sep 2023.
    8. Miquel Noguer i Alonso & Sonam Srivastava, 2020. "Deep Reinforcement Learning for Asset Allocation in US Equities," Papers 2010.04404, arXiv.org.
    9. Mengying Zhu & Xiaolin Zheng & Yan Wang & Yuyuan Li & Qianqiao Liang, 2019. "Adaptive Portfolio by Solving Multi-armed Bandit via Thompson Sampling," Papers 1911.05309, arXiv.org, revised Nov 2019.
    10. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    11. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    12. Nymisha Bandi & Theja Tulabandhula, 2020. "Off-Policy Optimization of Portfolio Allocation Policies under Constraints," Papers 2012.11715, arXiv.org.
    13. Alessio Brini & Daniele Tantari, 2021. "Deep Reinforcement Trading with Predictable Returns," Papers 2104.14683, arXiv.org, revised May 2023.
    14. Carbonneau, Alexandre, 2021. "Deep hedging of long-term financial derivatives," Insurance: Mathematics and Economics, Elsevier, vol. 99(C), pages 327-340.
    15. Tian, Yuan & Han, Minghao & Kulkarni, Chetan & Fink, Olga, 2022. "A prescriptive Dirichlet power allocation policy with deep reinforcement learning," Reliability Engineering and System Safety, Elsevier, vol. 224(C).
    16. Brini, Alessio & Tedeschi, Gabriele & Tantari, Daniele, 2023. "Reinforcement learning policy recommendation for interbank network stability," Journal of Financial Stability, Elsevier, vol. 67(C).
    17. Yasuhiro Nakayama & Tomochika Sawaki, 2023. "Causal Inference on Investment Constraints and Non-stationarity in Dynamic Portfolio Optimization through Reinforcement Learning," Papers 2311.04946, arXiv.org.
    18. Hans Buhler & Lukas Gonon & Josef Teichmann & Ben Wood, 2018. "Deep Hedging," Papers 1802.03042, arXiv.org.
    19. Xing Wang & Yijun Wang & Bin Weng & Aleksandr Vinel, 2020. "Stock2Vec: A Hybrid Deep Learning Framework for Stock Market Prediction with Representation Learning and Temporal Convolutional Network," Papers 2010.01197, arXiv.org.
    20. Alexandre Carbonneau, 2020. "Deep Hedging of Long-Term Financial Derivatives," Papers 2007.15128, arXiv.org.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2012.06325. See general information about how to correct material in RePEc.


    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.