Printed from https://ideas.repec.org/p/arx/papers/2103.11455.html

A Deep Deterministic Policy Gradient-based Strategy for Stocks Portfolio Management

Authors

  • Huanming Zhang
  • Zhengyong Jiang
  • Jionglong Su

Abstract

With the improvement of computer performance and the development of GPU-accelerated technology, trading with machine learning algorithms has attracted the attention of many researchers and practitioners. In this research, we propose a novel portfolio management strategy based on the framework of Deep Deterministic Policy Gradient, a policy-based reinforcement learning framework, and compare its performance to that of other trading strategies. In our framework, two Long Short-Term Memory neural networks and two fully connected neural networks are constructed. We also investigate the performance of our strategy with and without transaction costs. Experimentally, we choose eight US stocks, four with low volatility and four with high volatility. We compare the compound annual return rate of our strategy against seven other strategies, including Uniform Buy and Hold, Exponential Gradient, and Universal Portfolios. In our experiments, the compound annual return rate is 14.12%, outperforming all other strategies. Furthermore, our strategy's Sharpe Ratio (0.5988) is nearly 33% higher than that of the second-best performing strategy.
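The two headline metrics in the abstract, compound annual return rate and annualized Sharpe Ratio, can be computed from a series of portfolio values with standard formulas. The sketch below is a generic illustration, not code from the paper; the function names, the synthetic price series, and the 252-trading-day annualization convention are all assumptions made for the example.

```python
import numpy as np

def compound_annual_return(values, periods_per_year=252):
    """CAGR from a series of portfolio values (one value per trading period)."""
    total_growth = values[-1] / values[0]
    years = (len(values) - 1) / periods_per_year
    return total_growth ** (1 / years) - 1

def sharpe_ratio(returns, risk_free_rate=0.0, periods_per_year=252):
    """Annualized Sharpe Ratio from per-period simple returns."""
    excess = np.asarray(returns, dtype=float) - risk_free_rate / periods_per_year
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

# Illustrative synthetic series: two years (504 periods) of steady 0.04% growth,
# so the CAGR collapses to 1.0004**252 - 1.
values = 1.0004 ** np.arange(505)
cagr = compound_annual_return(values)

# Sharpe Ratio needs non-constant returns (otherwise the deviation is zero).
rng = np.random.default_rng(0)
noisy_returns = 0.0004 + 0.01 * rng.standard_normal(504)
sr = sharpe_ratio(noisy_returns)
```

A strategy's reported 0.5988 Sharpe Ratio would come out of a function like `sharpe_ratio` applied to its realized return series; the sample standard deviation (`ddof=1`) and zero risk-free rate used here are common but not universal conventions.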

Suggested Citation

  • Huanming Zhang & Zhengyong Jiang & Jionglong Su, 2021. "A Deep Deterministic Policy Gradient-based Strategy for Stocks Portfolio Management," Papers 2103.11455, arXiv.org.
  • Handle: RePEc:arx:papers:2103.11455

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2103.11455
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Ziming Gao & Yuan Gao & Yi Hu & Zhengyong Jiang & Jionglong Su, 2020. "Application of Deep Q-Network in Portfolio Management," Papers 2003.06365, arXiv.org.
    2. David P. Helmbold & Robert E. Schapire & Yoram Singer & Manfred K. Warmuth, 1998. "On‐Line Portfolio Selection Using Multiplicative Updates," Mathematical Finance, Wiley Blackwell, vol. 8(4), pages 325-347, October.
    3. Zhengyao Jiang & Dixing Xu & Jinjun Liang, 2017. "A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem," Papers 1706.10059, arXiv.org, revised Jul 2017.
    4. Zhipeng Liang & Hao Chen & Junhao Zhu & Kangkang Jiang & Yanran Li, 2018. "Adversarial Deep Reinforcement Learning in Portfolio Management," Papers 1808.09940, arXiv.org, revised Nov 2018.
    5. Angelos Filos, 2019. "Reinforcement Learning for Portfolio Management," Papers 1909.09571, arXiv.org.

    Citations

Citations are extracted by the CitEc Project.


    Cited by:

    1. Charl Maree & Christian W. Omlin, 2022. "Balancing Profit, Risk, and Sustainability for Portfolio Management," Papers 2207.02134, arXiv.org.
    2. Hengxi Zhang & Zhendong Shi & Yuanquan Hu & Wenbo Ding & Ercan E. Kuruoglu & Xiao-Ping Zhang, 2023. "Optimizing Trading Strategies in Quantitative Markets using Multi-Agent Reinforcement Learning," Papers 2303.11959, arXiv.org, revised Dec 2023.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    2. MohammadAmin Fazli & Mahdi Lashkari & Hamed Taherkhani & Jafar Habibi, 2022. "A Novel Experts Advice Aggregation Framework Using Deep Reinforcement Learning for Portfolio Management," Papers 2212.14477, arXiv.org.
    3. Gang Huang & Xiaohua Zhou & Qingyang Song, 2020. "Deep reinforcement learning for portfolio management," Papers 2012.13773, arXiv.org, revised Apr 2022.
    4. Amir Mosavi & Pedram Ghamisi & Yaser Faghan & Puhong Duan, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Papers 2004.01509, arXiv.org.
    5. Mei-Li Shen & Cheng-Feng Lee & Hsiou-Hsiang Liu & Po-Yin Chang & Cheng-Hong Yang, 2021. "An Effective Hybrid Approach for Forecasting Currency Exchange Rates," Sustainability, MDPI, vol. 13(5), pages 1-29, March.
    6. Mengying Zhu & Xiaolin Zheng & Yan Wang & Yuyuan Li & Qianqiao Liang, 2019. "Adaptive Portfolio by Solving Multi-armed Bandit via Thompson Sampling," Papers 1911.05309, arXiv.org, revised Nov 2019.
    7. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    8. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    9. Yasuhiro Nakayama & Tomochika Sawaki, 2023. "Causal Inference on Investment Constraints and Non-stationarity in Dynamic Portfolio Optimization through Reinforcement Learning," Papers 2311.04946, arXiv.org.
    10. Yunan Ye & Hengzhi Pei & Boxin Wang & Pin-Yu Chen & Yada Zhu & Jun Xiao & Bo Li, 2020. "Reinforcement-Learning based Portfolio Management with Augmented Asset Movement Prediction States," Papers 2002.05780, arXiv.org.
    11. Adrian Millea, 2021. "Deep Reinforcement Learning for Trading—A Critical Survey," Data, MDPI, vol. 6(11), pages 1-25, November.
    12. Xiangyu Cui & Xun Li & Yun Shi & Si Zhao, 2023. "Discrete-Time Mean-Variance Strategy Based on Reinforcement Learning," Papers 2312.15385, arXiv.org.
    13. Kinyua, Johnson D. & Mutigwe, Charles & Cushing, Daniel J. & Poggi, Michael, 2021. "An analysis of the impact of President Trump’s tweets on the DJIA and S&P 500 using machine learning and sentiment analysis," Journal of Behavioral and Experimental Finance, Elsevier, vol. 29(C).
    14. Wonsup Shin & Seok-Jun Bu & Sung-Bae Cho, 2019. "Automatic Financial Trading Agent for Low-risk Portfolio Management using Deep Reinforcement Learning," Papers 1909.03278, arXiv.org.
    15. Ziming Gao & Yuan Gao & Yi Hu & Zhengyong Jiang & Jionglong Su, 2020. "Application of Deep Q-Network in Portfolio Management," Papers 2003.06365, arXiv.org.
    16. Yinheng Li & Junhao Wang & Yijie Cao, 2019. "A General Framework on Enhancing Portfolio Management with Reinforcement Learning," Papers 1911.11880, arXiv.org, revised Oct 2023.
    17. Zhaolu Dong & Shan Huang & Simiao Ma & Yining Qian, 2021. "Factor Representation and Decision Making in Stock Markets Using Deep Reinforcement Learning," Papers 2108.01758, arXiv.org.
    18. Zhenhan Huang & Fumihide Tanaka, 2021. "MSPM: A Modularized and Scalable Multi-Agent Reinforcement Learning-based System for Financial Portfolio Management," Papers 2102.03502, arXiv.org, revised Feb 2022.
    19. Ruan Pretorius & Terence van Zyl, 2022. "Deep Reinforcement Learning and Convex Mean-Variance Optimisation for Portfolio Management," Papers 2203.11318, arXiv.org.
    20. Saeed Marzban & Erick Delage & Jonathan Yumeng Li & Jeremie Desgagne-Bouchard & Carl Dussault, 2021. "WaveCorr: Correlation-savvy Deep Reinforcement Learning for Portfolio Management," Papers 2109.07005, arXiv.org, revised Sep 2021.



    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.