
A Novel Experts Advice Aggregation Framework Using Deep Reinforcement Learning for Portfolio Management

Author

Listed:
  • MohammadAmin Fazli
  • Mahdi Lashkari
  • Hamed Taherkhani
  • Jafar Habibi

Abstract

Solving portfolio management problems with deep reinforcement learning has attracted considerable attention in finance in recent years. We propose a new method that feeds expert signals and historical price data into a reinforcement learning framework. Although expert signals have been used in earlier work in finance, to the best of our knowledge this is the first time they are combined with deep RL to solve the financial portfolio management problem. Our proposed framework consists of a convolutional network for aggregating the signals, another convolutional network for historical price data, and a vanilla network. We use the Proximal Policy Optimization (PPO) algorithm as the agent to process the reward and take actions in the environment. The results suggest that, on average, our framework earns 90 percent of the profit obtained by the best expert.
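The page itself contains no code, but the architecture the abstract describes is concrete enough to illustrate. Below is a minimal, hypothetical PyTorch sketch of such a three-branch policy: one convolutional branch for the expert signals, one for the price-history window, and a vanilla fully connected head that maps the combined features to portfolio weights. The class name, tensor shapes, layer sizes, and the choice of framework are all assumptions made for illustration, not the authors' implementation.

# Hypothetical sketch (not the authors' code): a three-branch policy network
# combining expert signals and historical prices, as described in the abstract.
import torch
import torch.nn as nn


class ExpertAggregationPolicy(nn.Module):
    """Illustrative policy: one conv branch for expert signals, one for
    price history, and a vanilla fully connected head that outputs
    portfolio weights. All shapes and layer sizes are assumptions."""

    def __init__(self, n_assets: int, n_experts: int, window: int):
        super().__init__()
        # Branch 1: expert signals, shaped (batch, n_experts, n_assets).
        self.signal_conv = nn.Sequential(
            nn.Conv1d(n_experts, 16, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Branch 2: price history, shaped (batch, 1, n_assets, window).
        self.price_conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 3), padding=(0, 1)),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((n_assets, 1)),  # collapse the time axis
        )
        # Vanilla head: concatenated per-asset features -> portfolio weights.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * n_assets, 64),
            nn.ReLU(),
            nn.Linear(64, n_assets),
        )

    def forward(self, signals: torch.Tensor, prices: torch.Tensor) -> torch.Tensor:
        s = self.signal_conv(signals)                 # (batch, 16, n_assets)
        p = self.price_conv(prices).squeeze(-1)       # (batch, 16, n_assets)
        logits = self.head(torch.cat([s, p], dim=1))  # (batch, n_assets)
        return torch.softmax(logits, dim=-1)          # weights sum to 1


# Example forward pass with made-up dimensions.
policy = ExpertAggregationPolicy(n_assets=10, n_experts=5, window=50)
w = policy(torch.randn(4, 5, 10), torch.randn(4, 1, 10, 50))
print(w.shape, w.sum(dim=-1))  # torch.Size([4, 10]), each row sums to ~1

In the setup the abstract describes, a PPO agent would treat the emitted weight vector as its action and the resulting portfolio return as its reward; the trading environment and the PPO training loop (clipped surrogate objective, value head, and so on) are omitted here.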

Suggested Citation

  • MohammadAmin Fazli & Mahdi Lashkari & Hamed Taherkhani & Jafar Habibi, 2022. "A Novel Experts Advice Aggregation Framework Using Deep Reinforcement Learning for Portfolio Management," Papers 2212.14477, arXiv.org.
  • Handle: RePEc:arx:papers:2212.14477

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2212.14477
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Xingyu Yang & Jin’an He & Yong Zhang, 2022. "Aggregating exponential gradient expert advice for online portfolio selection," Journal of the Operational Research Society, Taylor & Francis Journals, vol. 73(3), pages 587-597, March.
    2. Sourav Bhattacharya & Arijit Mukherjee, 2013. "Strategic information revelation when experts compete to influence," RAND Journal of Economics, RAND Corporation, vol. 44(3), pages 522-544, September.
    3. Fischer, Thomas G., 2018. "Reinforcement learning in financial markets - a survey," FAU Discussion Papers in Economics 12/2018, Friedrich-Alexander University Erlangen-Nuremberg, Institute for Economics.
    4. Xingyu Yang & Jin’an He & Hong Lin & Yong Zhang, 2020. "Boosting Exponential Gradient Strategy for Online Portfolio Selection: An Aggregating Experts’ Advice Method," Computational Economics, Springer;Society for Computational Economics, vol. 55(1), pages 231-251, January.
    5. Gottschlich, Jörg & Hinz, Oliver, 2014. "A Decision Support System for Stock Investment Recommendations Using Collective Wisdom," Publications of Darmstadt Technical University, Institute for Business Studies (BWL) 69939, Darmstadt Technical University, Department of Business Administration, Economics and Law, Institute for Business Studies (BWL).
    6. Zhengyao Jiang & Dixing Xu & Jinjun Liang, 2017. "A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem," Papers 1706.10059, arXiv.org, revised Jul 2017.
    7. Zhipeng Liang & Hao Chen & Junhao Zhu & Kangkang Jiang & Yanran Li, 2018. "Adversarial Deep Reinforcement Learning in Portfolio Management," Papers 1808.09940, arXiv.org, revised Nov 2018.
    8. Sourav Bhattacharya & Arijit Mukherjee, 2011. "Strategic Information Revelation when Experts Compete to Influence," Working Paper 453, Department of Economics, University of Pittsburgh, revised Jan 2013.
    9. Angelos Filos, 2019. "Reinforcement Learning for Portfolio Management," Papers 1909.09571, arXiv.org.
    10. Yong Zhang & Xingyu Yang, 2017. "Online Portfolio Selection Strategy Based on Combining Experts’ Advice," Computational Economics, Springer;Society for Computational Economics, vol. 50(1), pages 141-159, June.
    11. Haugen, Robert A. & Senbet, Lemma W., 1988. "Bankruptcy and Agency Costs: Their Significance to the Theory of Optimal Capital Structure," Journal of Financial and Quantitative Analysis, Cambridge University Press, vol. 23(1), pages 27-38, March.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    2. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    3. Adrian Millea, 2021. "Deep Reinforcement Learning for Trading—A Critical Survey," Data, MDPI, vol. 6(11), pages 1-25, November.
    4. Huanming Zhang & Zhengyong Jiang & Jionglong Su, 2021. "A Deep Deterministic Policy Gradient-based Strategy for Stocks Portfolio Management," Papers 2103.11455, arXiv.org.
    5. Xiao-Yang Liu & Hongyang Yang & Jiechao Gao & Christina Dan Wang, 2021. "FinRL: Deep Reinforcement Learning Framework to Automate Trading in Quantitative Finance," Papers 2111.09395, arXiv.org.
    6. Gang Huang & Xiaohua Zhou & Qingyang Song, 2020. "Deep reinforcement learning for portfolio management," Papers 2012.13773, arXiv.org, revised Apr 2022.
    7. Xiao-Yang Liu & Hongyang Yang & Qian Chen & Runjia Zhang & Liuqing Yang & Bowen Xiao & Christina Dan Wang, 2020. "FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance," Papers 2011.09607, arXiv.org, revised Mar 2022.
    8. Gregor Martin, 2015. "To Invite or Not to Invite a Lobby, That Is the Question," The B.E. Journal of Theoretical Economics, De Gruyter, vol. 15(2), pages 143-166, July.
    9. Claude Fluet & Thomas Lanzi, 2021. "Cross-Examination," Working Papers of BETA 2021-40, Bureau d'Economie Théorique et Appliquée, UDS, Strasbourg.
    10. Amir Mosavi & Pedram Ghamisi & Yaser Faghan & Puhong Duan, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Papers 2004.01509, arXiv.org.
    11. Ispano, Alessandro, 2016. "Persuasion and receiver’s news," Economics Letters, Elsevier, vol. 141(C), pages 60-63.
    12. Winand Emons & Claude Fluet, 2019. "Strategic communication with reporting costs," Theory and Decision, Springer, vol. 87(3), pages 341-363, October.
    13. Charl Maree & Christian W. Omlin, 2022. "Balancing Profit, Risk, and Sustainability for Portfolio Management," Papers 2207.02134, arXiv.org.
    14. Mei-Li Shen & Cheng-Feng Lee & Hsiou-Hsiang Liu & Po-Yin Chang & Cheng-Hong Yang, 2021. "An Effective Hybrid Approach for Forecasting Currency Exchange Rates," Sustainability, MDPI, vol. 13(5), pages 1-29, March.
    15. Martin Gregor, 2014. "Receiver's access fee for a single sender," Working Papers IES 2014/17, Charles University Prague, Faculty of Social Sciences, Institute of Economic Studies, revised May 2014.
    16. Mengying Zhu & Xiaolin Zheng & Yan Wang & Yuyuan Li & Qianqiao Liang, 2019. "Adaptive Portfolio by Solving Multi-armed Bandit via Thompson Sampling," Papers 1911.05309, arXiv.org, revised Nov 2019.
    17. Bhattacharya, Sourav & Goltsman, Maria & Mukherjee, Arijit, 2018. "On the optimality of diverse expert panels in persuasion games," Games and Economic Behavior, Elsevier, vol. 107(C), pages 345-363.
    18. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    19. Arnold Polanski & Mark Quement, 2023. "The battle of opinion: dynamic information revelation by ideological senders," International Journal of Game Theory, Springer;Game Theory Society, vol. 52(2), pages 463-483, June.
    20. Amorós, Pablo, 2023. "Evaluation and strategic manipulation," Journal of Mathematical Economics, Elsevier, vol. 106(C).


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2212.14477. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators. General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.