IDEAS home Printed from https://ideas.repec.org/p/arx/papers/2102.03502.html

MSPM: A Modularized and Scalable Multi-Agent Reinforcement Learning-based System for Financial Portfolio Management

Author

Listed:
  • Zhenhan Huang
  • Fumihide Tanaka

Abstract

Financial portfolio management (PM) is a natural fit for reinforcement learning (RL) owing to its sequential decision-making nature. However, existing RL-based approaches rarely focus on scalability or reusability to adapt to ever-changing markets: they are too rigid to accommodate a varying number of portfolio assets and the growing need for heterogeneous data, and their agents are trained ad hoc and are hardly reusable across portfolios. To confront these problems, a modular design is desired in which the system is composed of reusable, asset-dedicated agents. In this paper, we propose MSPM, a multi-agent RL-based system for PM. MSPM involves two types of asynchronously updated modules: the Evolving Agent Module (EAM) and the Strategic Agent Module (SAM). An EAM is an information-generating module built around a DQN agent; it receives heterogeneous data and generates signal-comprised information for a particular asset. An SAM is a decision-making module built around a PPO agent for portfolio optimization; it connects to EAMs to reallocate the assets in a portfolio, and trained EAMs can be connected to any SAM at will. With its modularized architecture, its multi-step condensation of volatile market information, and the reusable design of EAM, MSPM simultaneously addresses the two challenges in RL-based PM: scalability and reusability. Experiments on eight years of U.S. stock market data demonstrate the effectiveness of MSPM in profit accumulation: it outperforms five baselines in terms of accumulated rate of return (ARR), daily rate of return, and Sortino ratio, improving ARR by at least 186.5% over CRP, a widely used PM strategy. To validate the indispensability of EAM, we back-test and compare MSPMs on four portfolios; EAM-enabled MSPMs improve ARR by at least 1341.8% over EAM-disabled MSPMs.
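The EAM/SAM decomposition described in the abstract can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the class and method names are assumptions, and placeholder logic stands in for the DQN and PPO agents the paper actually trains.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class EvolvingAgentModule:
    """Asset-dedicated signal generator (a DQN agent in the paper)."""
    asset: str

    def signals(self, heterogeneous_data: Dict[str, float]) -> List[float]:
        # Condense raw heterogeneous inputs into signal-comprised
        # information. A real EAM would run a trained DQN here; the raw
        # feature values are returned as a stand-in.
        return list(heterogeneous_data.values())

@dataclass
class StrategicAgentModule:
    """Portfolio-level decision maker (a PPO agent in the paper)."""
    eams: List[EvolvingAgentModule] = field(default_factory=list)

    def connect(self, eam: EvolvingAgentModule) -> None:
        # Trained EAMs are reusable: any EAM can be attached to any SAM.
        self.eams.append(eam)

    def reallocate(self, market: Dict[str, Dict[str, float]]) -> Dict[str, float]:
        # Gather per-asset signals from the connected EAMs, then emit
        # portfolio weights. A real SAM would feed `feats` to a PPO
        # policy; a uniform allocation stands in here.
        feats = {eam.asset: eam.signals(market[eam.asset]) for eam in self.eams}
        n = len(self.eams)
        return {asset: 1.0 / n for asset in feats}

# Usage: build asset-dedicated EAMs once, then plug them into any SAM.
sam = StrategicAgentModule()
sam.connect(EvolvingAgentModule("AAPL"))
sam.connect(EvolvingAgentModule("MSFT"))
weights = sam.reallocate({"AAPL": {"close": 1.0}, "MSFT": {"close": 2.0}})
```

The point of the sketch is the interface boundary: because an EAM exposes only per-asset signals, a trained EAM can be shared across portfolios, which is how the paper obtains reusability and scalability at once.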

Suggested Citation

  • Zhenhan Huang & Fumihide Tanaka, 2021. "MSPM: A Modularized and Scalable Multi-Agent Reinforcement Learning-based System for Financial Portfolio Management," Papers 2102.03502, arXiv.org, revised Feb 2022.
  • Handle: RePEc:arx:papers:2102.03502

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2102.03502
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Mihály Ormos & András Urbán, 2013. "Performance analysis of log-optimal portfolio strategies with transaction costs," Quantitative Finance, Taylor & Francis Journals, vol. 13(10), pages 1587-1597, October.
    2. Yunan Ye & Hengzhi Pei & Boxin Wang & Pin-Yu Chen & Yada Zhu & Jun Xiao & Bo Li, 2020. "Reinforcement-Learning based Portfolio Management with Augmented Asset Movement Prediction States," Papers 2002.05780, arXiv.org.
    3. Zhengyao Jiang & Dixing Xu & Jinjun Liang, 2017. "A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem," Papers 1706.10059, arXiv.org, revised Jul 2017.
    4. Zhipeng Liang & Hao Chen & Junhao Zhu & Kangkang Jiang & Yanran Li, 2018. "Adversarial Deep Reinforcement Learning in Portfolio Management," Papers 1808.09940, arXiv.org, revised Nov 2018.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Zhenhan Huang & Fumihide Tanaka, 2023. "A Scalable Reinforcement Learning-based System Using On-Chain Data for Cryptocurrency Portfolio Management," Papers 2307.01599, arXiv.org.
    2. Hui Niu & Siyuan Li & Jian Li, 2022. "MetaTrader: An Reinforcement Learning Approach Integrating Diverse Policies for Portfolio Optimization," Papers 2210.01774, arXiv.org.
    3. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    2. Yunan Ye & Hengzhi Pei & Boxin Wang & Pin-Yu Chen & Yada Zhu & Jun Xiao & Bo Li, 2020. "Reinforcement-Learning based Portfolio Management with Augmented Asset Movement Prediction States," Papers 2002.05780, arXiv.org.
    3. Yinheng Li & Junhao Wang & Yijie Cao, 2019. "A General Framework on Enhancing Portfolio Management with Reinforcement Learning," Papers 1911.11880, arXiv.org, revised Oct 2023.
    4. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay, 2020. "Time your hedge with Deep Reinforcement Learning," Papers 2009.14136, arXiv.org, revised Nov 2020.
    5. Ricard Durall, 2022. "Asset Allocation: From Markowitz to Deep Reinforcement Learning," Papers 2208.07158, arXiv.org.
    6. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay & Jamal Atif, 2020. "AAMDRL: Augmented Asset Management with Deep Reinforcement Learning," Papers 2010.08497, arXiv.org.
    7. Amir Mosavi & Pedram Ghamisi & Yaser Faghan & Puhong Duan, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Papers 2004.01509, arXiv.org.
    8. Mei-Li Shen & Cheng-Feng Lee & Hsiou-Hsiang Liu & Po-Yin Chang & Cheng-Hong Yang, 2021. "An Effective Hybrid Approach for Forecasting Currency Exchange Rates," Sustainability, MDPI, vol. 13(5), pages 1-29, March.
    9. Mengying Zhu & Xiaolin Zheng & Yan Wang & Yuyuan Li & Qianqiao Liang, 2019. "Adaptive Portfolio by Solving Multi-armed Bandit via Thompson Sampling," Papers 1911.05309, arXiv.org, revised Nov 2019.
    10. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    11. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    12. Yasuhiro Nakayama & Tomochika Sawaki, 2023. "Causal Inference on Investment Constraints and Non-stationarity in Dynamic Portfolio Optimization through Reinforcement Learning," Papers 2311.04946, arXiv.org.
    13. Hui Niu & Siyuan Li & Jian Li, 2022. "MetaTrader: An Reinforcement Learning Approach Integrating Diverse Policies for Portfolio Optimization," Papers 2210.01774, arXiv.org.
    14. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay, 2020. "Bridging the gap between Markowitz planning and deep reinforcement learning," Papers 2010.09108, arXiv.org.
    15. Xiangyu Cui & Xun Li & Yun Shi & Si Zhao, 2023. "Discrete-Time Mean-Variance Strategy Based on Reinforcement Learning," Papers 2312.15385, arXiv.org.
    16. Huanming Zhang & Zhengyong Jiang & Jionglong Su, 2021. "A Deep Deterministic Policy Gradient-based Strategy for Stocks Portfolio Management," Papers 2103.11455, arXiv.org.
    17. Kinyua, Johnson D. & Mutigwe, Charles & Cushing, Daniel J. & Poggi, Michael, 2021. "An analysis of the impact of President Trump’s tweets on the DJIA and S&P 500 using machine learning and sentiment analysis," Journal of Behavioral and Experimental Finance, Elsevier, vol. 29(C).
    18. Zhaolu Dong & Shan Huang & Simiao Ma & Yining Qian, 2021. "Factor Representation and Decision Making in Stock Markets Using Deep Reinforcement Learning," Papers 2108.01758, arXiv.org.
    19. Eric Benhamou & David Saltiel & Serge Tabachnik & Sui Kai Wong & François Chareyron, 2021. "Distinguish the indistinguishable: a Deep Reinforcement Learning approach for volatility targeting models," Working Papers hal-03202431, HAL.
    20. Saeed Marzban & Erick Delage & Jonathan Yumeng Li & Jeremie Desgagne-Bouchard & Carl Dussault, 2021. "WaveCorr: Correlation-savvy Deep Reinforcement Learning for Portfolio Management," Papers 2109.07005, arXiv.org, revised Sep 2021.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2102.03502. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.