IDEAS home Printed from https://ideas.repec.org/p/arx/papers/2404.08935.html

Developing An Attention-Based Ensemble Learning Framework for Financial Portfolio Optimisation

Author

Listed:
  • Zhenglong Li
  • Vincent Tam

Abstract

In recent years, deep and reinforcement learning approaches have been applied to optimise investment portfolios by learning the spatial and temporal information of dynamic financial markets. Yet in most cases, the existing approaches may produce biased trading signals from conventional price data owing to substantial market noise, and thus fail to balance investment returns against risks. Accordingly, this work proposes MASAAT, a multi-agent and self-adaptive portfolio optimisation framework integrating attention mechanisms and time-series analysis, in which multiple trading agents observe and analyse both price series and directional change data that capture significant movements of asset prices at different levels of granularity, thereby enhancing the signal-to-noise ratio of the price series. By reconstructing the tokens of financial data into sequences, the attention-based cross-sectional analysis module and temporal analysis module of each agent can effectively capture the correlations between assets and the dependencies between time points. In addition, a portfolio generator fuses the spatial-temporal information and summarises the portfolios suggested by all trading agents into a new ensemble portfolio, reducing biased trading actions and balancing overall returns and risks. The experimental results clearly demonstrate that the MASAAT framework achieves impressive enhancements over many well-known portfolio optimisation approaches on three challenging data sets: DJIA, S&P 500 and CSI 300. More importantly, the proposal has potential strengths in many possible applications for future study.
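Two ideas in the abstract can be illustrated with a minimal sketch: directional change (DC) event detection, which summarises a price series as confirmed reversals of at least a threshold size (filtering out smaller fluctuations), and the fusion of several agents' portfolios into one ensemble. The sketch below is not the paper's implementation; the function names, the relative threshold convention, and the simple averaging fusion (in place of the paper's learned summarisation) are all illustrative assumptions.

```python
def directional_change_events(prices, theta):
    """Detect directional-change (DC) events in a price series.

    A DC event is confirmed when the price reverses by at least `theta`
    (a relative threshold, e.g. 0.02 = 2%) from the running extreme of
    the current trend; smaller fluctuations are filtered out.
    Returns a list of (index, price, 'up' | 'down') confirmation points.
    """
    events = []
    ext = prices[0]   # running extreme: high in an uptrend, low in a downtrend
    trend = 'up'      # initial trend assumed upward (a common convention)
    for i, p in enumerate(prices[1:], start=1):
        if trend == 'up':
            if p > ext:
                ext = p                        # new high extends the uptrend
            elif p <= ext * (1 - theta):
                events.append((i, p, 'down'))  # downturn confirmed
                trend, ext = 'down', p
        else:
            if p < ext:
                ext = p                        # new low extends the downtrend
            elif p >= ext * (1 + theta):
                events.append((i, p, 'up'))    # upturn confirmed
                trend, ext = 'up', p
    return events


def ensemble_portfolio(agent_weights):
    """Fuse the portfolios suggested by multiple agents by simple
    averaging (a stand-in for the paper's learned fusion), renormalised
    so the ensemble weights sum to one."""
    avg = [sum(col) / len(agent_weights) for col in zip(*agent_weights)]
    total = sum(avg)
    return [w / total for w in avg]
```

With a 2% threshold, a series such as `[100, 101, 103, 100, 99, 101.5, 102]` yields one confirmed downturn (at the drop from 103 to 100) and one confirmed upturn (at the rebound past 2% above the low of 99); a larger threshold produces a coarser summary, which is the granularity dimension the multiple agents exploit.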

Suggested Citation

  • Zhenglong Li & Vincent Tam, 2024. "Developing An Attention-Based Ensemble Learning Framework for Financial Portfolio Optimisation," Papers 2404.08935, arXiv.org.
  • Handle: RePEc:arx:papers:2404.08935

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2404.08935
    File Function: Latest version
    Download Restriction: no
    ---><---

    References listed on IDEAS

    1. Ben Hambly & Renyuan Xu & Huining Yang, 2023. "Recent advances in reinforcement learning in finance," Mathematical Finance, Wiley Blackwell, vol. 33(3), pages 437-503, July.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Bouyaddou, Youssef & Jebabli, Ikram, 2025. "Integration of investor behavioral perspective and climate change in reinforcement learning for portfolio optimization," Research in International Business and Finance, Elsevier, vol. 73(PB).
    2. François, Pascal & Gauthier, Geneviève & Godin, Frédéric & Mendoza, Carlos Octavio Pérez, 2025. "Is the difference between deep hedging and delta hedging a statistical arbitrage?," Finance Research Letters, Elsevier, vol. 73(C).
    3. Alejandra de-la-Rica-Escudero & Eduardo C Garrido-Merchán & María Coronado-Vaca, 2025. "Explainable post hoc portfolio management financial policy of a Deep Reinforcement Learning agent," PLOS ONE, Public Library of Science, vol. 20(1), pages 1-19, January.
    4. Wu, Bo & Li, Lingfei, 2024. "Reinforcement learning for continuous-time mean-variance portfolio selection in a regime-switching market," Journal of Economic Dynamics and Control, Elsevier, vol. 158(C).
    5. Konrad Mueller & Amira Akkari & Lukas Gonon & Ben Wood, 2024. "Fast Deep Hedging with Second-Order Optimization," Papers 2410.22568, arXiv.org.
    6. Nicole Bäuerle & Anna Jaśkiewicz, 2024. "Markov decision processes with risk-sensitive criteria: an overview," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 99(1), pages 141-178, April.
    7. Tonkin, Isaac & Gepp, Adrian & Harris, Geoff & Vanstone, Bruce, 2025. "Benchmarking deep reinforcement learning approaches to trade execution," Pacific-Basin Finance Journal, Elsevier, vol. 94(C).
    8. Haoren Zhu & Pengfei Zhao & Wilfred Siu Hung NG & Dik Lun Lee, 2024. "Financial Assets Dependency Prediction Utilizing Spatiotemporal Patterns," Papers 2406.11886, arXiv.org.
    9. Jaskaran Singh Walia & Aarush Sinha & Srinitish Srinivasan & Srihari Unnikrishnan, 2025. "Predicting Liquidity-Aware Bond Yields using Causal GANs and Deep Reinforcement Learning with LLM Evaluation," Papers 2502.17011, arXiv.org.
    10. Mohammad Rezoanul Hoque & Md Meftahul Ferdaus & M. Kabir Hassan, 2025. "Reinforcement Learning in Financial Decision Making: A Systematic Review of Performance, Challenges, and Implementation Strategies," Papers 2512.10913, arXiv.org.
    11. Jiang, Yifu & Olmo, Jose & Atwi, Majed, 2025. "High-dimensional multi-period portfolio allocation using deep reinforcement learning," International Review of Economics & Finance, Elsevier, vol. 98(C).
    12. Rongwei Liu & Jin Zheng & John Cartlidge, 2025. "Deep Reinforcement Learning for Optimal Asset Allocation Using DDPG with TiDE," Papers 2508.20103, arXiv.org.
    13. Julius Graf & Thibaut Mastrolia, 2026. "Learning Market Making with Closing Auctions," Papers 2601.17247, arXiv.org.
    14. Guojun Xiong & Zhiyang Deng & Keyi Wang & Yupeng Cao & Haohang Li & Yangyang Yu & Xueqing Peng & Mingquan Lin & Kaleb E Smith & Xiao-Yang Liu & Jimin Huang & Sophia Ananiadou & Qianqian Xie, 2025. "FLAG-Trader: Fusion LLM-Agent with Gradient-based Reinforcement Learning for Financial Trading," Papers 2502.11433, arXiv.org, revised Feb 2025.
    15. Daniil Karzanov & Rub'en Garz'on & Mikhail Terekhov & Caglar Gulcehre & Thomas Raffinot & Marcin Detyniecki, 2025. "Regret-Optimized Portfolio Enhancement through Deep Reinforcement Learning and Future Looking Rewards," Papers 2502.02619, arXiv.org.
    16. Yuanfei Cui & Fengtong Yao, 2024. "RETRACTED ARTICLE: Integrating Deep Learning and Reinforcement Learning for Enhanced Financial Risk Forecasting in Supply Chain Management," Journal of the Knowledge Economy, Springer;Portland International Center for Management of Engineering and Technology (PICMET), vol. 15(4), pages 20091-20110, December.
    17. Hanqing Jin & Renyuan Xu & Yanzhao Yang, 2025. "Adaptive Partitioning and Learning for Stochastic Control of Diffusion Processes," Papers 2512.14991, arXiv.org.
    18. Xiangyu Cui & Xun Li & Yun Shi & Si Zhao, 2023. "Discrete-Time Mean-Variance Strategy Based on Reinforcement Learning," Papers 2312.15385, arXiv.org.
    19. Ahmad Aghapour & Erhan Bayraktar & Fengyi Yuan, 2025. "Solving dynamic portfolio selection problems via score-based diffusion models," Papers 2507.09916, arXiv.org, revised Aug 2025.
    20. Shanyu Han & Yang Liu & Xiang Yu, 2025. "Risk-sensitive Reinforcement Learning Based on Convex Scoring Functions," Papers 2505.04553, arXiv.org, revised May 2025.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2404.08935. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item about which we are uncertain.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.