Printed from https://ideas.repec.org/p/arx/papers/2511.18076.html

Reinforcement Learning for Portfolio Optimization with a Financial Goal and Defined Time Horizons

Authors

  • Fermat Leukam
  • Rock Stephane Koffi
  • Prudence Djagba

Abstract

This research proposes an enhancement to a portfolio optimization approach based on the G-Learning algorithm, combined with parametric optimization via the GIRL algorithm (a G-learning approach to the setting of Inverse Reinforcement Learning), as presented in prior work. The goal is to maximize portfolio value by a target date while minimizing the investor's periodic contributions. Our model operates in a highly volatile market with a well-diversified portfolio, ensuring a low risk level for the investor, and leverages reinforcement learning to dynamically adjust portfolio positions over time. Results show that we improved the Sharpe ratio from 0.42, as reported by recent studies using the same approach, to 0.483, a notable achievement in highly volatile markets with diversified portfolios. The comparison between G-Learning and GIRL reveals that while GIRL optimizes the reward-function parameters (e.g., lambda = 0.0012 compared to 0.002), its impact on portfolio performance remains marginal. This suggests that reinforcement learning methods like G-Learning already enable robust optimization. This research contributes to the growing body of reinforcement learning applications in financial decision-making, demonstrating that probabilistic learning algorithms can effectively align portfolio management strategies with investor needs.
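The abstract refers to G-learning, a KL-regularized (entropy-regularized) variant of Q-learning in which the Bellman backup becomes a log-sum-exp ("soft max") over actions weighted by a prior policy. The sketch below is a minimal illustration of that idea on a random toy MDP, together with a simple Sharpe ratio helper; it is not the paper's implementation, and every name, dimension, and parameter value here (beta, gamma, the random rewards and transitions) is an assumption chosen for illustration only.

```python
import numpy as np

# Toy tabular G-learning-style soft value iteration (illustrative only).
# The KL regularizer toward a prior policy turns the Bellman backup into
# a log-sum-exp over actions instead of a hard max.

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3
beta = 2.0       # inverse temperature of the KL regularizer (assumed value)
gamma = 0.95     # discount factor (assumed value)
prior = np.full(n_actions, 1.0 / n_actions)   # uniform prior policy

R = rng.normal(size=(n_states, n_actions))    # random rewards (toy data)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # transitions

G = np.zeros((n_states, n_actions))
for _ in range(500):
    # Free energy F(s) = (1/beta) * log sum_a prior(a) * exp(beta * G(s, a))
    F = np.log((prior * np.exp(beta * G)).sum(axis=1)) / beta
    G = R + gamma * P @ F   # soft Bellman backup

# Boltzmann policy implied by G: pi(a|s) proportional to prior(a) * exp(beta * G(s, a))
policy = prior * np.exp(beta * G)
policy /= policy.sum(axis=1, keepdims=True)

def sharpe_ratio(returns, risk_free=0.0, periods_per_year=12):
    """Annualized Sharpe ratio from periodic returns (illustrative helper)."""
    excess = np.asarray(returns, dtype=float) - risk_free
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)
```

As beta grows the log-sum-exp approaches a hard max and the scheme recovers ordinary value iteration; as beta shrinks the policy stays close to the prior, which is the trade-off the GIRL step tunes via the reward-function parameters.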

Suggested Citation

  • Fermat Leukam & Rock Stephane Koffi & Prudence Djagba, 2025. "Reinforcement Learning for Portfolio Optimization with a Financial Goal and Defined Time Horizons," Papers 2511.18076, arXiv.org.
  • Handle: RePEc:arx:papers:2511.18076

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2511.18076
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Scott, Robert Haney, 1976. "Teaching the Financial Markets Course," Journal of Financial and Quantitative Analysis, Cambridge University Press, vol. 11(4), pages 591-594, November.
    2. Matthew F. Dixon & Igor Halperin & Paul Bilokon, 2020. "Machine Learning in Finance," Springer Books, Springer, number 978-3-030-41068-1, January.
    3. Matthew F. Dixon & Igor Halperin & Paul Bilokon, 2020. "Frontiers of Machine Learning and Finance," Springer Books, in: Machine Learning in Finance, chapter 0, pages 519-541, Springer.
    4. Zhengyao Jiang & Dixing Xu & Jinjun Liang, 2017. "A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem," Papers 1706.10059, arXiv.org, revised Jul 2017.
    5. Sanjiv R. Das & Daniel Ostrov & Anand Radhakrishnan & Deep Srivastav, 2020. "Dynamic portfolio allocation in goals-based wealth management," Computational Management Science, Springer, vol. 17(4), pages 613-640, December.
    6. Manganelli, Simone & Popov, Alexander, 2010. "Finance and diversification," Working Paper Series 1259, European Central Bank.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Jiahua Xu & Yebo Feng & Daniel Perez & Benjamin Livshits, 2023. "Auto.gov: Learning-based Governance for Decentralized Finance (DeFi)," Papers 2302.09551, arXiv.org, revised May 2025.
    2. Caio de Souza Barbosa Costa & Anna Helena Reali Costa, 2025. "Comparing Normalization Methods for Portfolio Optimization with Reinforcement Learning," Papers 2508.03910, arXiv.org.
    3. Benjamin Coriat & Eric Benhamou, 2025. "HARLF: Hierarchical Reinforcement Learning and Lightweight LLM-Driven Sentiment Integration for Financial Portfolio Optimization," Papers 2507.18560, arXiv.org.
    4. Amir Mosavi & Pedram Ghamisi & Yaser Faghan & Puhong Duan, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Papers 2004.01509, arXiv.org.
5. Alexandre Carbonneau & Frédéric Godin, 2021. "Deep equal risk pricing of financial derivatives with non-translation invariant risk measures," Papers 2107.11340, arXiv.org.
    6. Fischer, Thomas G., 2018. "Reinforcement learning in financial markets - a survey," FAU Discussion Papers in Economics 12/2018, Friedrich-Alexander University Erlangen-Nuremberg, Institute for Economics.
    7. Charl Maree & Christian W. Omlin, 2022. "Balancing Profit, Risk, and Sustainability for Portfolio Management," Papers 2207.02134, arXiv.org.
    8. Stella C. Dong & James R. Finlay, 2025. "Dynamic Reinsurance Treaty Bidding via Multi-Agent Reinforcement Learning," Papers 2506.13113, arXiv.org.
    9. Mei-Li Shen & Cheng-Feng Lee & Hsiou-Hsiang Liu & Po-Yin Chang & Cheng-Hong Yang, 2021. "An Effective Hybrid Approach for Forecasting Currency Exchange Rates," Sustainability, MDPI, vol. 13(5), pages 1-29, March.
    10. Weiyao Kang & Bingjia Shao & Hongquan Chen, 2024. "What influences users’ continuance intention of internet wealth management services? A perspective from network externalities and herding," Electronic Commerce Research, Springer, vol. 24(1), pages 205-238, March.
    11. Martino Banchio & Giacomo Mantegazza, 2022. "Artificial Intelligence and Spontaneous Collusion," Papers 2202.05946, arXiv.org, revised Sep 2023.
    12. Miquel Noguer i Alonso & Sonam Srivastava, 2020. "Deep Reinforcement Learning for Asset Allocation in US Equities," Papers 2010.04404, arXiv.org.
    13. Mengying Zhu & Xiaolin Zheng & Yan Wang & Yuyuan Li & Qianqiao Liang, 2019. "Adaptive Portfolio by Solving Multi-armed Bandit via Thompson Sampling," Papers 1911.05309, arXiv.org, revised Nov 2019.
    14. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    15. Jinkyu Kim & Hyunjung Yi & Mogan Gim & Donghee Choi & Jaewoo Kang, 2025. "DeepAries: Adaptive Rebalancing Interval Selection for Enhanced Portfolio Selection," Papers 2510.14985, arXiv.org.
    16. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    17. Nymisha Bandi & Theja Tulabandhula, 2020. "Off-Policy Optimization of Portfolio Allocation Policies under Constraints," Papers 2012.11715, arXiv.org.
    18. Alessio Brini & Daniele Tantari, 2021. "Deep Reinforcement Trading with Predictable Returns," Papers 2104.14683, arXiv.org, revised May 2023.
    19. Jian Guo & Heung-Yeung Shum, 2024. "Large Investment Model," Papers 2408.10255, arXiv.org, revised Aug 2024.
    20. Carbonneau, Alexandre, 2021. "Deep hedging of long-term financial derivatives," Insurance: Mathematics and Economics, Elsevier, vol. 99(C), pages 327-340.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2511.18076. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register. This allows your profile to be linked to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.