Deep Reinforcement Learning for Optimal Asset Allocation Using DDPG with TiDE

Author

Listed:
  • Rongwei Liu
  • Jin Zheng
  • John Cartlidge

Abstract

The optimal asset allocation between risky and risk-free assets is a persistent challenge due to the inherent volatility of financial markets. Conventional methods rely on strict distributional assumptions or non-additive reward ratios, which limit their robustness and applicability to investment goals. To overcome these constraints, this study formulates the optimal two-asset allocation problem as a sequential decision-making task within a Markov Decision Process (MDP). This framework enables the application of reinforcement learning (RL) mechanisms to develop dynamic policies based on simulated financial scenarios, without such distributional prerequisites. We use the Kelly criterion to balance immediate reward signals against long-term investment objectives, and we take the novel step of integrating the Time-series Dense Encoder (TiDE) into the Deep Deterministic Policy Gradient (DDPG) RL framework for continuous decision-making. We compare DDPG-TiDE with a simple discrete-action Q-learning RL framework and a passive buy-and-hold investment strategy. Empirical results show that DDPG-TiDE outperforms Q-learning and generates higher risk-adjusted returns than buy-and-hold. These findings suggest that tackling the optimal asset allocation problem by integrating TiDE within a DDPG reinforcement learning framework is a fruitful avenue for further exploration.
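As a concrete illustration of the reward design the abstract describes, the sketch below shows the classical Kelly/Merton allocation fraction for a two-asset (risky vs. risk-free) portfolio, together with the per-step log-wealth reward that makes the Kelly objective additive across MDP time steps. These are textbook formulas consistent with the abstract's framing; the function names, parameter values, and worked numbers are illustrative assumptions, not code or settings from the paper.

    # Minimal sketch (illustrative, not the paper's implementation):
    # Kelly/Merton fraction for a risky vs. risk-free allocation, and a
    # log-wealth per-step reward that sums to total log growth.
    import math

    def kelly_fraction(mu: float, r: float, sigma: float) -> float:
        """Fraction of wealth in the risky asset: f* = (mu - r) / sigma**2."""
        return (mu - r) / sigma ** 2

    def log_growth_reward(wealth_prev: float, wealth_now: float) -> float:
        """Per-step log return. Summing these over an episode gives total
        log-wealth growth, so the Kelly objective decomposes into an
        additive MDP reward, unlike ratio-based objectives such as the
        Sharpe ratio, which are non-additive across steps."""
        return math.log(wealth_now / wealth_prev)

    # Worked example: 8% expected risky return, 2% risk-free rate,
    # 20% volatility.
    f_star = kelly_fraction(mu=0.08, r=0.02, sigma=0.20)
    print(f"Kelly fraction: {f_star:.2f}")  # 1.50, i.e. a leveraged position

In the paper's continuous-action setting, a DDPG actor would output such an allocation fraction directly at each step, whereas the discrete-action Q-learning baseline presumably selects from a fixed set of allocation levels.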

Suggested Citation

  • Rongwei Liu & Jin Zheng & John Cartlidge, 2025. "Deep Reinforcement Learning for Optimal Asset Allocation Using DDPG with TiDE," Papers 2508.20103, arXiv.org.
  • Handle: RePEc:arx:papers:2508.20103

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2508.20103
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Chao Zhang & Zihao Zhang & Mihai Cucuringu & Stefan Zohren, 2021. "A Universal End-to-End Approach to Portfolio Optimization via Deep Learning," Papers 2111.09170, arXiv.org.
    2. Amit Goyal & Ivo Welch & Athanasse Zafirov, 2024. "A Comprehensive 2022 Look at the Empirical Performance of Equity Premium Prediction," The Review of Financial Studies, Society for Financial Studies, vol. 37(11), pages 3490-3557.
    3. R. Jiang & D. Saunders & C. Weng, 2022. "The reinforcement learning Kelly strategy," Quantitative Finance, Taylor & Francis Journals, vol. 22(8), pages 1445-1464, August.
    4. Merton, Robert C, 1969. "Lifetime Portfolio Selection under Uncertainty: The Continuous-Time Case," The Review of Economics and Statistics, MIT Press, vol. 51(3), pages 247-257, August.
    5. Kan, Raymond & Zhou, Guofu, 2007. "Optimal Portfolio Choice with Parameter Uncertainty," Journal of Financial and Quantitative Analysis, Cambridge University Press, vol. 42(3), pages 621-656, September.
    6. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    7. Ben Hambly & Renyuan Xu & Huining Yang, 2023. "Recent advances in reinforcement learning in finance," Mathematical Finance, Wiley Blackwell, vol. 33(3), pages 437-503, July.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Jiang, Yifu & Olmo, Jose & Atwi, Majed, 2025. "High-dimensional multi-period portfolio allocation using deep reinforcement learning," International Review of Economics & Finance, Elsevier, vol. 98(C).
    2. Minshuo Chen & Renyuan Xu & Yumin Xu & Ruixun Zhang, 2025. "Diffusion Factor Models: Generating High-Dimensional Returns with Factor Structure," Papers 2504.06566, arXiv.org, revised Jul 2025.
    3. Bouyaddou, Youssef & Jebabli, Ikram, 2025. "Integration of investor behavioral perspective and climate change in reinforcement learning for portfolio optimization," Research in International Business and Finance, Elsevier, vol. 73(PB).
    4. François, Pascal & Gauthier, Geneviève & Godin, Frédéric & Mendoza, Carlos Octavio Pérez, 2025. "Is the difference between deep hedging and delta hedging a statistical arbitrage?," Finance Research Letters, Elsevier, vol. 73(C).
    5. Alejandra de-la-Rica-Escudero & Eduardo C Garrido-Merchán & María Coronado-Vaca, 2025. "Explainable post hoc portfolio management financial policy of a Deep Reinforcement Learning agent," PLOS ONE, Public Library of Science, vol. 20(1), pages 1-19, January.
    6. Wu, Bo & Li, Lingfei, 2024. "Reinforcement learning for continuous-time mean-variance portfolio selection in a regime-switching market," Journal of Economic Dynamics and Control, Elsevier, vol. 158(C).
    7. Chen, Jia & Li, Degui & Linton, Oliver, 2019. "A new semiparametric estimation approach for large dynamic covariance matrices with multiple conditioning variables," Journal of Econometrics, Elsevier, vol. 212(1), pages 155-176.
    8. Konrad Mueller & Amira Akkari & Lukas Gonon & Ben Wood, 2024. "Fast Deep Hedging with Second-Order Optimization," Papers 2410.22568, arXiv.org.
    9. Nicole Bäuerle & Anna Jaśkiewicz, 2024. "Markov decision processes with risk-sensitive criteria: an overview," Mathematical Methods of Operations Research, Springer;Gesellschaft für Operations Research (GOR);Nederlands Genootschap voor Besliskunde (NGB), vol. 99(1), pages 141-178, April.
    10. Haoren Zhu & Pengfei Zhao & Wilfred Siu Hung NG & Dik Lun Lee, 2024. "Financial Assets Dependency Prediction Utilizing Spatiotemporal Patterns," Papers 2406.11886, arXiv.org.
    11. Jaskaran Singh Walia & Aarush Sinha & Srinitish Srinivasan & Srihari Unnikrishnan, 2025. "Predicting Liquidity-Aware Bond Yields using Causal GANs and Deep Reinforcement Learning with LLM Evaluation," Papers 2502.17011, arXiv.org.
    12. Min Dai & Yuchao Dong & Yanwei Jia & Xun Yu Zhou, 2023. "Data-Driven Merton's Strategies via Policy Randomization," Papers 2312.11797, arXiv.org, revised May 2025.
    13. Constantinos Kardaras & Hyeng Keun Koo & Johannes Ruf, 2022. "Estimation of growth in fund models," Papers 2208.02573, arXiv.org.
    14. Zhang, Jinqing & Jin, Zeyu & An, Yunbi, 2017. "Dynamic portfolio optimization with ambiguity aversion," Journal of Banking & Finance, Elsevier, vol. 79(C), pages 95-109.
    15. Ruili Sun & Tiefeng Ma & Shuangzhe Liu & Milind Sathye, 2019. "Improved Covariance Matrix Estimation for Portfolio Risk Measurement: A Review," JRFM, MDPI, vol. 12(1), pages 1-34, March.
    16. Guojun Xiong & Zhiyang Deng & Keyi Wang & Yupeng Cao & Haohang Li & Yangyang Yu & Xueqing Peng & Mingquan Lin & Kaleb E Smith & Xiao-Yang Liu & Jimin Huang & Sophia Ananiadou & Qianqian Xie, 2025. "FLAG-Trader: Fusion LLM-Agent with Gradient-based Reinforcement Learning for Financial Trading," Papers 2502.11433, arXiv.org, revised Feb 2025.
    17. Daniil Karzanov & Rubén Garzón & Mikhail Terekhov & Caglar Gulcehre & Thomas Raffinot & Marcin Detyniecki, 2025. "Regret-Optimized Portfolio Enhancement through Deep Reinforcement Learning and Future Looking Rewards," Papers 2502.02619, arXiv.org.
    18. Yuanfei Cui & Fengtong Yao, 2024. "Integrating Deep Learning and Reinforcement Learning for Enhanced Financial Risk Forecasting in Supply Chain Management," Journal of the Knowledge Economy, Springer;Portland International Center for Management of Engineering and Technology (PICMET), vol. 15(4), pages 20091-20110, December.
    19. Xiangyu Cui & Xun Li & Yun Shi & Si Zhao, 2023. "Discrete-Time Mean-Variance Strategy Based on Reinforcement Learning," Papers 2312.15385, arXiv.org.
    20. Ahmad Aghapour & Erhan Bayraktar & Fengyi Yuan, 2025. "Solving dynamic portfolio selection problems via score-based diffusion models," Papers 2507.09916, arXiv.org, revised Aug 2025.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2508.20103. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.