IDEAS home Printed from https://ideas.repec.org/p/arx/papers/2102.06233.html

Deep Reinforcement Learning for Portfolio Optimization using Latent Feature State Space (LFSS) Module

Author

Listed:
  • Kumar Yashaswi

Abstract

Dynamic portfolio optimization is the process of distributing and rebalancing a fund across financial assets, such as stocks and cryptocurrencies, over consecutive trading periods so as to maximize accumulated profit or minimize risk over a time horizon. The field has seen major developments in recent years, driven by increased computational power and growing research on sequential decision making through control theory. Reinforcement learning (RL) has recently become an important tool in the development of sequential and dynamic portfolio optimization theory. In this paper, we design a deep reinforcement learning (DRL) framework as an autonomous portfolio optimization agent, built around a Latent Feature State Space (LFSS) module that filters financial data and extracts features to serve as the state space for the deep RL model. We develop an extensive RL agent with efficiency and performance advantages over several benchmarks and over the model-free RL agents used in prior work. The noisy and non-stationary behaviour of daily asset prices is addressed through a Kalman filter, while autoencoders, ZoomSVD, and restricted Boltzmann machines are the models used and compared within the module to extract relevant time-series features as the state space. We simulate weekly data, with practical constraints and transaction costs, on a portfolio of S&P 500 stocks. We also introduce a new benchmark based on the Kd-Index technical indicator and the mean-variance model, in addition to the equal-weighted portfolio used in most prior work. The study confirms that the proposed RL portfolio agent, with its state space supplied by the LFSS module, gives robust results and an attractive performance profile relative to baseline RL agents and the given benchmarks.
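The abstract's use of a Kalman filter to de-noise daily asset prices can be illustrated with a minimal sketch. The paper does not publish its filter specification, so the random-walk state model and the `process_var` / `obs_var` parameters below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def kalman_smooth(prices, process_var=1e-4, obs_var=1e-2):
    """De-noise a 1-D price series with a scalar Kalman filter.

    Assumed model (illustrative, not the paper's specification):
      state:       x_t = x_{t-1} + w_t,  w_t ~ N(0, process_var)
      observation: z_t = x_t + v_t,      v_t ~ N(0, obs_var)
    """
    prices = np.asarray(prices, dtype=float)
    x, p = prices[0], 1.0          # initial state estimate and its variance
    out = np.empty_like(prices)
    for t, z in enumerate(prices):
        p += process_var           # predict: random-walk step adds process noise
        k = p / (p + obs_var)      # Kalman gain balances prediction vs. observation
        x += k * (z - x)           # update: move estimate toward the new price
        p *= (1.0 - k)             # posterior variance shrinks after the update
        out[t] = x
    return out
```

A small `process_var` relative to `obs_var` yields heavier smoothing; in practice both would be tuned (or estimated) per asset before the filtered series is passed on for feature extraction.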

Suggested Citation

  • Kumar Yashaswi, 2021. "Deep Reinforcement Learning for Portfolio Optimization using Latent Feature State Space (LFSS) Module," Papers 2102.06233, arXiv.org.
  • Handle: RePEc:arx:papers:2102.06233

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2102.06233
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Haoran Wang & Xun Yu Zhou, 2019. "Continuous-Time Mean-Variance Portfolio Selection: A Reinforcement Learning Framework," Papers 1904.11392, arXiv.org, revised May 2019.
    2. Yunan Ye & Hengzhi Pei & Boxin Wang & Pin-Yu Chen & Yada Zhu & Jun Xiao & Bo Li, 2020. "Reinforcement-Learning based Portfolio Management with Augmented Asset Movement Prediction States," Papers 2002.05780, arXiv.org.
    3. Vladimir Puzyrev, 2019. "Deep convolutional autoencoder for cryptocurrency market analysis," Papers 1910.12281, arXiv.org.
    4. Zhipeng Liang & Hao Chen & Junhao Zhu & Kangkang Jiang & Yanran Li, 2018. "Adversarial Deep Reinforcement Learning in Portfolio Management," Papers 1808.09940, arXiv.org, revised Nov 2018.
    5. Yongyang Cai & Kenneth L. Judd & Rong Xu, 2013. "Numerical Solution of Dynamic Portfolio Optimization with Transaction Costs," NBER Working Papers 18709, National Bureau of Economic Research, Inc.
    6. Angelos Filos, 2019. "Reinforcement Learning for Portfolio Management," Papers 1909.09571, arXiv.org.
    7. J. B. Heaton & N. G. Polson & J. H. Witte, 2017. "Deep learning for finance: deep portfolios," Applied Stochastic Models in Business and Industry, John Wiley & Sons, vol. 33(1), pages 3-12, January.

    Citations

Citations are extracted by the CitEc Project.

    Cited by:

    1. Frensi Zejnullahu & Maurice Moser & Joerg Osterrieder, 2022. "Applications of Reinforcement Learning in Finance -- Trading with a Double Deep Q-Network," Papers 2206.14267, arXiv.org.
    2. Adrian Millea, 2021. "Deep Reinforcement Learning for Trading—A Critical Survey," Data, MDPI, vol. 6(11), pages 1-25, November.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Eric Benhamou & David Saltiel & Serge Tabachnik & Sui Kai Wong & François Chareyron, 2021. "Distinguish the indistinguishable: a Deep Reinforcement Learning approach for volatility targeting models," Working Papers hal-03202431, HAL.
    2. Eric Benhamou & David Saltiel & Serge Tabachnik & Sui Kai Wong & François Chareyron, 2021. "Adaptive learning for financial markets mixing model-based and model-free RL for volatility targeting," Papers 2104.10483, arXiv.org, revised Apr 2021.
    3. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay, 2020. "Time your hedge with Deep Reinforcement Learning," Papers 2009.14136, arXiv.org, revised Nov 2020.
    4. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay & Jamal Atif, 2020. "AAMDRL: Augmented Asset Management with Deep Reinforcement Learning," Papers 2010.08497, arXiv.org.
    5. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay, 2020. "Bridging the gap between Markowitz planning and deep reinforcement learning," Papers 2010.09108, arXiv.org.
    6. Amir Mosavi & Pedram Ghamisi & Yaser Faghan & Puhong Duan, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Papers 2004.01509, arXiv.org.
    7. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    8. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    9. Yunan Ye & Hengzhi Pei & Boxin Wang & Pin-Yu Chen & Yada Zhu & Jun Xiao & Bo Li, 2020. "Reinforcement-Learning based Portfolio Management with Augmented Asset Movement Prediction States," Papers 2002.05780, arXiv.org.
    10. Huanming Zhang & Zhengyong Jiang & Jionglong Su, 2021. "A Deep Deterministic Policy Gradient-based Strategy for Stocks Portfolio Management," Papers 2103.11455, arXiv.org.
    11. Zhenhan Huang & Fumihide Tanaka, 2021. "MSPM: A Modularized and Scalable Multi-Agent Reinforcement Learning-based System for Financial Portfolio Management," Papers 2102.03502, arXiv.org, revised Feb 2022.
    12. MohammadAmin Fazli & Mahdi Lashkari & Hamed Taherkhani & Jafar Habibi, 2022. "A Novel Experts Advice Aggregation Framework Using Deep Reinforcement Learning for Portfolio Management," Papers 2212.14477, arXiv.org.
    13. Eric Benhamou & David Saltiel & Jean-Jacques Ohana & Jamal Atif, 2020. "Detecting and adapting to crisis pattern with context based Deep Reinforcement Learning," Papers 2009.07200, arXiv.org, revised Nov 2020.
    14. Ricard Durall, 2022. "Asset Allocation: From Markowitz to Deep Reinforcement Learning," Papers 2208.07158, arXiv.org.
    15. Gang Huang & Xiaohua Zhou & Qingyang Song, 2020. "Deep reinforcement learning for portfolio management," Papers 2012.13773, arXiv.org, revised Apr 2022.
    16. Paskalis Glabadanidis, 2020. "Portfolio Strategies to Track and Outperform a Benchmark," JRFM, MDPI, vol. 13(8), pages 1-26, August.
    17. Frensi Zejnullahu & Maurice Moser & Joerg Osterrieder, 2022. "Applications of Reinforcement Learning in Finance -- Trading with a Double Deep Q-Network," Papers 2206.14267, arXiv.org.
    18. Eleni Kosta, 2022. "Algorithmic state surveillance: Challenging the notion of agency in human rights," Regulation & Governance, John Wiley & Sons, vol. 16(1), pages 212-224, January.
    19. Jeonggyu Huh, 2018. "Measuring Systematic Risk with Neural Network Factor Model," Papers 1809.04925, arXiv.org.
    20. Adebayo Oshingbesan & Eniola Ajiboye & Peruth Kamashazi & Timothy Mbaka, 2022. "Model-Free Reinforcement Learning for Asset Allocation," Papers 2209.10458, arXiv.org.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2102.06233. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.