Printed from https://ideas.repec.org/p/arx/papers/2208.07165.html

Deep Reinforcement Learning Approach for Trading Automation in The Stock Market

Author

Listed:
  • Taylan Kabbani
  • Ekrem Duman

Abstract

Deep Reinforcement Learning (DRL) algorithms can scale to previously intractable problems. DRL makes it possible to automate profit generation in the stock market by combining the financial asset price "prediction" step and the portfolio "allocation" step into one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. This work presents a DRL model that generates profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem as a Partially Observable Markov Decision Process (POMDP), taking into account constraints imposed by the stock market such as liquidity and transaction costs. We then solve the formulated POMDP with the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, achieving a Sharpe ratio of 2.68 on the unseen test data set. From the perspective of stock market forecasting and intelligent decision-making, this paper demonstrates the superiority of DRL over other types of machine learning in financial markets and shows its credibility and advantages for strategic decision-making.
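
As a concrete illustration of the pipeline the abstract describes, the sketch below sets up a toy single-asset trading environment with a proportional transaction cost, trains it with the TD3 implementation from stable-baselines3, and reports an annualized Sharpe ratio computed from the resulting step returns. This is a minimal sketch under stated assumptions, not the authors' implementation: the environment class ToyTradingEnv, its window size and cost parameter, and the simulated price series are hypothetical stand-ins for the paper's POMDP formulation over real stock data.

    import numpy as np
    import gymnasium as gym
    from gymnasium import spaces
    from stable_baselines3 import TD3

    class ToyTradingEnv(gym.Env):
        """Hypothetical single-asset environment: the observation is a window of
        past log-returns and the action is a target position in [-1, 1]."""

        def __init__(self, prices, window=10, cost=1e-3):
            super().__init__()
            self.returns = np.diff(np.log(np.asarray(prices, dtype=np.float32)))
            self.window, self.cost = window, cost
            self.observation_space = spaces.Box(-np.inf, np.inf, shape=(window,), dtype=np.float32)
            self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)

        def reset(self, seed=None, options=None):
            super().reset(seed=seed)
            self.t, self.position = self.window, 0.0
            return self.returns[self.t - self.window:self.t], {}

        def step(self, action):
            target = float(np.clip(action[0], -1.0, 1.0))
            # Reward: return earned by the held position minus a proportional
            # transaction cost on the change in position.
            reward = target * float(self.returns[self.t]) - self.cost * abs(target - self.position)
            self.position = target
            self.t += 1
            terminated = self.t >= len(self.returns)
            obs = self.returns[self.t - self.window:self.t]
            return obs, reward, terminated, False, {}

    def annualized_sharpe(daily_returns, periods_per_year=252):
        """Annualized Sharpe ratio of a series of daily returns (risk-free rate = 0)."""
        r = np.asarray(daily_returns, dtype=np.float64)
        return np.sqrt(periods_per_year) * r.mean() / r.std(ddof=1)

    if __name__ == "__main__":
        # Simulated geometric-random-walk prices stand in for real market data.
        rng = np.random.default_rng(0)
        prices = 100.0 * np.exp(np.cumsum(0.0005 + 0.01 * rng.standard_normal(1000)))
        env = ToyTradingEnv(prices)
        model = TD3("MlpPolicy", env, verbose=0)
        model.learn(total_timesteps=5_000)

        # Evaluate the learned policy on the same toy series (illustration only).
        obs, _ = env.reset()
        rewards, done = [], False
        while not done:
            action, _ = model.predict(obs, deterministic=True)
            obs, reward, done, truncated, _ = env.step(action)
            rewards.append(reward)
        print("Annualized Sharpe ratio:", annualized_sharpe(rewards))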

Suggested Citation

  • Taylan Kabbani & Ekrem Duman, 2022. "Deep Reinforcement Learning Approach for Trading Automation in The Stock Market," Papers 2208.07165, arXiv.org.
  • Handle: RePEc:arx:papers:2208.07165

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2208.07165
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Souradeep Chakraborty, 2019. "Capturing Financial markets to apply Deep Reinforcement Learning," Papers 1907.04373, arXiv.org, revised Dec 2019.
    2. Terence Tai-Leung Chong & Wing-Kam Ng & Venus Khim-Sen Liew, 2014. "Revisiting the Performance of MACD and RSI Oscillators," JRFM, MDPI, vol. 7(1), pages 1-12, February.
    3. Marco Corazza & Francesco Bertoluzzo, 2014. "Q-Learning-based financial trading systems with applications," Working Papers 2014:15, Department of Economics, University of Venice "Ca' Foscari".
    4. Adamantios Ntakaris & Juho Kanniainen & Moncef Gabbouj & Alexandros Iosifidis, 2020. "Mid-price prediction based on machine learning methods with technical and quantitative indicators," PLOS ONE, Public Library of Science, vol. 15(6), pages 1-39, June.
    5. Nofsinger, John R., 2001. "The impact of public information on investors," Journal of Banking & Finance, Elsevier, vol. 25(7), pages 1339-1366, July.
    6. Zhengyao Jiang & Dixing Xu & Jinjun Liang, 2017. "A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem," Papers 1706.10059, arXiv.org, revised Jul 2017.
    Full references (including those not matched with items on IDEAS)

    Citations

Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Costola, Michele & Hinz, Oliver & Nofer, Michael & Pelizzon, Loriana, 2023. "Machine learning sentiment analysis, COVID-19 news and stock market reactions," Research in International Business and Finance, Elsevier, vol. 64(C).
    2. Shuyang Wang & Diego Klabjan, 2023. "An Ensemble Method of Deep Reinforcement Learning for Automated Cryptocurrency Trading," Papers 2309.00626, arXiv.org.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Fischer, Thomas G., 2018. "Reinforcement learning in financial markets - a survey," FAU Discussion Papers in Economics 12/2018, Friedrich-Alexander University Erlangen-Nuremberg, Institute for Economics.
    2. Charl Maree & Christian W. Omlin, 2022. "Balancing Profit, Risk, and Sustainability for Portfolio Management," Papers 2207.02134, arXiv.org.
    3. Adrian Millea, 2021. "Deep Reinforcement Learning for Trading—A Critical Survey," Data, MDPI, vol. 6(11), pages 1-25, November.
    4. Xiao-Yang Liu & Hongyang Yang & Qian Chen & Runjia Zhang & Liuqing Yang & Bowen Xiao & Christina Dan Wang, 2020. "FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance," Papers 2011.09607, arXiv.org, revised Mar 2022.
    5. Jiahua Xu & Daniel Perez & Yebo Feng & Benjamin Livshits, 2023. "Auto.gov: Learning-based On-chain Governance for Decentralized Finance (DeFi)," Papers 2302.09551, arXiv.org, revised May 2023.
    6. Kenneth A. Kim & John R. Nofsinger, 2005. "Institutional Herding, Business Groups, and Economic Regimes: Evidence from Japan," The Journal of Business, University of Chicago Press, vol. 78(1), pages 213-242, January.
    7. Amir Mosavi & Pedram Ghamisi & Yaser Faghan & Puhong Duan, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Papers 2004.01509, arXiv.org.
    8. Alexis Cellier & Pierre Chollet & Jean-François Gajewski, 2011. "Les annonces de notations extrafinancières véhiculent-elles une information au marché?," Revue Finance Contrôle Stratégie, revues.org, vol. 14(3), pages 5-38, September.
    9. Mudalige, Priyantha & Duong, Huu Nhan & Kalev, Petko S. & Gupta, Kartick, 2020. "Who trades in competing firms around earnings announcements," Pacific-Basin Finance Journal, Elsevier, vol. 59(C).
    10. Alexandre Carbonneau & Fr'ed'eric Godin, 2021. "Deep equal risk pricing of financial derivatives with non-translation invariant risk measures," Papers 2107.11340, arXiv.org.
    11. Seok, Sangik & Cho, Hoon & Ryu, Doojin, 2022. "Scheduled macroeconomic news announcements and intraday market sentiment," The North American Journal of Economics and Finance, Elsevier, vol. 62(C).
    12. Yochi Cohen-Charash & Charles A Scherbaum & John D Kammeyer-Mueller & Barry M Staw, 2013. "Mood and the Market: Can Press Reports of Investors' Mood Predict Stock Prices?," PLOS ONE, Public Library of Science, vol. 8(8), pages 1-15, August.
    13. Mei-Li Shen & Cheng-Feng Lee & Hsiou-Hsiang Liu & Po-Yin Chang & Cheng-Hong Yang, 2021. "An Effective Hybrid Approach for Forecasting Currency Exchange Rates," Sustainability, MDPI, vol. 13(5), pages 1-29, March.
    14. Martino Banchio & Giacomo Mantegazza, 2022. "Artificial Intelligence and Spontaneous Collusion," Papers 2202.05946, arXiv.org, revised Sep 2023.
    15. Miquel Noguer i Alonso & Sonam Srivastava, 2020. "Deep Reinforcement Learning for Asset Allocation in US Equities," Papers 2010.04404, arXiv.org.
    16. Mengying Zhu & Xiaolin Zheng & Yan Wang & Yuyuan Li & Qianqiao Liang, 2019. "Adaptive Portfolio by Solving Multi-armed Bandit via Thompson Sampling," Papers 1911.05309, arXiv.org, revised Nov 2019.
    17. Longbing Cao, 2021. "AI in Finance: Challenges, Techniques and Opportunities," Papers 2107.09051, arXiv.org.
    18. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    19. Aharon, David Y. & Qadan, Mahmoud, 2020. "When do retail investors pay attention to their trading platforms?," The North American Journal of Economics and Finance, Elsevier, vol. 53(C).
    20. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.

    More about this item

    NEP fields

    This paper has been announced in the following NEP Reports:


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2208.07165. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.