Printed from https://ideas.repec.org/p/arx/papers/1807.02787.html

Financial Trading as a Game: A Deep Reinforcement Learning Approach

Author

Listed:
  • Chien Yi Huang

Abstract

An automated program that generates consistent profit from the financial market is lucrative for every market practitioner. Recent advances in deep reinforcement learning provide a framework for end-to-end training of such a trading agent. In this paper, we propose a Markov Decision Process (MDP) model suitable for the financial trading task and solve it with the state-of-the-art deep recurrent Q-network (DRQN) algorithm. We propose several modifications to the existing learning algorithm to make it more suitable for the financial trading setting:

1. We employ a substantially smaller replay memory (only a few hundred transitions) than those used in modern deep reinforcement learning algorithms (often millions of transitions).

2. We develop an action augmentation technique that mitigates the need for random exploration by providing the agent with extra feedback signals for all actions. This enables us to use a greedy policy over the course of learning, with strong empirical performance compared to the more commonly used epsilon-greedy exploration. However, this technique is specific to financial trading under a few market assumptions.

3. We sample a longer sequence for recurrent neural network training. A side benefit of this mechanism is that the agent now needs to be trained only every T steps, which greatly reduces training time since the overall computation is cut by a factor of T.

We combine all of the above into a complete online learning algorithm and validate our approach on the spot foreign exchange market.
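The abstract's modifications can be illustrated with a minimal sketch. Note this is not the paper's implementation: the buffer capacity, the spread-based transaction cost, and the {-1, 0, +1} position encoding below are hypothetical choices made for illustration only.

```python
import random
from collections import deque

# Hypothetical constants for illustration; the paper does not specify these values here.
ACTIONS = (-1, 0, 1)   # short, neutral, long position
SPREAD = 0.0001        # assumed transaction cost per unit of position change


def augmented_rewards(prev_position, price_change):
    """Action augmentation (sketch): because the reward of *every* action can be
    computed from the observed price change, the agent can be given a feedback
    signal for all actions, not just the one it actually took. This removes the
    need for random exploration."""
    return {a: a * price_change - SPREAD * abs(a - prev_position)
            for a in ACTIONS}


class SmallReplayMemory:
    """A deliberately small replay memory (a few hundred transitions), in
    contrast to the millions-sized buffers common in deep RL."""

    def __init__(self, capacity=480):
        # deque with maxlen silently evicts the oldest transition when full
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):
        self.buffer.append(transition)

    def sample_sequence(self, length):
        """Sample one contiguous sequence of `length` transitions, as needed
        for recurrent (DRQN-style) training. Assumes len(buffer) >= length."""
        start = random.randrange(len(self.buffer) - length + 1)
        return [self.buffer[i] for i in range(start, start + length)]
```

In an online loop, the agent would push one transition per market step but call `sample_sequence` and run a gradient update only every T steps; since each update trains on a length-T sequence, this is where the factor-of-T reduction in overall computation comes from.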

Suggested Citation

  • Chien Yi Huang, 2018. "Financial Trading as a Game: A Deep Reinforcement Learning Approach," Papers 1807.02787, arXiv.org.
  • Handle: RePEc:arx:papers:1807.02787

    Download full text from publisher

    File URL: http://arxiv.org/pdf/1807.02787
    File Function: Latest version
    Download Restriction: no

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for updates on this item.


    Cited by:

    1. Tidor-Vlad Pricope, 2021. "Deep Reinforcement Learning in Quantitative Algorithmic Trading: A Review," Papers 2106.00123, arXiv.org.
    2. Zihao Zhang & Stefan Zohren & Stephen Roberts, 2019. "Deep Reinforcement Learning for Trading," Papers 1911.10107, arXiv.org.
    3. Ali Hirsa & Joerg Osterrieder & Branka Hadji-Misheva & Jan-Alexander Posth, 2021. "Deep reinforcement learning on a multi-asset environment for trading," Papers 2106.08437, arXiv.org.
    4. Frensi Zejnullahu & Maurice Moser & Joerg Osterrieder, 2022. "Applications of Reinforcement Learning in Finance -- Trading with a Double Deep Q-Network," Papers 2206.14267, arXiv.org.
    5. Jinan Zou & Qingying Zhao & Yang Jiao & Haiyao Cao & Yanxi Liu & Qingsen Yan & Ehsan Abbasnejad & Lingqiao Liu & Javen Qinfeng Shi, 2022. "Stock Market Prediction via Deep Learning Techniques: A Survey," Papers 2212.12717, arXiv.org, revised Feb 2023.
    6. Gang Hu, 2023. "Advancing Algorithmic Trading: A Multi-Technique Enhancement of Deep Q-Network Models," Papers 2311.05743, arXiv.org.
    7. Zihao Zhang & Stefan Zohren & Stephen Roberts, 2020. "Deep Learning for Portfolio Optimization," Papers 2005.13665, arXiv.org, revised Jan 2021.
    8. Shuyang Wang & Diego Klabjan, 2023. "An Ensemble Method of Deep Reinforcement Learning for Automated Cryptocurrency Trading," Papers 2309.00626, arXiv.org.
    9. Jie Zou & Jiashu Lou & Baohua Wang & Sixue Liu, 2022. "A Novel Deep Reinforcement Learning Based Automated Stock Trading System Using Cascaded LSTM Networks," Papers 2212.02721, arXiv.org, revised Jul 2023.
    10. Jonas Hanetho, 2023. "Deep Policy Gradient Methods in Commodity Markets," Papers 2308.01910, arXiv.org.
    11. Xiao-Yang Liu & Hongyang Yang & Jiechao Gao & Christina Dan Wang, 2021. "FinRL: Deep Reinforcement Learning Framework to Automate Trading in Quantitative Finance," Papers 2111.09395, arXiv.org.
    12. Xiao-Yang Liu & Hongyang Yang & Qian Chen & Runjia Zhang & Liuqing Yang & Bowen Xiao & Christina Dan Wang, 2020. "FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance," Papers 2011.09607, arXiv.org, revised Mar 2022.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:1807.02787. See general information about how to correct material in RePEc.


    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.