
Asynchronous Deep Double Duelling Q-Learning for Trading-Signal Execution in Limit Order Book Markets

Author

Listed:
  • Peer Nagy
  • Jan-Peter Calliess
  • Stefan Zohren

Abstract

We employ deep reinforcement learning (RL) to train an agent to successfully translate a high-frequency trading signal into a trading strategy that places individual limit orders. Based on the ABIDES limit order book simulator, we build a reinforcement learning OpenAI gym environment and utilise it to simulate a realistic trading environment for NASDAQ equities based on historic order book messages. To train a trading agent that learns to maximise its trading return in this environment, we use Deep Duelling Double Q-learning with the APEX (asynchronous prioritised experience replay) architecture. The agent observes the current limit order book state, its recent history, and a short-term directional forecast. To investigate the performance of RL for adaptive trading independently of a concrete forecasting algorithm, we study the performance of our approach using synthetic alpha signals obtained by perturbing forward-looking returns with varying levels of noise. Here, we find that the RL agent learns an effective trading strategy for inventory management and order placement that outperforms a heuristic benchmark trading strategy with access to the same signal.
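The synthetic alpha signals described in the abstract — forward-looking returns perturbed with varying levels of noise — can be sketched as follows. This is a minimal illustration, not the paper's exact construction: the function name, the look-ahead horizon, and the choice of Gaussian noise scaled to the return's own dispersion are all assumptions made for the sketch.

```python
import numpy as np

def synthetic_alpha(prices, horizon=10, noise_std=0.5, seed=0):
    """Build a noisy directional signal from forward-looking returns.

    prices    : 1-D array of mid-prices
    horizon   : look-ahead window (in ticks) for the "true" forward return
    noise_std : noise scale relative to the forward return's own std;
                0 gives a perfect-foresight signal, larger values
                degrade it towards pure noise
    """
    rng = np.random.default_rng(seed)
    # forward return over the chosen horizon (last `horizon` points dropped)
    fwd_ret = prices[horizon:] / prices[:-horizon] - 1.0
    # perturb with Gaussian noise scaled to the signal's own dispersion
    noise = rng.normal(0.0, noise_std * fwd_ret.std(), size=fwd_ret.shape)
    return fwd_ret + noise

# toy mid-price path (random walk around 100)
prices = 100.0 + np.cumsum(np.random.default_rng(1).normal(0.0, 0.05, 500))
signal = synthetic_alpha(prices, horizon=10, noise_std=0.5)
```

Varying `noise_std` sweeps the signal from near-perfect foresight to near-uselessness, which is what lets the agent's performance be studied independently of any concrete forecasting model.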

Suggested Citation

  • Peer Nagy & Jan-Peter Calliess & Stefan Zohren, 2023. "Asynchronous Deep Double Duelling Q-Learning for Trading-Signal Execution in Limit Order Book Markets," Papers 2301.08688, arXiv.org, revised Sep 2023.
  • Handle: RePEc:arx:papers:2301.08688

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2301.08688
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Zihao Zhang & Bryan Lim & Stefan Zohren, 2021. "Deep Learning for Market by Order Data," Applied Mathematical Finance, Taylor & Francis Journals, vol. 28(1), pages 79-95, January.
    2. Antonio Briola & Jeremy Turiel & Riccardo Marcaccioli & Alvaro Cauderan & Tomaso Aste, 2021. "Deep Reinforcement Learning for Active High Frequency Trading," Papers 2101.07107, arXiv.org, revised Aug 2023.
    3. Michael Karpe & Jin Fang & Zhongyao Ma & Chen Wang, 2020. "Multi-Agent Reinforcement Learning in a Realistic Limit Order Book Market Simulation," Papers 2006.05574, arXiv.org, revised Sep 2020.
    4. Zihao Zhang & Bryan Lim & Stefan Zohren, 2021. "Deep Learning for Market by Order Data," Papers 2102.08811, arXiv.org, revised Jul 2021.
    5. Schnaubelt, Matthias, 2022. "Deep reinforcement learning for the optimal placement of cryptocurrency limit orders," European Journal of Operational Research, Elsevier, vol. 296(3), pages 993-1006.
    6. Zihao Zhang & Stefan Zohren, 2021. "Multi-Horizon Forecasting for Limit Order Books: Novel Deep Learning Approaches and Hardware Acceleration using Intelligent Processing Units," Papers 2105.10430, arXiv.org, revised Aug 2021.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Ilia Zaznov & Julian Kunkel & Alfonso Dufour & Atta Badii, 2022. "Predicting Stock Price Changes Based on the Limit Order Book: A Survey," Mathematics, MDPI, vol. 10(8), pages 1-33, April.
    2. Zihao Zhang & Stefan Zohren, 2021. "Multi-Horizon Forecasting for Limit Order Books: Novel Deep Learning Approaches and Hardware Acceleration using Intelligent Processing Units," Papers 2105.10430, arXiv.org, revised Aug 2021.
    3. Hong Guo & Jianwu Lin & Fanlin Huang, 2023. "Market Making with Deep Reinforcement Learning from Limit Order Books," Papers 2305.15821, arXiv.org.
    4. Jin Fang & Jiacheng Weng & Yi Xiang & Xinwen Zhang, 2022. "Imitate then Transcend: Multi-Agent Optimal Execution with Dual-Window Denoise PPO," Papers 2206.10736, arXiv.org.
    5. Konark Jain & Nick Firoozye & Jonathan Kochems & Philip Treleaven, 2024. "Limit Order Book Simulations: A Review," Papers 2402.17359, arXiv.org, revised Mar 2024.
    6. Antonio Briola & Jeremy Turiel & Riccardo Marcaccioli & Alvaro Cauderan & Tomaso Aste, 2021. "Deep Reinforcement Learning for Active High Frequency Trading," Papers 2101.07107, arXiv.org, revised Aug 2023.
    7. Eghbal Rahimikia & Stefan Zohren & Ser-Huang Poon, 2021. "Realised Volatility Forecasting: Machine Learning via Financial Word Embedding," Papers 2108.00480, arXiv.org, revised Mar 2023.
    8. Xianfeng Jiao & Zizhong Li & Chang Xu & Yang Liu & Weiqing Liu & Jiang Bian, 2023. "Microstructure-Empowered Stock Factor Extraction and Utilization," Papers 2308.08135, arXiv.org.
    9. Wang, Yuanrong & Aste, Tomaso, 2023. "Dynamic portfolio optimization with inverse covariance clustering," LSE Research Online Documents on Economics 117701, London School of Economics and Political Science, LSE Library.
    10. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    11. Kriebel, Johannes & Stitz, Lennart, 2022. "Credit default prediction from user-generated text in peer-to-peer lending using deep learning," European Journal of Operational Research, Elsevier, vol. 302(1), pages 309-323.
    12. Lorenzo Lucchese & Mikko Pakkanen & Almut Veraart, 2022. "The Short-Term Predictability of Returns in Order Book Markets: a Deep Learning Perspective," Papers 2211.13777, arXiv.org, revised Oct 2023.
    13. Xiao-Yang Liu & Jingyang Rui & Jiechao Gao & Liuqing Yang & Hongyang Yang & Zhaoran Wang & Christina Dan Wang & Jian Guo, 2021. "FinRL-Meta: A Universe of Near-Real Market Environments for Data-Driven Deep Reinforcement Learning in Quantitative Finance," Papers 2112.06753, arXiv.org, revised Mar 2022.
    14. Alvaro Arroyo & Alvaro Cartea & Fernando Moreno-Pino & Stefan Zohren, 2023. "Deep Attentive Survival Analysis in Limit Order Books: Estimating Fill Probabilities with Convolutional-Transformers," Papers 2306.05479, arXiv.org.
    15. Zijian Shi & John Cartlidge, 2023. "Neural Stochastic Agent-Based Limit Order Book Simulation: A Hybrid Methodology," Papers 2303.00080, arXiv.org.
    16. Adrian Millea, 2021. "Deep Reinforcement Learning for Trading—A Critical Survey," Data, MDPI, vol. 6(11), pages 1-25, November.
    17. Cong Zheng & Jiafa He & Can Yang, 2023. "Optimal Execution Using Reinforcement Learning," Papers 2306.17178, arXiv.org.
    18. Ren, Yi-Shuai & Ma, Chao-Qun & Kong, Xiao-Lin & Baltas, Konstantinos & Zureigat, Qasim, 2022. "Past, present, and future of the application of machine learning in cryptocurrency research," Research in International Business and Finance, Elsevier, vol. 63(C).
    19. Matteo Prata & Giuseppe Masi & Leonardo Berti & Viviana Arrigoni & Andrea Coletta & Irene Cannistraci & Svitlana Vyetrenko & Paola Velardi & Novella Bartolini, 2023. "LOB-Based Deep Learning Models for Stock Price Trend Prediction: A Benchmark Study," Papers 2308.01915, arXiv.org, revised Sep 2023.
    20. Paul Bilokon & Yitao Qiu, 2023. "Transformers versus LSTMs for electronic trading," Papers 2309.11400, arXiv.org.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2301.08688. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators. General contact details of provider: http://arxiv.org/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.