
Deep Reinforcement Learning in Cryptocurrency Market Making

Author

Listed:
  • Jonathan Sadighian

Abstract

This paper sets forth a framework for applying deep reinforcement learning to market making (DRLMM) in cryptocurrencies. Two advanced policy-gradient algorithms serve as agents interacting with an environment whose observation space is built from limit order book data and order-flow arrival statistics. Within the experiment, a feed-forward neural network is used as the function approximator, and two reward functions are compared. The performance of each combination of agent and reward function is evaluated by daily and average trade returns. Using this DRLMM framework, the paper demonstrates the effectiveness of deep reinforcement learning in solving the stochastic inventory-control challenges that market makers face.
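
The setup described in the abstract lends itself to a small illustration. The sketch below is not the paper's implementation: it wires together a toy gym-style market-making environment whose observation concatenates limit-order-book depth with simple order-flow statistics and inventory, a feed-forward policy network, and a plain REINFORCE policy-gradient update standing in for the more advanced agents the paper evaluates. The feature choices, the action set, and the two candidate reward forms (raw step PnL versus an inventory-penalized variant) are all illustrative assumptions, not the paper's exact specification.

    import numpy as np
    import torch
    import torch.nn as nn

    N_LEVELS = 5     # LOB depth per side in the observation (assumed)
    N_ACTIONS = 3    # e.g. quote tighter / hold / quote wider (assumed action set)

    class ToyMarketMakingEnv:
        """Synthetic stand-in for an environment replaying crypto LOB data."""
        def __init__(self, horizon=200, damped_reward=True, seed=0):
            self.rng = np.random.default_rng(seed)
            self.horizon = horizon
            self.damped_reward = damped_reward

        def reset(self):
            self.t, self.inventory, self.pnl = 0, 0.0, 0.0
            return self._obs()

        def _obs(self):
            # Assumed observation: bid/ask depth at N_LEVELS price levels plus
            # order-flow arrival statistics (imbalance, signed volume) and inventory.
            book = self.rng.exponential(1.0, size=2 * N_LEVELS)
            flow = self.rng.normal(0.0, 1.0, size=2)
            return np.concatenate([book, flow, [self.inventory]]).astype(np.float32)

        def step(self, action):
            # Toy fill dynamics; a real environment would replay exchange data.
            fill = self.rng.choice([-1.0, 0.0, 1.0])      # net fill from resting quotes
            spread_capture = 0.01 * (action == 0)         # credit for quoting tight
            self.inventory += fill
            step_pnl = spread_capture + fill * self.rng.normal(0.0, 0.01)
            self.pnl += step_pnl
            self.t += 1
            # Two candidate rewards: raw step PnL vs. an inventory-penalized variant.
            reward = step_pnl - 0.001 * self.inventory ** 2 if self.damped_reward else step_pnl
            return self._obs(), reward, self.t >= self.horizon

    policy = nn.Sequential(                  # feed-forward function approximator
        nn.Linear(2 * N_LEVELS + 3, 64), nn.ReLU(),
        nn.Linear(64, N_ACTIONS),
    )
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

    env = ToyMarketMakingEnv()
    for episode in range(10):
        obs, done = env.reset(), False
        log_probs, rewards = [], []
        while not done:
            dist = torch.distributions.Categorical(logits=policy(torch.from_numpy(obs)))
            action = dist.sample()
            log_probs.append(dist.log_prob(action))
            obs, reward, done = env.step(action.item())
            rewards.append(reward)
        # Plain REINFORCE update on rewards-to-go; the paper's advanced
        # policy-gradient agents would replace this objective with their own.
        returns = torch.tensor(np.cumsum(rewards[::-1])[::-1].copy(), dtype=torch.float32)
        loss = -(torch.stack(log_probs) * returns).sum()
        opt.zero_grad(); loss.backward(); opt.step()
        print(f"episode {episode}: PnL={env.pnl:.3f}, inventory={env.inventory:+.0f}")

Daily and average trade returns, the evaluation metrics named in the abstract, could then be computed from the accumulated PnL across episodes.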

Suggested Citation

  • Jonathan Sadighian, 2019. "Deep Reinforcement Learning in Cryptocurrency Market Making," Papers 1911.08647, arXiv.org.
  • Handle: RePEc:arx:papers:1911.08647

    Download full text from publisher

    File URL: http://arxiv.org/pdf/1911.08647
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Maxime Morariu-Patrichi & Mikko S. Pakkanen, 2018. "State-dependent Hawkes processes and their application to limit order book modelling," Papers 1809.08060, arXiv.org, revised Sep 2021.
    2. Maxime Morariu-Patrichi & Mikko Pakkanen, 2018. "State-dependent Hawkes processes and their application to limit order book modelling," CREATES Research Papers 2018-26, Department of Economics and Business Economics, Aarhus University.
    3. Avraam Tsantekidis & Nikolaos Passalis & Anastasios Tefas & Juho Kanniainen & Moncef Gabbouj & Alexandros Iosifidis, 2018. "Using Deep Learning for price prediction by exploiting stationary limit order book features," Papers 1810.09965, arXiv.org.
    4. Justin Sirignano & Rama Cont, 2018. "Universal features of price formation in financial markets: perspectives from Deep Learning," Papers 1803.06917, arXiv.org.
    5. Peng Wu & Marcello Rambaldi & Jean-François Muzy & Emmanuel Bacry, 2019. "Queue-reactive Hawkes models for the order flow," Papers 1901.08938, arXiv.org.
    6. Rama Cont & Arseniy Kukanov & Sasha Stoikov, 2013. "The Price Impact of Order Book Events," Journal of Financial Econometrics, Oxford University Press, vol. 12(1), pages 47-88, December.
    7. David W. Lu, 2017. "Agent Inspired Trading Using Recurrent Reinforcement Learning and LSTM Neural Networks," Papers 1707.07338, arXiv.org.
    8. E. Bacry & J. F Muzy, 2013. "Hawkes model for price and trades high-frequency dynamics," Papers 1301.1135, arXiv.org.
    9. Justin Sirignano & Rama Cont, 2018. "Universal features of price formation in financial markets: perspectives from Deep Learning," Working Papers hal-01754054, HAL.
    10. Yagna Patel, 2018. "Optimizing Market Making using Multi-Agent Reinforcement Learning," Papers 1812.10252, arXiv.org.
    11. David Silver & Aja Huang & Chris J. Maddison & Arthur Guez & Laurent Sifre & George van den Driessche & Julian Schrittwieser & Ioannis Antonoglou & Veda Panneershelvam & Marc Lanctot & Sander Dieleman, 2016. "Mastering the game of Go with deep neural networks and tree search," Nature, Nature, vol. 529(7587), pages 484-489, January.
    12. Thomas Spooner & John Fearnley & Rahul Savani & Andreas Koukorinis, 2018. "Market Making via Reinforcement Learning," Papers 1804.04216, arXiv.org.
    13. Ke Xu & Martin D. Gould & Sam D. Howison, 2019. "Multi-Level Order-Flow Imbalance in a Limit Order Book," Papers 1907.06230, arXiv.org, revised Oct 2019.
    14. Volodymyr Mnih & Koray Kavukcuoglu & David Silver & Andrei A. Rusu & Joel Veness & Marc G. Bellemare & Alex Graves & Martin Riedmiller & Andreas K. Fidjeland & Georg Ostrovski & Stig Petersen & Charle, 2015. "Human-level control through deep reinforcement learning," Nature, Nature, vol. 518(7540), pages 529-533, February.
    15. Baron Law & Frederi Viens, 2019. "Market Making under a Weakly Consistent Limit Order Book Model," Papers 1903.07222, arXiv.org, revised Jan 2020.
    16. Peng Wu & Marcello Rambaldi & Jean-François Muzy & Emmanuel Bacry, 2021. "Queue-reactive Hawkes models for the order flow," Working Papers hal-02409073, HAL.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Jiafa He & Cong Zheng & Can Yang, 2023. "Integrating Tick-level Data and Periodical Signal for High-frequency Market Making," Papers 2306.17179, arXiv.org.
    2. Bruno Gašperov & Zvonko Kostanjčar, 2022. "Deep Reinforcement Learning for Market Making Under a Hawkes Process-Based Limit Order Book Model," Papers 2207.09951, arXiv.org.
    3. Bruno Gašperov & Stjepan Begušić & Petra Posedel Šimović & Zvonko Kostanjčar, 2021. "Reinforcement Learning Approaches to Optimal Market Making," Mathematics, MDPI, vol. 9(21), pages 1-22, October.
    4. Tristan Lim, 2022. "Predictive Crypto-Asset Automated Market Making Architecture for Decentralized Finance using Deep Reinforcement Learning," Papers 2211.01346, arXiv.org, revised Jan 2023.
    5. Ali Raheman & Anton Kolonin & Alexey Glushchenko & Arseniy Fokin & Ikram Ansari, 2022. "Adaptive Multi-Strategy Market-Making Agent For Volatile Markets," Papers 2204.13265, arXiv.org.
    6. Hui Niu & Siyuan Li & Jiahao Zheng & Zhouchi Lin & Jian Li & Jian Guo & Bo An, 2023. "IMM: An Imitative Reinforcement Learning Approach with Predictive Representation Learning for Automatic Market Making," Papers 2308.08918, arXiv.org.
    7. Shuyang Wang & Diego Klabjan, 2023. "An Ensemble Method of Deep Reinforcement Learning for Automated Cryptocurrency Trading," Papers 2309.00626, arXiv.org.
    8. Joseph Jerome & Gregory Palmer & Rahul Savani, 2022. "Market Making with Scaled Beta Policies," Papers 2207.03352, arXiv.org, revised Sep 2022.
    9. Jonathan Sadighian, 2020. "Extending Deep Reinforcement Learning Frameworks in Cryptocurrency Market Making," Papers 2004.06985, arXiv.org.
    10. Xiao-Yang Liu & Hongyang Yang & Jiechao Gao & Christina Dan Wang, 2021. "FinRL: Deep Reinforcement Learning Framework to Automate Trading in Quantitative Finance," Papers 2111.09395, arXiv.org.
    11. Hong Guo & Jianwu Lin & Fanlin Huang, 2023. "Market Making with Deep Reinforcement Learning from Limit Order Books," Papers 2305.15821, arXiv.org.
    12. Zihao Zhang & Stefan Zohren, 2021. "Multi-Horizon Forecasting for Limit Order Books: Novel Deep Learning Approaches and Hardware Acceleration using Intelligent Processing Units," Papers 2105.10430, arXiv.org, revised Aug 2021.
    13. Joseph Jerome & Leandro Sanchez-Betancourt & Rahul Savani & Martin Herdegen, 2022. "Model-based gym environments for limit order book trading," Papers 2209.07823, arXiv.org.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Jonathan Sadighian, 2020. "Extending Deep Reinforcement Learning Frameworks in Cryptocurrency Market Making," Papers 2004.06985, arXiv.org.
    2. Antoine Fosset & Jean-Philippe Bouchaud & Michael Benzaquen, 2020. "Endogenous Liquidity Crises," Working Papers hal-02567495, HAL.
    3. Fabrizio Lillo, 2021. "Order flow and price formation," Papers 2105.00521, arXiv.org.
    4. Antoine Fosset & Jean-Philippe Bouchaud & Michael Benzaquen, 2019. "Endogenous Liquidity Crises," Papers 1912.00359, arXiv.org, revised Feb 2020.
    5. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    6. Antoine Fosset & Jean-Philippe Bouchaud & Michael Benzaquen, 2020. "Non-parametric Estimation of Quadratic Hawkes Processes for Order Book Events," Papers 2005.05730, arXiv.org.
    7. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    8. Iwao Maeda & David deGraw & Michiharu Kitano & Hiroyasu Matsushima & Hiroki Sakaji & Kiyoshi Izumi & Atsuo Kato, 2020. "Deep Reinforcement Learning in Agent Based Financial Market Simulation," JRFM, MDPI, vol. 13(4), pages 1-17, April.
    9. Hyungjun Park & Min Kyu Sim & Dong Gu Choi, 2019. "An intelligent financial portfolio trading strategy using deep Q-learning," Papers 1907.03665, arXiv.org, revised Nov 2019.
    10. Ahmet Murat Ozbayoglu & Mehmet Ugur Gudelek & Omer Berat Sezer, 2020. "Deep Learning for Financial Applications: A Survey," Papers 2002.05786, arXiv.org.
    11. Bruno Gašperov & Stjepan Begušić & Petra Posedel Šimović & Zvonko Kostanjčar, 2021. "Reinforcement Learning Approaches to Optimal Market Making," Mathematics, MDPI, vol. 9(21), pages 1-22, October.
    12. Luca De Gennaro Aquino & Carole Bernard, 2019. "Bounds on Multi-asset Derivatives via Neural Networks," Papers 1911.05523, arXiv.org, revised Nov 2020.
    13. Adamantios Ntakaris & Giorgio Mirone & Juho Kanniainen & Moncef Gabbouj & Alexandros Iosifidis, 2019. "Feature Engineering for Mid-Price Prediction with Deep Learning," Papers 1904.05384, arXiv.org, revised Jun 2019.
    14. Yassine Chemingui & Adel Gastli & Omar Ellabban, 2020. "Reinforcement Learning-Based School Energy Management System," Energies, MDPI, vol. 13(23), pages 1-21, December.
    15. Peng Wu & Marcello Rambaldi & Jean-François Muzy & Emmanuel Bacry, 2019. "Queue-reactive Hawkes models for the order flow," Papers 1901.08938, arXiv.org.
    16. Ivan Peñaloza & Pablo Padilla, 2022. "A Pricing Method in a Constrained Market with Differential Informational Frameworks," Computational Economics, Springer;Society for Computational Economics, vol. 60(3), pages 1055-1100, October.
    17. Yuhong Wang & Lei Chen & Hong Zhou & Xu Zhou & Zongsheng Zheng & Qi Zeng & Li Jiang & Liang Lu, 2021. "Flexible Transmission Network Expansion Planning Based on DQN Algorithm," Energies, MDPI, vol. 14(7), pages 1-21, April.
    18. Ioane Muni Toke & Nakahiro Yoshida, 2020. "Marked point processes and intensity ratios for limit order book modeling," Papers 2001.08442, arXiv.org.
    19. Huang, Ruchen & He, Hongwen & Gao, Miaojue, 2023. "Training-efficient and cost-optimal energy management for fuel cell hybrid electric bus based on a novel distributed deep reinforcement learning framework," Applied Energy, Elsevier, vol. 346(C).
    20. Neha Soni & Enakshi Khular Sharma & Narotam Singh & Amita Kapoor, 2019. "Impact of Artificial Intelligence on Businesses: from Research, Innovation, Market Deployment to Future Shifts in Business Models," Papers 1905.02092, arXiv.org.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:1911.08647. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references, in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.