Printed from https://ideas.repec.org/p/arx/papers/1906.04813.html

Towards Inverse Reinforcement Learning for Limit Order Book Dynamics

Authors
  • Jacobo Roa-Vicens
  • Cyrine Chtourou
  • Angelos Filos
  • Francisco Rullan
  • Yarin Gal
  • Ricardo Silva

Abstract

Multi-agent learning is a promising method to simulate aggregate competitive behaviour in finance. Learning expert agents' reward functions from their external demonstrations is hence particularly relevant for the subsequent design of realistic agent-based simulations. Inverse Reinforcement Learning (IRL) aims to acquire such reward functions through inference, allowing the resulting policy to generalize to states not observed in the past. This paper investigates whether IRL can infer such rewards from agents within real financial stochastic environments: limit order books (LOB). We introduce a simple one-level LOB, where the interactions of a number of stochastic agents and an expert trading agent are modelled as a Markov decision process. We consider two cases for the expert's reward: either a simple linear function of state features, or a complex, more realistic non-linear function. Given the expert agent's demonstrations, we attempt to discover their strategy by modelling their latent reward function using linear and Gaussian process (GP) regressors from previous literature, and our own approach through Bayesian neural networks (BNN). While all three methods can learn the linear case, only the GP-based and our proposed BNN methods are able to discover the non-linear reward case. Our BNN IRL algorithm outperforms the other two approaches as the number of samples increases. These results illustrate that complex behaviours, induced by non-linear reward functions amid agent-based stochastic scenarios, can be deduced through inference, encouraging the use of inverse reinforcement learning for opponent-modelling in multi-agent systems.
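The abstract contrasts linear and non-linear latent reward models. As an illustrative sketch only (not the paper's algorithm or data), the snippet below fits two reward regressors to noisy samples of a hypothetical non-linear reward over a one-dimensional state feature: a closed-form linear least-squares model, and a small one-hidden-layer network serving as a deterministic stand-in for the paper's Bayesian neural network. The synthetic reward shape, network size, and training schedule are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D state feature (e.g. an order-book quantity) and a
# non-linear "expert" reward, standing in for the paper's complex case.
s = rng.uniform(-2.0, 2.0, size=(256, 1))
r = np.exp(-s[:, 0] ** 2) + 0.05 * rng.normal(size=256)  # noisy demonstrations

# Linear regressor: closed-form least squares with a bias column.
X = np.hstack([s, np.ones((len(s), 1))])
w, *_ = np.linalg.lstsq(X, r, rcond=None)
lin_mse = np.mean((X @ w - r) ** 2)

# Tiny one-hidden-layer tanh network trained by full-batch gradient
# descent on squared error (a deterministic stand-in for a BNN).
H = 16
W1 = rng.normal(scale=0.5, size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(s @ W1 + b1)            # hidden activations
    pred = (h @ W2 + b2)[:, 0]          # predicted reward
    g_pred = 2 * (pred - r)[:, None] / len(s)   # d(MSE)/d(pred)
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = (g_pred @ W2.T) * (1 - h ** 2)        # backprop through tanh
    gW1 = s.T @ g_h; gb1 = g_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

nn_mse = np.mean(((np.tanh(s @ W1 + b1) @ W2 + b2)[:, 0] - r) ** 2)
print("linear MSE:", lin_mse, "network MSE:", nn_mse)
```

Because the synthetic reward here is symmetric in the state feature, the best linear fit is close to a constant and leaves most of the variance unexplained, while the network can capture the bump; this mirrors, in miniature, why the linear regressor fails on the paper's non-linear reward case.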

Suggested Citation

  • Jacobo Roa-Vicens & Cyrine Chtourou & Angelos Filos & Francisco Rullan & Yarin Gal & Ricardo Silva, 2019. "Towards Inverse Reinforcement Learning for Limit Order Book Dynamics," Papers 1906.04813, arXiv.org.
  • Handle: RePEc:arx:papers:1906.04813

    Download full text from publisher

    File URL: http://arxiv.org/pdf/1906.04813
    File Function: Latest version
    Download Restriction: no

    Citations

    Citations are extracted by the CitEc Project.

    Cited by:

    1. Yuanrong Wang & Yinsen Miao & Alexander CY Wong & Nikita P Granger & Christian Michler, 2023. "Domain-adapted Learning and Interpretability: DRL for Gas Trading," Papers 2301.08359, arXiv.org, revised Sep 2023.
    2. Jacobo Roa-Vicens & Yuanbo Wang & Virgile Mison & Yarin Gal & Ricardo Silva, 2019. "Adversarial recovery of agent rewards from latent spaces of the limit order book," Papers 1912.04242, arXiv.org.



    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.