
Deep differentiable reinforcement learning and optimal trading

Author

Listed:
  • Thibault Jaisson

Abstract

In many reinforcement learning applications, the underlying environment reward and transition functions are explicitly known differentiable functions. This enables us to use recent research that applies machine learning tools to stochastic control to find optimal action functions. In this paper, we define differentiable reinforcement learning as a particular case of this research. We find that incorporating deep learning in this framework leads to more accurate and stable solutions than those obtained from more generic actor-critic algorithms. We apply this deep differentiable reinforcement learning (DDRL) algorithm to the problem of optimal single-asset trading strategies in various environments where the market dynamics are known. Thanks to the stability of this method, we are able to efficiently find optimal strategies for complex multi-scale market models. We also extend these methods to simultaneously find optimal action functions for a wide range of environment parameters. This makes them applicable to real-life financial signals and portfolio optimization where the expected return has multiple time scales. In the case of a slow and a fast alpha signal, we find that the optimal trading strategy consists of using the fast signal to time the trades associated with the slow signal.
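The central idea of the abstract, back-propagating through explicitly known differentiable reward and transition functions to train a policy network directly, can be illustrated with a short sketch. The toy model below (an Ornstein-Uhlenbeck alpha signal, a quadratic trading cost, a quadratic risk penalty, and all parameter values) is a hypothetical illustration of the general technique, not the paper's actual model or code.

# A minimal sketch of differentiable reinforcement learning for trading:
# because the reward and transition functions are known differentiable
# functions, the cumulative reward of a whole simulated episode can be
# back-propagated into the parameters of a policy network. All dynamics
# and parameters here are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Policy network: maps (current alpha signal, current position) to a new position.
policy = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

T, batch = 50, 256          # episode length, Monte Carlo batch size
kappa, sigma = 0.1, 0.02    # assumed mean reversion and noise of the alpha signal
cost, risk = 0.5, 0.1       # assumed quadratic trading cost and risk penalty

for step in range(2000):
    alpha = torch.zeros(batch, 1)        # alpha signal state
    pos = torch.zeros(batch, 1)          # current position
    total_reward = torch.zeros(batch, 1)
    for t in range(T):
        new_pos = policy(torch.cat([alpha, pos], dim=1))
        # Known differentiable reward: expected PnL minus trading cost and risk penalty.
        reward = new_pos * alpha - cost * (new_pos - pos) ** 2 - risk * new_pos ** 2
        total_reward = total_reward + reward
        pos = new_pos
        # Known differentiable transition: Ornstein-Uhlenbeck dynamics for the signal.
        alpha = (1 - kappa) * alpha + sigma * torch.randn(batch, 1)
    loss = -total_reward.mean()          # maximize expected cumulative reward
    opt.zero_grad()
    loss.backward()                      # gradients flow through the whole episode
    opt.step()
    if step % 500 == 0:
        print(f"step {step}: mean episode reward {-loss.item():.4f}")

In contrast to generic actor-critic methods, no value function is estimated here: the gradient of the objective reaches the policy parameters directly through the simulated episode, which is the source of the accuracy and stability gains the abstract describes.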

Suggested Citation

  • Thibault Jaisson, 2022. "Deep differentiable reinforcement learning and optimal trading," Quantitative Finance, Taylor & Francis Journals, vol. 22(8), pages 1429-1443, August.
  • Handle: RePEc:taf:quantf:v:22:y:2022:i:8:p:1429-1443
    DOI: 10.1080/14697688.2022.2062431

    Download full text from publisher

    File URL: http://hdl.handle.net/10.1080/14697688.2022.2062431
    Download Restriction: Access to full text is restricted to subscribers.

    File URL: https://libkey.io/10.1080/14697688.2022.2062431?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As the access to this document is restricted, you may want to search for a different version of it.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:taf:quantf:v:22:y:2022:i:8:p:1429-1443. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Chris Longhurst (email available below). General contact details of provider: http://www.tandfonline.com/RQUF20.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.