
Reinforcement Learning: Prediction, Control and Value Function Approximation

Authors

  • Haoqian Li
  • Thomas Lau

Abstract

With the increasing power of computers and the rapid development of self-learning methodologies such as machine learning and artificial intelligence, the problem of constructing automatic Financial Trading Systems (FTSs) has become an increasingly attractive research topic. An intuitive way of developing such a trading algorithm is to use Reinforcement Learning (RL) algorithms, which do not require model-building. In this paper, we dive into RL algorithms, illustrate the definitions of the reward function, actions, and policy functions in detail, and introduce algorithms that could be applied to FTSs.
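
As a concrete illustration of the control setting described in the abstract, the sketch below applies tabular Q-learning to a toy price series. It is a minimal example under stated assumptions, not the authors' implementation: the environment, the two-valued state (sign of the last price change), the long/flat/short action set, the profit-and-loss reward, and all hyperparameters are illustrative choices.

# Minimal tabular Q-learning sketch for a toy trading task (illustrative only).
import random
from collections import defaultdict

ACTIONS = [-1, 0, 1]  # short, flat, long

class ToyPriceEnv:
    """Synthetic price series; the state is the sign of the last price change."""
    def __init__(self, prices):
        self.prices = prices
        self.t = 1

    def reset(self):
        self.t = 1
        return self._state()

    def _state(self):
        return 1 if self.prices[self.t] > self.prices[self.t - 1] else -1

    def step(self, action):
        # Reward: position times the next price change (transaction costs ignored).
        self.t += 1
        reward = action * (self.prices[self.t] - self.prices[self.t - 1])
        done = self.t == len(self.prices) - 1
        return self._state(), reward, done

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    Q = defaultdict(float)  # Q[(state, action)] -> estimated action value
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy policy: explore with probability eps, else act greedily.
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2, r, done = env.step(a)
            # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            best_next = max(Q[(s2, act)] for act in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

# Usage: learn on a random-walk price series, then inspect the greedy action per state.
prices = [100.0]
for _ in range(200):
    prices.append(prices[-1] + random.gauss(0, 1))
Q = q_learning(ToyPriceEnv(prices))
for s in (-1, 1):
    print(s, max(ACTIONS, key=lambda act: Q[(s, act)]))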

Suggested Citation

  • Haoqian Li & Thomas Lau, 2019. "Reinforcement Learning: Prediction, Control and Value Function Approximation," Papers 1908.10771, arXiv.org.
  • Handle: RePEc:arx:papers:1908.10771

    Download full text from publisher

    File URL: http://arxiv.org/pdf/1908.10771
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Francesco Bertoluzzo & Marco Corazza, 2012. "Reinforcement Learning for automatic financial trading: Introduction and some applications," Working Papers 2012:33, Department of Economics, University of Venice "Ca' Foscari", revised 2012.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Fischer, Thomas G., 2018. "Reinforcement learning in financial markets - a survey," FAU Discussion Papers in Economics 12/2018, Friedrich-Alexander University Erlangen-Nuremberg, Institute for Economics.
    2. Zihao Zhang & Stefan Zohren & Stephen Roberts, 2019. "Deep Reinforcement Learning for Trading," Papers 1911.10107, arXiv.org.
    3. Ariel Neufeld & Julian Sester & Mario Šikić, 2022. "Markov Decision Processes under Model Uncertainty," Papers 2206.06109, arXiv.org, revised Jan 2023.
    4. Caiyu Jiang & Jianhua Wang, 2022. "A Portfolio Model with Risk Control Policy Based on Deep Reinforcement Learning," Mathematics, MDPI, vol. 11(1), pages 1-16, December.
    5. Ariel Neufeld & Julian Sester & Mario Šikić, 2023. "Markov decision processes under model uncertainty," Mathematical Finance, Wiley Blackwell, vol. 33(3), pages 618-665, July.
    6. Zihao Zhang & Stefan Zohren & Stephen Roberts, 2020. "Deep Learning for Portfolio Optimization," Papers 2005.13665, arXiv.org, revised Jan 2021.
    7. Hyungjun Park & Min Kyu Sim & Dong Gu Choi, 2019. "An intelligent financial portfolio trading strategy using deep Q-learning," Papers 1907.03665, arXiv.org, revised Nov 2019.
    8. Marco Corazza & Andrea Sangalli, 2015. "Q-Learning and SARSA: a comparison between two intelligent stochastic control approaches for financial trading," Working Papers 2015:15, Department of Economics, University of Venice "Ca' Foscari", revised 2015.
    9. Petrus Strydom, 2017. "Funding optimization for a bank integrating credit and liquidity risk," Journal of Applied Finance & Banking, SCIENPRESS Ltd, vol. 7(2), pages 1-1.
    10. Xiao-Yang Liu & Zhuoran Xiong & Shan Zhong & Hongyang Yang & Anwar Walid, 2018. "Practical Deep Reinforcement Learning Approach for Stock Trading," Papers 1811.07522, arXiv.org, revised Jul 2022.



    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.