
An Ensemble Method of Deep Reinforcement Learning for Automated Cryptocurrency Trading

Authors

  • Shuyang Wang
  • Diego Klabjan

Abstract

We propose an ensemble method to improve the generalization performance of trading strategies trained by deep reinforcement learning algorithms in the highly stochastic environment of intraday cryptocurrency portfolio trading. We adopt a model selection method that evaluates candidate models on multiple validation periods, and we propose a novel mixture distribution policy to effectively ensemble the selected models. We provide a distributional view of out-of-sample performance on granular test periods to demonstrate the robustness of the strategies under evolving market conditions, and we retrain the models periodically to address the non-stationarity of financial data. Our proposed ensemble method improves out-of-sample performance relative to two benchmarks: a standalone deep reinforcement learning strategy and a passive investment strategy.
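
For intuition about the mixture distribution policy described above, the sketch below shows one way an ensemble could mix the action distributions of several trained stochastic policies. It is a minimal illustration under assumed interfaces (discrete actions, base policies as callables returning probability vectors, uniform mixture weights by default); the class and its names are hypothetical and are not taken from the paper.

    import numpy as np

    class MixturePolicy:
        """Ensemble K stochastic policies by mixing their action distributions.

        Each base policy is a callable mapping a state to a probability
        vector over discrete actions; the ensemble acts by sampling from
        the weighted mixture p(a|s) = sum_k w_k * p_k(a|s).
        """

        def __init__(self, policies, weights=None):
            # Hypothetical interface: uniform mixture weights unless specified.
            self.policies = policies
            k = len(policies)
            self.weights = np.full(k, 1.0 / k) if weights is None else np.asarray(weights)

        def action_probs(self, state):
            # Stack the K per-policy distributions into shape (K, A), then mix.
            probs = np.stack([policy(state) for policy in self.policies])
            return self.weights @ probs

        def act(self, state, rng=None):
            # Sample an action index from the mixture distribution.
            rng = rng or np.random.default_rng()
            p = self.action_probs(state)
            return int(rng.choice(len(p), p=p))

Sampling from the mixture, rather than averaging each member's greedy action, keeps every selected model's full action distribution in play, which is one natural reading of ensembling at the distribution level.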

Suggested Citation

  • Shuyang Wang & Diego Klabjan, 2023. "An Ensemble Method of Deep Reinforcement Learning for Automated Cryptocurrency Trading," Papers 2309.00626, arXiv.org.
  • Handle: RePEc:arx:papers:2309.00626

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2309.00626
    File Function: Latest version
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this paper and are cited by the same works.
    1. Xiao-Yang Liu & Hongyang Yang & Jiechao Gao & Christina Dan Wang, 2021. "FinRL: Deep Reinforcement Learning Framework to Automate Trading in Quantitative Finance," Papers 2111.09395, arXiv.org.
    2. Frensi Zejnullahu & Maurice Moser & Joerg Osterrieder, 2022. "Applications of Reinforcement Learning in Finance -- Trading with a Double Deep Q-Network," Papers 2206.14267, arXiv.org.
    3. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay & Jamal Atif, 2020. "AAMDRL: Augmented Asset Management with Deep Reinforcement Learning," Papers 2010.08497, arXiv.org.
    4. Hong Guo & Jianwu Lin & Fanlin Huang, 2023. "Market Making with Deep Reinforcement Learning from Limit Order Books," Papers 2305.15821, arXiv.org.
    5. Xiao-Yang Liu & Guoxuan Wang & Hongyang Yang & Daochen Zha, 2023. "FinGPT: Democratizing Internet-scale Data for Financial Large Language Models," Papers 2307.10485, arXiv.org, revised Nov 2023.
    6. Hui Niu & Siyuan Li & Jiahao Zheng & Zhouchi Lin & Jian Li & Jian Guo & Bo An, 2023. "IMM: An Imitative Reinforcement Learning Approach with Predictive Representation Learning for Automatic Market Making," Papers 2308.08918, arXiv.org.
    7. Tristan Lim, 2022. "Predictive Crypto-Asset Automated Market Making Architecture for Decentralized Finance using Deep Reinforcement Learning," Papers 2211.01346, arXiv.org, revised Jan 2023.
    8. Zihao Zhang & Stefan Zohren & Stephen Roberts, 2019. "Deep Reinforcement Learning for Trading," Papers 1911.10107, arXiv.org.
    9. Costola, Michele & Hinz, Oliver & Nofer, Michael & Pelizzon, Loriana, 2023. "Machine learning sentiment analysis, COVID-19 news and stock market reactions," Research in International Business and Finance, Elsevier, vol. 64(C).
    10. Xiao-Yang Liu & Ziyi Xia & Jingyang Rui & Jiechao Gao & Hongyang Yang & Ming Zhu & Christina Dan Wang & Zhaoran Wang & Jian Guo, 2022. "FinRL-Meta: Market Environments and Benchmarks for Data-Driven Financial Reinforcement Learning," Papers 2211.03107, arXiv.org.
    11. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    12. Wentao Zhang & Lingxuan Zhao & Haochong Xia & Shuo Sun & Jiaze Sun & Molei Qin & Xinyi Li & Yuqing Zhao & Yilei Zhao & Xinyu Cai & Longtao Zheng & Xinrun Wang & Bo An, 2024. "A Multimodal Foundation Agent for Financial Trading: Tool-Augmented, Diversified, and Generalist," Papers 2402.18485, arXiv.org, revised Feb 2024.
    13. Ali Raheman & Anton Kolonin & Alexey Glushchenko & Arseniy Fokin & Ikram Ansari, 2022. "Adaptive Multi-Strategy Market-Making Agent For Volatile Markets," Papers 2204.13265, arXiv.org.
    14. Bruno Gašperov & Zvonko Kostanjčar, 2022. "Deep Reinforcement Learning for Market Making Under a Hawkes Process-Based Limit Order Book Model," Papers 2207.09951, arXiv.org.
    15. Jonas Hanetho, 2023. "Deep Policy Gradient Methods in Commodity Markets," Papers 2308.01910, arXiv.org.
    16. Hui Niu & Siyuan Li & Jian Li, 2022. "MetaTrader: An Reinforcement Learning Approach Integrating Diverse Policies for Portfolio Optimization," Papers 2210.01774, arXiv.org.
    17. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay, 2020. "Bridging the gap between Markowitz planning and deep reinforcement learning," Papers 2010.09108, arXiv.org.
    18. Zihao Zhang & Stefan Zohren & Stephen Roberts, 2020. "Deep Learning for Portfolio Optimization," Papers 2005.13665, arXiv.org, revised Jan 2021.
    19. Tidor-Vlad Pricope, 2021. "Deep Reinforcement Learning in Quantitative Algorithmic Trading: A Review," Papers 2106.00123, arXiv.org.
    20. Joseph Jerome & Gregory Palmer & Rahul Savani, 2022. "Market Making with Scaled Beta Policies," Papers 2207.03352, arXiv.org, revised Sep 2022.

