IDEAS home Printed from https://ideas.repec.org/a/spr/digfin/v7y2025i1d10.1007_s42521-024-00123-2.html

Regime switching forecasting for cryptocurrencies

Authors

  • Ilyas Agakishiev (Humboldt Universität zu Berlin)
  • Wolfgang Karl Härdle (Humboldt Universität zu Berlin)
  • Denis Becker (NTNU Business School)
  • Xiaorui Zuo (Shaw Foundation)

Abstract

There are many ways to model complex time series. The simplest is to increase the complexity, and thus the flexibility, of a single model for the entire series, for example by using a neural network. An alternative is to let the parameters of a model depend on the “state” or “regime” of the time series, as in the Hidden Markov model (HMM). This paper combines the two ideas in a Reinforcement Learning (RL) model that adds variables depending on the state of the time series. To test the concept, the RL model is applied to cryptocurrency data to determine the share of wealth to invest in the cryptocurrency index CRIX so as to maximize wealth. The results show that cryptocurrency metadata is useful as supplementary data for analyzing the respective prices. The RL model with regimes shows potential for investment management, but comes with some caveats.
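The regime-plus-RL idea described in the abstract can be sketched in a toy form. The snippet below is a hypothetical illustration, not the paper's model: it stands in for the HMM with a crude rolling-volatility regime flag, and stands in for the RL agent with a simple Monte-Carlo evaluation of how much wealth to allocate to a synthetic index in each regime. All names, thresholds, and parameters here are invented for illustration.

```python
import random

random.seed(42)

# Toy two-regime market: calm (positive drift, low volatility) and
# turbulent (negative drift, high volatility), with occasional switches.
def simulate_returns(n=6000):
    regime, rets = 0, []
    for _ in range(n):
        if random.random() < 0.01:          # rare regime switch
            regime = 1 - regime
        mu, sigma = (0.005, 0.01) if regime == 0 else (-0.01, 0.03)
        rets.append(random.gauss(mu, sigma))
    return rets

def detect_regime(window, threshold=0.02):
    # Crude stand-in for an HMM state: flag "turbulent" (1) when the
    # rolling volatility of recent returns exceeds a threshold.
    mean = sum(window) / len(window)
    var = sum((r - mean) ** 2 for r in window) / len(window)
    return 1 if var ** 0.5 > threshold else 0

SHARES = [0.0, 0.5, 1.0]                    # action: share invested in the index

def evaluate(rets, window=20):
    # Monte-Carlo evaluation: reward = share * next return, averaged
    # separately per detected regime (running-mean update).
    q = {r: {a: 0.0 for a in SHARES} for r in (0, 1)}
    n = {r: {a: 0 for a in SHARES} for r in (0, 1)}
    for t in range(window, len(rets)):
        reg = detect_regime(rets[t - window:t])
        a = random.choice(SHARES)           # explore actions uniformly
        reward = a * rets[t]
        n[reg][a] += 1
        q[reg][a] += (reward - q[reg][a]) / n[reg][a]
    return q

rets = simulate_returns()
q = evaluate(rets)
# Greedy allocation per regime: invest more in the calm regime.
greedy = {reg: max(SHARES, key=lambda a: q[reg][a]) for reg in (0, 1)}
print(greedy)
```

The point of the sketch is only the state augmentation: the decision variable (the invested share) is conditioned on a regime indicator derived from the data, so the policy can differ across calm and turbulent periods.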

Suggested Citation

  • Ilyas Agakishiev & Wolfgang Karl Härdle & Denis Becker & Xiaorui Zuo, 2025. "Regime switching forecasting for cryptocurrencies," Digital Finance, Springer, vol. 7(1), pages 107-131, March.
  • Handle: RePEc:spr:digfin:v:7:y:2025:i:1:d:10.1007_s42521-024-00123-2
    DOI: 10.1007/s42521-024-00123-2

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s42521-024-00123-2
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s42521-024-00123-2?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Dietmar Maringer & Tikesh Ramtohul, 2012. "Regime-switching recurrent reinforcement learning for investment decision making," Computational Management Science, Springer, vol. 9(1), pages 89-107, February.
    2. Amir Mosavi & Pedram Ghamisi & Yaser Faghan & Puhong Duan, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Papers 2004.01509, arXiv.org.
    3. Trimborn, Simon & Härdle, Wolfgang Karl, 2018. "CRIX an Index for cryptocurrencies," Journal of Empirical Finance, Elsevier, vol. 49(C), pages 107-122.
    4. Vincenzo Candila, 2021. "Multivariate Analysis of Cryptocurrencies," Econometrics, MDPI, vol. 9(3), pages 1-17, July.
    5. Härdle, Wolfgang Karl & Trimborn, Simon, 2015. "CRIX or evaluating blockchain based currencies," SFB 649 Discussion Papers 2015-048, Humboldt University Berlin, Collaborative Research Center 649: Economic Risk.
    6. Karim, Muhammad Mahmudul & Ali, Md Hakim & Yarovaya, Larisa & Uddin, Md Hamid & Hammoudeh, Shawkat, 2023. "Return-volatility relationships in cryptocurrency markets: Evidence from asymmetric quantiles and non-linear ARDL approach," International Review of Financial Analysis, Elsevier, vol. 90(C).
    7. Mosavi, Amir & Faghan, Yaser & Ghamisi, Pedram & Duan, Puhong & Ardabili, Sina Faizollahzadeh & Hassan, Salwana & Band, Shahab S., 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," OSF Preprints jrc58, Center for Open Science.
    8. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Horobet, Alexandra & Boubaker, Sabri & Belascu, Lucian & Negreanu, Cristina Carmencita & Dinca, Zeno, 2024. "Technology-driven advancements: Mapping the landscape of algorithmic trading literature," Technological Forecasting and Social Change, Elsevier, vol. 209(C).
    2. Brini, Alessio & Tedeschi, Gabriele & Tantari, Daniele, 2023. "Reinforcement learning policy recommendation for interbank network stability," Journal of Financial Stability, Elsevier, vol. 67(C).
    3. Alexandra Horobet & Sabri Boubaker & Lucian Belascu & Cristina Carmencita Negreanu & Zeno Dinca, 2024. "Technology-driven advancements: Mapping the landscape of algorithmic trading literature," Post-Print hal-04990283, HAL.
    4. Dimitrios Vamvakas & Panagiotis Michailidis & Christos Korkas & Elias Kosmatopoulos, 2023. "Review and Evaluation of Reinforcement Learning Frameworks on Smart Grid Applications," Energies, MDPI, vol. 16(14), pages 1-38, July.
    5. Jan Niederreiter, 2023. "Broadening Economics in the Era of Artificial Intelligence and Experimental Evidence," Italian Economic Journal: A Continuation of Rivista Italiana degli Economisti and Giornale degli Economisti, Springer;Società Italiana degli Economisti (Italian Economic Association), vol. 9(1), pages 265-294, March.
    6. Tian Zhu & Wei Zhu, 2022. "Quantitative Trading through Random Perturbation Q-Network with Nonlinear Transaction Costs," Stats, MDPI, vol. 5(2), pages 1-15, June.
    7. Ben Hambly & Renyuan Xu & Huining Yang, 2023. "Recent advances in reinforcement learning in finance," Mathematical Finance, Wiley Blackwell, vol. 33(3), pages 437-503, July.
    8. Li, Jianwei & Liu, Jie & Yang, Qingqing & Wang, Tianci & He, Hongwen & Wang, Hanxiao & Sun, Fengchun, 2025. "Reinforcement learning based energy management for fuel cell hybrid electric vehicles: A comprehensive review on decision process reformulation and strategy implementation," Renewable and Sustainable Energy Reviews, Elsevier, vol. 213(C).
    9. Fatemehsadat Mirshafiee & Emad Shahbazi & Mohadeseh Safi & Rituraj Rituraj, 2023. "Predicting Power and Hydrogen Generation of a Renewable Energy Converter Utilizing Data-Driven Methods: A Sustainable Smart Grid Case Study," Energies, MDPI, vol. 16(1), pages 1-20, January.
    10. Charl Maree & Christian W. Omlin, 2022. "Balancing Profit, Risk, and Sustainability for Portfolio Management," Papers 2207.02134, arXiv.org.
    11. Mei-Li Shen & Cheng-Feng Lee & Hsiou-Hsiang Liu & Po-Yin Chang & Cheng-Hong Yang, 2021. "An Effective Hybrid Approach for Forecasting Currency Exchange Rates," Sustainability, MDPI, vol. 13(5), pages 1-29, March.
    12. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    13. Jifan Zhang & Salih Tutun & Samira Fazel Anvaryazdi & Mohammadhossein Amini & Durai Sundaramoorthi & Hema Sundaramoorthi, 2024. "Management of resource sharing in emergency response using data-driven analytics," Annals of Operations Research, Springer, vol. 339(1), pages 663-692, August.
    14. Konstantin Häusler & Hongyu Xia, 2022. "Indices on cryptocurrencies: an evaluation," Digital Finance, Springer, vol. 4(2), pages 149-167, September.
    15. Zuo Xiaorui & Chen Yao-Tsung & Härdle Wolfgang Karl, 2024. "Emoji driven crypto assets market reactions," Management & Marketing, Sciendo, vol. 19(2), pages 158-178.
    16. Valentin Kuleto & Milena Ilić & Mihail Dumangiu & Marko Ranković & Oliva M. D. Martins & Dan Păun & Larisa Mihoreanu, 2021. "Exploring Opportunities and Challenges of Artificial Intelligence and Machine Learning in Higher Education Institutions," Sustainability, MDPI, vol. 13(18), pages 1-16, September.
    17. Adrian Millea, 2021. "Deep Reinforcement Learning for Trading—A Critical Survey," Data, MDPI, vol. 6(11), pages 1-25, November.
    18. Chien-Liang Chiu & Paoyu Huang & Min-Yuh Day & Yensen Ni & Yuhsin Chen, 2024. "Mastery of “Monthly Effects”: Big Data Insights into Contrarian Strategies for DJI 30 and NDX 100 Stocks over a Two-Decade Period," Mathematics, MDPI, vol. 12(2), pages 1-21, January.
    19. Muhammad Umar Khan & Somia Mehak & Dr. Wajiha Yasir & Shagufta Anwar & Muhammad Usman Majeed & Hafiz Arslan Ramzan, 2023. "Quantitative Studies Of Deep Reinforcement Learning In Gaming, Robotics And Real-World Control Systems," Bulletin of Business and Economics (BBE), Research Foundation for Humanity (RFH), vol. 12(2), pages 389-395.
    20. Petr Suler & Zuzana Rowland & Tomas Krulicky, 2021. "Evaluation of the Accuracy of Machine Learning Predictions of the Czech Republic’s Exports to the China," JRFM, MDPI, vol. 14(2), pages 1-30, February.

    More about this item

    Keywords

    Regime switching; Machine learning; Crypto currencies; Reinforcement learning; FinTech;

    JEL classification:

    • C14 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General - - - Semiparametric and Nonparametric Methods: General
    • C15 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General - - - Statistical Simulation Methods: General
    • C87 - Mathematical and Quantitative Methods - - Data Collection and Data Estimation Methodology; Computer Programs - - - Econometric Software
    • C63 - Mathematical and Quantitative Methods - - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling - - - Computational Techniques



    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.