IDEAS home Printed from https://ideas.repec.org/p/arx/papers/2002.05780.html

Reinforcement-Learning based Portfolio Management with Augmented Asset Movement Prediction States

Authors
  • Yunan Ye
  • Hengzhi Pei
  • Boxin Wang
  • Pin-Yu Chen
  • Yada Zhu
  • Jun Xiao
  • Bo Li

Abstract

Portfolio management (PM) is a fundamental financial planning task that aims to achieve investment goals such as maximal profits or minimal risks. Its decision process involves the continuous derivation of valuable information from various data sources and sequential decision optimization, which makes it a prospective research direction for reinforcement learning (RL). In this paper, we propose SARL, a novel State-Augmented RL framework for PM. Our framework aims to address two unique challenges in financial PM: (1) data heterogeneity -- the collected information for each asset is usually diverse, noisy, and imbalanced (e.g., news articles); and (2) environment uncertainty -- the financial market is versatile and non-stationary. To incorporate heterogeneous data and enhance robustness against environment uncertainty, SARL augments the asset information with price movement predictions as additional states, where the prediction can be based solely on financial data (e.g., asset prices) or derived from alternative sources such as news. Experiments on two real-world datasets, (i) the Bitcoin market and (ii) the HighTech stock market with 7 years of Reuters news articles, validate the effectiveness of SARL over existing PM approaches, both in terms of accumulated profits and risk-adjusted profits. Moreover, extensive simulations are conducted to demonstrate the importance of the proposed state augmentation, providing new insights and boosting performance significantly over standard RL-based PM methods and other baselines.
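The state augmentation the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the predictor below is a hypothetical momentum heuristic standing in for the paper's learned movement predictor (which may be trained on price history or news text), and the state layout is an assumption for exposition.

```python
import numpy as np

def movement_prediction(prices):
    # Hypothetical stand-in predictor: probability that each asset's
    # price rises next step, from a simple momentum heuristic squashed
    # through a sigmoid. The paper's predictor is a trained model.
    momentum = prices[:, -1] / prices[:, 0] - 1.0
    return 1.0 / (1.0 + np.exp(-momentum))

def augmented_state(prices):
    # Standard PM state: the recent price window of each asset,
    # normalized by its latest price.
    normalized = prices / prices[:, -1:]
    # SARL-style augmentation: append the predicted movement
    # probabilities as extra state dimensions for the RL agent.
    preds = movement_prediction(prices)
    return np.concatenate([normalized.ravel(), preds])

# Toy window: 2 assets, 3 time steps.
prices = np.array([[10.0, 10.5, 11.0],   # asset 1: rising
                   [20.0, 19.0, 18.0]])  # asset 2: falling
state = augmented_state(prices)
# state has 2 assets * 3 steps price features + 2 prediction entries
```

The design point is that the RL policy never sees the raw heterogeneous sources directly; it only sees a fixed-length state vector in which the prediction entries summarize whatever data (prices or news) produced them.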

Suggested Citation

  • Yunan Ye & Hengzhi Pei & Boxin Wang & Pin-Yu Chen & Yada Zhu & Jun Xiao & Bo Li, 2020. "Reinforcement-Learning based Portfolio Management with Augmented Asset Movement Prediction States," Papers 2002.05780, arXiv.org.
  • Handle: RePEc:arx:papers:2002.05780
    Download full text from publisher

    File URL: http://arxiv.org/pdf/2002.05780
    File Function: Latest version
    Download Restriction: no
    ---><---

    References listed on IDEAS

    1. Mihály Ormos & András Urbán, 2013. "Performance analysis of log-optimal portfolio strategies with transaction costs," Quantitative Finance, Taylor & Francis Journals, vol. 13(10), pages 1587-1597, October.
    2. William F. Sharpe, 1964. "Capital Asset Prices: A Theory Of Market Equilibrium Under Conditions Of Risk," Journal of Finance, American Finance Association, vol. 19(3), pages 425-442, September.
    3. Bin Li & Steven C. H. Hoi, 2012. "On-Line Portfolio Selection with Moving Average Reversion," Papers 1206.4626, arXiv.org.
    4. Zhengyao Jiang & Dixing Xu & Jinjun Liang, 2017. "A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem," Papers 1706.10059, arXiv.org, revised Jul 2017.
    5. Zhipeng Liang & Hao Chen & Junhao Zhu & Kangkang Jiang & Yanran Li, 2018. "Adversarial Deep Reinforcement Learning in Portfolio Management," Papers 1808.09940, arXiv.org, revised Nov 2018.
    6. J. B. Heaton & N. G. Polson & J. H. Witte, 2017. "Deep learning for finance: deep portfolios," Applied Stochastic Models in Business and Industry, John Wiley & Sons, vol. 33(1), pages 3-12, January.

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Eric Benhamou & David Saltiel & Serge Tabachnik & Sui Kai Wong & François Chareyron, 2021. "Distinguish the indistinguishable: a Deep Reinforcement Learning approach for volatility targeting models," Working Papers hal-03202431, HAL.
    2. Zhenhan Huang & Fumihide Tanaka, 2021. "MSPM: A Modularized and Scalable Multi-Agent Reinforcement Learning-based System for Financial Portfolio Management," Papers 2102.03502, arXiv.org, revised Feb 2022.
    3. Francisco Caio Lima Paiva & Leonardo Kanashiro Felizardo & Reinaldo Augusto da Costa Bianchi & Anna Helena Reali Costa, 2021. "Intelligent Trading Systems: A Sentiment-Aware Reinforcement Learning Approach," Papers 2112.02095, arXiv.org.
    4. Woosung Koh & Insu Choi & Yuntae Jang & Gimin Kang & Woo Chang Kim, 2023. "Curriculum Learning and Imitation Learning for Model-free Control on Financial Time-series," Papers 2311.13326, arXiv.org, revised Jan 2024.
    5. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay & Jamal Atif, 2020. "AAMDRL: Augmented Asset Management with Deep Reinforcement Learning," Papers 2010.08497, arXiv.org.
    6. Chung I Lu, 2023. "Evaluation of Deep Reinforcement Learning Algorithms for Portfolio Optimisation," Papers 2307.07694, arXiv.org, revised Jul 2023.
    7. Frensi Zejnullahu & Maurice Moser & Joerg Osterrieder, 2022. "Applications of Reinforcement Learning in Finance -- Trading with a Double Deep Q-Network," Papers 2206.14267, arXiv.org.
    8. Shuo Sun & Molei Qin & Xinrun Wang & Bo An, 2023. "PRUDEX-Compass: Towards Systematic Evaluation of Reinforcement Learning in Financial Markets," Papers 2302.00586, arXiv.org, revised Mar 2023.
    9. Shuyang Wang & Diego Klabjan, 2023. "An Ensemble Method of Deep Reinforcement Learning for Automated Cryptocurrency Trading," Papers 2309.00626, arXiv.org.
    10. Kumar Yashaswi, 2021. "Deep Reinforcement Learning for Portfolio Optimization using Latent Feature State Space (LFSS) Module," Papers 2102.06233, arXiv.org.
    11. Yuchen Fang & Kan Ren & Weiqing Liu & Dong Zhou & Weinan Zhang & Jiang Bian & Yong Yu & Tie-Yan Liu, 2021. "Universal Trading for Order Execution with Oracle Policy Distillation," Papers 2103.10860, arXiv.org.
    12. Wentao Zhang & Lingxuan Zhao & Haochong Xia & Shuo Sun & Jiaze Sun & Molei Qin & Xinyi Li & Yuqing Zhao & Yilei Zhao & Xinyu Cai & Longtao Zheng & Xinrun Wang & Bo An, 2024. "A Multimodal Foundation Agent for Financial Trading: Tool-Augmented, Diversified, and Generalist," Papers 2402.18485, arXiv.org, revised Feb 2024.
    13. Alexandre Carbonneau & Frédéric Godin, 2023. "Deep Equal Risk Pricing of Financial Derivatives with Non-Translation Invariant Risk Measures," Risks, MDPI, vol. 11(8), pages 1-27, August.
    14. Hui Niu & Siyuan Li & Jian Li, 2022. "MetaTrader: An Reinforcement Learning Approach Integrating Diverse Policies for Portfolio Optimization," Papers 2210.01774, arXiv.org.
    15. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    16. Eric Benhamou & David Saltiel & Serge Tabachnik & Sui Kai Wong & Franc{c}ois Chareyron, 2021. "Adaptive learning for financial markets mixing model-based and model-free RL for volatility targeting," Papers 2104.10483, arXiv.org, revised Apr 2021.
    17. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay, 2020. "Bridging the gap between Markowitz planning and deep reinforcement learning," Papers 2010.09108, arXiv.org.
    18. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay, 2020. "Time your hedge with Deep Reinforcement Learning," Papers 2009.14136, arXiv.org, revised Nov 2020.
    19. Ricard Durall, 2022. "Asset Allocation: From Markowitz to Deep Reinforcement Learning," Papers 2208.07158, arXiv.org.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Amir Mosavi & Pedram Ghamisi & Yaser Faghan & Puhong Duan, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Papers 2004.01509, arXiv.org.
    2. Amirhosein Mosavi & Yaser Faghan & Pedram Ghamisi & Puhong Duan & Sina Faizollahzadeh Ardabili & Ely Salwana & Shahab S. Band, 2020. "Comprehensive Review of Deep Reinforcement Learning Methods and Applications in Economics," Mathematics, MDPI, vol. 8(10), pages 1-42, September.
    3. Shuo Sun & Rundong Wang & Bo An, 2021. "Reinforcement Learning for Quantitative Trading," Papers 2109.13851, arXiv.org.
    4. Yinheng Li & Junhao Wang & Yijie Cao, 2019. "A General Framework on Enhancing Portfolio Management with Reinforcement Learning," Papers 1911.11880, arXiv.org, revised Oct 2023.
    5. Zhenhan Huang & Fumihide Tanaka, 2021. "MSPM: A Modularized and Scalable Multi-Agent Reinforcement Learning-based System for Financial Portfolio Management," Papers 2102.03502, arXiv.org, revised Feb 2022.
    6. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay, 2020. "Time your hedge with Deep Reinforcement Learning," Papers 2009.14136, arXiv.org, revised Nov 2020.
    7. Ricard Durall, 2022. "Asset Allocation: From Markowitz to Deep Reinforcement Learning," Papers 2208.07158, arXiv.org.
    8. Eric Benhamou & David Saltiel & Sandrine Ungari & Abhishek Mukhopadhyay & Jamal Atif, 2020. "AAMDRL: Augmented Asset Management with Deep Reinforcement Learning," Papers 2010.08497, arXiv.org.
    9. Jeonggyu Huh, 2018. "Measuring Systematic Risk with Neural Network Factor Model," Papers 1809.04925, arXiv.org.
    10. Vitor Azevedo & Christopher Hoegner, 2023. "Enhancing stock market anomalies with machine learning," Review of Quantitative Finance and Accounting, Springer, vol. 60(1), pages 195-230, January.
    11. Mei-Li Shen & Cheng-Feng Lee & Hsiou-Hsiang Liu & Po-Yin Chang & Cheng-Hong Yang, 2021. "An Effective Hybrid Approach for Forecasting Currency Exchange Rates," Sustainability, MDPI, vol. 13(5), pages 1-29, March.
    12. Mengying Zhu & Xiaolin Zheng & Yan Wang & Yuyuan Li & Qianqiao Liang, 2019. "Adaptive Portfolio by Solving Multi-armed Bandit via Thompson Sampling," Papers 1911.05309, arXiv.org, revised Nov 2019.
    13. Ben Hambly & Renyuan Xu & Huining Yang, 2021. "Recent Advances in Reinforcement Learning in Finance," Papers 2112.04553, arXiv.org, revised Feb 2023.
    14. Zhengyong Jiang & Jeyan Thiayagalingam & Jionglong Su & Jinjun Liang, 2023. "CAD: Clustering And Deep Reinforcement Learning Based Multi-Period Portfolio Management Strategy," Papers 2310.01319, arXiv.org.
    15. Yasuhiro Nakayama & Tomochika Sawaki, 2023. "Causal Inference on Investment Constraints and Non-stationarity in Dynamic Portfolio Optimization through Reinforcement Learning," Papers 2311.04946, arXiv.org.
    16. Xing Wang & Yijun Wang & Bin Weng & Aleksandr Vinel, 2020. "Stock2Vec: A Hybrid Deep Learning Framework for Stock Market Prediction with Representation Learning and Temporal Convolutional Network," Papers 2010.01197, arXiv.org.
    17. Uddin, Ajim & Yu, Dantong, 2020. "Latent factor model for asset pricing," Journal of Behavioral and Experimental Finance, Elsevier, vol. 27(C).
    18. Zhengyao Jiang & Dixing Xu & Jinjun Liang, 2017. "A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem," Papers 1706.10059, arXiv.org, revised Jul 2017.
    19. Liu Ziyin & Kentaro Minami & Kentaro Imajo, 2021. "Theoretically Motivated Data Augmentation and Regularization for Portfolio Construction," Papers 2106.04114, arXiv.org, revised Dec 2022.
    20. Wenbo Wu & Jiaqi Chen & Zhibin (Ben) Yang & Michael L. Tindall, 2021. "A Cross-Sectional Machine Learning Approach for Hedge Fund Return Prediction and Selection," Management Science, INFORMS, vol. 67(7), pages 4577-4601, July.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2002.05780. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.