
TradingGPT: Multi-Agent System with Layered Memory and Distinct Characters for Enhanced Financial Trading Performance

Author

Listed:
  • Yang Li
  • Yangyang Yu
  • Haohang Li
  • Zhi Chen
  • Khaldoun Khashanah

Abstract

Large Language Models (LLMs), prominently highlighted by the recent evolution of the Generative Pre-trained Transformer (GPT) series, have displayed significant prowess across various domains, such as aiding in healthcare diagnostics and curating analytical business reports. The efficacy of GPTs lies in their ability to decode human instructions, achieved by processing historical inputs in their entirety within their memory system. Yet the memory processing of GPTs does not precisely emulate the hierarchical nature of human memory, which can leave LLMs struggling to prioritize immediate and critical tasks efficiently. To bridge this gap, we introduce an innovative LLM multi-agent framework endowed with layered memories. We assert that this framework is well-suited for stock and fund trading, where extracting highly relevant insights from hierarchical financial data is imperative for informing trading decisions. Within this framework, one agent organizes memory into three distinct layers, each governed by a custom decay mechanism, aligning more closely with human cognitive processes. Agents can also engage in inter-agent debate. In financial trading contexts, LLMs serve as the decision core for trading agents, leveraging their layered memory system to integrate multi-source historical actions and market insights. This equips them to navigate financial changes, formulate strategies, and debate investment decisions with peer agents. Another standout feature of our approach is equipping agents with individualized trading traits, enhancing memory diversity and decision robustness. These sophisticated designs boost the system's responsiveness to historical trades and real-time market signals, ensuring superior automated trading accuracy.
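
The page carries no code, but the layered-memory idea can be illustrated with a minimal sketch. The Python below is a hypothetical illustration, not the authors' implementation: the class names, the three retention constants, and the exponential forgetting function R = exp(-t/S) (in the spirit of the Ebbinghaus forgetting curve cited in the references) are all assumptions. Each memory layer applies its own decay rate, and retrieval ranks stored events by relevance discounted for elapsed time.

    import math
    import time
    from dataclasses import dataclass, field

    @dataclass
    class MemoryEvent:
        text: str
        relevance: float  # importance assigned when the event is written
        timestamp: float = field(default_factory=time.time)

    class LayeredMemory:
        """Three memory layers, each with its own decay constant (all values assumed)."""

        # Retention strength S in seconds: short-term fades within an hour,
        # mid-term within a week, long-term within roughly a quarter.
        LAYERS = {"short": 3_600.0, "mid": 7 * 86_400.0, "long": 90 * 86_400.0}

        def __init__(self):
            self.store = {name: [] for name in self.LAYERS}

        def add(self, layer, event):
            self.store[layer].append(event)

        def _decayed_score(self, layer, event, now):
            # Ebbinghaus-style forgetting curve: retention = exp(-t / S)
            retention = math.exp(-(now - event.timestamp) / self.LAYERS[layer])
            return event.relevance * retention

        def retrieve(self, top_k=5):
            """Top-k events across all layers, ranked by decayed relevance."""
            now = time.time()
            scored = [(self._decayed_score(layer, ev, now), layer, ev.text)
                      for layer, events in self.store.items()
                      for ev in events]
            scored.sort(reverse=True)
            return scored[:top_k]

    # Usage: a fresh short-term market signal outranks an older, equally
    # relevant long-term note because its retention has decayed less.
    memory = LayeredMemory()
    memory.add("short", MemoryEvent("AAPL fell 3% after earnings", relevance=0.9))
    memory.add("long", MemoryEvent("Fund rebalances tech exposure quarterly",
                                   relevance=0.9,
                                   timestamp=time.time() - 30 * 86_400))
    print(memory.retrieve(top_k=2))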

Suggested Citation

  • Yang Li & Yangyang Yu & Haohang Li & Zhi Chen & Khaldoun Khashanah, 2023. "TradingGPT: Multi-Agent System with Layered Memory and Distinct Characters for Enhanced Financial Trading Performance," Papers 2309.03736, arXiv.org.
  • Handle: RePEc:arx:papers:2309.03736

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2309.03736
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Jaap M J Murre & Joeri Dros, 2015. "Replication and Analysis of Ebbinghaus’ Forgetting Curve," PLOS ONE, Public Library of Science, vol. 10(7), pages 1-23, July.
    2. Hongyang Yang & Xiao-Yang Liu & Christina Dan Wang, 2023. "FinGPT: Open-Source Financial Large Language Models," Papers 2306.06031, arXiv.org.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Carolina Camassa, 2023. "Legal NLP Meets MiCAR: Advancing the Analysis of Crypto White Papers," Papers 2310.10333, arXiv.org, revised Oct 2023.
    2. Calvin Thigpen & Kelcie Ralph & Nicholas J. Klein & Anne Brown, 2023. "Can information increase support for transportation reform? Results from an experiment," Transportation, Springer, vol. 50(3), pages 893-912, June.
    3. Cattaneo, Cristina & D’Adda, Giovanna & Tavoni, Massimo & Bonan, Jacopo, 2019. "Can We Make Social Information Programs More Effective? The Role of Identity and Values," RFF Working Paper Series 19-21, Resources for the Future.
    4. Andrew J. Stier & Sina Sajjadi & Fariba Karimi & Luís M. A. Bettencourt & Marc G. Berman, 2024. "Implicit racial biases are lower in more populous more diverse and less segregated US cities," Nature Communications, Nature, vol. 15(1), pages 1-10, December.
    5. Balázs Zélity, 2023. "Age diversity and aggregate productivity," Journal of Population Economics, Springer;European Society for Population Economics, vol. 36(3), pages 1863-1899, July.
    6. Tao Ren & Ruihan Zhou & Jinyang Jiang & Jiafeng Liang & Qinghao Wang & Yijie Peng, 2024. "RiskMiner: Discovering Formulaic Alphas via Risk Seeking Monte Carlo Tree Search," Papers 2402.07080, arXiv.org, revised Feb 2024.
    7. Wentao Zhang & Lingxuan Zhao & Haochong Xia & Shuo Sun & Jiaze Sun & Molei Qin & Xinyi Li & Yuqing Zhao & Yilei Zhao & Xinyu Cai & Longtao Zheng & Xinrun Wang & Bo An, 2024. "A Multimodal Foundation Agent for Financial Trading: Tool-Augmented, Diversified, and Generalist," Papers 2402.18485, arXiv.org, revised Feb 2024.
    8. Yinheng Li & Shaofei Wang & Han Ding & Hang Chen, 2023. "Large Language Models in Finance: A Survey," Papers 2311.10723, arXiv.org.
    9. Yupeng Cao & Zhi Chen & Qingyun Pei & Fabrizio Dimino & Lorenzo Ausiello & Prashant Kumar & K. P. Subbalakshmi & Papa Momar Ndiaye, 2024. "RiskLabs: Predicting Financial Risk Using Large Language Model Based on Multi-Sources Data," Papers 2404.07452, arXiv.org.
    10. Zhongyang Guo & Guanran Jiang & Zhongdan Zhang & Peng Li & Zhefeng Wang & Yinchun Wang, 2023. "Shai: A large language model for asset management," Papers 2312.14203, arXiv.org.
    11. Xi Zhang & Rui Gao & Jin Ling Lin & Ning Chen & Qin Lin & Gui Fang Huang & Long Wang & Xiao Huan Chen & Fang Qin Xue & Hong Li, 2020. "Effects of hospital‐family holistic care model on the health outcome of patients with permanent enterostomy based on the theory of ‘Timing It Right’," Journal of Clinical Nursing, John Wiley & Sons, vol. 29(13-14), pages 2196-2208, July.
    12. Kelvin J. L. Koa & Yunshan Ma & Ritchie Ng & Tat-Seng Chua, 2024. "Learning to Generate Explainable Stock Predictions using Self-Reflective Large Language Models," Papers 2402.03659, arXiv.org, revised Feb 2024.
    13. Bonan, Jacopo & Cattaneo, Cristina & d’Adda, Giovanna & Tavoni, Massimo, 2021. "Can social information programs be more effective? The role of environmental identity for energy conservation," Journal of Environmental Economics and Management, Elsevier, vol. 108(C).
    14. Neng Wang & Hongyang Yang & Christina Dan Wang, 2023. "FinGPT: Instruction Tuning Benchmark for Open-Source Large Language Models in Financial Datasets," Papers 2310.04793, arXiv.org, revised Nov 2023.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2309.03736. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of the provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.