
Multimodal Gen-AI for Fundamental Investment Research

Authors
  • Lezhi Li
  • Ting-Yu Chang
  • Hai Wang

Abstract

This report outlines a transformative initiative in the financial investment industry, where the conventional decision-making process, laden with labor-intensive tasks such as sifting through voluminous documents, is being reimagined. Leveraging language models, our experiments aim to automate information summarization and investment idea generation. We evaluate the effectiveness of fine-tuning methods on a base model (Llama2) against specific application-level goals: providing insights into the impact of events on companies and sectors, understanding market condition relationships, generating investor-aligned investment ideas, and formatting results with stock recommendations and detailed explanations. Through state-of-the-art generative modeling techniques, the ultimate objective is to develop an AI agent prototype that liberates human investors from repetitive tasks and lets them focus on high-level strategic thinking. The project draws on a diverse corpus, including research reports, investment memos, market news, and extensive time-series market data. We conducted three experiments: unsupervised and supervised LoRA fine-tuning using llama2_7b_hf_chat as the base model, and instruction fine-tuning on the GPT-3.5 model. Both statistical and human evaluations show that the fine-tuned versions perform better at text modeling, summarization, reasoning, and finance-domain questions, marking a pivotal step toward enhancing decision-making processes in the financial domain. Code for the project is available on GitHub: https://github.com/Firenze11/finance_lm.
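The LoRA fine-tuning the abstract refers to adapts a frozen pretrained weight matrix W by learning a low-rank update, W' = W + (alpha/r) * B A, where rank r is much smaller than the matrix dimensions. The sketch below illustrates only this underlying idea in numpy with small hypothetical dimensions; it is not the authors' pipeline, which applies LoRA to the far larger projection matrices of llama2_7b_hf_chat.

```python
import numpy as np

# Hypothetical toy dimensions for illustration; real LoRA targets the
# attention/MLP projection matrices of a 7B-parameter model.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 4, 8

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection (zero-init,
                                        # so training starts from the base model)

def lora_forward(x, W, A, B, alpha, r):
    """Forward pass with the low-rank update applied on the fly."""
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

# After training, the update can be merged into W, adding no inference latency.
W_merged = W + (alpha / r) * B @ A

x = rng.normal(size=(2, d_in))
assert np.allclose(lora_forward(x, W, A, B, alpha, r), x @ W_merged.T)
```

Only A and B (2 * r * d parameters instead of d * d) would be updated during fine-tuning, which is what makes the approach tractable on a single-GPU budget.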

Suggested Citation

  • Lezhi Li & Ting-Yu Chang & Hai Wang, 2023. "Multimodal Gen-AI for Fundamental Investment Research," Papers 2401.06164, arXiv.org.
  • Handle: RePEc:arx:papers:2401.06164
Download full text from publisher

    File URL: http://arxiv.org/pdf/2401.06164
    File Function: Latest version
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Thanos Konstantinidis & Giorgos Iacovides & Mingxue Xu & Tony G. Constantinides & Danilo Mandic, 2024. "FinLlama: Financial Sentiment Classification for Algorithmic Trading Applications," Papers 2403.12285, arXiv.org.
    2. Frank Xing, 2024. "Designing Heterogeneous LLM Agents for Financial Sentiment Analysis," Papers 2401.05799, arXiv.org.
    3. Seppälä, Timo & Mucha, Tomasz & Mattila, Juri, 2023. "Beyond AI, Blockchain Systems, and Digital Platforms: Digitalization Unlocks Mass Hyper-Personalization and Mass Servitization," ETLA Working Papers 106, The Research Institute of the Finnish Economy.
4. Zhaofeng Zhang & Banghao Chen & Shengxin Zhu & Nicolas Langrené, 2024. "From attention to profit: quantitative trading strategy based on transformer," Papers 2404.00424, arXiv.org.
    5. Wentao Zhang & Lingxuan Zhao & Haochong Xia & Shuo Sun & Jiaze Sun & Molei Qin & Xinyi Li & Yuqing Zhao & Yilei Zhao & Xinyu Cai & Longtao Zheng & Xinrun Wang & Bo An, 2024. "A Multimodal Foundation Agent for Financial Trading: Tool-Augmented, Diversified, and Generalist," Papers 2402.18485, arXiv.org, revised Feb 2024.
    6. Yinheng Li & Shaofei Wang & Han Ding & Hang Chen, 2023. "Large Language Models in Finance: A Survey," Papers 2311.10723, arXiv.org.
    7. Masanori Hirano & Kentaro Imajo, 2024. "Construction of Domain-specified Japanese Large Language Model for Finance through Continual Pre-training," Papers 2404.10555, arXiv.org.
    8. Haoqiang Kang & Xiao-Yang Liu, 2023. "Deficiency of Large Language Models in Finance: An Empirical Examination of Hallucination," Papers 2311.15548, arXiv.org.
    9. Mamalis, Marios & Kalampokis, Evangelos & Karamanou, Areti & Brimos, Petros & Tarabanis, Konstantinos, 2023. "Can Large Language Models Revolutionalize Open Government Data Portals? A Case of Using ChatGPT in statistics.gov.scot," OSF Preprints 9b35z, Center for Open Science.
    10. Claudia Biancotti & Carolina Camassa, 2023. "Loquacity and visible emotion: ChatGPT as a policy advisor," Questioni di Economia e Finanza (Occasional Papers) 814, Bank of Italy, Economic Research and International Relations Area.
    11. Alejandro Lopez-Lira & Yuehua Tang, 2023. "Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models," Papers 2304.07619, arXiv.org, revised Sep 2023.
    12. Hongyang Yang & Xiao-Yang Liu & Christina Dan Wang, 2023. "FinGPT: Open-Source Financial Large Language Models," Papers 2306.06031, arXiv.org.
    13. Eric Fischer & Rebecca McCaughrin & Saketh Prazad & Mark Vandergon, 2023. "Fed Transparency and Policy Expectation Errors: A Text Analysis Approach," Staff Reports 1081, Federal Reserve Bank of New York.
    14. Dat Mai, 2024. "StockGPT: A GenAI Model for Stock Prediction and Trading," Papers 2404.05101, arXiv.org, revised Apr 2024.



    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.