Printed from https://ideas.repec.org/p/arx/papers/2310.04793.html

FinGPT: Instruction Tuning Benchmark for Open-Source Large Language Models in Financial Datasets

Authors

Listed:
  • Neng Wang
  • Hongyang Yang
  • Christina Dan Wang

Abstract

In the rapidly expanding field of Natural Language Processing (NLP), the potential of GPT-based models for the financial sector is increasingly evident. However, integrating these models with financial datasets poses challenges, notably in assessing their suitability and relevance. This paper introduces an approach anchored in the Instruction Tuning paradigm for open-source large language models, specifically adapted to financial contexts. Through this methodology, we capitalize on the interoperability of open-source models, ensuring seamless and transparent integration. We begin by explaining the Instruction Tuning paradigm and its effectiveness for immediate integration. The paper then presents a benchmarking scheme designed for end-to-end training and testing, following a cost-effective progression. First, we assess basic competencies on fundamental tasks, such as Named Entity Recognition (NER) and sentiment analysis, to enhance specialization. Next, we evaluate a comprehensive model that performs multi-task operations by combining all instruction tunings to examine versatility. Finally, we explore zero-shot capabilities by holding out unseen tasks and incorporating novel datasets to gauge adaptability to new domains. Such a paradigm reinforces the principles of openness and reproducibility, laying a robust foundation for future research on open-source financial large language models (FinLLMs).
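The instruction-tuning setup described in the abstract can be illustrated with a minimal sketch: wrapping a labeled financial record (here, sentiment analysis) as an (instruction, input, output) triple, the prompt format commonly used when instruction-tuning open-source LLMs. The field names, prompt wording, and example sentence below are illustrative assumptions, not the paper's exact templates.

```python
# Illustrative sketch (not the paper's exact pipeline): convert a labeled
# financial sentence into an instruction-tuning sample. The prompt text
# and field names are assumptions, chosen to mirror the common
# (instruction, input, output) format for instruction tuning.

def to_instruction_sample(sentence: str, label: str) -> dict:
    """Wrap a labeled financial sentence as an instruction-tuning triple."""
    return {
        "instruction": (
            "What is the sentiment of this financial news? "
            "Please choose an answer from {negative/neutral/positive}."
        ),
        "input": sentence,
        "output": label,
    }

sample = to_instruction_sample(
    "The company's quarterly revenue rose 12% year over year.",
    "positive",
)
print(sample["output"])  # positive
```

A corpus of such triples can then be used both for per-task fine-tuning (the NER and sentiment stages) and, concatenated across tasks, for the multi-task model the abstract describes.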

Suggested Citation

  • Neng Wang & Hongyang Yang & Christina Dan Wang, 2023. "FinGPT: Instruction Tuning Benchmark for Open-Source Large Language Models in Financial Datasets," Papers 2310.04793, arXiv.org, revised Nov 2023.
  • Handle: RePEc:arx:papers:2310.04793

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2310.04793
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Pekka Malo & Ankur Sinha & Pekka Korhonen & Jyrki Wallenius & Pyry Takala, 2014. "Good debt or bad debt: Detecting semantic orientations in economic texts," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 65(4), pages 782-796, April.
    2. Hongyang Yang & Xiao-Yang Liu & Christina Dan Wang, 2023. "FinGPT: Open-Source Financial Large Language Models," Papers 2306.06031, arXiv.org.

    Citations

    Citations are extracted by the CitEc Project.

    Cited by:

    1. Ali Elahi & Fatemeh Taghvaei, 2024. "Combining Financial Data and News Articles for Stock Price Movement Prediction Using Large Language Models," Papers 2411.01368, arXiv.org.
    2. Qilong Wu & Xiaoneng Xiang & Hejia Huang & Xuan Wang & Yeo Wei Jie & Ranjan Satapathy & Ricardo Shirota Filho & Bharadwaj Veeravalli, 2024. "SusGen-GPT: A Data-Centric LLM for Financial NLP and Sustainability Report Generation," Papers 2412.10906, arXiv.org.
    3. Yixuan Liang & Yuncong Liu & Boyu Zhang & Christina Dan Wang & Hongyang Yang, 2024. "FinGPT: Enhancing Sentiment-Based Stock Movement Prediction with Dissemination-Aware and Context-Enriched LLMs," Papers 2412.10823, arXiv.org.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Kirtac, Kemal & Germano, Guido, 2024. "Sentiment trading with large language models," Finance Research Letters, Elsevier, vol. 62(PB).
    2. Chen, Cathy Yi-Hsuan & Fengler, Matthias R. & Härdle, Wolfgang Karl & Liu, Yanchu, 2022. "Media-expressed tone, option characteristics, and stock return predictability," Journal of Economic Dynamics and Control, Elsevier, vol. 134(C).
    3. Carolina Camassa, 2023. "Legal NLP Meets MiCAR: Advancing the Analysis of Crypto White Papers," Papers 2310.10333, arXiv.org, revised Oct 2023.
    4. Paola Cerchiello & Giancarlo Nicola, 2018. "Assessing News Contagion in Finance," Econometrics, MDPI, vol. 6(1), pages 1-19, February.
    5. Chandan Singh & Armin Askari & Rich Caruana & Jianfeng Gao, 2023. "Augmenting interpretable models with large language models during training," Nature Communications, Nature, vol. 14(1), pages 1-11, December.
    6. Borchert, Philipp & Coussement, Kristof & De Weerdt, Jochen & De Caigny, Arno, 2024. "Industry-sensitive language modeling for business," European Journal of Operational Research, Elsevier, vol. 315(2), pages 691-702.
    7. Priyank Sonkiya & Vikas Bajpai & Anukriti Bansal, 2021. "Stock price prediction using BERT and GAN," Papers 2107.09055, arXiv.org.
    8. Duygu Ider & Stefan Lessmann, 2022. "Forecasting Cryptocurrency Returns from Sentiment Signals: An Analysis of BERT Classifiers and Weak Supervision," Papers 2204.05781, arXiv.org, revised Mar 2023.
    9. Darko B. Vuković & Senanu Dekpo-Adza & Stefana Matović, 2025. "AI integration in financial services: a systematic review of trends and regulatory challenges," Palgrave Communications, Palgrave Macmillan, vol. 12(1), pages 1-29, December.
    10. Ankur Sinha & Chaitanya Agarwal & Pekka Malo, 2025. "FinBloom: Knowledge Grounding Large Language Model with Real-time Financial Data," Papers 2502.18471, arXiv.org.
    11. Hoyoung Lee & Youngsoo Choi & Yuhee Kwon, 2024. "Quantifying Qualitative Insights: Leveraging LLMs to Market Predict," Papers 2411.08404, arXiv.org.
    12. Xinghong Fu & Masanori Hirano & Kentaro Imajo, 2024. "Financial Fine-tuning a Large Time Series Model," Papers 2412.09880, arXiv.org.
    13. Julian Junyan Wang & Victor Xiaoqi Wang, 2025. "Assessing Consistency and Reproducibility in the Outputs of Large Language Models: Evidence Across Diverse Finance and Accounting Tasks," Papers 2503.16974, arXiv.org, revised Mar 2025.
    14. Hu, Yi & Kim, Hyeonjin & Ye, Kai & Lu, Ning, 2025. "Applying fine-tuned LLMs for reducing data needs in load profile analysis," Applied Energy, Elsevier, vol. 377(PC).
    15. Tao Ren & Ruihan Zhou & Jinyang Jiang & Jiafeng Liang & Qinghao Wang & Yijie Peng, 2024. "RiskMiner: Discovering Formulaic Alphas via Risk Seeking Monte Carlo Tree Search," Papers 2402.07080, arXiv.org, revised Feb 2024.
    16. Andrea Ajello & Diego Silva & Travis Adams & Francisco Vazquez-Grande, 2023. "More than Words: Twitter Chatter and Financial Market Sentiment," Finance and Economics Discussion Series 2023-034, Board of Governors of the Federal Reserve System (U.S.).
    17. Wentao Zhang & Lingxuan Zhao & Haochong Xia & Shuo Sun & Jiaze Sun & Molei Qin & Xinyi Li & Yuqing Zhao & Yilei Zhao & Xinyu Cai & Longtao Zheng & Xinrun Wang & Bo An, 2024. "A Multimodal Foundation Agent for Financial Trading: Tool-Augmented, Diversified, and Generalist," Papers 2402.18485, arXiv.org, revised Jun 2024.
    18. Yinheng Li & Shaofei Wang & Han Ding & Hang Chen, 2023. "Large Language Models in Finance: A Survey," Papers 2311.10723, arXiv.org, revised Jul 2024.
    19. Ankur Sinha & Satishwar Kedas & Rishu Kumar & Pekka Malo, 2022. "SEntFiN 1.0: Entity‐aware sentiment analysis for financial news," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 73(9), pages 1314-1335, September.
    20. Tingsong Jiang & Qingyun Zeng, 2023. "Financial sentiment analysis using FinBERT with application in predicting stock movement," Papers 2306.02136, arXiv.org, revised Jun 2025.

    More about this item


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2310.04793. See general information about how to correct material in RePEc.


    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact the arXiv administrators. General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.