
FinBloom: Knowledge Grounding Large Language Model with Real-time Financial Data

Authors
  • Ankur Sinha
  • Chaitanya Agarwal
  • Pekka Malo

Abstract

Large language models (LLMs) excel at generating human-like responses but often struggle with interactive tasks that require access to real-time information. This limitation poses challenges in finance, where models must access up-to-date information, such as recent news or price movements, to support decision-making. To address this, we introduce Financial Agent, a knowledge-grounding approach for LLMs to handle financial queries using real-time text and tabular data. Our contributions are threefold: First, we develop a Financial Context Dataset of over 50,000 financial queries paired with the required context. Second, we train FinBloom 7B, a custom 7 billion parameter LLM, on 14 million financial news articles from Reuters and Deutsche Presse-Agentur, alongside 12 million Securities and Exchange Commission (SEC) filings. Third, we fine-tune FinBloom 7B using the Financial Context Dataset to serve as a Financial Agent. This agent generates the relevant financial context for a user query, enabling efficient real-time data retrieval to answer it. By reducing latency and eliminating the need for users to manually provide accurate data, our approach significantly enhances the capability of LLMs to handle dynamic financial tasks. It streamlines real-time financial decision-making, algorithmic trading, and related tasks, and is particularly valuable in contexts with high-velocity data flows.
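As a rough illustration of the pipeline the abstract describes (query → agent-generated context specification → real-time retrieval → grounded answer), the sketch below shows one way such a loop could be wired up. It is a minimal sketch only: every name (ContextSpec, generate_context_spec, retrieve_context, the placeholder data sources) is hypothetical and not taken from the paper, and the agent step is a stand-in for the fine-tuned FinBloom 7B model rather than its actual interface.

```python
# Hypothetical sketch of the knowledge-grounding loop described in the abstract.
# None of these names come from the paper; they only illustrate the flow:
# user query -> agent-generated context spec -> real-time retrieval -> grounded answer.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class ContextSpec:
    """What the agent says it needs before answering (tickers, news topics, lookback)."""
    tickers: List[str] = field(default_factory=list)
    news_topics: List[str] = field(default_factory=list)
    lookback_days: int = 1


def generate_context_spec(query: str) -> ContextSpec:
    # Stand-in for the Financial Agent (a fine-tuned LLM in the paper), which would
    # map a free-form query to the real-time data it needs. Toy keyword rules here.
    spec = ContextSpec()
    if "AAPL" in query.upper():
        spec.tickers.append("AAPL")
    if "earnings" in query.lower():
        spec.news_topics.append("earnings")
    return spec


def retrieve_context(spec: ContextSpec,
                     price_source: Callable[[str, int], str],
                     news_source: Callable[[str, int], str]) -> str:
    # Pulls the requested text and tabular data; in practice this would hit
    # market-data and news APIs rather than the toy callables used below.
    parts = [price_source(t, spec.lookback_days) for t in spec.tickers]
    parts += [news_source(topic, spec.lookback_days) for topic in spec.news_topics]
    return "\n".join(parts)


def answer(query: str, context: str) -> str:
    # Stand-in for the grounded LLM call: the retrieved context is prepended to the
    # query so the model answers from current data rather than stale parametric memory.
    return f"[answer to {query!r} grounded in]\n{context}"


if __name__ == "__main__":
    q = "How did AAPL move after its latest earnings?"
    spec = generate_context_spec(q)
    ctx = retrieve_context(
        spec,
        price_source=lambda t, d: f"{t}: last {d}d OHLC ... (placeholder)",
        news_source=lambda topic, d: f"news[{topic}]: ... (placeholder)",
    )
    print(answer(q, ctx))
```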

Suggested Citation

  • Ankur Sinha & Chaitanya Agarwal & Pekka Malo, 2025. "FinBloom: Knowledge Grounding Large Language Model with Real-time Financial Data," Papers 2502.18471, arXiv.org.
  • Handle: RePEc:arx:papers:2502.18471

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2502.18471
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Shijie Wu & Ozan Irsoy & Steven Lu & Vadim Dabravolski & Mark Dredze & Sebastian Gehrmann & Prabhanjan Kambadur & David Rosenberg & Gideon Mann, 2023. "BloombergGPT: A Large Language Model for Finance," Papers 2303.17564, arXiv.org, revised Dec 2023.
    2. Pekka Malo & Ankur Sinha & Pekka Korhonen & Jyrki Wallenius & Pyry Takala, 2014. "Good debt or bad debt: Detecting semantic orientations in economic texts," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 65(4), pages 782-796, April.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Yuqi Nie & Yaxuan Kong & Xiaowen Dong & John M. Mulvey & H. Vincent Poor & Qingsong Wen & Stefan Zohren, 2024. "A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges," Papers 2406.11903, arXiv.org.
    2. Baptiste Lefort & Eric Benhamou & Jean-Jacques Ohana & David Saltiel & Beatrice Guez, 2024. "Optimizing Performance: How Compact Models Match or Exceed GPT's Classification Capabilities through Fine-Tuning," Papers 2409.11408, arXiv.org.
    3. Ching-Nam Hang & Pei-Duo Yu & Roberto Morabito & Chee-Wei Tan, 2024. "Large Language Models Meet Next-Generation Networking Technologies: A Review," Future Internet, MDPI, vol. 16(10), pages 1-29, October.
    4. Kirtac, Kemal & Germano, Guido, 2024. "Sentiment trading with large language models," Finance Research Letters, Elsevier, vol. 62(PB).
    5. Chen, Cathy Yi-Hsuan & Fengler, Matthias R. & Härdle, Wolfgang Karl & Liu, Yanchu, 2022. "Media-expressed tone, option characteristics, and stock return predictability," Journal of Economic Dynamics and Control, Elsevier, vol. 134(C).
    6. Xia Li & Hanghang Zheng & Xiao Chen & Hong Liu & Mao Mao, 2025. "Class-Imbalanced-Aware Adaptive Dataset Distillation for Scalable Pretrained Model on Credit Scoring," Papers 2501.10677, arXiv.org, revised Jan 2025.
    7. Lezhi Li & Ting-Yu Chang & Hai Wang, 2023. "Multimodal Gen-AI for Fundamental Investment Research," Papers 2401.06164, arXiv.org.
    8. Fengbin Zhu & Junfeng Li & Liangming Pan & Wenjie Wang & Fuli Feng & Chao Wang & Huanbo Luan & Tat-Seng Chua, 2025. "FinTMMBench: Benchmarking Temporal-Aware Multi-Modal RAG in Finance," Papers 2503.05185, arXiv.org.
    9. Paola Cerchiello & Giancarlo Nicola, 2018. "Assessing News Contagion in Finance," Econometrics, MDPI, vol. 6(1), pages 1-19, February.
    10. Thanos Konstantinidis & Giorgos Iacovides & Mingxue Xu & Tony G. Constantinides & Danilo Mandic, 2024. "FinLlama: Financial Sentiment Classification for Algorithmic Trading Applications," Papers 2403.12285, arXiv.org.
    11. Travis Adams & Andrea Ajello & Diego Silva & Francisco Vazquez-Grande, 2023. "More than Words: Twitter Chatter and Financial Market Sentiment," Papers 2305.16164, arXiv.org.
    12. Chandan Singh & Armin Askari & Rich Caruana & Jianfeng Gao, 2023. "Augmenting interpretable models with large language models during training," Nature Communications, Nature, vol. 14(1), pages 1-11, December.
    13. Xiao-Yang Liu & Guoxuan Wang & Hongyang Yang & Daochen Zha, 2023. "FinGPT: Democratizing Internet-scale Data for Financial Large Language Models," Papers 2307.10485, arXiv.org, revised Nov 2023.
    14. Borchert, Philipp & Coussement, Kristof & De Weerdt, Jochen & De Caigny, Arno, 2024. "Industry-sensitive language modeling for business," European Journal of Operational Research, Elsevier, vol. 315(2), pages 691-702.
    15. Martina Halousková & Štefan Lyócsa, 2025. "Forecasting U.S. equity market volatility with attention and sentiment to the economy," Papers 2503.19767, arXiv.org.
    16. Frank Xing, 2024. "Designing Heterogeneous LLM Agents for Financial Sentiment Analysis," Papers 2401.05799, arXiv.org.
    17. Priyank Sonkiya & Vikas Bajpai & Anukriti Bansal, 2021. "Stock price prediction using BERT and GAN," Papers 2107.09055, arXiv.org.
    18. Duygu Ider & Stefan Lessmann, 2022. "Forecasting Cryptocurrency Returns from Sentiment Signals: An Analysis of BERT Classifiers and Weak Supervision," Papers 2204.05781, arXiv.org, revised Mar 2023.
    19. Moritz Scherrmann, 2023. "Multi-Label Topic Model for Financial Textual Data," Papers 2311.07598, arXiv.org.
    20. Hoyoung Lee & Youngsoo Choi & Yuhee Kwon, 2024. "Quantifying Qualitative Insights: Leveraging LLMs to Market Predict," Papers 2411.08404, arXiv.org.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2502.18471. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.