
Deficiency of Large Language Models in Finance: An Empirical Examination of Hallucination

Author

Listed:
  • Haoqiang Kang
  • Xiao-Yang Liu

Abstract

Hallucination is recognized as a fundamental deficiency of large language models (LLMs), especially when they are applied to fields such as finance, education, and law. Despite growing concern, there has been little empirical investigation of the problem. In this paper, we provide an empirical examination of LLMs' hallucination behaviors in financial tasks. First, we investigate LLMs' ability to explain financial concepts and terminology. Second, we assess their capacity to query historical stock prices. Third, to alleviate hallucination, we evaluate the efficacy of four practical methods: few-shot learning, Decoding by Contrasting Layers (DoLa), Retrieval-Augmented Generation (RAG), and prompt-based tool learning, in which the model is prompted to generate a query command for an external function rather than answer from memory. Finally, our major finding is that off-the-shelf LLMs exhibit serious hallucination behaviors in financial tasks; there is therefore an urgent need for research efforts to mitigate LLM hallucination.
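
The prompt-based tool-learning mitigation described in the abstract can be illustrated with a minimal sketch (not the authors' code): instead of letting the model recall a historical price from its parametric memory, a few-shot prompt asks it to emit a structured query command, which a trusted data source then executes. The `call_llm` stand-in, the `GET_PRICE(ticker, date)` command format, and the toy price value are assumptions introduced here for illustration only.

```python
from typing import Callable

# Few-shot prompt that asks the model to emit a query command instead of a price.
FEW_SHOT_TOOL_PROMPT = """You are a financial assistant. Do not answer price
questions from memory. Instead, emit a single query command of the form
GET_PRICE(<ticker>, <YYYY-MM-DD>).

Q: What was MSFT's closing price on 2022-06-01?
A: GET_PRICE(MSFT, 2022-06-01)

Q: {question}
A:"""


def answer_with_tool(question: str,
                     call_llm: Callable[[str], str],
                     price_lookup: Callable[[str, str], float]) -> str:
    """Ask the LLM for a query command, execute it against a trusted price
    source, and ground the final answer in the retrieved value."""
    command = call_llm(FEW_SHOT_TOOL_PROMPT.format(question=question)).strip()

    # Accept only a well-formed GET_PRICE(ticker, date) command; anything else
    # is refused rather than risking a free-generated (hallucinated) price.
    if not (command.startswith("GET_PRICE(") and command.endswith(")")):
        return "Could not ground the answer: the model did not emit a query command."
    ticker, date = (part.strip() for part in command[len("GET_PRICE("):-1].split(",", 1))

    price = price_lookup(ticker, date)  # e.g. a market-data API or database
    return f"{ticker} closed at {price:.2f} on {date} (retrieved, not recalled)."


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs without any external service;
    # the price below is illustrative, not real market data.
    fake_llm = lambda prompt: "GET_PRICE(AAPL, 2023-01-03)"
    fake_prices = {("AAPL", "2023-01-03"): 123.45}
    print(answer_with_tool("What was AAPL's closing price on 2023-01-03?",
                           fake_llm,
                           lambda t, d: fake_prices[(t, d)]))
```

The key design choice in this sketch is that the final answer is grounded only in the retrieved value: if the model does not emit a well-formed command, the system refuses rather than letting the model free-generate a price.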

Suggested Citation

  • Haoqiang Kang & Xiao-Yang Liu, 2023. "Deficiency of Large Language Models in Finance: An Empirical Examination of Hallucination," Papers 2311.15548, arXiv.org.
  • Handle: RePEc:arx:papers:2311.15548

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2311.15548
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Shijie Wu & Ozan Irsoy & Steven Lu & Vadim Dabravolski & Mark Dredze & Sebastian Gehrmann & Prabhanjan Kambadur & David Rosenberg & Gideon Mann, 2023. "BloombergGPT: A Large Language Model for Finance," Papers 2303.17564, arXiv.org, revised Dec 2023.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Lezhi Li & Ting-Yu Chang & Hai Wang, 2023. "Multimodal Gen-AI for Fundamental Investment Research," Papers 2401.06164, arXiv.org.
    2. Ankur Sinha & Chaitanya Agarwal & Pekka Malo, 2025. "FinBloom: Knowledge Grounding Large Language Model with Real-time Financial Data," Papers 2502.18471, arXiv.org.
    3. Hoyoung Lee & Youngsoo Choi & Yuhee Kwon, 2024. "Quantifying Qualitative Insights: Leveraging LLMs to Market Predict," Papers 2411.08404, arXiv.org.
    4. Zhaofeng Zhang & Banghao Chen & Shengxin Zhu & Nicolas Langrené, 2024. "Quantformer: from attention to profit with a quantitative transformer trading strategy," Papers 2404.00424, arXiv.org, revised Oct 2024.
    5. Wentao Zhang & Lingxuan Zhao & Haochong Xia & Shuo Sun & Jiaze Sun & Molei Qin & Xinyi Li & Yuqing Zhao & Yilei Zhao & Xinyu Cai & Longtao Zheng & Xinrun Wang & Bo An, 2024. "A Multimodal Foundation Agent for Financial Trading: Tool-Augmented, Diversified, and Generalist," Papers 2402.18485, arXiv.org, revised Jun 2024.
    6. Yinheng Li & Shaofei Wang & Han Ding & Hang Chen, 2023. "Large Language Models in Finance: A Survey," Papers 2311.10723, arXiv.org, revised Jul 2024.
    7. Lars Hornuf & David J. Streich & Niklas Töllich, 2025. "Making GenAI Smarter: Evidence from a Portfolio Allocation Experiment," CESifo Working Paper Series 11862, CESifo.
    8. Gupta, Abhijit, 2025. "Decoding Futures Price Dynamics: A Regularized Sparse Autoencoder for Interpretable Multi-Horizon Forecasting and Factor Discovery," OSF Preprints 4rzky_v1, Center for Open Science.
    9. Dong, Mengming Michael & Stratopoulos, Theophanis C. & Wang, Victor Xiaoqi, 2024. "A scoping review of ChatGPT research in accounting and finance," International Journal of Accounting Information Systems, Elsevier, vol. 55(C).
    10. Vasant Dhar & João Sedoc, 2025. "DBOT: Artificial Intelligence for Systematic Long-Term Investing," Papers 2504.05639, arXiv.org.
    11. Adria Pop & Jan Sporer, 2024. "The Structure of Financial Equity Research Reports -- Identification of the Most Frequently Asked Questions in Financial Analyst Reports to Automate Equity Research Using Llama 3 and GPT-4," Papers 2407.18327, arXiv.org, revised Jun 2025.
    12. Yuqi Nie & Yaxuan Kong & Xiaowen Dong & John M. Mulvey & H. Vincent Poor & Qingsong Wen & Stefan Zohren, 2024. "A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges," Papers 2406.11903, arXiv.org.
    13. Masanori Hirano & Kentaro Imajo, 2024. "Construction of Domain-specified Japanese Large Language Model for Finance through Continual Pre-training," Papers 2404.10555, arXiv.org.
    14. Van-Duc Le, 2024. "Auto-Generating Earnings Report Analysis via a Financial-Augmented LLM," Papers 2412.08179, arXiv.org.
    15. Baptiste Lefort & Eric Benhamou & Jean-Jacques Ohana & David Saltiel & Beatrice Guez, 2024. "Optimizing Performance: How Compact Models Match or Exceed GPT's Classification Capabilities through Fine-Tuning," Papers 2409.11408, arXiv.org.
    16. Zhiyu Cao & Zachary Feinstein, 2024. "Large Language Model in Financial Regulatory Interpretation," Papers 2405.06808, arXiv.org, revised Jul 2024.
    17. Alejandro Lopez-Lira & Yuehua Tang, 2023. "Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models," Papers 2304.07619, arXiv.org, revised Sep 2024.
    18. Gu, Hanchi & Schreyer, Marco & Moffitt, Kevin & Vasarhelyi, Miklos, 2024. "Artificial intelligence co-piloted auditing," International Journal of Accounting Information Systems, Elsevier, vol. 54(C).
    19. Hongyang Yang & Xiao-Yang Liu & Christina Dan Wang, 2023. "FinGPT: Open-Source Financial Large Language Models," Papers 2306.06031, arXiv.org.
    20. Bokai Cao & Saizhuo Wang & Xinyi Lin & Xiaojun Wu & Haohan Zhang & Lionel M. Ni & Jian Guo, 2025. "From Deep Learning to LLMs: A survey of AI in Quantitative Investment," Papers 2503.21422, arXiv.org.

