Printed from https://ideas.repec.org/p/arx/papers/2506.04290.html

Interpretable LLMs for Credit Risk: A Systematic Review and Taxonomy

Authors
  • Muhammed Golec
  • Maha AlabdulJalil

Abstract

Large Language Models (LLMs), which have advanced rapidly in recent years, enable credit risk assessment through the analysis of financial texts such as analyst reports and corporate disclosures. This paper presents the first systematic review and taxonomy of LLM-based approaches to credit risk estimation. Following the PRISMA search strategy, we selected 60 relevant papers published between 2020 and 2025, identified the principal model architectures, and examined the data used in scenarios such as credit default prediction and risk analysis. Because interpretability is the paper's main focus, we classify explainability mechanisms for LLM-based credit models, including chain-of-thought prompting and natural-language justifications. The taxonomy organizes the literature under four main headings: model architectures, data types, explainability mechanisms, and application areas. Building on this analysis, we highlight the main future trends and research gaps for LLM-based credit scoring systems. The paper is intended as a reference for researchers in artificial intelligence and finance.

Suggested Citation

  • Muhammed Golec & Maha AlabdulJalil, 2025. "Interpretable LLMs for Credit Risk: A Systematic Review and Taxonomy," Papers 2506.04290, arXiv.org, revised Jun 2025.
  • Handle: RePEc:arx:papers:2506.04290

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2506.04290
    File Function: Latest version
    Download Restriction: no


    Most related items

These are the items that most often cite the same works as this paper, and are most often cited by the same works that cite it.
    1. Xuewen Han & Neng Wang & Shangkun Che & Hongyang Yang & Kunpeng Zhang & Sean Xin Xu, 2024. "Enhancing Investment Analysis: Optimizing AI-Agent Collaboration in Financial Research," Papers 2411.04788, arXiv.org.
    2. Arnav Grover, 2025. "FinRLlama: A Solution to LLM-Engineered Signals Challenge at FinRL Contest 2024," Papers 2502.01992, arXiv.org.
    3. Jizhou Wang & Xiaodan Fang & Lei Huang & Yongfeng Huang, 2025. "TaxAgent: How Large Language Model Designs Fiscal Policy," Papers 2506.02838, arXiv.org.
    4. Tianyu Zhou & Pinqiao Wang & Yilin Wu & Hongyang Yang, 2024. "FinRobot: AI Agent for Equity Research and Valuation with Large Language Models," Papers 2411.08804, arXiv.org.
    5. Shanyan Lai, 2025. "Asset Pricing in Pre-trained Transformer," Papers 2505.01575, arXiv.org, revised May 2025.
    6. Yoontae Hwang & Yaxuan Kong & Stefan Zohren & Yongjae Lee, 2025. "Decision-informed Neural Networks with Large Language Model Integration for Portfolio Optimization," Papers 2502.00828, arXiv.org.
    7. Qingwen Liang & Matias Carrasco Kind, 2025. "How do managers' non-responses during earnings calls affect analyst forecasts," Papers 2505.18419, arXiv.org.
    8. Shijie Han & Jingshu Zhang & Yiqing Shen & Kaiyuan Yan & Hongguang Li, 2025. "FinSphere, a Real-Time Stock Analysis Agent Powered by Instruction-Tuned LLMs and Domain Tools," Papers 2501.12399, arXiv.org, revised Jul 2025.
    9. Joel R. Bock, 2024. "Generating long-horizon stock "buy" signals with a neural language model," Papers 2410.18988, arXiv.org.
    10. Felix Drinkall & Janet B. Pierrehumbert & Stefan Zohren, 2024. "Forecasting Credit Ratings: A Case Study where Traditional Methods Outperform Generative LLMs," Papers 2407.17624, arXiv.org, revised Jan 2025.
    11. Alejandro Lopez-Lira & Jihoon Kwon & Sangwoon Yoon & Jy-yong Sohn & Chanyeol Choi, 2025. "Bridging Language Models and Financial Analysis," Papers 2503.22693, arXiv.org.
    12. Zonghan Wu & Junlin Wang & Congyuan Zou & Chenhan Wang & Yilei Shao, 2025. "Towards Competent AI for Fundamental Analysis in Finance: A Benchmark Dataset and Evaluation," Papers 2506.07315, arXiv.org.
    13. Yuzhe Yang & Yifei Zhang & Yan Hu & Yilin Guo & Ruoli Gan & Yueru He & Mingcong Lei & Xiao Zhang & Haining Wang & Qianqian Xie & Jimin Huang & Honghai Yu & Benyou Wang, 2024. "UCFE: A User-Centric Financial Expertise Benchmark for Large Language Models," Papers 2410.14059, arXiv.org, revised Feb 2025.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2506.04290. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item about which we are uncertain.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations awaiting confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.