Author
Listed:
- José Rômulo de Castro Vieira
(Department of Business Management, Faculty of Economics, Business Management, Accounting and Public Policy Management (FACE), University of Brasília, Brasília 70910-900, DF, Brazil)
- Flavio Barboza
(Faculty of Management and Business, Federal University of Uberlândia, Uberlândia 38408-100, MG, Brazil)
- Daniel Cajueiro
(Department of Economics, Faculty of Economics, Business Management, Accounting and Public Policy Management (FACE), University of Brasília, Brasília 70910-900, DF, Brazil)
- Herbert Kimura
(Department of Business Management, Faculty of Economics, Business Management, Accounting and Public Policy Management (FACE), University of Brasília, Brasília 70910-900, DF, Brazil)
Abstract
The increasing adoption of artificial intelligence algorithms is redefining decision-making across various industries. In the financial sector, where automated credit granting has undergone profound changes, this transformation raises concerns about biases perpetuated or introduced by AI systems. This study investigates the methods used to identify and mitigate biases in AI models applied to credit granting. We conducted a systematic literature review using the IEEE, Scopus, Web of Science, and Science Direct databases, covering the period from 1 January 2013 to 1 October 2024. From the 414 identified articles, 34 were selected for detailed analysis. Most studies are empirical and quantitative, focusing on fairness in outcomes and biases present in datasets. Preprocessing techniques dominated as the approach for bias mitigation, often relying on public academic datasets. Gender and race were the most studied sensitive attributes, with statistical parity being the most commonly used fairness metric. The findings reveal a maturing research landscape that prioritizes fairness in model outcomes and the mitigation of biases embedded in historical data. However, only a quarter of the papers report more than one fairness metric, limiting comparability across approaches. The literature remains largely focused on a narrow set of sensitive attributes, with little attention to intersectionality or alternative sources of bias. Furthermore, no study employed causal inference techniques to identify proxy discrimination. Despite some promising results—where fairness gains exceed 30% with minimal accuracy loss—significant methodological gaps persist, including the lack of standardized metrics, overreliance on legacy data, and insufficient transparency in model pipelines. 
Future work should prioritize developing advanced bias mitigation methods, exploring a broader range of sensitive attributes, standardizing fairness metrics, improving model explainability, reducing computational complexity, enhancing synthetic data generation, and addressing the legal and ethical challenges of algorithmic decision-making.
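The abstract identifies statistical parity as the fairness metric most commonly used in the reviewed studies. As a minimal sketch of what that metric measures in a credit-granting setting (function and variable names here are illustrative, not drawn from the paper), the statistical parity difference compares approval rates across a binary sensitive attribute:

```python
def statistical_parity_difference(y_pred, group):
    """P(approved | unprivileged) - P(approved | privileged).

    y_pred: 0/1 credit decisions (1 = approved)
    group:  0/1 sensitive-attribute flags (1 = privileged group)
    A value of 0 indicates statistical parity; negative values mean
    the unprivileged group is approved less often.
    """
    priv = [y for y, g in zip(y_pred, group) if g == 1]
    unpriv = [y for y, g in zip(y_pred, group) if g == 0]
    return sum(unpriv) / len(unpriv) - sum(priv) / len(priv)

# Example: 75% approval for the privileged group vs. 50% for the
# unprivileged group gives a disparity of -0.25.
spd = statistical_parity_difference([1, 0, 1, 1, 0, 1, 0, 1],
                                    [1, 1, 1, 1, 0, 0, 0, 0])
```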
Suggested Citation
José Rômulo de Castro Vieira & Flavio Barboza & Daniel Cajueiro & Herbert Kimura, 2025.
"Towards Fair AI: Mitigating Bias in Credit Decisions—A Systematic Literature Review,"
JRFM, MDPI, vol. 18(5), pages 1-30, April.
Handle:
RePEc:gam:jjrfmx:v:18:y:2025:i:5:p:228-:d:1641302
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jjrfmx:v:18:y:2025:i:5:p:228-:d:1641302. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.