Author
Listed:
- Hossein Habibinejad
(CERAG - Centre d'études et de recherches appliquées à la gestion - UGA - Université Grenoble Alpes)
- Morteza Alaeddini
(ICN Business School, CEREFIGE - Centre Européen de Recherche en Economie Financière et Gestion des Entreprises - UL - Université de Lorraine)
- Paul Reaidy
(CERAG - Centre d'études et de recherches appliquées à la gestion - UGA - Université Grenoble Alpes)
Abstract
As artificial intelligence (AI) becomes increasingly central to environmental, social, and governance (ESG) risk assessment, concerns about model opacity and stakeholder trust have come to the forefront. Traditional ESG scoring systems face limitations such as inconsistent data, lack of transparency, and potential bias, issues that are often exacerbated by complex, black-box AI models. This paper examines the role of explainable AI (XAI) and responsible AI (RAI) in enhancing the credibility and ethical alignment of ESG assessments. A comprehensive review of the literature highlights critical research gaps, including the absence of standardised explainability metrics, minimal empirical validation in real-world contexts, and the neglect of cultural variability in trust formation. To address these gaps, the paper introduces a theoretical framework that integrates trust determinants, RAI principles, and XAI techniques. The model also incorporates human-centric moderators and feedback loops to ensure adaptability across stakeholder groups. By linking interpretability, ethical safeguards, and user-centred design, the framework offers a path toward more trustworthy and transparent ESG systems. Ultimately, this study contributes to the development of AI-powered tools that support responsible decision-making in sustainable finance while reinforcing stakeholder confidence and accountability.
Suggested Citation
Hossein Habibinejad & Morteza Alaeddini & Paul Reaidy, 2025.
"Toward Trustworthy ESG Risk Assessment through XAI: a State-of-the-Art Review,"
Post-Print
hal-05356025, HAL.
Handle:
RePEc:hal:journl:hal-05356025
DOI: 10.1504/IJGAIB.2025.10073995
Download full text from publisher
To our knowledge, this item is not available for download. To find whether it is available, there are three options:
1. Check below whether another version of this item is available online.
2. Check on the provider's web page whether it is in fact available.
3. Perform a search for a similarly titled item that would be available.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:hal:journl:hal-05356025. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: CCSD (email available below). General contact details of provider: https://hal.archives-ouvertes.fr/ .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.