Abstract
Electronic marketplaces increasingly deploy AI for critical decisions such as onboarding, pricing, approvals, and fraud detection. Effective explainable AI (XAI) must satisfy diverse stakeholders while maintaining technical accuracy. This research applies qualitative methods to 24 semi-structured interviews across five specialized groups: retail users, advisors, developers, risk officers, and regulators from nine European countries. The investigation examined lending and buy-now-pay-later platforms through two scenarios, credit-limit changes and fraud-flag reviews, using validated XAI evaluation techniques. Three patterns emerged. First, process-oriented explanations that answer "why this case" enhance fairness perceptions and clarify next steps. Second, progressive disclosure frameworks, which begin with concise text and then offer detailed visualizations, optimize comprehension without overload. Third, raw confidence metrics create uncertainty, whereas counterfactual examples effectively demonstrate which factors would alter a decision. These patterns, derived through systematic thematic analysis, challenge conventional approaches that prioritize algorithmic transparency over stakeholder comprehension. The study establishes six design principles for interpretable AI and proposes a multi-stakeholder evaluation framework connecting explanation tools to technical accuracy and human-centered outcomes. This responsible-AI approach provides role-appropriate explanations that carry specialized meaning for each stakeholder group. The accompanying governance toolkit comprises operational performance indicators and audit-compliant documentation protocols, offering practical implementation guidance for platform operators and regulators navigating XAI deployment in financial marketplaces.
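The abstract's third pattern, raw confidence scores versus counterfactual explanations, can be illustrated with a minimal sketch. This is a hypothetical example, not code from the paper: the feature names (`monthly_income`, `months_on_platform`), thresholds, and decision logic are invented for illustration, assuming a simple threshold-based credit-limit decision.

```python
# Hypothetical illustration: contrasting a bare model confidence score
# with a counterfactual explanation for a credit-limit decision.
# All feature names and thresholds are invented for this sketch.

def raw_confidence(score: float) -> str:
    # A raw probability, which the interviews found creates uncertainty.
    return f"Application declined (model confidence: {score:.0%})."

def counterfactual(features: dict, thresholds: dict) -> str:
    # Name the changes that would flip the decision: for each feature
    # below its (assumed) approval threshold, state the required value.
    changes = [
        f"{name} from {features[name]} to at least {needed}"
        for name, needed in thresholds.items()
        if features[name] < needed
    ]
    if not changes:
        return "No change needed: the application would be approved."
    return "The decision would change if you raised " + "; ".join(changes) + "."

applicant = {"monthly_income": 1800, "months_on_platform": 4}
limits = {"monthly_income": 2000, "months_on_platform": 6}

print(raw_confidence(0.38))
print(counterfactual(applicant, limits))
```

The counterfactual message names concrete, decision-altering factors the user can act on, whereas the confidence score alone offers no path forward, which mirrors the pattern the interviews surfaced.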
Suggested Citation
Christoph Kreiterling, 2025.
"Design Principles for Explainable AI in Finance: A Multi- Stakeholder Framework,"
Working Papers
hal-05233140, HAL.
Handle:
RePEc:hal:wpaper:hal-05233140
DOI: 10.13140/RG.2.2.13686.46408
Note: View the original document on HAL open archive server: https://hal.science/hal-05233140v1