Printed from https://ideas.repec.org/a/eme/rafpps/raf-01-2025-0006.html

Artificial intelligence bias auditing – current approaches, challenges and lessons from practice

Author

Listed:
  • Sabina Lacmanovic
  • Marinko Skare

Abstract

Purpose - This study aims to explore current approaches, challenges and practical lessons in auditing artificial intelligence (AI) systems for bias, focusing on legal compliance audits in the USA and the European Union (EU). The study emphasizes the need for standardized methodologies to ensure trustworthy AI systems that align with ethical and regulatory expectations.

Design/methodology/approach - A qualitative analysis compared bias audit practices, including US bias audit report summaries under New York City's Local Law 144 and conformity assessments (CAs) required by the EU AI Act. Data were gathered from publicly available reports and compliance guidelines to identify key challenges and lessons.

Findings - The findings revealed that AI systems are susceptible to various biases stemming from data, algorithms and human oversight. Although valuable, legal compliance audits lack standardization, leading to inconsistent reporting practices. The EU's risk-based CA approach offers a comprehensive framework; however, its effectiveness depends on the development of practical standards and their consistent application.

Research limitations/implications - This study is limited by the early implementation stage of regulatory frameworks, particularly the EU AI Act, and by restricted access to comprehensive audit reports. The geographic focus on US and EU jurisdictions may limit the generalizability of the findings. Data availability constraints and the lack of standardized reporting frameworks affect the comparative analysis. Future research should focus on longitudinal studies of audit effectiveness, the development of standardized methodologies for intersectional bias assessment and the investigation of automated audit tools that can adapt to emerging AI technologies while maintaining practical feasibility across different organizational contexts.

Practical implications - This research underscores the necessity of adopting socio-technical perspectives and standardized methodologies in AI auditing. It provides actionable insights for firms, regulators and auditors on implementing robust governance and risk assessment practices to mitigate AI biases.

Social implications - Effective AI bias auditing practices ensure algorithmic fairness and prevent discriminatory outcomes in critical domains such as employment, health care and financial services. The findings emphasize the need for enhanced stakeholder engagement and community representation in audit processes. Implementing robust auditing frameworks can help close socioeconomic gaps by identifying and mitigating biases that disproportionately affect marginalized groups. This research contributes to developing equitable AI systems that respect diversity and promote social justice while maintaining technological advancement.

Originality/value - This study contributes to the discourse on AI governance by comparing two regulatory approaches (bias audits and CAs) and offering practical lessons from current implementation. It highlights the critical role of standardization in advancing trustworthy and ethical AI systems in finance and accounting contexts.
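The Local Law 144 bias audit summaries the study draws on center on a simple piece of arithmetic: each demographic category's selection rate is divided by the selection rate of the most-selected category to give an impact ratio. A minimal sketch of that calculation, with invented category names and counts (the law itself prescribes categories and reporting details not modeled here):

```python
def impact_ratios(selected, total):
    """Selection rate per category, divided by the highest selection rate.

    selected: dict mapping category -> number of candidates selected
    total:    dict mapping category -> number of candidates assessed
    Returns a dict mapping category -> impact ratio (the most-selected
    category has ratio 1.0; lower values indicate possible adverse impact).
    """
    rates = {g: selected[g] / total[g] for g in selected}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical audit data for two categories.
selected = {"group_a": 120, "group_b": 75}
total = {"group_a": 200, "group_b": 180}
print(impact_ratios(selected, total))
```

Here group_a's selection rate is 0.60 and group_b's is about 0.42, so group_b's impact ratio falls below the four-fifths (0.8) threshold often used as a rule of thumb for flagging adverse impact, illustrating the kind of finding such audits report.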

Suggested Citation

  • Sabina Lacmanovic & Marinko Skare, 2025. "Artificial intelligence bias auditing – current approaches, challenges and lessons from practice," Review of Accounting and Finance, Emerald Group Publishing Limited, vol. 24(3), pages 375-400, March.
  • Handle: RePEc:eme:rafpps:raf-01-2025-0006
    DOI: 10.1108/RAF-01-2025-0006

    Download full text from publisher

    File URL: https://www.emerald.com/insight/content/doi/10.1108/RAF-01-2025-0006/full/html?utm_source=repec&utm_medium=feed&utm_campaign=repec
    Download Restriction: Access to full text is restricted to subscribers

    File URL: https://www.emerald.com/insight/content/doi/10.1108/RAF-01-2025-0006/full/pdf?utm_source=repec&utm_medium=feed&utm_campaign=repec
    Download Restriction: Access to full text is restricted to subscribers

    File URL: https://libkey.io/10.1108/RAF-01-2025-0006?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eme:rafpps:raf-01-2025-0006. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Emerald Support (email available below).

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.