Printed from https://ideas.repec.org/a/gam/jjopen/v5y2022i1p10-138d752840.html

Metrics, Explainability and the European AI Act Proposal

Author

Listed:
  • Francesco Sovrano

    (Department of Computer Science and Engineering (DISI), Università di Bologna, 40126 Bologna, Italy)

  • Salvatore Sapienza

    (CIRSFID—ALMA AI, Università di Bologna, 40126 Bologna, Italy)

  • Monica Palmirani

    (CIRSFID—ALMA AI, Università di Bologna, 40126 Bologna, Italy)

  • Fabio Vitali

    (Department of Computer Science and Engineering (DISI), Università di Bologna, 40126 Bologna, Italy)

Abstract

On 21 April 2021, the European Commission proposed the first legal framework on Artificial Intelligence (AI) to address the risks posed by this emerging method of computation. The Commission proposed a Regulation known as the AI Act. The proposed AI Act covers not only machine learning but also expert systems and statistical models that have long been in place. Under the proposed AI Act, new obligations are set to ensure transparency, lawfulness, and fairness. Their goal is to establish mechanisms that ensure quality at launch and throughout the whole life cycle of AI-based systems, thus providing legal certainty that encourages innovation and investment in AI systems while preserving fundamental rights and values. A standardisation process is ongoing: several entities (e.g., ISO) and scholars are discussing how to design systems that comply with the forthcoming Act, and explainability metrics play a significant role in this discussion. Specifically, the AI Act sets new minimum requirements of explicability (transparency and explainability) for the AI systems labelled as “high-risk” in Annex III. These requirements call for technical explanations that convey the right amount of information in a meaningful way. This paper investigates how such technical explanations can be deemed to meet the minimum requirements set by the law and expected by society. To answer this question, we analyse the AI Act to understand (1) what specific explicability obligations are set and who shall comply with them, and (2) whether any metric for measuring the degree of compliance of such explanatory documentation could be designed. Moreover, by envisaging the legal (or ethical) requirements that such a metric should possess, we discuss how to implement them in practice.
More precisely, drawing inspiration from recent advancements in the theory of explanations, our analysis proposes that metrics measuring the kind of explainability endorsed by the proposed AI Act shall be risk-focused, model-agnostic, goal-aware, intelligible, and accessible. We then discuss the extent to which these requirements are met by the metrics currently under discussion.

Suggested Citation

  • Francesco Sovrano & Salvatore Sapienza & Monica Palmirani & Fabio Vitali, 2022. "Metrics, Explainability and the European AI Act Proposal," J, MDPI, vol. 5(1), pages 1-13, February.
  • Handle: RePEc:gam:jjopen:v:5:y:2022:i:1:p:10-138:d:752840

    Download full text from publisher

    File URL: https://www.mdpi.com/2571-8800/5/1/10/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2571-8800/5/1/10/
    Download Restriction: no

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Ugo Pagallo & Massimo Durante, 2022. "The Good, the Bad, and the Invisible with Its Opportunity Costs: Introduction to the ‘J’ Special Issue on “the Impact of Artificial Intelligence on Law”," J, MDPI, vol. 5(1), pages 1-11, February.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jjopen:v:5:y:2022:i:1:p:10-138:d:752840. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.