
Knowing (not) to know: Explainable artificial intelligence and human metacognition

Authors
  • von Zahn, Moritz
  • Liebich, Lena
  • Jussupow, Ekaterina
  • Hinz, Oliver
  • Bauer, Kevin

Abstract

The use of explainable AI (XAI) methods to render the prediction logic of black-box AI interpretable to humans is becoming increasingly widespread in practice, driven among other things by regulatory requirements such as the EU AI Act. Previous research on human-XAI interaction has shown that explainability may help mitigate black-box problems but can also unintentionally alter individuals' cognitive processes, e.g., distorting their reasoning and evoking information overload. While empirical evidence on how XAI affects the way individuals "think" is growing, it has been largely overlooked whether XAI can also affect individuals' "thinking about thinking", i.e., metacognition, which theory conceptualizes as monitoring and controlling these previously studied thinking processes. Taking a first step toward filling this gap, we investigate whether XAI affects confidence calibration on the meta-level of cognition and, thereby, decisions to transfer decision-making responsibility to AI. We conduct two incentivized experiments in which human experts repeatedly perform prediction tasks, with the option to delegate each task to an AI. We exogenously vary whether participants initially receive explanations that reveal the AI's underlying prediction logic. We find that XAI improves individuals' metaknowledge (the alignment between confidence and actual performance) and partially enhances confidence sensitivity (the variation of confidence with task performance). These metacognitive shifts causally increase both the frequency and effectiveness of human-to-AI delegation decisions. Interestingly, these effects occur only when explanations reveal to individuals that the AI's logic diverges from their own, leading to a systematic reduction in confidence. Our findings suggest that XAI can correct overconfidence, at the potential cost of lowering confidence even when individuals perform well. Both effects influence decisions to cede responsibility to AI, highlighting metacognition as a central mechanism in human-XAI collaboration.
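To make the two metacognitive measures named in the abstract concrete, here is a minimal Python sketch, not taken from the paper: it assumes "metaknowledge" is operationalized as the gap between average confidence and average accuracy, and "confidence sensitivity" as the trial-level correlation between confidence and correctness. The authors' exact operationalizations may differ.

    # Minimal sketch (not the authors' code) of two common operationalizations
    # of the metacognitive measures described in the abstract.
    import numpy as np

    def metaknowledge_gap(confidence, correct):
        """Absolute calibration gap: 0 = confidence matches performance,
        larger values = over- or underconfidence (worse metaknowledge)."""
        confidence = np.asarray(confidence, dtype=float)  # e.g., 0..1 per trial
        correct = np.asarray(correct, dtype=float)        # 1 if prediction right
        return abs(confidence.mean() - correct.mean())

    def confidence_sensitivity(confidence, correct):
        """Correlation between trial-level confidence and correctness:
        higher values = confidence varies more closely with performance."""
        confidence = np.asarray(confidence, dtype=float)
        correct = np.asarray(correct, dtype=float)
        return np.corrcoef(confidence, correct)[0, 1]

    # Hypothetical overconfident participant: high confidence, mixed accuracy.
    conf = [0.9, 0.8, 0.85, 0.9, 0.7]
    hits = [1, 0, 1, 0, 0]
    print(metaknowledge_gap(conf, hits))       # large gap -> poor metaknowledge
    print(confidence_sensitivity(conf, hits))  # weak link -> low sensitivity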

Suggested Citation

  • von Zahn, Moritz & Liebich, Lena & Jussupow, Ekaterina & Hinz, Oliver & Bauer, Kevin, 2025. "Knowing (not) to know: Explainable artificial intelligence and human metacognition," SAFE Working Paper Series 464, Leibniz Institute for Financial Research SAFE.
  • Handle: RePEc:zbw:safewp:334511
    DOI: 10.2139/ssrn.5383106

Download full text from publisher

File URL: https://www.econstor.eu/bitstream/10419/334511/1/1947737333.pdf
Download Restriction: no



