
Prior beliefs & automated fact checking: Limits on the effectiveness of AI-based corrections

Authors

Listed:
  • Kelly M Amaddio
  • Jacob T Goebel
  • Jason K Clark
  • Duane T Wegener
  • R Kelly Garrett
  • Mark W Susmann
  • Srinivasan Parthasarathy

Abstract

Proliferation of misinformation poses significant challenges in contemporary society, necessitating efficient strategies for its identification and mitigation. Automated fact-checking systems might prove effective, but they face challenges, particularly in charged contexts where prior beliefs are likely to influence responses to fact-checks. Data from two studies where participants were given a piece of gun-control misinformation and an automated fact-checker correction (N = 1,372) illustrate the nuanced interplay between prior beliefs, trust in artificial intelligence (AI), and the perceived accuracy of fact-checking systems in shaping (a) post-correction misinformation endorsement, and (b) post-correction perceptions of system quality. Study 1 examined default perceptions of system accuracy and demonstrated a high degree of variability in those perceptions; when fact-checked by such a system, people’s prior beliefs predicted continued belief after the correction and post-correction perceptions of the fact-check system. Study 2 directly manipulated the purported accuracy of the system. When automated fact-checkers were said to have an accuracy level close to current expectations of existing AI systems (67%), people continued to believe misinformation more to the extent it was consistent with prior beliefs. This pattern was attenuated when participants were told that the fact-checker was highly (97%) accurate. Similarly, prior beliefs related more strongly to post-correction perceptions of system reliability when accuracy information was provided and especially when the system was described as not highly accurate. This research demonstrates biases in reactions to automated fact-checkers and highlights the importance of accounting for individual beliefs and perceived system characteristics in designing scalable interventions.

Suggested Citation

  • Kelly M Amaddio & Jacob T Goebel & Jason K Clark & Duane T Wegener & R Kelly Garrett & Mark W Susmann & Srinivasan Parthasarathy, 2026. "Prior beliefs & automated fact checking: Limits on the effectiveness of AI-based corrections," PLOS ONE, Public Library of Science, vol. 21(2), pages 1-21, February.
  • Handle: RePEc:plo:pone00:0342332
    DOI: 10.1371/journal.pone.0342332

    Download full text from publisher

    File URL: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0342332
    Download Restriction: no

    File URL: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0342332&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pone.0342332?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to a location where you can use your library subscription to access this item

    References listed on IDEAS

    1. Man-pui Sally Chan & Dolores Albarracín, 2023. "A meta-analysis of correction effects in science-relevant misinformation," Nature Human Behaviour, Nature, vol. 7(9), pages 1514-1525, September.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Hu, Bo & Lu, Chang & Yang, Jianxun, 2025. "Replication of Chan and Albarracín (2023) 'A meta-analysis of correction effects in science-relevant misinformation'," I4R Discussion Paper Series 276, The Institute for Replication (I4R).
    2. Myunghoon Kang & Chunho Park & Jisung Yoon & Greg Chih-Hsin Sheen, 2025. "Partisan attitudes and the motivation behind the spread of misleading information," Humanities and Social Sciences Communications, Palgrave Macmillan, vol. 12(1), pages 1-12, December.
    3. Tobia Spampatti & Ulf J. J. Hahnel & Evelina Trutnevyte & Tobias Brosch, 2024. "Psychological inoculation strategies to fight climate disinformation across 12 countries," Nature Human Behaviour, Nature, vol. 8(2), pages 380-398, February.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pone00:0342332. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: plosone (email available below). General contact details of provider: https://journals.plos.org/plosone/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.