
Testing theory of mind in large language models and humans

Author

Listed:
  • James W. A. Strachan

    (University Medical Center Hamburg-Eppendorf)

  • Dalila Albergo

    (Italian Institute of Technology
    University of Trento)

  • Giulia Borghini

    (Italian Institute of Technology)

  • Oriana Pansardi

    (University Medical Center Hamburg-Eppendorf
    Italian Institute of Technology
    University of Turin)

  • Eugenio Scaliti

    (University Medical Center Hamburg-Eppendorf
    Italian Institute of Technology
    University of Turin)

  • Saurabh Gupta

    (Alien Technology Transfer Ltd)

  • Krati Saxena

    (Alien Technology Transfer Ltd)

  • Alessandro Rufo

    (Alien Technology Transfer Ltd)

  • Stefano Panzeri

    (University Medical Center Hamburg-Eppendorf)

  • Guido Manzi

    (Alien Technology Transfer Ltd)

  • Michael S. A. Graziano

    (Princeton University)

  • Cristina Becchio

    (University Medical Center Hamburg-Eppendorf
    Italian Institute of Technology)

Abstract

At the core of what defines us as humans is the concept of theory of mind: the ability to track other people’s mental states. The recent development of large language models (LLMs) such as ChatGPT has led to intense debate about the possibility that these models exhibit behaviour that is indistinguishable from human behaviour in theory of mind tasks. Here we compare human and LLM performance on a comprehensive battery of measures that assess different theory of mind abilities, from understanding false beliefs to interpreting indirect requests and recognizing irony and faux pas. We tested two families of LLMs (GPT and LLaMA2) repeatedly against these measures and compared their performance with that of a sample of 1,907 human participants. Across the battery of theory of mind tests, we found that GPT-4 models performed at, or sometimes above, human levels at identifying indirect requests, false beliefs and misdirection, but struggled with detecting faux pas. Faux pas, however, was the only test where LLaMA2 outperformed humans. Follow-up manipulations of the belief likelihood revealed that the superiority of LLaMA2 was illusory, possibly reflecting a bias towards attributing ignorance. By contrast, the poor performance of GPT originated from a hyperconservative approach towards committing to conclusions rather than from a genuine failure of inference. These findings not only demonstrate that LLMs exhibit behaviour that is consistent with the outputs of mentalistic inference in humans but also highlight the importance of systematic testing to ensure a non-superficial comparison between human and artificial intelligences.
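The testing protocol summarized above (presenting each item to a model repeatedly, scoring the responses, and comparing accuracy against a human sample) can be illustrated with a short sketch. The Python fragment below is a hypothetical illustration rather than the authors' code: query_model is a stand-in for whichever LLM endpoint (GPT or LLaMA2) is being tested, and the keyword-matching scorer is a deliberate simplification of the study's coded scoring procedure.

    import statistics

    # Hypothetical stand-in for an LLM call (e.g. a GPT or LLaMA2 endpoint);
    # in practice this would wrap the provider's chat API.
    def query_model(prompt: str) -> str:
        raise NotImplementedError("plug in a model client here")

    def run_item(query, vignette: str, question: str,
                 correct_keyword: str, n_repeats: int = 15) -> float:
        """Present one theory-of-mind item n_repeats times and return the
        proportion of responses containing the expected answer. Keyword
        matching is a crude simplification of the study's response coding."""
        prompt = f"{vignette}\n\nQuestion: {question}"
        hits = [correct_keyword.lower() in query(prompt).lower()
                for _ in range(n_repeats)]
        return statistics.mean(hits)

    # Example: a classic unexpected-transfer (false-belief) vignette.
    vignette = ("Sally puts her marble in the basket and leaves the room. "
                "While she is away, Anne moves the marble to the box.")
    question = "When Sally returns, where will she first look for her marble?"
    # accuracy = run_item(query_model, vignette, question, "basket")

In a fuller replication, the same items would also be administered to human participants and the per-item accuracy distributions compared across repeated model runs and the participant sample.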

Suggested Citation

  • James W. A. Strachan & Dalila Albergo & Giulia Borghini & Oriana Pansardi & Eugenio Scaliti & Saurabh Gupta & Krati Saxena & Alessandro Rufo & Stefano Panzeri & Guido Manzi & Michael S. A. Graziano & Cristina Becchio, 2024. "Testing theory of mind in large language models and humans," Nature Human Behaviour, Nature, vol. 8(7), pages 1285-1295, July.
  • Handle: RePEc:nat:nathum:v:8:y:2024:i:7:d:10.1038_s41562-024-01882-z
    DOI: 10.1038/s41562-024-01882-z

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41562-024-01882-z
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1038/s41562-024-01882-z?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to look for a different version of it.

    References listed on IDEAS

    1. Taylor Webb & Keith J. Holyoak & Hongjing Lu, 2023. "Emergent analogical reasoning in large language models," Nature Human Behaviour, Nature, vol. 7(9), pages 1526-1541, September.
    2. Anthony Chemero, 2023. "LLMs differ from human cognition because they are not embodied," Nature Human Behaviour, Nature, vol. 7(11), pages 1828-1829, November.
    3. Kosinski, Michal, 2023. "Theory of Mind May Have Spontaneously Emerged in Large Language Models," Research Papers 4086, Stanford University, Graduate School of Business.
    4. Michael C. Frank, 2023. "Openly accessible LLMs can help us to understand human cognition," Nature Human Behaviour, Nature, vol. 7(11), pages 1825-1827, November.
    5. Oriel FeldmanHall & Amitai Shenhav, 2019. "Resolving uncertainty in a social world," Nature Human Behaviour, Nature, vol. 3(5), pages 426-435, May.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Daniel Albert & Stephan Billinger, 2024. "Reproducing and Extending Experiments in Behavioral Strategy with Large Language Models," Papers 2410.06932, arXiv.org.
    2. Hannah Rose Kirk & Iason Gabriel & Chris Summerfield & Bertie Vidgen & Scott A. Hale, 2025. "Why human–AI relationships need socioaffective alignment," Humanities and Social Sciences Communications, Palgrave Macmillan, vol. 12(1), pages 1-9, December.
    3. Marta Andersson, 2025. "Companionship in code: AI’s role in the future of human connection," Humanities and Social Sciences Communications, Palgrave Macmillan, vol. 12(1), pages 1-7, December.
    4. Bowen Lou & Tian Lu & T. S. Raghu & Yingjie Zhang, 2025. "Unraveling Human-AI Teaming: A Review and Outlook," Papers 2504.05755, arXiv.org, revised Apr 2025.
    5. Yuan Gao & Dokyun Lee & Gordon Burtch & Sina Fazelpour, 2024. "Take Caution in Using LLMs as Human Surrogates: Scylla Ex Machina," Papers 2410.19599, arXiv.org, revised Jan 2025.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Siting Estee Lu, 2024. "Strategic Interactions between Large Language Models-based Agents in Beauty Contests," Papers 2404.08492, arXiv.org, revised Oct 2024.
    2. Elif Akata & Lion Schulz & Julian Coda-Forno & Seong Joon Oh & Matthias Bethge & Eric Schulz, 2025. "Playing repeated games with large language models," Nature Human Behaviour, Nature, vol. 9(7), pages 1380-1390, July.
    3. Yuan Gao & Dokyun Lee & Gordon Burtch & Sina Fazelpour, 2024. "Take Caution in Using LLMs as Human Surrogates: Scylla Ex Machina," Papers 2410.19599, arXiv.org, revised Jan 2025.
    4. Yiting Chen & Tracy Xiao Liu & You Shan & Songfa Zhong, 2023. "The emergence of economic rationality of GPT," Proceedings of the National Academy of Sciences, Proceedings of the National Academy of Sciences, vol. 120(51), pages 2316205120-, December.
    5. Qihui Xu & Yingying Peng & Samuel A. Nastase & Martin Chodorow & Minghua Wu & Ping Li, 2025. "Large language models without grounding recover non-sensorimotor but not sensorimotor features of human concepts," Nature Human Behaviour, Nature, vol. 9(9), pages 1871-1886, September.
    6. Söderlund, Magnus & Natorina, Alona, 2024. "Service robots in a multi-party setting: An examination of robots’ ability to detect human-to-human conflict and its effects on robot evaluations," Technology in Society, Elsevier, vol. 77(C).
    7. Nace Mikus & Christoph Eisenegger & Christoph Mathys & Luke Clark & Ulrich Müller & Trevor W. Robbins & Claus Lamm & Michael Naef, 2023. "Blocking D2/D3 dopamine receptors in male participants increases volatility of beliefs when learning to trust others," Nature Communications, Nature, vol. 14(1), pages 1-17, December.
    8. Timm Teubner & Christoph M. Flath & Christof Weinhardt & Wil Aalst & Oliver Hinz, 2023. "Welcome to the Era of ChatGPT et al," Business & Information Systems Engineering: The International Journal of WIRTSCHAFTSINFORMATIK, Springer;Gesellschaft für Informatik e.V. (GI), vol. 65(2), pages 95-101, April.
    9. Bauer, Kevin & Liebich, Lena & Hinz, Oliver & Kosfeld, Michael, 2023. "Decoding GPT's hidden "rationality" of cooperation," SAFE Working Paper Series 401, Leibniz Institute for Financial Research SAFE.
    10. repec:cup:judgdm:v:16:y:2021:i:2:p:505-550 is not listed on IDEAS
    11. Andrea Baronchelli, 2023. "Shaping New Norms for AI," Papers 2307.08564, arXiv.org, revised Jun 2024.
    12. Andres Karjus, 2025. "Machine-assisted quantitizing designs: augmenting humanities and social sciences with artificial intelligence," Humanities and Social Sciences Communications, Palgrave Macmillan, vol. 12(1), pages 1-18, December.
    13. Luca Lazzaro & Manuel S. Mariani & Ren'e Algesheimer & Radu Tanase, 2025. "A behavioral reinvestigation of the effect of long ties on social contagions," Papers 2510.04785, arXiv.org.
    14. Jesse Hoey & Neil J. MacKinnon & Tobias Schröder, 2021. "Denotative and connotative management of uncertainty: A computational dual-process model," Judgment and Decision Making, Society for Judgment and Decision Making, vol. 16(2), pages 505-550, March.
    15. Shu Wang & Zijun Yao & Shuhuai Zhang & Jianuo Gai & Tracy Xiao Liu & Songfa Zhong, 2025. "When Experimental Economics Meets Large Language Models: Evidence-based Tactics," Papers 2505.21371, arXiv.org, revised Jul 2025.
    16. Gavin Kader & Dongwoo Lee, 2024. "The Emergence of Strategic Reasoning of Large Language Models," Papers 2412.13013, arXiv.org, revised Oct 2025.
    17. Kevin M. Tan & Amy L. Daitch & Pedro Pinheiro-Chagas & Kieran C. R. Fox & Josef Parvizi & Matthew D. Lieberman, 2022. "Electrocorticographic evidence of a common neurocognitive sequence for mentalizing about the self and others," Nature Communications, Nature, vol. 13(1), pages 1-17, December.
    18. Ljubiša Bojić & Predrag Kovačević & Milan Čabarkapa, 2025. "Does GPT-4 surpass human performance in linguistic pragmatics?," Humanities and Social Sciences Communications, Palgrave Macmillan, vol. 12(1), pages 1-10, December.
    19. So Kuroki & Yingtao Tian & Kou Misaki & Takashi Ikegami & Takuya Akiba & Yujin Tang, 2025. "Reimagining Agent-based Modeling with Large Language Model Agents via Shachi," Papers 2509.21862, arXiv.org, revised Oct 2025.
    20. Taylor Webb & Shanka Subhra Mondal & Ida Momennejad, 2025. "A brain-inspired agentic architecture to improve planning with LLMs," Nature Communications, Nature, vol. 16(1), pages 1-12, December.
    21. Johannes Schneider & Christian Meske & Pauline Kuss, 2024. "Foundation Models," Business & Information Systems Engineering: The International Journal of WIRTSCHAFTSINFORMATIK, Springer;Gesellschaft für Informatik e.V. (GI), vol. 66(2), pages 221-231, April.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:nathum:v:8:y:2024:i:7:d:10.1038_s41562-024-01882-z. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form .

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.