
Does GPT-4 surpass human performance in linguistic pragmatics?

Author

Listed:
  • Ljubiša Bojić

    (Institute for Artificial Intelligence Research and Development of Serbia
    University of Belgrade
    Complexity Science Hub)

  • Predrag Kovačević

    (University of Novi Sad)

  • Milan Čabarkapa

    (University of Kragujevac)

Abstract

As Large Language Models (LLMs) become increasingly integrated into everyday life as general-purpose multimodal AI systems, their capabilities to simulate human understanding are under examination. This study investigates LLMs’ ability to interpret linguistic pragmatics, which involves context and implied meanings. Using Grice’s communication principles, we evaluated both LLMs (GPT-2, GPT-3, GPT-3.5, GPT-4, and Bard) and human subjects (N = 147) on dialogue-based tasks. Human participants included 71 primarily Serbian students and 76 native English speakers from the United States. Findings revealed that LLMs, particularly GPT-4, outperformed humans. GPT-4 achieved the highest score of 4.80, surpassing the best human score of 4.55. Other LLMs performed well: GPT-3.5 scored 4.10, Bard 3.75, and GPT-3 3.25; GPT-2 had the lowest score of 1.05. The average LLM score was 3.39, exceeding the human cohorts’ averages of 2.80 (Serbian students) and 2.34 (U.S. participants). In the ranking of all 155 subjects (including LLMs and humans), GPT-4 secured the top position, while the best human ranked second. These results highlight significant progress in LLMs’ ability to simulate understanding of linguistic pragmatics. Future studies should confirm these findings with more dialogue-based tasks and diverse participants. This research has important implications for advancing general-purpose AI models in various communication-centered tasks, including potential applications in humanoid robots.
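Because the abstract reports concrete per-model means, its aggregate figures can be cross-checked directly. The short Python sketch below is illustrative only: it takes the scores exactly as reported above (the underlying scoring scale and per-task data are not given here), recomputes the 3.39 average across the five LLMs, and reproduces the ordering in which GPT-4 ranks first and the best-performing human second.

```python
# Illustrative sketch using only the mean scores reported in the abstract.
llm_scores = {
    "GPT-4": 4.80,
    "GPT-3.5": 4.10,
    "Bard": 3.75,
    "GPT-3": 3.25,
    "GPT-2": 1.05,
}
best_human = 4.55  # highest individual human score reported in the abstract

# Average LLM score: (4.80 + 4.10 + 3.75 + 3.25 + 1.05) / 5 = 3.39
llm_average = sum(llm_scores.values()) / len(llm_scores)
print(f"Average LLM score: {llm_average:.2f}")  # 3.39

# Combined ordering of the five LLMs and the best human:
# GPT-4 comes out on top, the best human second, matching the reported ranking.
combined = {**llm_scores, "best human": best_human}
ranking = sorted(combined.items(), key=lambda kv: kv[1], reverse=True)
for rank, (name, score) in enumerate(ranking, start=1):
    print(f"{rank}. {name}: {score:.2f}")
```

The sketch only replays the reported summary statistics; the full ranking of all 155 subjects in the study rests on the individual human scores, which are not listed here.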

Suggested Citation

  • Ljubiša Bojić & Predrag Kovačević & Milan Čabarkapa, 2025. "Does GPT-4 surpass human performance in linguistic pragmatics?," Palgrave Communications, Palgrave Macmillan, vol. 12(1), pages 1-10, December.
  • Handle: RePEc:pal:palcom:v:12:y:2025:i:1:d:10.1057_s41599-025-04912-x
    DOI: 10.1057/s41599-025-04912-x

    Download full text from publisher

    File URL: http://link.springer.com/10.1057/s41599-025-04912-x
    File Function: Abstract
    Download Restriction: Access to full text is restricted to subscribers.

    File URL: https://libkey.io/10.1057/s41599-025-04912-x?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription.

    As access to this document is restricted, you may want to search for a different version of it.


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Yiting Chen & Tracy Xiao Liu & You Shan & Songfa Zhong, 2023. "The emergence of economic rationality of GPT," Proceedings of the National Academy of Sciences, Proceedings of the National Academy of Sciences, vol. 120(51), pages 2316205120-, December.
    2. Söderlund, Magnus & Natorina, Alona, 2024. "Service robots in a multi-party setting: An examination of robots’ ability to detect human-to-human conflict and its effects on robot evaluations," Technology in Society, Elsevier, vol. 77(C).
    3. Siting Estee Lu, 2024. "Strategic Interactions between Large Language Models-based Agents in Beauty Contests," Papers 2404.08492, arXiv.org, revised Oct 2024.
    4. Timm Teubner & Christoph M. Flath & Christof Weinhardt & Wil Aalst & Oliver Hinz, 2023. "Welcome to the Era of ChatGPT et al," Business & Information Systems Engineering: The International Journal of WIRTSCHAFTSINFORMATIK, Springer;Gesellschaft für Informatik e.V. (GI), vol. 65(2), pages 95-101, April.
    5. Bauer, Kevin & Liebich, Lena & Hinz, Oliver & Kosfeld, Michael, 2023. "Decoding GPT's hidden "rationality" of cooperation," SAFE Working Paper Series 401, Leibniz Institute for Financial Research SAFE.
    6. Andrea Baronchelli, 2023. "Shaping New Norms for AI," Papers 2307.08564, arXiv.org, revised Jun 2024.
    7. James W. A. Strachan & Dalila Albergo & Giulia Borghini & Oriana Pansardi & Eugenio Scaliti & Saurabh Gupta & Krati Saxena & Alessandro Rufo & Stefano Panzeri & Guido Manzi & Michael S. A. Graziano & , 2024. "Testing theory of mind in large language models and humans," Nature Human Behaviour, Nature, vol. 8(7), pages 1285-1295, July.
    8. Claudia Biancotti & Carolina Camassa, 2023. "Loquacity and visible emotion: ChatGPT as a policy advisor," Questioni di Economia e Finanza (Occasional Papers) 814, Bank of Italy, Economic Research and International Relations Area.
