Authors
Listed:
- Jin Han
- Balaraju Battu
- Ivan Romić
- Talal Rahwan
- Petter Holme
Abstract
Large language models (LLMs) are increasingly used to model human social behavior, with recent research exploring their ability to simulate social dynamics. Here, we test whether LLMs mirror human behavior in social dilemmas, where individual and collective interests conflict. In laboratory settings, humans generally cooperate more than theory predicts, cooperating less in well-mixed populations and more in fixed networks. In contrast, LLMs tend to exhibit greater cooperation in well-mixed settings. This raises a key question: are LLMs able to emulate human behavior in cooperative dilemmas on networks? In this study, we examine networked interactions in which agents repeatedly play the Prisoner's Dilemma in both well-mixed and structured network configurations, aiming to identify parallels in cooperative behavior between LLMs and humans. Our findings reveal critical distinctions: while humans tend to cooperate more within structured networks, LLMs display increased cooperation mainly in well-mixed environments, adjusting little to networked contexts. Notably, LLM cooperation also varies across model types, illustrating the complexity of replicating human-like social adaptability in artificial agents. These results highlight a crucial gap: LLMs struggle to emulate the nuanced, adaptive social strategies humans deploy in fixed networks. Unlike human participants, LLMs do not alter their cooperative behavior in response to network structure or evolving social contexts, missing the reciprocity norms that humans adaptively employ. This limitation points to a fundamental need in future LLM design: integrating a deeper comprehension of social norms, enabling more authentic modeling of human-like cooperation and adaptability in networked environments.
Author summary
Large language models (LLMs) are often assumed to behave like humans in social dilemmas, where individual interests conflict with collective needs. Research indicates that humans generally cooperate more in structured network settings than in random interactions. In contrast, LLMs tend to be more cooperative in random environments and struggle to adapt their behavior to specific network dynamics, leading to lower cooperation in structured settings. Moreover, cooperation levels vary across LLM models, highlighting the complexity of mimicking human-like social behavior and adaptability. These findings reveal a significant gap: LLMs lack the nuanced social strategies that humans employ in response to varying network contexts and evolving situations. Future LLM development therefore needs a deeper grasp of social norms to enable more accurate modeling of human cooperation and adaptability in diverse social environments.
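To make the experimental setup concrete, the following minimal Python sketch illustrates the kind of simulation the abstract describes: agents repeatedly play the Prisoner's Dilemma either in a well-mixed population (random re-pairing each round) or on a fixed network (here a ring lattice). This is an illustrative reconstruction, not the authors' code: the payoff values, the ring-lattice topology, and the decide function (a noisy reciprocator standing in for an LLM's move) are all assumptions. In the study itself, the decision step would instead be an LLM prompted with the interaction history.

```python
import random

# Illustrative Prisoner's Dilemma payoffs with T > R > P > S.
# These exact numbers are an assumption, not taken from the paper.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def ring_lattice(n, k):
    """Fixed network: each agent is linked to its k nearest ring neighbors."""
    return {i: [(i + d) % n for d in range(-k // 2, k // 2 + 1) if d != 0]
            for i in range(n)}

def decide(history, noise=0.05):
    """Stand-in for querying an LLM for a move (hypothetical policy):
    a noisy reciprocator that mirrors what its recent partners did."""
    if random.random() < noise:            # occasional random defection
        return "D"
    if not history:
        return "C"                         # cooperate on first contact
    recent = history[-5:]
    return "C" if recent.count("C") >= len(recent) / 2 else "D"

def play(pairs, memory, score):
    """One round of simultaneous one-shot PD games over the given pairs."""
    for i, j in pairs:
        mi, mj = decide(memory[i]), decide(memory[j])
        pi, pj = PAYOFF[(mi, mj)]
        score[i] += pi
        score[j] += pj
        memory[i].append(mj)               # each agent records what it saw
        memory[j].append(mi)

def simulate(n=20, rounds=50, structured=False, seed=1):
    random.seed(seed)
    memory = {i: [] for i in range(n)}     # partner moves observed so far
    score = {i: 0 for i in range(n)}
    net = ring_lattice(n, 4)
    for _ in range(rounds):
        if structured:                     # same neighbors every round
            pairs = {tuple(sorted((i, j))) for i in net for j in net[i]}
        else:                              # well-mixed: random re-pairing
            order = random.sample(range(n), n)
            pairs = list(zip(order[::2], order[1::2]))
        play(pairs, memory, score)
    moves = [m for hist in memory.values() for m in hist]
    return moves.count("C") / len(moves)   # overall cooperation rate

if __name__ == "__main__":
    print("well-mixed cooperation rate:", round(simulate(structured=False), 3))
    print("structured cooperation rate:", round(simulate(structured=True), 3))
```

Comparing the two conditions under the same stand-in policy gives a baseline against which the paper's central contrast can be read: humans raise cooperation on fixed networks, whereas the LLM agents studied here do not.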
Suggested Citation
Jin Han & Balaraju Battu & Ivan Romić & Talal Rahwan & Petter Holme, 2025.
"Static network structure cannot stabilize cooperation among large language model agents,"
PLOS ONE, Public Library of Science, vol. 20(5), pages 1-16, May.
Handle:
RePEc:plo:pone00:0320094
DOI: 10.1371/journal.pone.0320094