
Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing

Author

Listed:
  • Nunzio Lorè
  • Babak Heydari

Abstract

This paper investigates the strategic decision-making capabilities of three Large Language Models (LLMs): GPT-3.5, GPT-4, and LLaMa-2, within the framework of game theory. Utilizing four canonical two-player games -- Prisoner's Dilemma, Stag Hunt, Snowdrift, and Prisoner's Delight -- we explore how these models navigate social dilemmas, situations where players can either cooperate for a collective benefit or defect for individual gain. Crucially, we extend our analysis to examine the role of contextual framing, such as diplomatic relations or casual friendships, in shaping the models' decisions. Our findings reveal a complex landscape: while GPT-3.5 is highly sensitive to contextual framing, it shows limited ability to engage in abstract strategic reasoning. Both GPT-4 and LLaMa-2 adjust their strategies based on game structure and context, but LLaMa-2 exhibits a more nuanced understanding of the games' underlying mechanics. These results highlight the current limitations and varied proficiencies of LLMs in strategic decision-making, cautioning against their unqualified use in tasks requiring complex strategic reasoning.
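
The four games differ only in how the players' payoffs are ordered across the four outcomes (mutual cooperation, mutual defection, and the two mixed outcomes). As a rough aid to readers, the sketch below illustrates that distinction with placeholder payoff values, not the values used in the paper, and a generic pure-strategy equilibrium check for symmetric 2x2 games; the Prisoner's Delight entries follow the common harmony-game convention in which cooperation dominates, and none of this is the authors' experimental code.

    # Illustrative sketch only: payoff numbers are placeholders chosen to satisfy the
    # conventional orderings of T (temptation), R (reward), P (punishment) and
    # S (sucker's payoff); they are not the payoffs used in the paper.
    from itertools import product

    # Row player's payoff for (row_action, col_action); all four games are symmetric.
    GAMES = {
        # Prisoner's Dilemma: T > R > P > S, so defection is the dominant strategy.
        "prisoners_dilemma": {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1},
        # Stag Hunt: R > T > P > S, so (C, C) and (D, D) are both equilibria.
        "stag_hunt": {("C", "C"): 5, ("C", "D"): 0, ("D", "C"): 3, ("D", "D"): 1},
        # Snowdrift: T > R > S > P, so the best reply is the opposite of the opponent's move.
        "snowdrift": {("C", "C"): 3, ("C", "D"): 1, ("D", "C"): 5, ("D", "D"): 0},
        # Prisoner's Delight (harmony-style game): cooperation dominates, so (C, C) is the unique equilibrium.
        "prisoners_delight": {("C", "C"): 5, ("C", "D"): 3, ("D", "C"): 1, ("D", "D"): 0},
    }

    def pure_nash_equilibria(payoff):
        """Pure-strategy Nash equilibria of a symmetric 2x2 game given the row player's payoffs."""
        actions = ("C", "D")
        equilibria = []
        for a, b in product(actions, repeat=2):
            row_ok = all(payoff[(a, b)] >= payoff[(alt, b)] for alt in actions)
            col_ok = all(payoff[(b, a)] >= payoff[(alt, a)] for alt in actions)  # by symmetry
            if row_ok and col_ok:
                equilibria.append((a, b))
        return equilibria

    for name, payoff in GAMES.items():
        print(f"{name:>18}: {pure_nash_equilibria(payoff)}")

Under these orderings only the Prisoner's Dilemma makes defection strictly dominant, which is what lets the study separate a model's response to the underlying game structure from its response to the contextual story wrapped around that same structure.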

Suggested Citation

  • Nunzio Lorè & Babak Heydari, 2023. "Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing," Papers 2309.05898, arXiv.org.
  • Handle: RePEc:arx:papers:2309.05898

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2309.05898
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Yiting Chen & Tracy Xiao Liu & You Shan & Songfa Zhong, 2023. "The emergence of economic rationality of GPT," Proceedings of the National Academy of Sciences, Proceedings of the National Academy of Sciences, vol. 120(51), pages 2316205120-, December.
    2. Fulin Guo, 2023. "GPT in Game Theory Experiments," Papers 2305.05516, arXiv.org, revised Dec 2023.
    3. Steve Phelps & Yvan I. Russell, 2023. "Investigating Emergent Goal-Like Behaviour in Large Language Models Using Experimental Economics," Papers 2305.07970, arXiv.org.
    4. Joseph N. Luchman, 2021. "Determining relative importance in Stata using dominance analysis: domin and domme," Stata Journal, StataCorp LP, vol. 21(2), pages 510-538, June.
    5. John J. Horton, 2023. "Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?," Papers 2301.07543, arXiv.org.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Christoph Engel & Max R. P. Grossmann & Axel Ockenfels, 2023. "Integrating machine behavior into human subject experiments: A user-friendly toolkit and illustrations," Discussion Paper Series of the Max Planck Institute for Research on Collective Goods 2024_01, Max Planck Institute for Research on Collective Goods.
    2. Bauer, Kevin & Liebich, Lena & Hinz, Oliver & Kosfeld, Michael, 2023. "Decoding GPT's hidden "rationality" of cooperation," SAFE Working Paper Series 401, Leibniz Institute for Financial Research SAFE.
    3. Philip Brookins & Jason DeBacker, 2024. "Playing games with GPT: What can we learn about a large language model from canonical strategic games?," Economics Bulletin, AccessEcon, vol. 44(1), pages 25-37.
    4. Jiafu An & Difang Huang & Chen Lin & Mingzhu Tai, 2024. "Measuring Gender and Racial Biases in Large Language Models," Papers 2403.15281, arXiv.org.
    5. Kevin Leyton-Brown & Paul Milgrom & Neil Newman & Ilya Segal, 2023. "Artificial Intelligence and Market Design: Lessons Learned from Radio Spectrum Reallocation," NBER Chapters, in: New Directions in Market Design, National Bureau of Economic Research, Inc.
    6. Jolene Tan, 2023. "Perceptions towards pronatalist policies in Singapore," Journal of Population Research, Springer, vol. 40(3), pages 1-27, September.
    7. Zengqing Wu & Run Peng & Xu Han & Shuyuan Zheng & Yixin Zhang & Chuan Xiao, 2023. "Smart Agent-Based Modeling: On the Use of Large Language Models in Computer Simulations," Papers 2311.06330, arXiv.org, revised Dec 2023.
    8. Rémy Le Boennec & Frédéric Salladarre, 2023. "Investigating the use of privately-owned micromobility modes for commuting in four European countries," MPRA Paper 119202, University Library of Munich, Germany.
    9. Joshua C. Yang & Marcin Korecki & Damian Dailisan & Carina I. Hausladen & Dirk Helbing, 2024. "LLM Voting: Human Choices and AI Collective Decision Making," Papers 2402.01766, arXiv.org.
    10. Nir Chemaya & Daniel Martin, 2023. "Perceptions and Detection of AI Use in Manuscript Preparation for Academic Journals," Papers 2311.14720, arXiv.org, revised Jan 2024.
    11. Schneck, Andreas & Przepiorka, Wojtek, 2023. "Meta-dominance analysis - A tool for the assessment of the quality of digital behavioural data," SocArXiv cy3wj, Center for Open Science.
    12. Lijia Ma & Xingchen Xu & Yong Tan, 2024. "Crafting Knowledge: Exploring the Creative Mechanisms of Chat-Based Search Engines," Papers 2402.19421, arXiv.org.
    13. Ali Goli & Amandeep Singh, 2023. "Exploring the Influence of Language on Time-Reward Perceptions in Large Language Models: A Study Using GPT-3.5," Papers 2305.02531, arXiv.org, revised Jun 2023.
    14. Yiting Chen & Tracy Xiao Liu & You Shan & Songfa Zhong, 2023. "The emergence of economic rationality of GPT," Proceedings of the National Academy of Sciences, Proceedings of the National Academy of Sciences, vol. 120(51), pages 2316205120-, December.
    15. Tyna Eloundou & Sam Manning & Pamela Mishkin & Daniel Rock, 2023. "GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models," Papers 2303.10130, arXiv.org, revised Aug 2023.
    16. Fulin Guo, 2023. "GPT in Game Theory Experiments," Papers 2305.05516, arXiv.org, revised Dec 2023.
    17. George Gui & Olivier Toubia, 2023. "The Challenge of Using LLMs to Simulate Human Behavior: A Causal Inference Perspective," Papers 2312.15524, arXiv.org.
    18. Felix Chopra & Ingar Haaland, 2023. "Conducting qualitative interviews with AI," CEBI working paper series 23-06, University of Copenhagen. Department of Economics. The Center for Economic Behavior and Inequality (CEBI).
    19. Valerio Capraro & Roberto Di Paolo & Veronica Pizziol, 2023. "Assessing Large Language Models' ability to predict how humans balance self-interest and the interest of others," Papers 2307.12776, arXiv.org, revised Feb 2024.
    20. Ying Zhang & Cornelia Lawson & Liangping Ding, 2023. "Can scientists remain internationally visible after the return to their home country? A study of Chinese scientists," MIOIR Working Paper Series 2023-01, The Manchester Institute of Innovation Research (MIoIR), The University of Manchester.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2309.05898. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.