Printed from https://ideas.repec.org/p/awi/wpaper/0768.html

When Artificial Minds Negotiate: Dark Personality and the Ultimatum Game in Large Language Models

Authors
  • Ferraz, Vinícius
  • Olah, Tamas
  • Sazedul, Ratin
  • Schmidt, Robert
  • Schwieren, Christiane

Abstract

We investigate whether Large Language Models (LLMs) exhibit personality-driven strategic behavior in the Ultimatum Game by manipulating Dark Factor of Personality (D-Factor) profiles via standardized prompts. Across 400k decisions from 17 open-source models and 4,166 human benchmarks, we test whether LLMs playing the proposer and responder roles show systematic behavioral shifts across five D-Factor levels (from least to most selfish). In the proposer role, fair offers declined monotonically from 91% (D1) to 17% (D5), mirroring human patterns but with 34% steeper gradients, indicating hypersensitivity to personality prompts. Responders diverged sharply: where humans became more punitive at higher D-levels, LLMs maintained high acceptance rates (75-92%) with weak or reversed D-Factor sensitivity, failing to reproduce reciprocity-punishment dynamics. These role-specific patterns align with strong-weak situation accounts: personality matters when incentives are ambiguous (proposers) but is muted when they are contingent (responders). Cross-model heterogeneity was substantial: the models most closely aligned with human behavior, according to composite similarity scores (integrating prosocial rates, D-Factor correlations, and odds ratios), were dolphin3, deepseek_1.5b, and llama3.2 (0.74-0.85), while others exhibited extreme or invariant behavior. Temperature settings (0.2 vs. 0.8) exerted minimal influence. We interpret these patterns as prompt-driven regularities rather than genuine motivational processes, suggesting LLMs can approximate but not fully replicate human strategic behavior in social dilemmas.
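To make the game structure described in the abstract concrete, the following is a minimal illustrative sketch (not the authors' pipeline) of how a prompted Ultimatum Game round could be recorded and scored. The pie size, fairness cutoff, and all names (`Round`, `payoffs`, `fair_offer_rate`) are assumptions for illustration only; the paper does not specify these implementation details.

```python
# Illustrative sketch, assuming a 10-unit pie and a binary
# fair/unfair offer classification; both values are hypothetical.
from dataclasses import dataclass

PIE = 10          # total stake split between proposer and responder
FAIR_CUTOFF = 4   # offers >= 4/10 counted as "fair" (assumed threshold)

@dataclass
class Round:
    d_level: int      # D-Factor prompt level, 1 (least selfish) .. 5 (most)
    offer: int        # units the proposer offers to the responder
    accepted: bool    # responder's decision

def payoffs(r: Round) -> tuple[int, int]:
    """Standard Ultimatum Game payoffs: rejection leaves both with zero."""
    if not r.accepted:
        return 0, 0
    return PIE - r.offer, r.offer

def fair_offer_rate(rounds: list[Round]) -> float:
    """Share of proposer offers at or above the fairness cutoff."""
    return sum(r.offer >= FAIR_CUTOFF for r in rounds) / len(rounds)

rounds = [Round(1, 5, True), Round(3, 3, True), Round(5, 1, False)]
print(payoffs(rounds[-1]))      # rejected low-ball offer: (0, 0)
print(fair_offer_rate(rounds))  # one of three offers meets the cutoff
```

In this framing, the proposer-side result (fair offers falling from 91% at D1 to 17% at D5) corresponds to `fair_offer_rate` declining with `d_level`, while the responder-side result corresponds to `accepted` staying high across D-levels.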

Suggested Citation

  • Ferraz, Vinícius & Olah, Tamas & Sazedul, Ratin & Schmidt, Robert & Schwieren, Christiane, 2025. "When Artificial Minds Negotiate: Dark Personality and the Ultimatum Game in Large Language Models," Working Papers 0768, University of Heidelberg, Department of Economics.
  • Handle: RePEc:awi:wpaper:0768
    Note: This paper is part of http://archiv.ub.uni-heidelberg.de/volltextserver/view/schriftenreihen/sr-3.html

    Download full text from publisher

    File URL: https://nbn-resolving.de/urn:nbn:de:bsz:16-heidok-378137
    File Function: Frontdoor page on HeiDOK
    Download Restriction: no

    File URL: https://archiv.ub.uni-heidelberg.de/volltextserver/37813/1/Ferraz_Olah_Sazedul_et._al._2025_dp768%20A.pdf
    Download Restriction: no

    References listed on IDEAS

    1. Philip Brookins & Jason DeBacker, 2024. "Playing games with GPT: What can we learn about a large language model from canonical strategic games?," Economics Bulletin, AccessEcon, vol. 44(1), pages 25-37.
    2. Kahneman, Daniel & Knetsch, Jack L & Thaler, Richard H, 1986. "Fairness and the Assumptions of Economics," The Journal of Business, University of Chicago Press, vol. 59(4), pages 285-300, October.
    3. Julia Müller & Christiane Schwieren, 2020. "Big Five personality factors in the Trust Game," Journal of Business Economics, Springer, vol. 90(1), pages 37-55, February.
    4. Ali Goli & Amandeep Singh, 2024. "Frontiers: Can Large Language Models Capture Human Preferences?," Marketing Science, INFORMS, vol. 43(4), pages 709-722, July.
    5. Güth, Werner & Schmittberger, Rolf & Schwarze, Bernd, 1982. "An experimental analysis of ultimatum bargaining," Journal of Economic Behavior & Organization, Elsevier, vol. 3(4), pages 367-388, December.
    6. Argyle, Lisa P. & Busby, Ethan C. & Fulda, Nancy & Gubler, Joshua R. & Rytting, Christopher & Wingate, David, 2023. "Out of One, Many: Using Language Models to Simulate Human Samples," Political Analysis, Cambridge University Press, vol. 31(3), pages 337-351, July.
    7. John J. Horton & Apostolos Filippas & Benjamin S. Manning, 2023. "Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?," NBER Working Papers 31122, National Bureau of Economic Research, Inc.
    8. John J. Horton & Apostolos Filippas & Benjamin S. Manning, 2023. "Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?," Papers 2301.07543, arXiv.org, revised Feb 2026.
    9. Elif Akata & Lion Schulz & Julian Coda-Forno & Seong Joon Oh & Matthias Bethge & Eric Schulz, 2025. "Playing repeated games with large language models," Nature Human Behaviour, Nature, vol. 9(7), pages 1380-1390, July.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Yingnan Yan & Tianming Liu & Yafeng Yin, 2025. "Valuing Time in Silicon: Can Large Language Models Replicate Human Value of Travel Time," Papers 2507.22244, arXiv.org, revised Dec 2025.
    2. Hongshen Sun & Juanjuan Zhang, 2025. "From Model Choice to Model Belief: Establishing a New Measure for LLM-Based Research," Papers 2512.23184, arXiv.org.
    3. Shu Wang & Zijun Yao & Shuhuai Zhang & Jianuo Gai & Tracy Xiao Liu & Songfa Zhong, 2025. "When Experimental Economics Meets Large Language Models: Evidence-based Tactics," Papers 2505.21371, arXiv.org, revised Jul 2025.
    4. Matthew O. Jackson & Qiaozhu Mei & Stephanie W. Wang & Yutong Xie & Walter Yuan & Seth Benzell & Erik Brynjolfsson & Colin F. Camerer & James Evans & Brian Jabarian & Jon Kleinberg & Juanjuan Meng & Se, 2025. "AI Behavioral Science," Papers 2509.13323, arXiv.org.
    5. George Gui & Seungwoo Kim, 2025. "Leveraging LLMs to Improve Experimental Design: A Generative Stratification Approach," Papers 2509.25709, arXiv.org.
    6. Koji Takahashi & Joon Suk Park, 2025. "Generative AI for Surveys on Payment Apps: AIs' View on Privacy and Technology," IMES Discussion Paper Series 25-E-13, Institute for Monetary and Economic Studies, Bank of Japan.
    7. Hui Chen & Antoine Didisheim & Mohammad Pourmohammadi & Luciano Somoza & Hanqing Tian, 2025. "A Financial Brain Scan of the LLM," Papers 2508.21285, arXiv.org, revised Feb 2026.
    9. Yuan Gao & Dokyun Lee & Gordon Burtch & Sina Fazelpour, 2024. "Take Caution in Using LLMs as Human Surrogates: Scylla Ex Machina," Papers 2410.19599, arXiv.org, revised Jan 2025.
    10. Aliya Amirova & Theodora Fteropoulli & Nafiso Ahmed & Martin R Cowie & Joel Z Leibo, 2024. "Framework-based qualitative analysis of free responses of Large Language Models: Algorithmic fidelity," PLOS ONE, Public Library of Science, vol. 19(3), pages 1-33, March.
    11. Sugat Chaturvedi & Rochana Chaturvedi, 2025. "Who Gets the Callback? Generative AI and Gender Bias," Papers 2504.21400, arXiv.org.
    12. Anne Lundgaard Hansen & Seung Jung Lee, 2025. "Financial Stability Implications of Generative AI: Taming the Animal Spirits," Papers 2510.01451, arXiv.org.
    13. Hua Li & Qifang Wang & Ye Wu, 2025. "From Mobile Media to Generative AI: The Evolutionary Logic of Computational Social Science Across Data, Methods, and Theory," Mathematics, MDPI, vol. 13(19), pages 1-17, September.
    14. Ben Weidmann & Yixian Xu & David J. Deming, 2025. "Measuring Human Leadership Skills with Artificially Intelligent Agents," Papers 2508.02966, arXiv.org.
    15. Thomas Henning & Siddhartha M. Ojha & Ross Spoon & Jiatong Han & Colin F. Camerer, 2025. "LLM Agents Do Not Replicate Human Market Traders: Evidence From Experimental Finance," Papers 2502.15800, arXiv.org, revised Oct 2025.
    16. Navid Ghaffarzadegan & Aritra Majumdar & Ross Williams & Niyousha Hosseinichimeh, 2024. "Generative agent‐based modeling: an introduction and tutorial," System Dynamics Review, System Dynamics Society, vol. 40(1), January.
    17. Augusto Gonzalez-Bonorino & Monica Capra & Emilio Pantoja, 2025. "LLMs Model Non-WEIRD Populations: Experiments with Synthetic Cultural Agents," Papers 2501.06834, arXiv.org.
    18. Seung Jung Lee & Anne Lundgaard Hansen, 2025. "Financial Stability Implications of Generative AI: Taming the Animal Spirits," Finance and Economics Discussion Series 2025-090, Board of Governors of the Federal Reserve System (U.S.).
    19. Wayne Gao & Sukjin Han & Annie Liang, 2026. "How Well Do LLMs Predict Human Behavior? A Measure of their Pretrained Knowledge," Papers 2601.12343, arXiv.org.
    20. Yu Liu & Wenwen Li & Yifan Dou & Guangnan Ye, 2025. "When Machines Meet Each Other: Network Effects and the Strategic Role of History in Multi-Agent AI," Papers 2510.06903, arXiv.org.
    21. Sriram Tolety, 2025. "Tacit Bidder-Side Collusion: Artificial Intelligence in Dynamic Auctions," Papers 2511.21802, arXiv.org.

