The Challenge of Using LLMs to Simulate Human Behavior: A Causal Inference Perspective

Authors

  • George Gui
  • Olivier Toubia

Abstract

Large Language Models (LLMs) have shown impressive potential to simulate human behavior. We identify a fundamental challenge in using them to simulate experiments: when LLM-simulated subjects are blind to the experimental design (as is standard practice with human subjects), variations in treatment systematically affect unspecified variables that should remain constant, violating the unconfoundedness assumption. Using demand estimation as a context and an actual experiment as a benchmark, we show this can lead to implausible results. While confounding may in principle be addressed by controlling for covariates, this can compromise ecological validity in the context of LLM simulations: controlled covariates become artificially salient in the simulated decision process, which introduces focalism. This trade-off between unconfoundedness and ecological validity is usually absent in traditional experimental design and represents a unique challenge in LLM simulations. We formalize this challenge theoretically, showing it stems from ambiguous prompting strategies, and hence cannot be fully addressed by improving training data or by fine-tuning. Alternative approaches that unblind the experimental design to the LLM show promise. Our findings suggest that effectively leveraging LLMs for experimental simulations requires fundamentally rethinking established experimental design practices rather than simply adapting protocols developed for human subjects.
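
To make the confounding mechanism concrete, here is a minimal sketch of a design-blind simulation in the spirit of the paper's demand-estimation setting. The prompt wording and the stubbed `query_llm` function are illustrative assumptions, not the authors' actual protocol or any particular vendor's API.

```python
# Illustrative sketch: why a design-blind LLM simulation can violate
# unconfoundedness. The prompt and the stubbed LLM call are hypothetical.

def query_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned answer so the
    sketch runs without an API key. Swap in an actual client here."""
    return "yes"

def purchase_prompt(price: float) -> str:
    # Only the treated variable (price) differs across conditions; the
    # simulated subject is blind to the experimental design, as is
    # standard practice with human subjects.
    return (
        "You are a shopper deciding whether to buy a six-pack of cola "
        f"priced at ${price:.2f}. Answer 'yes' or 'no'."
    )

for price in (2.99, 4.99):
    # The confound: covariates left unspecified in the prompt (store
    # type, brand tier, the shopper's income) are implicitly imputed by
    # the model, and that imputation can shift with the price cue, e.g.
    # a high price may evoke an upscale store or a premium brand. The
    # price contrast then mixes the causal effect of price with these
    # induced shifts.
    print(price, query_llm(purchase_prompt(price)))
```

Spelling the unspecified covariates out in the prompt would restore unconfoundedness, but, as the abstract notes, it makes them artificially salient in the simulated decision process (focalism); this is the trade-off the paper formalizes.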

Suggested Citation

  • George Gui & Olivier Toubia, 2023. "The Challenge of Using LLMs to Simulate Human Behavior: A Causal Inference Perspective," Papers 2312.15524, arXiv.org, revised Jan 2025.
  • Handle: RePEc:arx:papers:2312.15524

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2312.15524
    File Function: Latest version
    Download Restriction: no

    Citations

    Cited by:

    1. Hortense Fong & George Gui, 2024. "Modeling Story Expectations to Understand Engagement: A Generative Framework Using LLMs," Papers 2412.15239, arXiv.org, revised Jul 2025.
    2. Ruicheng Ao & Hongyu Chen & David Simchi-Levi, 2024. "Prediction-Guided Active Experiments," Papers 2411.12036, arXiv.org, revised Nov 2024.
    3. Ali Goli & Amandeep Singh, 2024. "Frontiers: Can Large Language Models Capture Human Preferences?," Marketing Science, INFORMS, vol. 43(4), pages 709-722, July.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Kirshner, Samuel N., 2024. "GPT and CLT: The impact of ChatGPT's level of abstraction on consumer recommendations," Journal of Retailing and Consumer Services, Elsevier, vol. 76(C).
    2. Hui Chen & Antoine Didisheim & Luciano Somoza & Hanqing Tian, 2025. "A Financial Brain Scan of the LLM," Papers 2508.21285, arXiv.org.
    3. Elif Akata & Lion Schulz & Julian Coda-Forno & Seong Joon Oh & Matthias Bethge & Eric Schulz, 2025. "Playing repeated games with large language models," Nature Human Behaviour, Nature, vol. 9(7), pages 1380-1390, July.
    4. Nir Chemaya & Daniel Martin, 2024. "Perceptions and detection of AI use in manuscript preparation for academic journals," PLOS ONE, Public Library of Science, vol. 19(7), pages 1-16, July.
    5. Lijia Ma & Xingchen Xu & Yong Tan, 2024. "Crafting Knowledge: Exploring the Creative Mechanisms of Chat-Based Search Engines," Papers 2402.19421, arXiv.org.
    6. Ali Goli & Amandeep Singh, 2023. "Exploring the Influence of Language on Time-Reward Perceptions in Large Language Models: A Study Using GPT-3.5," Papers 2305.02531, arXiv.org, revised Jun 2023.
    7. Evangelos Katsamakas, 2024. "Business models for the simulation hypothesis," Papers 2404.08991, arXiv.org.
    8. Yuan Gao & Dokyun Lee & Gordon Burtch & Sina Fazelpour, 2024. "Take Caution in Using LLMs as Human Surrogates: Scylla Ex Machina," Papers 2410.19599, arXiv.org, revised Jan 2025.
    9. Umberto Collodel, 2025. "Interpreting the Interpreter: Can We Model post-ECB Conferences Volatility with LLM Agents?," Papers 2508.13635, arXiv.org, revised Oct 2025.
    10. Jiaxin Liu & Yixuan Tang & Yi Yang & Kar Yan Tam, 2025. "Evaluating and Aligning Human Economic Risk Preferences in LLMs," Papers 2503.06646, arXiv.org, revised Sep 2025.
    11. George Gui & Seungwoo Kim, 2025. "Leveraging LLMs to Improve Experimental Design: A Generative Stratification Approach," Papers 2509.25709, arXiv.org.
    12. Christoph Engel & Max R. P. Grossmann & Axel Ockenfels, 2023. "Integrating machine behavior into human subject experiments: A user-friendly toolkit and illustrations," Discussion Paper Series of the Max Planck Institute for Research on Collective Goods 2024_01, Max Planck Institute for Research on Collective Goods.
    13. Yiting Chen & Tracy Xiao Liu & You Shan & Songfa Zhong, 2023. "The emergence of economic rationality of GPT," Proceedings of the National Academy of Sciences, vol. 120(51), e2316205120, December.
    14. Jiafu An & Difang Huang & Chen Lin & Mingzhu Tai, 2024. "Measuring Gender and Racial Biases in Large Language Models," Papers 2403.15281, arXiv.org.
    15. Aliya Amirova & Theodora Fteropoulli & Nafiso Ahmed & Martin R Cowie & Joel Z Leibo, 2024. "Framework-based qualitative analysis of free responses of Large Language Models: Algorithmic fidelity," PLOS ONE, Public Library of Science, vol. 19(3), pages 1-33, March.
    16. Fulin Guo, 2023. "GPT in Game Theory Experiments," Papers 2305.05516, arXiv.org, revised Dec 2023.
    17. Zareh Asatryan & Carlo Birkholz & Friedrich Heinemann, 2025. "Evidence-based policy or beauty contest? An LLM-based meta-analysis of EU cohesion policy evaluations," International Tax and Public Finance, Springer; International Institute of Public Finance, vol. 32(2), pages 625-655, April.
    18. Fabio Motoki & Valdemar Pinho Neto & Victor Rodrigues, 2024. "More human than human: measuring ChatGPT political bias," Public Choice, Springer, vol. 198(1), pages 3-23, January.
    19. Hua Li & Qifang Wang & Ye Wu, 2025. "From Mobile Media to Generative AI: The Evolutionary Logic of Computational Social Science Across Data, Methods, and Theory," Mathematics, MDPI, vol. 13(19), pages 1-17, September.
    20. Ben Weidmann & Yixian Xu & David J. Deming, 2025. "Measuring Human Leadership Skills with Artificially Intelligent Agents," Papers 2508.02966, arXiv.org.


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.