Printed from https://ideas.repec.org/p/nbr/nberwo/31122.html

Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?

Author

  • John J. Horton

Abstract

Newly developed large language models (LLMs), because of how they are trained and designed, are implicit computational models of humans: a homo silicus. LLMs can be used the way economists use homo economicus: they can be given endowments, information, preferences, and so on, and their behavior can then be explored in scenarios via simulation. Experiments using this approach, derived from Charness and Rabin (2002), Kahneman, Knetsch and Thaler (1986), and Samuelson and Zeckhauser (1988), show results qualitatively similar to the originals, but it is also easy to try variations for fresh insights. LLMs could allow researchers to pilot studies via simulation first, searching for novel social science insights to test in the real world.
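The simulation recipe the abstract describes (endow an agent with attributes, pose a scenario, record its choice) can be sketched in a few lines. This is a minimal illustration, not the paper's actual code: the persona, the vignette wording (loosely in the spirit of the Kahneman, Knetsch and Thaler fairness questions), and the pluggable `llm` callable are all assumptions, and the model call itself is left as a stub.

```python
# Minimal sketch of the "homo silicus" approach: give an LLM a persona
# (endowments, preferences), present a scenario, and observe its choice.
# The LLM query is a stub; any callable str -> str can be substituted.

def build_agent_prompt(persona: str, scenario: str, options: list[str]) -> str:
    """Compose a prompt that endows the simulated agent with a persona
    and asks it to choose among numbered options for a scenario."""
    numbered = "\n".join(f"{i + 1}. {opt}" for i, opt in enumerate(options))
    return (
        f"You are {persona}.\n\n"
        f"Scenario: {scenario}\n\n"
        f"Options:\n{numbered}\n\n"
        "Answer with the number of the option you choose."
    )

def simulate_agent(prompt: str, llm=None) -> str:
    """Send the prompt to an LLM; `llm` is any callable str -> str.
    Defaults to a fixed placeholder so the sketch runs without an API key."""
    if llm is None:
        llm = lambda p: "2"  # placeholder response for illustration only
    return llm(prompt)

prompt = build_agent_prompt(
    persona="a person who cares strongly about fairness",
    scenario=("A hardware store raises the price of snow shovels from $15 "
              "to $20 the morning after a large snowstorm."),
    options=["Completely fair", "Acceptable", "Unfair", "Very unfair"],
)
choice = simulate_agent(prompt)
```

Varying the persona string (e.g. different endowments or stated preferences) and re-running the same scenario is how the paper's "easy to try variations" point would be operationalized in practice.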

Suggested Citation

  • John J. Horton, 2023. "Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?," NBER Working Papers 31122, National Bureau of Economic Research, Inc.
  • Handle: RePEc:nbr:nberwo:31122

    Download full text from publisher

    File URL: http://www.nber.org/papers/w31122.pdf
    Download Restriction: Access to the full text is generally limited to series subscribers, however if the top level domain of the client browser is in a developing country or transition economy free access is provided. More information about subscriptions and free access is available at http://www.nber.org/wwphelp.html. Free access is also available to older working papers.

    As access to this document is restricted, you may want to search for a different version of it.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Felix Chopra & Ingar Haaland, 2023. "Conducting qualitative interviews with AI," CEBI working paper series 23-06, University of Copenhagen. Department of Economics. The Center for Economic Behavior and Inequality (CEBI).
    2. Andrea Coletta & Kshama Dwarakanath & Penghang Liu & Svitlana Vyetrenko & Tucker Balch, 2024. "LLM-driven Imitation of Subrational Behavior: Illusion or Reality?," Papers 2402.08755, arXiv.org.
    3. Christoph Engel & Max R. P. Grossmann & Axel Ockenfels, 2023. "Integrating machine behavior into human subject experiments: A user-friendly toolkit and illustrations," Discussion Paper Series of the Max Planck Institute for Research on Collective Goods 2024_01, Max Planck Institute for Research on Collective Goods.
    4. Emaad Manzoor & Nikhil Malik, 2023. "Designing Effective Music Excerpts," Papers 2309.14475, arXiv.org.
    5. Gary Charness & Brian Jabarian & John List, 2023. "Generation Next: Experimentation with AI," Artefactual Field Experiments 00777, The Field Experiments Website.
    6. Siting Lu, 2024. "Strategic Interactions between Large Language Models-based Agents in Beauty Contests," Papers 2404.08492, arXiv.org.
    7. Miguel Faria-e-Castro & Fernando Leibovici, 2023. "Artificial Intelligence and Inflation Forecasts," Working Papers 2023-015, Federal Reserve Bank of St. Louis, revised 26 Feb 2024.
    8. Joshua C. Yang & Marcin Korecki & Damian Dailisan & Carina I. Hausladen & Dirk Helbing, 2024. "LLM Voting: Human Choices and AI Collective Decision Making," Papers 2402.01766, arXiv.org.
    9. Kevin Leyton-Brown & Paul Milgrom & Neil Newman & Ilya Segal, 2023. "Artificial Intelligence and Market Design: Lessons Learned from Radio Spectrum Reallocation," NBER Chapters, in: New Directions in Market Design, National Bureau of Economic Research, Inc.
    10. Nir Chemaya & Daniel Martin, 2023. "Perceptions and Detection of AI Use in Manuscript Preparation for Academic Journals," Papers 2311.14720, arXiv.org, revised Jan 2024.
    11. Bauer, Kevin & Liebich, Lena & Hinz, Oliver & Kosfeld, Michael, 2023. "Decoding GPT's hidden "rationality" of cooperation," SAFE Working Paper Series 401, Leibniz Institute for Financial Research SAFE.
    12. Fulin Guo, 2023. "GPT in Game Theory Experiments," Papers 2305.05516, arXiv.org, revised Dec 2023.
    13. Lijia Ma & Xingchen Xu & Yong Tan, 2024. "Crafting Knowledge: Exploring the Creative Mechanisms of Chat-Based Search Engines," Papers 2402.19421, arXiv.org.
    14. Yiting Chen & Tracy Xiao Liu & You Shan & Songfa Zhong, 2023. "The emergence of economic rationality of GPT," Proceedings of the National Academy of Sciences, vol. 120(51), article 2316205120, December.
    15. Ali Goli & Amandeep Singh, 2023. "Exploring the Influence of Language on Time-Reward Perceptions in Large Language Models: A Study Using GPT-3.5," Papers 2305.02531, arXiv.org, revised Jun 2023.
    16. Zengqing Wu & Shuyuan Zheng & Qianying Liu & Xu Han & Brian Inhyuk Kwon & Makoto Onizuka & Shaojie Tang & Run Peng & Chuan Xiao, 2024. "Shall We Talk: Exploring Spontaneous Collaborations of Competing LLM Agents," Papers 2402.12327, arXiv.org.
    17. Jiafu An & Difang Huang & Chen Lin & Mingzhu Tai, 2024. "Measuring Gender and Racial Biases in Large Language Models," Papers 2403.15281, arXiv.org.
    18. Keegan Harris & Nicole Immorlica & Brendan Lucier & Aleksandrs Slivkins, 2023. "Algorithmic Persuasion Through Simulation," Papers 2311.18138, arXiv.org, revised Apr 2024.
    19. Leland Bybee, 2023. "Surveying Generative AI's Economic Expectations," Papers 2305.02823, arXiv.org, revised May 2023.
    20. Van Pham & Scott Cunningham, 2024. "ChatGPT Can Predict the Future when it Tells Stories Set in the Future About the Past," Papers 2404.07396, arXiv.org, revised Apr 2024.
    21. Philip Brookins & Jason DeBacker, 2024. "Playing games with GPT: What can we learn about a large language model from canonical strategic games?," Economics Bulletin, AccessEcon, vol. 44(1), pages 25-37.
    22. Zengqing Wu & Run Peng & Xu Han & Shuyuan Zheng & Yixin Zhang & Chuan Xiao, 2023. "Smart Agent-Based Modeling: On the Use of Large Language Models in Computer Simulations," Papers 2311.06330, arXiv.org, revised Dec 2023.
    23. Rosa-García, Alfonso, 2024. "Student Reactions to AI-Replicant Professor in an Econ101 Teaching Video," MPRA Paper 120135, University Library of Munich, Germany.
    24. Yan Leng & Yuan Yuan, 2023. "Do LLM Agents Exhibit Social Behavior?," Papers 2312.15198, arXiv.org, revised Feb 2024.
    25. George Gui & Olivier Toubia, 2023. "The Challenge of Using LLMs to Simulate Human Behavior: A Causal Inference Perspective," Papers 2312.15524, arXiv.org.

    More about this item

    JEL classification:

    • D0 - Microeconomics - - General


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nbr:nberwo:31122. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: the person in charge (email available below). General contact details of provider: https://edirc.repec.org/data/nberrus.html .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.