
General Social Agents

Author

Listed:
  • Benjamin S. Manning
  • John J. Horton

Abstract

Useful social science theories predict behavior across settings. However, applying a theory to make predictions in new settings is challenging: rarely can it be done without ad hoc modifications to account for setting-specific factors. We argue that AI agents placed in simulations of those novel settings offer an alternative way to apply theory, requiring minimal or no modification. We present an approach for building such "general" agents that use theory-grounded natural language instructions, existing empirical data, and knowledge acquired by the underlying AI during training. To demonstrate the approach in settings where no data from the relevant data-generating process exists--as is often the case in applied prediction problems--we design a heterogeneous population of 883,320 novel games. AI agents are constructed using human data from a small set of conceptually related but structurally distinct "seed" games. In preregistered experiments, agents on average predict initial human play in a random sample of 1,500 games from the population better than (i) a cognitive hierarchy model, (ii) game-theoretic equilibria, and (iii) out-of-the-box agents. For a small set of separate novel games, these simulations predict responses from a new sample of human subjects better than even the most plausibly relevant published human data.
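
The abstract benchmarks the proposed agents against a cognitive hierarchy model, the one baseline with a standard closed-form recipe. As a point of reference only, here is a minimal sketch (not the authors' code) of a Poisson cognitive-hierarchy prediction of initial play in a two-player normal-form game; the toy payoff matrices, the precision parameter tau = 1.5, and the level cap are illustrative assumptions.

```python
"""Minimal Poisson cognitive-hierarchy (CH) sketch for a 2-player
normal-form game.  Illustrative only; parameter values are assumptions."""

import math
import numpy as np


def poisson_weights(tau: float, k_max: int) -> np.ndarray:
    """Truncated Poisson(tau) weights over thinking levels 0..k_max."""
    w = np.array([math.exp(-tau) * tau**k / math.factorial(k)
                  for k in range(k_max + 1)])
    return w / w.sum()


def cognitive_hierarchy_play(row_payoffs, col_payoffs, tau=1.5, k_max=6):
    """Predicted distribution of initial play for both players under Poisson-CH.

    row_payoffs[i, j]: row player's payoff when row plays i and column plays j.
    col_payoffs[i, j]: column player's payoff at the same action profile.
    """
    n_row, n_col = row_payoffs.shape
    w = poisson_weights(tau, k_max)

    # Level-0 players randomize uniformly over their actions.
    row_by_level = [np.full(n_row, 1.0 / n_row)]
    col_by_level = [np.full(n_col, 1.0 / n_col)]

    for k in range(1, k_max + 1):
        # A level-k player believes opponents are levels 0..k-1,
        # weighted by the renormalized Poisson distribution.
        belief = w[:k] / w[:k].sum()
        opp_col = sum(b * p for b, p in zip(belief, col_by_level))
        opp_row = sum(b * p for b, p in zip(belief, row_by_level))

        # Pure-strategy best response to that belief.
        row_play = np.eye(n_row)[np.argmax(row_payoffs @ opp_col)]
        col_play = np.eye(n_col)[np.argmax(opp_row @ col_payoffs)]
        row_by_level.append(row_play)
        col_by_level.append(col_play)

    # Population-level prediction: Poisson mixture over levels.
    row_pred = sum(wk * p for wk, p in zip(w, row_by_level))
    col_pred = sum(wk * p for wk, p in zip(w, col_by_level))
    return row_pred, col_pred


if __name__ == "__main__":
    # Toy 3x3 coordination-style game, purely for illustration.
    row = np.array([[9, 0, 0], [0, 6, 0], [0, 0, 3]], dtype=float)
    print(cognitive_hierarchy_play(row, row.copy()))
```

In the paper's setup, the analogous prediction from a theory-grounded agent would instead come from a prompted language model playing the same game; that step is not reproduced here.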

Suggested Citation

  • Benjamin S. Manning & John J. Horton, 2025. "General Social Agents," Papers 2508.17407, arXiv.org, revised Sep 2025.
  • Handle: RePEc:arx:papers:2508.17407

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2508.17407
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. John C. Harsanyi & Reinhard Selten, 1988. "A General Theory of Equilibrium Selection in Games," MIT Press Books, The MIT Press, edition 1, volume 1, number 0262582384, December.
    2. Benjamin S. Manning & Kehang Zhu & John J. Horton, 2024. "Automated Social Science: Language Models as Scientist and Subjects," Papers 2404.11794, arXiv.org, revised Apr 2024.
    3. Jeongbin Kim & Matthew Kovach & Kyu-Min Lee & Euncheol Shin & Hector Tzavellas, 2024. "Learning to be Homo Economicus: Can an LLM Learn Preferences from Choice," Papers 2401.07345, arXiv.org.
    4. John J. Horton, 2023. "Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?," NBER Working Papers 31122, National Bureau of Economic Research, Inc.
    5. Mohammed Alsobay & David G. Rand & Duncan J. Watts & Abdullah Almaatouq, 2025. "Integrative Experiments Identify How Punishment Impacts Welfare in Public Goods Games," Papers 2508.17151, arXiv.org.
    6. John J. Horton, 2023. "Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?," Papers 2301.07543, arXiv.org.
    7. Benjamin S. Manning & Kehang Zhu & John J. Horton, 2024. "Automated Social Science: Language Models as Scientist and Subjects," NBER Working Papers 32381, National Bureau of Economic Research, Inc.
    8. Sendhil Mullainathan & Ashesh Rambachan, 2024. "From Predictive Algorithms to Automatic Generation of Anomalies," Papers 2404.10111, arXiv.org, revised Sep 2025.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Sugat Chaturvedi & Rochana Chaturvedi, 2025. "Who Gets the Callback? Generative AI and Gender Bias," Papers 2504.21400, arXiv.org.
    2. Alexander Erlei, 2025. "From Digital Distrust to Codified Honesty: Experimental Evidence on Generative AI in Credence Goods Markets," Papers 2509.06069, arXiv.org.
    3. Alejandro Lopez-Lira & Yuehua Tang & Mingyin Zhu, 2025. "The Memorization Problem: Can We Trust LLMs' Economic Forecasts?," Papers 2504.14765, arXiv.org.
    4. Alejandro Lopez-Lira, 2025. "Can Large Language Models Trade? Testing Financial Theories with LLM Agents in Market Simulations," Papers 2504.10789, arXiv.org.
    5. Felipe A. Csaszar & Harsh Ketkar & Hyunjin Kim, 2024. "Artificial Intelligence and Strategic Decision-Making: Evidence from Entrepreneurs and Investors," Papers 2408.08811, arXiv.org.
    6. Gillian K. Hadfield & Andrew Koh, 2025. "An Economy of AI Agents," Papers 2509.01063, arXiv.org.
    7. Kevin Leyton-Brown & Paul Milgrom & Neil Newman & Ilya Segal, 2024. "Artificial Intelligence and Market Design: Lessons Learned from Radio Spectrum Reallocation," NBER Chapters, in: New Directions in Market Design, National Bureau of Economic Research, Inc.
    8. Capra, C. Monica & Kniesner, Thomas J., 2025. "Daniel Kahneman’s Underappreciated Last Published Paper: Empirical Implications for Benefit-Cost Analysis and a Chat Session Discussion with Bots," IZA Discussion Papers 17841, Institute of Labor Economics (IZA).
    9. Kirshner, Samuel N., 2024. "GPT and CLT: The impact of ChatGPT's level of abstraction on consumer recommendations," Journal of Retailing and Consumer Services, Elsevier, vol. 76(C).
    10. Shu Wang & Zijun Yao & Shuhuai Zhang & Jianuo Gai & Tracy Xiao Liu & Songfa Zhong, 2025. "When Experimental Economics Meets Large Language Models: Evidence-based Tactics," Papers 2505.21371, arXiv.org, revised Jul 2025.
    11. C. Monica Capra & Thomas J. Kniesner, 2025. "Daniel Kahneman’s underappreciated last published paper: Empirical implications for benefit-cost analysis and a chat session discussion with bots," Journal of Risk and Uncertainty, Springer, vol. 71(1), pages 29-51, August.
    12. Zengqing Wu & Run Peng & Xu Han & Shuyuan Zheng & Yixin Zhang & Chuan Xiao, 2023. "Smart Agent-Based Modeling: On the Use of Large Language Models in Computer Simulations," Papers 2311.06330, arXiv.org, revised Dec 2023.
    13. Hui Chen & Antoine Didisheim & Luciano Somoza & Hanqing Tian, 2025. "A Financial Brain Scan of the LLM," Papers 2508.21285, arXiv.org.
    14. Joshua C. Yang & Damian Dailisan & Marcin Korecki & Carina I. Hausladen & Dirk Helbing, 2024. "LLM Voting: Human Choices and AI Collective Decision Making," Papers 2402.01766, arXiv.org, revised Aug 2024.
    15. Elif Akata & Lion Schulz & Julian Coda-Forno & Seong Joon Oh & Matthias Bethge & Eric Schulz, 2025. "Playing repeated games with large language models," Nature Human Behaviour, Nature, vol. 9(7), pages 1380-1390, July.
    16. Nir Chemaya & Daniel Martin, 2024. "Perceptions and detection of AI use in manuscript preparation for academic journals," PLOS ONE, Public Library of Science, vol. 19(7), pages 1-16, July.
    17. Lijia Ma & Xingchen Xu & Yong Tan, 2024. "Crafting Knowledge: Exploring the Creative Mechanisms of Chat-Based Search Engines," Papers 2402.19421, arXiv.org.
    18. Ali Goli & Amandeep Singh, 2023. "Exploring the Influence of Language on Time-Reward Perceptions in Large Language Models: A Study Using GPT-3.5," Papers 2305.02531, arXiv.org, revised Jun 2023.
    19. Evangelos Katsamakas, 2024. "Business models for the simulation hypothesis," Papers 2404.08991, arXiv.org.
    20. Yuan Gao & Dokyun Lee & Gordon Burtch & Sina Fazelpour, 2024. "Take Caution in Using LLMs as Human Surrogates: Scylla Ex Machina," Papers 2410.19599, arXiv.org, revised Jan 2025.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2508.17407. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.
