Printed from https://ideas.repec.org/p/arx/papers/2508.02966.html

Measuring Human Leadership Skills with Artificially Intelligent Agents

Authors

  • Ben Weidmann
  • Yixian Xu
  • David J. Deming

Abstract

We show that the ability to lead groups of humans is predicted by leadership skill with Artificially Intelligent agents. In a large pre-registered lab experiment, human leaders worked with AI agents to solve problems. Their performance on this 'AI leadership test' was strongly correlated with their causal impact on human teams, which we estimate by repeatedly randomly assigning leaders to groups of human followers and measuring team performance. Successful leaders of both humans and AI agents ask more questions and engage in more conversational turn-taking; they score higher on measures of social intelligence, fluid intelligence, and decision-making skill, but do not differ in gender, age, ethnicity or education. Our findings indicate that AI agents can be effective proxies for human participants in social experiments, which greatly simplifies the measurement of leadership and teamwork skills.

Suggested Citation

  • Ben Weidmann & Yixian Xu & David J. Deming, 2025. "Measuring Human Leadership Skills with Artificially Intelligent Agents," Papers 2508.02966, arXiv.org.
  • Handle: RePEc:arx:papers:2508.02966
    Download full text from publisher

    File URL: http://arxiv.org/pdf/2508.02966
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Ben Weidmann & David J. Deming, 2021. "Team Players: How Social Skills Improve Team Performance," Econometrica, Econometric Society, vol. 89(6), pages 2637-2657, November.
    2. Jason W. Burton & Ezequiel Lopez-Lopez & Shahar Hechtlinger & Zoe Rahwan & Samuel Aeschbach & Michiel A. Bakker & Joshua A. Becker & Aleks Berditchevskaia & Julian Berger & Levin Brinkmann & Lucie Fle, 2024. "How large language models can reshape collective intelligence," Nature Human Behaviour, Nature, vol. 8(9), pages 1643-1655, September.
    3. Argyle, Lisa P. & Busby, Ethan C. & Fulda, Nancy & Gubler, Joshua R. & Rytting, Christopher & Wingate, David, 2023. "Out of One, Many: Using Language Models to Simulate Human Samples," Political Analysis, Cambridge University Press, vol. 31(3), pages 337-351, July.
    4. Milena Tsvetkova & Taha Yasseri & Niccolo Pescetelli & Tobias Werner, 2024. "A new sociology of humans and machines," Nature Human Behaviour, Nature, vol. 8(10), pages 1864-1876, October.
    5. David J. Deming, 2017. "The Growing Importance of Social Skills in the Labor Market," The Quarterly Journal of Economics, President and Fellows of Harvard College, vol. 132(4), pages 1593-1640.
    6. John J. Horton & Apostolos Filippas & Benjamin S. Manning, 2023. "Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?," NBER Working Papers 31122, National Bureau of Economic Research, Inc.
    7. John J. Horton & Apostolos Filippas & Benjamin S. Manning, 2023. "Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?," Papers 2301.07543, arXiv.org, revised Feb 2026.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Hongshen Sun & Juanjuan Zhang, 2025. "From Model Choice to Model Belief: Establishing a New Measure for LLM-Based Research," Papers 2512.23184, arXiv.org.
    2. Koji Takahashi & Joon Suk Park, 2025. "Generative AI for Surveys on Payment Apps: AIs' View on Privacy and Technology," IMES Discussion Paper Series 25-E-13, Institute for Monetary and Economic Studies, Bank of Japan.
    3. Hui Chen & Antoine Didisheim & Mohammad Pourmohammadi & Luciano Somoza & Hanqing Tian, 2025. "A Financial Brain Scan of the LLM," Papers 2508.21285, arXiv.org, revised Feb 2026.
    4. repec:osf:osfxxx:r3qng_v1 is not listed on IDEAS
    5. Matthew O. Jackson & Qiaozhu Mei & Stephanie W. Wang & Yutong Xie & Walter Yuan & Seth Benzell & Erik Brynjolfsson & Colin F. Camerer & James Evans & Brian Jabarian & Jon Kleinberg & Juanjuan Meng & Se, 2025. "AI Behavioral Science," Papers 2509.13323, arXiv.org.
    6. George Gui & Seungwoo Kim, 2025. "Leveraging LLMs to Improve Experimental Design: A Generative Stratification Approach," Papers 2509.25709, arXiv.org.
    7. Filippo Gusella & Eugenio Vicario, 2025. "Generative Agents and Expectations: Do LLMs Align with Heterogeneous Agent Models?," Working Papers - Economics wp2025_18, Università degli Studi di Firenze, Dipartimento di Scienze per l'Economia e l'Impresa.
    8. Aliya Amirova & Theodora Fteropoulli & Nafiso Ahmed & Martin R Cowie & Joel Z Leibo, 2024. "Framework-based qualitative analysis of free responses of Large Language Models: Algorithmic fidelity," PLOS ONE, Public Library of Science, vol. 19(3), pages 1-33, March.
    9. Sugat Chaturvedi & Rochana Chaturvedi, 2025. "Who Gets the Callback? Generative AI and Gender Bias," Papers 2504.21400, arXiv.org.
    10. Anne Lundgaard Hansen & Seung Jung Lee, 2025. "Financial Stability Implications of Generative AI: Taming the Animal Spirits," Papers 2510.01451, arXiv.org.
    11. Hua Li & Qifang Wang & Ye Wu, 2025. "From Mobile Media to Generative AI: The Evolutionary Logic of Computational Social Science Across Data, Methods, and Theory," Mathematics, MDPI, vol. 13(19), pages 1-17, September.
    12. Navid Ghaffarzadegan & Aritra Majumdar & Ross Williams & Niyousha Hosseinichimeh, 2024. "Generative agent‐based modeling: an introduction and tutorial," System Dynamics Review, System Dynamics Society, vol. 40(1), January.
    13. Seung Jung Lee & Anne Lundgaard Hansen, 2025. "Financial Stability Implications of Generative AI: Taming the Animal Spirits," Finance and Economics Discussion Series 2025-090, Board of Governors of the Federal Reserve System (U.S.).
    14. Wayne Gao & Sukjin Han & Annie Liang, 2026. "How Well Do LLMs Predict Human Behavior? A Measure of their Pretrained Knowledge," Papers 2601.12343, arXiv.org.
    15. Ferraz, Vinícius & Olah, Tamas & Sazedul, Ratin & Schmidt, Robert & Schwieren, Christiane, 2025. "When Artificial Minds Negotiate: Dark Personality and the Ultimatum Game in Large Language Models," Working Papers 0768, University of Heidelberg, Department of Economics.
    16. Paola Cillo & Gaia Rubera, 2025. "Generative AI in innovation and marketing processes: A roadmap of research opportunities," Journal of the Academy of Marketing Science, Springer, vol. 53(3), pages 684-701, May.
    17. Yingnan Yan & Tianming Liu & Yafeng Yin, 2025. "Valuing Time in Silicon: Can Large Language Models Replicate Human Value of Travel Time?," Papers 2507.22244, arXiv.org, revised Dec 2025.
    18. Niyousha Hosseinichimeh & Aritra Majumdar & Ross Williams & Navid Ghaffarzadegan, 2024. "From text to map: a system dynamics bot for constructing causal loop diagrams," System Dynamics Review, System Dynamics Society, vol. 40(3), July.
    19. Filippo Gusella & Eugenio Vicario, 2025. "Generative Agents and Expectations: Do LLMs Align with Heterogeneous Agent Models?," Papers 2511.08604, arXiv.org.
    20. Nikoleta Anesti & Edward Hill & Andreas Joseph, 2025. "Inflation Attitudes of Large Language Models," Papers 2512.14306, arXiv.org.
    21. Eric Hitz & Mingmin Feng & Radu Tanase & René Algesheimer & Manuel S. Mariani, 2025. "The amplifier effect of artificial agents in social contagion," Papers 2502.21037, arXiv.org, revised Mar 2025.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2508.02966. See general information about how to correct material in RePEc.


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.