Printed from https://ideas.repec.org/p/feb/artefa/00777.html

Generation Next: Experimentation with AI

Author

Listed:
  • Gary Charness
  • Brian Jabarian
  • John List

Abstract

We investigate the potential for Large Language Models (LLMs) to enhance scientific practice within experimentation by identifying key areas, directions, and implications. First, we discuss how these models can improve experimental design, including refining elicitation wording, coding experiments, and producing documentation. Second, we discuss the implementation of experiments using LLMs, focusing on enhancing causal inference by creating consistent experiences, improving comprehension of instructions, and monitoring participant engagement in real time. Third, we highlight how LLMs can help analyze experimental data, including pre-processing, data cleaning, and other analytical tasks, while helping reviewers and replicators investigate studies. Each of these tasks improves the probability of reporting accurate findings. Finally, we recommend a scientific governance blueprint that manages the potential risks of using LLMs for experimental research while promoting their benefits. This could pave the way for open science opportunities and foster a culture of policy and industry experimentation at scale.

Suggested Citation

  • Gary Charness & Brian Jabarian & John List, 2023. "Generation Next: Experimentation with AI," Artefactual Field Experiments 00777, The Field Experiments Website.
  • Handle: RePEc:feb:artefa:00777

    Download full text from publisher

    File URL: http://s3.amazonaws.com/fieldexperiments-papers2/papers/00777.pdf
    Download Restriction: no

    References listed on IDEAS

    1. Susan Athey & Michael Luca, 2019. "Economists (and Economics) in Tech Companies," Journal of Economic Perspectives, American Economic Association, vol. 33(1), pages 209-230, Winter.
    2. John J. Horton, 2023. "Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?," NBER Working Papers 31122, National Bureau of Economic Research, Inc.
    3. Luigi Butera & Philip Grossman & Daniel Houser & John List & Marie-Claire Villeval, 2020. "A New Mechanism to Alleviate the Crises of Confidence in Science - With an Application to the Public Goods Game," Artefactual Field Experiments 00684, The Field Experiments Website.
    4. Erik Brynjolfsson & Danielle Li & Lindsey Raymond, 2023. "Generative AI at Work," Papers 2304.11771, arXiv.org.
    5. Gary Charness & Guillaume R. Frechette & John H. Kagel, 2004. "How Robust is Laboratory Gift Exchange?," Experimental Economics, Springer;Economic Science Association, vol. 7(2), pages 189-205, June.
    6. Matthew O. Jackson, 2009. "Networks and Economic Behavior," Annual Review of Economics, Annual Reviews, vol. 1(1), pages 489-513, May.
    7. Alex Davies & Petar Veličković & Lars Buesing & Sam Blackwell & Daniel Zheng & Nenad Tomašev & Richard Tanburn & Peter Battaglia & Charles Blundell & András Juhász & Marc Lackenby & Geordie Williamson, 2021. "Advancing mathematics by guiding human intuition with AI," Nature, Nature, vol. 600(7887), pages 70-74, December.
    8. Korinek, Anton, 2023. "Language Models and Cognitive Automation for Economic Research," CEPR Discussion Papers 17923, C.E.P.R. Discussion Papers.
    9. Deaton, Angus & Cartwright, Nancy, 2018. "Understanding and misunderstanding randomized controlled trials," Social Science & Medicine, Elsevier, vol. 210(C), pages 2-21.
    10. Guillaume R. Fréchette & Kim Sarnoff & Leeat Yariv, 2022. "Experimental Economics: Past and Future," Annual Review of Economics, Annual Reviews, vol. 14(1), pages 777-794, August.
    11. Richard A. Bettis, 2012. "The search for asterisks: Compromised statistical tests and flawed theories," Strategic Management Journal, Wiley Blackwell, vol. 33(1), pages 108-113, January.
    12. Colin F. Camerer, 2018. "Artificial Intelligence and Behavioral Economics," NBER Chapters, in: The Economics of Artificial Intelligence: An Agenda, pages 587-608, National Bureau of Economic Research, Inc.
    13. Gordon Pennycook & Ziv Epstein & Mohsen Mosleh & Antonio A. Arechar & Dean Eckles & David G. Rand, 2021. "Shifting attention to accuracy can reduce misinformation online," Nature, Nature, vol. 592(7855), pages 590-595, April.
    14. Brynjolfsson, Erik & Li, Danielle & Raymond, Lindsey R., 2023. "Generative AI at Work," Research Papers 4141, Stanford University, Graduate School of Business.
    15. Luigi Butera & Philip J Grossman & Daniel Houser & John A List & Marie Claire Villeval, 2020. "A New Mechanism to Alleviate the Crises of Confidence in Science - With an Application to the Public Goods Game," Working Papers halshs-02512932, HAL.
    16. John J. Horton, 2023. "Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?," Papers 2301.07543, arXiv.org.
    17. Jake M. Hofman & Duncan J. Watts & Susan Athey & Filiz Garip & Thomas L. Griffiths & Jon Kleinberg & Helen Margetts & Sendhil Mullainathan & Matthew J. Salganik & Simine Vazire & Alessandro Vespignani, 2021. "Integrating explanation and prediction in computational social science," Nature, Nature, vol. 595(7866), pages 181-188, July.
    18. Erik Snowberg & Leeat Yariv, 2021. "Testing the Waters: Behavior across Participant Pools," American Economic Review, American Economic Association, vol. 111(2), pages 687-719, February.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project. Subscribe to its RSS feed for this item.


    Cited by:

    1. Nir Chemaya & Daniel Martin, 2023. "Perceptions and Detection of AI Use in Manuscript Preparation for Academic Journals," Papers 2311.14720, arXiv.org, revised Jan 2024.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Nir Chemaya & Daniel Martin, 2023. "Perceptions and Detection of AI Use in Manuscript Preparation for Academic Journals," Papers 2311.14720, arXiv.org, revised Jan 2024.
    2. Yiting Chen & Tracy Xiao Liu & You Shan & Songfa Zhong, 2023. "The emergence of economic rationality of GPT," Proceedings of the National Academy of Sciences, Proceedings of the National Academy of Sciences, vol. 120(51), pages e2316205120, December.
    3. Takeuchi, Ai & Seki, Erika, 2023. "Coordination and free-riding problems in the provision of multiple public goods," Journal of Economic Behavior & Organization, Elsevier, vol. 206(C), pages 95-121.
    4. Omar Al-Ubaydli & John List & Claire Mackevicius & Min Sok Lee & Dana Suskind, 2019. "How Can Experiments Play a Greater Role in Public Policy? 12 Proposals from an Economic Model of Scaling," Artefactual Field Experiments 00679, The Field Experiments Website.
    5. John A. List, 2024. "Optimally generate policy-based evidence before scaling," Nature, Nature, vol. 626(7999), pages 491-499, February.
    6. Kevin Leyton-Brown & Paul Milgrom & Neil Newman & Ilya Segal, 2023. "Artificial Intelligence and Market Design: Lessons Learned from Radio Spectrum Reallocation," NBER Chapters, in: New Directions in Market Design, National Bureau of Economic Research, Inc.
    7. Marie Ferré & Stefanie Engel & Elisabeth Gsottbauer, 2023. "External validity of economic experiments on Agri‐environmental scheme design," Journal of Agricultural Economics, Wiley Blackwell, vol. 74(3), pages 661-685, September.
    8. Elias Bouacida & Renaud Foucart, 2022. "Rituals of Reason," Working Papers 344119591, Lancaster University Management School, Economics Department.
    9. Zengqing Wu & Run Peng & Xu Han & Shuyuan Zheng & Yixin Zhang & Chuan Xiao, 2023. "Smart Agent-Based Modeling: On the Use of Large Language Models in Computer Simulations," Papers 2311.06330, arXiv.org, revised Dec 2023.
    10. Eszter Czibor & David Jimenez‐Gomez & John A. List, 2019. "The Dozen Things Experimental Economists Should Do (More of)," Southern Economic Journal, John Wiley & Sons, vol. 86(2), pages 371-432, October.
    11. Joshua C. Yang & Marcin Korecki & Damian Dailisan & Carina I. Hausladen & Dirk Helbing, 2024. "LLM Voting: Human Choices and AI Collective Decision Making," Papers 2402.01766, arXiv.org.
    12. Lijia Ma & Xingchen Xu & Yong Tan, 2024. "Crafting Knowledge: Exploring the Creative Mechanisms of Chat-Based Search Engines," Papers 2402.19421, arXiv.org.
    13. Ali Goli & Amandeep Singh, 2023. "Exploring the Influence of Language on Time-Reward Perceptions in Large Language Models: A Study Using GPT-3.5," Papers 2305.02531, arXiv.org, revised Jun 2023.
    14. Sven Grüner & Mira Lehberger & Norbert Hirschauer & Oliver Mußhoff, 2022. "How (un)informative are experiments with students for other social groups? A study of agricultural students and farmers," Australian Journal of Agricultural and Resource Economics, Australian Agricultural and Resource Economics Society, vol. 66(3), pages 471-504, July.
    15. Brodeur, Abel & Cook, Nikolai M. & Hartley, Jonathan S. & Heyes, Anthony, 2023. "Do Pre-Registration and Pre-Analysis Plans Reduce p-Hacking and Publication Bias?: Evidence from 15,992 Test Statistics and Suggestions for Improvement," GLO Discussion Paper Series 1147 [pre.], Global Labor Organization (GLO).
    16. John A. List & Azeem M. Shaikh & Atom Vayalinkal, 2023. "Multiple testing with covariate adjustment in experimental economics," Journal of Applied Econometrics, John Wiley & Sons, Ltd., vol. 38(6), pages 920-939, September.
    17. Christoph Engel & Max R. P. Grossmann & Axel Ockenfels, 2023. "Integrating machine behavior into human subject experiments: A user-friendly toolkit and illustrations," Discussion Paper Series of the Max Planck Institute for Research on Collective Goods 2024_01, Max Planck Institute for Research on Collective Goods.
    18. Anil R. Doshi & Oliver P. Hauser, 2023. "Generative artificial intelligence enhances creativity but reduces the diversity of novel content," Papers 2312.00506, arXiv.org, revised Mar 2024.
    19. Brodeur, Abel & Cook, Nikolai & Hartley, Jonathan & Heyes, Anthony, 2022. "Do Pre-Registration and Pre-analysis Plans Reduce p-Hacking and Publication Bias?," MetaArXiv uxf39, Center for Open Science.
    20. Ahsanuzzaman, & Palm-Forster, Leah H. & Suter, Jordan F., 2022. "Experimental evidence of common pool resource use in the presence of uncertainty," Journal of Economic Behavior & Organization, Elsevier, vol. 194(C), pages 139-160.

    More about this item

    JEL classification:

    • C0 - Mathematical and Quantitative Methods - - General
    • C1 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods and Methodology: General
    • C80 - Mathematical and Quantitative Methods - - Data Collection and Data Estimation Methodology; Computer Programs - - - General
    • C82 - Mathematical and Quantitative Methods - - Data Collection and Data Estimation Methodology; Computer Programs - - - Methodology for Collecting, Estimating, and Organizing Macroeconomic Data; Data Access
    • C87 - Mathematical and Quantitative Methods - - Data Collection and Data Estimation Methodology; Computer Programs - - - Econometric Software
    • C9 - Mathematical and Quantitative Methods - - Design of Experiments
    • C90 - Mathematical and Quantitative Methods - - Design of Experiments - - - General
    • C92 - Mathematical and Quantitative Methods - - Design of Experiments - - - Laboratory, Group Behavior
    • C99 - Mathematical and Quantitative Methods - - Design of Experiments - - - Other

    NEP fields

    This paper has been announced in the following NEP Reports:

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:feb:artefa:00777. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: David Franks (email available below). General contact details of provider: http://www.fieldexperiments.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.