
Artificial Intelligence versus Maya Angelou: Experimental evidence that people cannot differentiate AI-generated from human-written poetry

Author

Listed:
  • Nils Kobis
  • Luca Mossink

Abstract

The release of openly available, robust natural language generation (NLG) algorithms has spurred much public attention and debate, partly because of the algorithms' purported ability to generate human-like text across various domains. Yet empirical evidence from incentivized tasks assessing whether people (a) can distinguish and (b) prefer algorithm-generated over human-written text has been lacking. We conducted two experiments assessing behavioral reactions to the state-of-the-art NLG algorithm GPT-2 (total N = 830). Using the identical starting lines of human-written poems, GPT-2 produced samples of poems. From these samples, either a random poem was chosen (Human-out-of-the-loop) or the best one was selected (Human-in-the-loop), and it was then paired with a human-written poem. In a new incentivized version of the Turing Test, participants failed to reliably detect the algorithmically generated poems in the Human-in-the-loop treatment, yet succeeded in the Human-out-of-the-loop treatment. Further, people revealed a slight aversion to algorithm-generated poetry, independent of whether they were informed about the algorithmic origin of the poem (Transparency) or not (Opacity). We discuss what these results convey about the ability of NLG algorithms to produce human-like text and propose methodologies for studying such learning algorithms in human-agent experimental settings.
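
To make the generation pipeline concrete, the following is a minimal, hypothetical sketch of how poem samples can be produced from a fixed starting line and then either drawn at random (Human-out-of-the-loop) or curated (Human-in-the-loop). It is not the authors' code: the Hugging Face transformers GPT-2 checkpoint, the decoding parameters, the example opening line, and the longest-sample stand-in for human selection are all illustrative assumptions.

    import random
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    # Load a publicly released GPT-2 checkpoint (assumption: Hugging Face "gpt2").
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    def generate_poem_samples(starting_line, n_samples=10, max_new_tokens=80):
        """Continue a poem's opening line with several stochastic GPT-2 samples."""
        inputs = tokenizer(starting_line, return_tensors="pt")
        outputs = model.generate(
            **inputs,
            do_sample=True,                      # sampling makes each continuation distinct
            top_k=50,                            # illustrative decoding parameter
            max_new_tokens=max_new_tokens,
            num_return_sequences=n_samples,
            pad_token_id=tokenizer.eos_token_id,
        )
        return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

    # Hypothetical opening line; the study used the starting lines of published human poems.
    samples = generate_poem_samples("The fog comes on little cat feet")

    random_pick = random.choice(samples)  # Human-out-of-the-loop: one sample drawn at random
    best_pick = max(samples, key=len)     # Human-in-the-loop: placeholder rule; in the study
                                          # a human judge selected the best sample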

Suggested Citation

  • Nils Kobis & Luca Mossink, 2020. "Artificial Intelligence versus Maya Angelou: Experimental evidence that people cannot differentiate AI-generated from human-written poetry," Papers 2005.09980, arXiv.org, revised Sep 2020.
  • Handle: RePEc:arx:papers:2005.09980

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2005.09980
    File Function: Latest version
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Leo Leppänen & Hanna Tuulonen & Stefanie Sirén-Heikel, 2020. "Automated Journalism as a Source of and a Diagnostic Device for Bias in Reporting," Media and Communication, Cogitatio Press, vol. 8(3), pages 39-49.
    2. Chris Lam, 2024. "Debiasing Alternative Data for Credit Underwriting Using Causal Inference," Papers 2410.22382, arXiv.org, revised Oct 2024.
    3. Emily Owens & CarlyWill Sloan, 2023. "Can text messages reduce incarceration in rural and vulnerable populations?," Journal of Policy Analysis and Management, John Wiley & Sons, Ltd., vol. 42(4), pages 992-1009, September.
    4. Christophe Hurlin & Christophe Perignon & Sébastien Saurin, 2021. "The Fairness of Credit Scoring Models," Working Papers hal-03501452, HAL.
    5. Anna Zink & Sherri Rose, 2020. "Fair regression for health care spending," Biometrics, The International Biometric Society, vol. 76(3), pages 973-982, September.
    6. Claire Lazar Reich, 2021. "Affirmative Action vs. Affirmative Information," Papers 2102.10019, arXiv.org, revised Oct 2024.
    7. Ghysels, Eric & Babii, Andrii & Chen, Xi & Kumar, Rohit, 2020. "Binary Choice with Asymmetric Loss in a Data-Rich Environment: Theory and an Application to Racial Justice," CEPR Discussion Papers 15418, C.E.P.R. Discussion Papers.
    8. Laura Blattner & Scott Nelson, 2021. "How Costly is Noise? Data and Disparities in Consumer Credit," Papers 2105.07554, arXiv.org.
    9. Ashesh Rambachan & Jon Kleinberg & Sendhil Mullainathan & Jens Ludwig, 2020. "An Economic Approach to Regulating Algorithms," NBER Working Papers 27111, National Bureau of Economic Research, Inc.
    10. Andini, Monica & Boldrini, Michela & Ciani, Emanuele & de Blasio, Guido & D'Ignazio, Alessio & Paladini, Andrea, 2022. "Machine learning in the service of policy targeting: The case of public credit guarantees," Journal of Economic Behavior & Organization, Elsevier, vol. 198(C), pages 434-475.
    11. Charlson, G., 2022. "Digital Gold? Pricing, Inequality and Participation in Data Markets," Janeway Institute Working Papers 2225, Faculty of Economics, University of Cambridge.
    12. Charlson, G., 2022. "Digital gold? Pricing, inequality and participation in data markets," Cambridge Working Papers in Economics 2258, Faculty of Economics, University of Cambridge.
    13. Davide Viviano & Jelena Bradic, 2020. "Fair Policy Targeting," Papers 2005.12395, arXiv.org, revised Jun 2022.
    14. Bauer, Kevin & Pfeuffer, Nicolas & Abdel-Karim, Benjamin M. & Hinz, Oliver & Kosfeld, Michael, 2020. "The terminator of social welfare? The economic consequences of algorithmic discrimination," SAFE Working Paper Series 287, Leibniz Institute for Financial Research SAFE.
    15. Phyllis Asorh Oteng & Victor Curtis Lartey & Amos Kwasi Amofa, 2023. "Modeling the Macroeconomic and Demographic Determinants of Life Insurance Demand in Ghana Using the Elastic Net Algorithm," SAGE Open, , vol. 13(3), pages 21582440231, September.
    16. Wenlong Sun & Olfa Nasraoui & Patrick Shafto, 2020. "Evolution and impact of bias in human and machine learning algorithm interaction," PLOS ONE, Public Library of Science, vol. 15(8), pages 1-39, August.
    17. Annie Liang & Jay Lu & Xiaosheng Mu & Kyohei Okumura, 2021. "Algorithm Design: A Fairness-Accuracy Frontier," Papers 2112.09975, arXiv.org, revised May 2024.
    18. Karaenke, Paul & Bichler, Martin & Merting, Soeren & Minner, Stefan, 2020. "Non-monetary coordination mechanisms for time slot allocation in warehouse delivery," European Journal of Operational Research, Elsevier, vol. 286(3), pages 897-907.
    19. Jon Kleinberg & Sendhil Mullainathan, 2019. "Simplicity Creates Inequity: Implications for Fairness, Stereotypes, and Interpretability," NBER Working Papers 25854, National Bureau of Economic Research, Inc.
    20. Chen, Yutong, 2024. "Does the gig economy discriminate against women? Evidence from physicians in China," Journal of Development Economics, Elsevier, vol. 169(C).

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2005.09980. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.