Printed from https://ideas.repec.org/p/hal/wpaper/hal-04164419.html

Bad machines corrupt good morals

Authors
  • Nils Köbis

    (Max Planck Institute for Human Development - Max-Planck-Gesellschaft)

  • Jean-François Bonnefon

    (TSE-R - Toulouse School of Economics - UT Capitole - Université Toulouse Capitole - Comue de Toulouse - Communauté d'universités et établissements de Toulouse - EHESS - École des hautes études en sciences sociales - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement, CNRS - Centre National de la Recherche Scientifique)

  • Iyad Rahwan

    (Max Planck Institute for Human Development - Max-Planck-Gesellschaft)

Abstract

Machines powered by Artificial Intelligence (AI) now influence human behavior in ways that are both like and unlike the ways humans influence each other. In light of recent research showing that other humans can exert a strong corrupting influence on people's ethical behavior, concern arises about the corrupting power of AI agents. To assess the empirical validity of these fears, we review the available evidence from behavioral science, human-computer interaction, and AI research. We propose that the main social roles through which both humans and machines can influence ethical behavior are (a) role model, (b) advisor, (c) partner, and (d) delegate. When AI agents act as influencers (role models or advisors), their corrupting power may not (yet) exceed that of humans. However, AI agents acting as enablers of unethical behavior (partners or delegates) have many characteristics that may let people reap unethical benefits while feeling good about themselves, giving good reason for worry. Based on these insights, we outline a research agenda aimed at providing more behavioral insights for better AI oversight.

Suggested Citation

  • Nils Köbis & Jean-François Bonnefon & Iyad Rahwan, 2023. "Bad machines corrupt good morals," Working Papers hal-04164419, HAL.
  • Handle: RePEc:hal:wpaper:hal-04164419
    Note: View the original document on HAL open archive server: https://hal.science/hal-04164419v1

    Download full text from publisher

    File URL: https://hal.science/hal-04164419v1/document
    Download Restriction: no

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Nils Köbis & Zoe Rahwan & Raluca Rilla & Bramantyo Ibrahim Supriyatno & Clara Bersch & Tamer Ajaj & Jean-François Bonnefon & Iyad Rahwan, 2025. "Delegation to artificial intelligence can increase dishonest behaviour," Nature, Nature, vol. 646(8083), pages 126-134, October.
    2. Foucart, Renaud & Zeng, Fanqi & Wang, Shidong, 2025. "The Social Importance of Being Stubborn When an Organization Meets AI," SocArXiv nfgy3_v1, Center for Open Science.
    3. Werner, Tobias, 2021. "Algorithmic and human collusion," DICE Discussion Papers 372, Heinrich Heine University Düsseldorf, Düsseldorf Institute for Competition Economics (DICE).
    4. Leib, Margarita & Köbis, Nils & Rilke, Rainer Michael & Hagens, Marloes & Irlenbusch, Bernd, 2023. "Corrupted by Algorithms? How AI-Generated and Human-Written Advice Shape (Dis)Honesty," IZA Discussion Papers 16293, Institute of Labor Economics (IZA).
    5. Chugunova, Marina & Sele, Daniela, 2022. "We and It: An interdisciplinary review of the experimental evidence on how humans interact with machines," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 99(C).
    6. Köbis, Nils & Rahwan, Zoe & Bersch, Clara & Ajaj, Tamer & Bonnefon, Jean-François & Rahwan, Iyad, 2024. "Experimental evidence that delegating to intelligent machines can increase dishonest behaviour," OSF Preprints dnjgz, Center for Open Science.
    7. Elias Fernández Domingos & Inês Terrucha & Rémi Suchon & Jelena Grujić & Juan Burguillo & Francisco Santos & Tom Lenaerts, 2022. "Delegation to artificial agents fosters prosocial behaviors in the collective risk dilemma," Post-Print hal-04296038, HAL.
    8. Marius Protte & Behnud Mir Djawadi, 2025. "Human vs. Algorithmic Auditors: The Impact of Entity Type and Ambiguity on Human Dishonesty," Papers 2507.15439, arXiv.org.
    9. Alicia von Schenk & Victor Klockmann & Jean-François Bonnefon & Iyad Rahwan & Nils Köbis, 2022. "Lie detection algorithms attract few users but vastly increase accusation rates," Papers 2212.04277, arXiv.org.
    10. Margarita Leib & Nils Köbis & Ivan Soraperra, 2025. "Does AI and Human Advice Mitigate Punishment for Selfish Behavior? An Experiment on AI ethics From a Psychological Perspective," Papers 2507.19487, arXiv.org.
    11. von Schenk, Alicia & Klockmann, Victor & Bonnefon, Jean-François & Rahwan, Iyad & Köbis, Nils, 2023. "Lie-detection algorithms attract few users but vastly increase accusation rates," IAST Working Papers 23-155, Institute for Advanced Study in Toulouse (IAST).
    12. Lei, Shaohui & Xie, Lishan, 2025. "“Servant” versus “Partner”: Investigating the effect of service robot personas on customer misbehavior," Journal of Business Research, Elsevier, vol. 199(C).
    13. Lukas Lanz & Roman Briker & Fabiola H. Gerpott, 2024. "Employees Adhere More to Unethical Instructions from Human Than AI Supervisors: Complementing Experimental Evidence with Machine Learning," Journal of Business Ethics, Springer, vol. 189(3), pages 625-646, January.
    14. Emilio Ferrara, 2024. "GenAI against humanity: nefarious applications of generative artificial intelligence and large language models," Journal of Computational Social Science, Springer, vol. 7(1), pages 549-569, April.
    15. Lechardoy, Lucie & López Forés, Laura & Codagnone, Cristiano, 2023. "Artificial intelligence at the workplace and the impacts on work organisation, working conditions and ethics," 32nd European Regional ITS Conference, Madrid 2023: Realising the digital decade in the European Union – Easier said than done? 277997, International Telecommunications Society (ITS).

    More about this item



    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:hal:wpaper:hal-04164419. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. Registering allows your profile to be linked to this item, and lets you accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help by adding them using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: CCSD (email available below). General contact details of provider: https://hal.archives-ouvertes.fr/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.