Author
Listed:
- Walter Laurito
(a Information Process Engineering, Forschungszentrum Informatik, Karlsruhe 76131, Germany)
- Benjamin Davis
(b Private address, Andover, MA 04216)
- Peli Grietzer
(c Arb Research, Prague 116 36, Czech Republic)
- Tomáš Gavenčiak
(d Alignment of Complex Systems (ACS) Research Group, Center for Theoretical Studies, Charles University, Prague 110 00, Czech Republic)
- Ada Böhm
(d Alignment of Complex Systems (ACS) Research Group, Center for Theoretical Studies, Charles University, Prague 110 00, Czech Republic)
- Jan Kulveit
(d Alignment of Complex Systems (ACS) Research Group, Center for Theoretical Studies, Charles University, Prague 110 00, Czech Republic)
Abstract
Are large language models (LLMs) biased in favor of communications produced by LLMs, leading to possible antihuman discrimination? Using a classical experimental design inspired by employment discrimination studies, we tested widely used LLMs, including GPT-3.5, GPT-4, and a selection of recent open-weight models, in binary choice scenarios. These involved LLM-based assistants selecting between goods (the goods we study include consumer products, academic papers, and film-viewings) described either by humans or LLMs. Our results show a consistent tendency for LLM-based AIs to prefer LLM-presented options. This suggests the possibility of future AI systems implicitly discriminating against humans as a class, giving AI agents and AI-assisted humans an unfair advantage.
Suggested Citation
Walter Laurito & Benjamin Davis & Peli Grietzer & Tomáš Gavenčiak & Ada Böhm & Jan Kulveit, 2025.
"AI–AI bias: Large language models favor communications generated by large language models,"
Proceedings of the National Academy of Sciences, vol. 122(31), pages e2415697122, August.
Handle:
RePEc:nas:journl:v:122:y:2025:p:e2415697122
DOI: 10.1073/pnas.2415697122
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nas:journl:v:122:y:2025:p:e2415697122. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: PNAS Product Team (email available below). General contact details of provider: http://www.pnas.org/ .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.