IDEAS home Printed from https://ideas.repec.org/p/arx/papers/2504.13871.html

Human aversion? Do AI Agents Judge Identity More Harshly Than Performance

Author

Listed:
  • Yuanjun Feng
  • Vivek Chodhary
  • Yash Raj Shrestha

Abstract

This study examines the understudied role of algorithmic evaluation of human judgment in hybrid decision-making systems, a critical gap in management research. While extant literature focuses on human reluctance to follow algorithmic advice, we reverse the perspective by investigating how AI agents based on large language models (LLMs) assess and integrate human input. Our work addresses a pressing managerial constraint: firms barred from deploying LLMs directly due to privacy concerns can still leverage them as mediating tools (for instance, through anonymized outputs or decision pipelines) to guide high-stakes choices like pricing or discounts without exposing proprietary data. Through a controlled prediction task, we analyze how an LLM-based AI agent weights human versus algorithmic predictions. We find that the AI system systematically discounts human advice, penalizing human errors more severely than algorithmic errors, a bias exacerbated when the advisor's identity (human vs. AI) is disclosed and the human is positioned second. These results reveal a disconnect between AI-generated trust metrics and the actual influence of human judgment, challenging assumptions about equitable human-AI collaboration. Our findings offer three key contributions. First, we identify a reverse algorithm aversion phenomenon, where AI agents undervalue human input despite comparable error rates. Second, we demonstrate how disclosure and positional bias interact to amplify this effect, with implications for system design. Third, we provide a framework for indirect LLM deployment that balances predictive power with data privacy. For practitioners, this research emphasizes the need to audit AI weighting mechanisms, calibrate trust dynamics, and strategically design decision sequences in human-AI systems.

Suggested Citation

  • Yuanjun Feng & Vivek Chodhary & Yash Raj Shrestha, 2025. "Human aversion? Do AI Agents Judge Identity More Harshly Than Performance," Papers 2504.13871, arXiv.org.
  • Handle: RePEc:arx:papers:2504.13871
    Download full text from publisher

    File URL: http://arxiv.org/pdf/2504.13871
    File Function: Latest version
    Download Restriction: no

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Kevin Bauer & Andrej Gill, 2024. "Mirror, Mirror on the Wall: Algorithmic Assessments, Transparency, and Self-Fulfilling Prophecies," Information Systems Research, INFORMS, vol. 35(1), pages 226-248, March.
    2. Li, Sixian & Peluso, Alessandro M. & Duan, Jinyun, 2023. "Why do we prefer humans to artificial intelligence in telemarketing? A mind perception explanation," Journal of Retailing and Consumer Services, Elsevier, vol. 70(C).
    3. Pathak, Kanishka & Prakash, Gyan & Samadhiya, Ashutosh & Kumar, Anil & Luthra, Sunil, 2025. "Impact of Gen-AI chatbots on consumer services experiences and behaviors: Focusing on the sensation of awe and usage intentions through a cybernetic lens," Journal of Retailing and Consumer Services, Elsevier, vol. 82(C).
    4. Kevin Bauer & Moritz von Zahn & Oliver Hinz, 2023. "Expl(AI)ned: The Impact of Explainable Artificial Intelligence on Users’ Information Processing," Information Systems Research, INFORMS, vol. 34(4), pages 1582-1602, December.
    5. F. Olan & K. Spanaki & W. Ahmed & G. Zhao, 2025. "Enabling Explainable Artificial Intelligence capabilities in Supply Chain Decision Support Making," Post-Print hal-05018234, HAL.
    6. Yongping Bao & Ludwig Danwitz & Fabian Dvorak & Sebastian Fehrler & Lars Hornuf & Hsuan Yu Lin & Bettina von Helversen, 2022. "Similarity and Consistency in Algorithm-Guided Exploration," CESifo Working Paper Series 10188, CESifo.
    7. Daniel Woods & Mustafa Abdallah & Saurabh Bagchi & Shreyas Sundaram & Timothy Cason, 2022. "Network defense and behavioral biases: an experimental study," Experimental Economics, Springer;Economic Science Association, vol. 25(1), pages 254-286, February.
    8. Siliang Tong & Nan Jia & Xueming Luo & Zheng Fang, 2021. "The Janus face of artificial intelligence feedback: Deployment versus disclosure effects on employee performance," Strategic Management Journal, Wiley Blackwell, vol. 42(9), pages 1600-1631, September.
    9. Christoph Riedl & Eric Bogert, 2024. "Effects of AI Feedback on Learning, the Skill Gap, and Intellectual Diversity," Papers 2409.18660, arXiv.org.
    10. Mahmud, Hasan & Islam, A.K.M. Najmul & Ahmed, Syed Ishtiaque & Smolander, Kari, 2022. "What influences algorithmic decision-making? A systematic literature review on algorithm aversion," Technological Forecasting and Social Change, Elsevier, vol. 175(C).
    11. Bryce McLaughlin & Jann Spiess, 2022. "Algorithmic Assistance with Recommendation-Dependent Preferences," Papers 2208.07626, arXiv.org, revised Jan 2024.
    12. Markus Jung & Mischa Seiter, 2021. "Towards a better understanding on mitigating algorithm aversion in forecasting: an experimental study," Journal of Management Control: Zeitschrift für Planung und Unternehmenssteuerung, Springer, vol. 32(4), pages 495-516, December.
    13. Moshe Glickman & Tali Sharot, 2025. "How human–AI feedback loops alter human perceptual, emotional and social judgements," Nature Human Behaviour, Nature, vol. 9(2), pages 345-359, February.
    14. Gómez de Ágreda, Ángel, 2020. "Ethics of autonomous weapons systems and its applicability to any AI systems," Telecommunications Policy, Elsevier, vol. 44(6).
    15. Zhu, Yimin & Zhang, Jiemin & Wu, Jifei & Liu, Yingyue, 2022. "AI is better when I'm sure: The influence of certainty of needs on consumers' acceptance of AI chatbots," Journal of Business Research, Elsevier, vol. 150(C), pages 642-652.
    16. Yao, Xintong & Xi, Yipeng, 2024. "Pathways linking expectations for AI chatbots to loyalty: A moderated mediation analysis," Technology in Society, Elsevier, vol. 78(C).
    17. Merle, Aurélie & St-Onge, Anik & Sénécal, Sylvain, 2022. "Does it pay to be honest? The effect of retailer-provided negative feedback on consumers’ product choice and shopping experience," Journal of Business Research, Elsevier, vol. 147(C), pages 532-543.
    18. Benjamin Semujanga & Xavier Parent-Rocheleau, 2024. "Time-Based Stress and Procedural Justice: Can Transparency Mitigate the Effects of Algorithmic Compensation in Gig Work?," IJERPH, MDPI, vol. 21(1), pages 1-16, January.
    19. Benedikt Berger & Martin Adam & Alexander Rühr & Alexander Benlian, 2021. "Watch Me Improve—Algorithm Aversion and Demonstrating the Ability to Learn," Business & Information Systems Engineering: The International Journal of WIRTSCHAFTSINFORMATIK, Springer;Gesellschaft für Informatik e.V. (GI), vol. 63(1), pages 55-68, February.
    20. Robert M. Gillenkirch & Julia Ortner & Sebastian Robert & Louis Velthuis, 2023. "Designing incentives and performance measurement for advisors: How to make decision-makers listen to advice," Working Papers 2304, Gutenberg School of Management and Economics, Johannes Gutenberg-Universität Mainz.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2504.13871. See general information about how to correct material in RePEc.


    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.