IDEAS home. Printed from https://ideas.repec.org/a/plo/pone00/0292944.html

Who should decide how limited healthcare resources are prioritized? Autonomous technology as a compelling alternative to humans

Authors

Listed:
  • Jonathan J Rolison
  • Peter L T Gooding
  • Riccardo Russo
  • Kathryn E Buchanan

Abstract

Who should decide how limited resources are prioritized? We ask this question in a healthcare context where patients must be prioritized according to their need and where advances in autonomous artificial intelligence-based technology offer a compelling alternative to decisions by humans. Qualitative (Study 1a; N = 50) and quantitative (Study 1b; N = 800) analysis identified agency, emotional experience, bias-free, and error-free as four main qualities describing people’s perceptions of autonomous computer programs (ACPs) and human staff members (HSMs). Yet, the qualities were not perceived to be possessed equally by HSMs and ACPs. HSMs were endorsed with human qualities of agency and emotional experience, whereas ACPs were perceived as more capable than HSMs of bias- and error-free decision-making. Consequently, better than average (Study 2; N = 371), or relatively better (Studies 3, N = 181; & 4, N = 378), ACP performance, especially on qualities characteristic of ACPs, was sufficient to reverse preferences to favor ACPs over HSMs as the decision makers for how limited healthcare resources should be prioritized. Our findings serve a practical purpose regarding potential barriers to public acceptance of technology, and have theoretical value for our understanding of perceptions of autonomous technologies.

Suggested Citation

  • Jonathan J Rolison & Peter L T Gooding & Riccardo Russo & Kathryn E Buchanan, 2024. "Who should decide how limited healthcare resources are prioritized? Autonomous technology as a compelling alternative to humans," PLOS ONE, Public Library of Science, vol. 19(2), pages 1-34, February.
  • Handle: RePEc:plo:pone00:0292944
    DOI: 10.1371/journal.pone.0292944

    Download full text from publisher

    File URL: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0292944
    Download Restriction: no

    File URL: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0292944&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pone.0292944?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item
    ---><---

    References listed on IDEAS

    1. Chiara Longoni & Andrea Bonezzi & Carey K Morewedge, 2019. "Resistance to Medical Artificial Intelligence," Journal of Consumer Research, Journal of Consumer Research Inc., vol. 46(4), pages 629-650.
    2. Rosseel, Yves, 2012. "lavaan: An R Package for Structural Equation Modeling," Journal of Statistical Software, Foundation for Open Access Statistics, vol. 48(i02).
    3. Benedikt Berger & Martin Adam & Alexander Rühr & Alexander Benlian, 2021. "Watch Me Improve—Algorithm Aversion and Demonstrating the Ability to Learn," Business & Information Systems Engineering: The International Journal of WIRTSCHAFTSINFORMATIK, Springer;Gesellschaft für Informatik e.V. (GI), vol. 63(1), pages 55-68, February.
    4. Andrew Prahl & Lyn Van Swol, 2017. "Understanding algorithm aversion: When is advice from automation discounted?," Journal of Forecasting, John Wiley & Sons, Ltd., vol. 36(6), pages 691-702, September.
    5. Romain Cadario & Chiara Longoni & Carey K. Morewedge, 2021. "Understanding, explaining, and utilizing medical artificial intelligence," Nature Human Behaviour, Nature, vol. 5(12), pages 1636-1642, December.
    6. Berger, Benedikt & Adam, Martin & Rühr, Alexander & Benlian, Alexander, 2021. "Watch Me Improve—Algorithm Aversion and Demonstrating the Ability to Learn," Publications of Darmstadt Technical University, Institute for Business Studies (BWL) 124219, Darmstadt Technical University, Department of Business Administration, Economics and Law, Institute for Business Studies (BWL).
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Hermann, Erik & Puntoni, Stefano, 2024. "Artificial intelligence and consumer behavior: From predictive to generative AI," Journal of Business Research, Elsevier, vol. 180(C).
    2. Chen, Changdong, 2024. "How consumers respond to service failures caused by algorithmic mistakes: The role of algorithmic interpretability," Journal of Business Research, Elsevier, vol. 176(C).
    3. Zhao, Taiyang & Ran, Yaxuan & Wu, Banggang & Lynette Wang, Valerie & Zhou, Liying & Lu Wang, Cheng, 2024. "Virtual versus human: Unraveling consumer reactions to service failures through influencer types," Journal of Business Research, Elsevier, vol. 178(C).
    4. Wang, Xun & Rodrigues, Vasco Sanchez & Demir, Emrah & Sarkis, Joseph, 2024. "Algorithm aversion during disruptions: The case of safety stock," International Journal of Production Economics, Elsevier, vol. 278(C).
    5. Yang, Yikai & Zheng, Jiehui & Yu, Yining & Qiu, Yiling & Wang, Lei, 2024. "The role of recommendation sources and attribute framing in online product recommendations," Journal of Business Research, Elsevier, vol. 174(C).
    6. Roshni Raveendhran & Nathanael J. Fast, 2024. "When and why consumers prefer human-free behavior tracking products," Marketing Letters, Springer, vol. 35(3), pages 395-408, September.
    7. Mahmud, Hasan & Islam, A.K.M. Najmul & Mitra, Ranjan Kumar, 2023. "What drives managers towards algorithm aversion and how to overcome it? Mitigating the impact of innovation resistance through technology readiness," Technological Forecasting and Social Change, Elsevier, vol. 193(C).
    8. Lukas Lanz & Roman Briker & Fabiola H. Gerpott, 2024. "Employees Adhere More to Unethical Instructions from Human Than AI Supervisors: Complementing Experimental Evidence with Machine Learning," Journal of Business Ethics, Springer, vol. 189(3), pages 625-646, January.
    9. Lars Hornuf & David J. Streich & Niklas Töllich, 2025. "Making GenAI Smarter: Evidence from a Portfolio Allocation Experiment," CESifo Working Paper Series 11862, CESifo.
    10. Marius Protte & Behnud Mir Djawadi, 2025. "Human vs. Algorithmic Auditors: The Impact of Entity Type and Ambiguity on Human Dishonesty," Papers 2507.15439, arXiv.org.
    11. Evgeny Kagan & Brett Hathaway & Maqbool Dada, 2025. "Deploying Chatbots in Customer Service: Adoption Hurdles and Simple Remedies," Papers 2504.06145, arXiv.org.
    12. Zaitsava, Maryia & Marku, Elona & Di Guardo, Maria Chiara, 2022. "Is data-driven decision-making driven only by data? When cognition meets data," European Management Journal, Elsevier, vol. 40(5), pages 656-670.
    13. Tse, Tiffany Tsz Kwan & Hanaki, Nobuyuki & Mao, Bolin, 2024. "Beware the performance of an algorithm before relying on it: Evidence from a stock price forecasting experiment," Journal of Economic Psychology, Elsevier, vol. 102(C).
    14. Christian Fieberg & Lars Hornuf & Maximilian Meiler & David J. Streich, 2025. "Using Large Language Models for Financial Advice," CESifo Working Paper Series 11666, CESifo.
    15. Wang, Cuicui & Li, Yiyang & Fu, Weizhong & Jin, Jia, 2023. "Whether to trust chatbots: Applying the event-related approach to understand consumers’ emotional experiences in interactions with chatbots in e-commerce," Journal of Retailing and Consumer Services, Elsevier, vol. 73(C).
    16. Harvey, Nigel & De Baets, Shari, 2025. "Factors affecting preferences between judgmental and algorithmic forecasts: Feedback, guidance and labeling effects," International Journal of Forecasting, Elsevier, vol. 41(2), pages 532-553.
    17. Alexander Mayr & Philip Stahmann & Maximilian Nebel & Christian Janiesch, 2024. "Still doing it yourself? Investigating determinants for the adoption of intelligent process automation," Electronic Markets, Springer;IIM University of St. Gallen, vol. 34(1), pages 1-22, December.
    18. Qin, Hongyi & Zhu, Yifan & Jiang, Yan & Luo, Siqi & Huang, Cui, 2024. "Examining the impact of personalization and carefulness in AI-generated health advice: Trust, adoption, and insights in online healthcare consultations experiments," Technology in Society, Elsevier, vol. 79(C).
    19. Mahmud, Hasan & Islam, A.K.M. Najmul & Ahmed, Syed Ishtiaque & Smolander, Kari, 2022. "What influences algorithmic decision-making? A systematic literature review on algorithm aversion," Technological Forecasting and Social Change, Elsevier, vol. 175(C).
    20. Benedikt Berger & Martin Adam & Alexander Rühr & Alexander Benlian, 2021. "Watch Me Improve—Algorithm Aversion and Demonstrating the Ability to Learn," Business & Information Systems Engineering: The International Journal of WIRTSCHAFTSINFORMATIK, Springer;Gesellschaft für Informatik e.V. (GI), vol. 63(1), pages 55-68, February.

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pone00:0292944. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: plosone (email available below). General contact details of provider: https://journals.plos.org/plosone/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.