
Diverse patients’ attitudes towards Artificial Intelligence (AI) in diagnosis

Author

Listed:
  • Christopher Robertson
  • Andrew Woods
  • Kelly Bergstrand
  • Jess Findley
  • Cayley Balser
  • Marvin J Slepian

Abstract

Artificial intelligence (AI) has the potential to improve diagnostic accuracy. Yet people are often reluctant to trust automated systems, and some patient populations may be particularly distrusting. We sought to determine how diverse patient populations feel about the use of AI diagnostic tools, and whether framing and informing the choice affects uptake. To construct and pretest our materials, we conducted structured interviews with a diverse set of actual patients. We then conducted a pre-registered (osf.io/9y26x), randomized, blinded survey experiment in factorial design. A survey firm provided n = 2675 responses, oversampling minoritized populations. Clinical vignettes were randomly manipulated in eight variables with two levels each: disease severity (leukemia versus sleep apnea), whether AI is proven more accurate than human specialists, whether the AI clinic is personalized to the patient through listening and/or tailoring, whether the AI clinic avoids racial and/or financial biases, whether the Primary Care Physician (PCP) promises to explain and incorporate the advice, and whether the PCP nudges the patient towards AI as the established, recommended, and easy choice. Our main outcome measure was selection of AI clinic or human physician specialist clinic (binary, “AI uptake”). We found that with weighting representative to the U.S. population, respondents were almost evenly split (52.9% chose human doctor and 47.1% chose AI clinic). In unweighted experimental contrasts of respondents who met pre-registered criteria for engagement, a PCP’s explanation that AI has proven superior accuracy increased uptake (OR = 1.48, CI 1.24–1.77, p < …).
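
To make the reported contrast concrete, the following is a minimal sketch of how such a factorial experiment could be analyzed; it is not the authors' code, the variable names are hypothetical, and the data are simulated. A binary “AI uptake” outcome is regressed on the eight randomized two-level factors with a logistic model, and exponentiated coefficients are read as odds ratios of the kind reported above (e.g., OR = 1.48 for the proven-accuracy framing).

    # Illustrative sketch only: hypothetical variable names, simulated data.
    # Shows how odds ratios for factorial manipulations can be estimated
    # with a main-effects logistic regression.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 2675  # number of survey responses reported in the abstract

    # Eight two-level manipulations, randomized independently (2^8 factorial design).
    factors = [
        "severe_disease",        # leukemia vs. sleep apnea
        "ai_proven_accurate",    # AI described as more accurate than human specialists
        "ai_listens",            # personalization: listening
        "ai_tailors",            # personalization: tailoring
        "avoids_racial_bias",
        "avoids_financial_bias",
        "pcp_explains",          # PCP promises to explain and incorporate the advice
        "pcp_nudges",            # PCP nudges towards AI as the default choice
    ]
    df = pd.DataFrame({f: rng.integers(0, 2, n) for f in factors})

    # Simulated outcome: 1 = chose the AI clinic, 0 = chose the human specialist clinic.
    # The accuracy effect is set near log(1.48) so the sketch mirrors the reported OR.
    linpred = -0.1 + 0.39 * df["ai_proven_accurate"]
    df["chose_ai"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred)))

    # Main-effects logistic regression; exp(coef) gives the odds ratio for each factor.
    model = smf.logit("chose_ai ~ " + " + ".join(factors), data=df).fit(disp=0)
    summary = pd.concat(
        [np.exp(model.params).rename("OR"), np.exp(model.conf_int())], axis=1
    )
    print(summary)

Interaction terms could be added to the model formula, but since the abstract reports main-effect contrasts, a main-effects model is the closest analogue.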

Suggested Citation

  • Christopher Robertson & Andrew Woods & Kelly Bergstrand & Jess Findley & Cayley Balser & Marvin J Slepian, 2023. "Diverse patients’ attitudes towards Artificial Intelligence (AI) in diagnosis," PLOS Digital Health, Public Library of Science, vol. 2(5), pages 1-16, May.
  • Handle: RePEc:plo:pdig00:0000237
    DOI: 10.1371/journal.pdig.0000237

    Download full text from publisher

    File URL: https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000237
    Download Restriction: no

    File URL: https://journals.plos.org/digitalhealth/article/file?id=10.1371/journal.pdig.0000237&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pdig.0000237?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Chiara Longoni & Andrea Bonezzi & Carey K Morewedge, 2019. "Resistance to Medical Artificial Intelligence," Journal of Consumer Research, Journal of Consumer Research Inc., vol. 46(4), pages 629-650.
    2. Hal R. Arkes & Victoria A. Shaffer & Mitchell A. Medow, 2007. "Patients Derogate Physicians Who Use a Computer-Assisted Diagnostic Aid," Medical Decision Making, vol. 27(2), pages 189-202, March.
    3. Longoni, Chiara & Bonezzi, Andrea & Morewedge, Carey K., 2020. "Resistance to medical artificial intelligence is an attribute in a compensatory decision process: response to Pezzo and Beckstead (2020)," Judgment and Decision Making, Cambridge University Press, vol. 15(3), pages 446-448, May.
    4. Logg, Jennifer M. & Minson, Julia A. & Moore, Don A., 2019. "Algorithm appreciation: People prefer algorithmic to human judgment," Organizational Behavior and Human Decision Processes, Elsevier, vol. 151(C), pages 90-103.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Hajiheydari, Nastaran & Delgosha, Mohammad Soltani & Saheb, Tahereh, 2025. "AI in medical diagnosis: A contextualised study of patient motivations and concerns," Social Science & Medicine, Elsevier, vol. 371(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Chugunova, Marina & Sele, Daniela, 2022. "We and It: An interdisciplinary review of the experimental evidence on how humans interact with machines," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 99(C).
    2. Tinglong Dai & Sridhar Tayur, 2022. "Designing AI‐augmented healthcare delivery systems for physician buy‐in and patient acceptance," Production and Operations Management, Production and Operations Management Society, vol. 31(12), pages 4443-4451, December.
    3. Siliang Tong & Nan Jia & Xueming Luo & Zheng Fang, 2021. "The Janus face of artificial intelligence feedback: Deployment versus disclosure effects on employee performance," Strategic Management Journal, Wiley Blackwell, vol. 42(9), pages 1600-1631, September.
    4. Ekaterina Jussupow & Kai Spohrer & Armin Heinzl & Joshua Gawlitza, 2021. "Augmenting Medical Diagnosis Decisions? An Investigation into Physicians’ Decision-Making Process with Artificial Intelligence," Information Systems Research, INFORMS, vol. 32(3), pages 713-735, September.
    5. Wang, Xun & Rodrigues, Vasco Sanchez & Demir, Emrah & Sarkis, Joseph, 2024. "Algorithm aversion during disruptions: The case of safety stock," International Journal of Production Economics, Elsevier, vol. 278(C).
    6. Kim, Jeong Hyun & Kim, Jungkeun & Baek, Tae Hyun & Kim, Changju, 2025. "ChatGPT personalized and humorous recommendations," Annals of Tourism Research, Elsevier, vol. 110(C).
    7. Chen, Yaqi & Wang, Haizhong & Rao Hill, Sally & Li, Binglian, 2024. "Consumer attitudes toward AI-generated ads: Appeal types, self-efficacy and AI’s social role," Journal of Business Research, Elsevier, vol. 185(C).
    8. Chiara Longoni & Andrea Bonezzi & Carey K. Morewedge, 2020. "Resistance to medical artificial intelligence is an attribute in a compensatory decision process: response to Pezzo and Beckstead (2020)," Judgment and Decision Making, Society for Judgment and Decision Making, vol. 15(3), pages 446-448, May.
    9. Chen Yang & Jing Hu, 2022. "When do consumers prefer AI-enabled customer service? The interaction effect of brand personality and service provision type on brand attitudes and purchase intentions," Journal of Brand Management, Palgrave Macmillan, vol. 29(2), pages 167-189, March.
    10. Mahmud, Hasan & Islam, A.K.M. Najmul & Mitra, Ranjan Kumar, 2023. "What drives managers towards algorithm aversion and how to overcome it? Mitigating the impact of innovation resistance through technology readiness," Technological Forecasting and Social Change, Elsevier, vol. 193(C).
    11. Zhang, Lixuan & Yencha, Christopher, 2022. "Examining perceptions towards hiring algorithms," Technology in Society, Elsevier, vol. 68(C).
    12. Frank, Darius-Aurel & Chrysochou, Polymeros & Mitkidis, Panagiotis & Otterbring, Tobias & Ariely, Dan, 2024. "Navigating uncertainty: Exploring consumer acceptance of artificial intelligence under self-threats and high-stakes decisions," Technology in Society, Elsevier, vol. 79(C).
    13. Martin Adam & Konstantin Roethke & Alexander Benlian, 2023. "Human vs. Automated Sales Agents: How and Why Customer Responses Shift Across Sales Stages," Information Systems Research, INFORMS, vol. 34(3), pages 1148-1168, September.
    14. Marina Chugunova & Wolfgang J. Luhan, 2022. "Ruled by robots: Preference for algorithmic decision makers and perceptions of their choices," Working Papers in Economics & Finance 2022-03, University of Portsmouth, Portsmouth Business School, Economics and Finance Subject Group.
    15. Wang, Cuicui & Li, Yiyang & Fu, Weizhong & Jin, Jia, 2023. "Whether to trust chatbots: Applying the event-related approach to understand consumers’ emotional experiences in interactions with chatbots in e-commerce," Journal of Retailing and Consumer Services, Elsevier, vol. 73(C).
    16. Harvey, Nigel & De Baets, Shari, 2025. "Factors affecting preferences between judgmental and algorithmic forecasts: Feedback, guidance and labeling effects," International Journal of Forecasting, Elsevier, vol. 41(2), pages 532-553.
    17. van Esch, Patrick & Cui, Yuanyuan (Gina) & Das, Gopal & Jain, Shailendra Pratap & Wirtz, Jochen, 2022. "Tourists and AI: A political ideology perspective," Annals of Tourism Research, Elsevier, vol. 97(C).
    18. Rebitschek, Felix G. & Gigerenzer, Gerd & Wagner, Gert G., 2021. "People underestimate the errors made by algorithms for credit scoring and recidivism prediction but accept even fewer errors," EconStor Open Access Articles and Book Chapters, ZBW - Leibniz Information Centre for Economics, vol. 11, pages 1-11.
    19. Mahmud, Hasan & Islam, A.K.M. Najmul & Ahmed, Syed Ishtiaque & Smolander, Kari, 2022. "What influences algorithmic decision-making? A systematic literature review on algorithm aversion," Technological Forecasting and Social Change, Elsevier, vol. 175(C).
    20. Zhu, Yimin & Zhang, Jiemin & Wu, Jifei & Liu, Yingyue, 2022. "AI is better when I'm sure: The influence of certainty of needs on consumers' acceptance of AI chatbots," Journal of Business Research, Elsevier, vol. 150(C), pages 642-652.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pdig00:0000237. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: digitalhealth (email available below). General contact details of provider: https://journals.plos.org/digitalhealth .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.