IDEAS home Printed from https://ideas.repec.org/a/eee/teinso/v79y2024ics0160791x24002744.html

Examining the impact of personalization and carefulness in AI-generated health advice: Trust, adoption, and insights in online healthcare consultations experiments

Author

Listed:
  • Qin, Hongyi
  • Zhu, Yifan
  • Jiang, Yan
  • Luo, Siqi
  • Huang, Cui

Abstract

Artificial intelligence (AI) technologies, exemplified by health chatbots, are transforming the healthcare industry. Their widespread application has the potential to enhance decision-making efficiency, improve the quality of healthcare services, and reduce medical costs. While the opportunities and challenges brought by AI are widely discussed, less is known about the public's attitude towards its use in the healthcare domain. Understanding public attitudes can help policymakers better grasp the public's needs and involve them in decisions that benefit both technological development and social welfare. This study therefore presents evidence from two between-subjects experiments that compare the public's adoption of, and trust in, health advice provided by human versus AI doctors and explore the potential effects of personalization and carefulness on the public's attitudes. The experimental designs adopt a trust-centered, cognitively and emotionally balanced perspective on the public's intention to adopt AI. In Experiment 1, the experimental conditions vary the type of decision-maker providing online consultation advice: an AI or a human doctor. In Experiment 2, the conditions vary the perceived levels of personalization and carefulness (high vs. low). A total of 734 participants took part in the study; they were randomly assigned to one of the intervention conditions and responded to manipulation checks after reading the materials. Using a seven-point Likert-type scale, participants rated their cognitive and emotional trust and their intention to adopt the advice. Partial Least Squares Structural Equation Modeling (PLS-SEM) is used to estimate the proposed theoretical model.
Qualitative interviews on both real-world and AI-generated treatment recommendations further enriched the understanding of public perceptions. The results show that the public generally trusts and adopts AI-generated advice slightly less. However, a noticeable inclination towards AI-generated advice emerges when the AI demonstrates proficiency in understanding individuals' health conditions and provides empathetic consultations. Further analyses confirm that emotional trust mediates the relationship between cognitive trust and adoption intention. These findings provide deeper insight into how trust and adoption intentions form. They also offer guidance to digital healthcare providers, empowering them to co-design AI implementation strategies that meet the public's expectations.
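The mediation structure the abstract describes (cognitive trust influencing adoption intention partly through emotional trust) can be illustrated with a minimal single-mediator sketch on simulated data. This is not the paper's PLS-SEM analysis; all variable names and coefficients below are hypothetical, and the data are randomly generated.

```python
# Illustrative single-mediator decomposition (Baron-Kenny style), on
# simulated data. NOT the paper's actual data or PLS-SEM estimation.
import random

random.seed(0)
n = 500
# Simulated scores roughly on a 1-7 scale: cognitive trust drives
# emotional trust, which in turn drives adoption intention.
cog = [random.uniform(1, 7) for _ in range(n)]
emo = [0.6 * c + random.gauss(0, 0.5) for c in cog]
adopt = [0.2 * c + 0.7 * e + random.gauss(0, 0.5) for c, e in zip(cog, emo)]

def slope(x, y):
    """OLS slope of y on a single predictor x (intercept included)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sum((a - mx) ** 2 for a in x)

def slopes2(x1, x2, y):
    """OLS slopes of y on two predictors (intercept included),
    solved from the 2x2 normal equations on centered data."""
    m1, m2, my = sum(x1) / len(x1), sum(x2) / len(x2), sum(y) / len(y)
    c1 = [a - m1 for a in x1]
    c2 = [a - m2 for a in x2]
    cy = [a - my for a in y]
    s11 = sum(a * a for a in c1)
    s22 = sum(a * a for a in c2)
    s12 = sum(a * b for a, b in zip(c1, c2))
    s1y = sum(a * b for a, b in zip(c1, cy))
    s2y = sum(a * b for a, b in zip(c2, cy))
    det = s11 * s22 - s12 ** 2
    return (s1y * s22 - s2y * s12) / det, (s2y * s11 - s1y * s12) / det

a_path = slope(cog, emo)                    # cognitive -> emotional
direct, b_path = slopes2(cog, emo, adopt)   # direct effect; emotional -> adoption
total = slope(cog, adopt)                   # total effect of cognitive trust
indirect = a_path * b_path                  # mediated (indirect) effect
# In linear OLS with one mediator, total = direct + indirect holds exactly.
```

A positive `indirect` term alongside a smaller `direct` term is what "emotional trust mediates between cognitive trust and adoption intention" amounts to in this simplified linear setting; the paper's PLS-SEM handles latent constructs and measurement error, which this sketch omits.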

Suggested Citation

  • Qin, Hongyi & Zhu, Yifan & Jiang, Yan & Luo, Siqi & Huang, Cui, 2024. "Examining the impact of personalization and carefulness in AI-generated health advice: Trust, adoption, and insights in online healthcare consultations experiments," Technology in Society, Elsevier, vol. 79(C).
  • Handle: RePEc:eee:teinso:v:79:y:2024:i:c:s0160791x24002744
    DOI: 10.1016/j.techsoc.2024.102726

    Download full text from publisher

    File URL: http://www.sciencedirect.com/science/article/pii/S0160791X24002744
    Download Restriction: Full text for ScienceDirect subscribers only

    File URL: https://libkey.io/10.1016/j.techsoc.2024.102726?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Chiara Longoni & Andrea Bonezzi & Carey K Morewedge, 2019. "Resistance to Medical Artificial Intelligence," Journal of Consumer Research, Journal of Consumer Research Inc., vol. 46(4), pages 629-650.
    2. Petrocchi, S. & Iannello, P. & Lecciso, F. & Levante, A. & Antonietti, A. & Schulz, P.J., 2019. "Interpersonal trust in doctor-patient relation: Evidence from dyadic analysis and association with quality of dyadic communication," Social Science & Medicine, Elsevier, vol. 235(C), pages 1-1.
    3. Karan Singhal & Shekoofeh Azizi & Tao Tu & S. Sara Mahdavi & Jason Wei & Hyung Won Chung & Nathan Scales & Ajay Tanwani & Heather Cole-Lewis & Stephen Pfohl & Perry Payne & Martin Seneviratne & Paul G, 2023. "Publisher Correction: Large language models encode clinical knowledge," Nature, Nature, vol. 620(7973), pages 19-19, August.
    4. Kühl, Niklas & Goutier, Marc & Baier, Lucas & Wolff, Clemens & Martin, Dominik, 2022. "Human vs. supervised machine learning: Who learns patterns faster?," Publications of Darmstadt Technical University, Institute for Business Studies (BWL) 135657, Darmstadt Technical University, Department of Business Administration, Economics and Law, Institute for Business Studies (BWL).
    5. Tibert Verhagen & Daniel Bloemers, 2018. "Exploring the cognitive and affective bases of online purchase intentions: a hierarchical test across product types," Electronic Commerce Research, Springer, vol. 18(3), pages 537-561, September.
    6. Kamal, Syeda Ayesha & Shafiq, Muhammad & Kakria, Priyanka, 2020. "Investigating acceptance of telemedicine services through an extended technology acceptance model (TAM)," Technology in Society, Elsevier, vol. 60(C).
    7. Qingchuan Li, 2020. "Healthcare at Your Fingertips: The Acceptance and Adoption of Mobile Medical Treatment Services among Chinese Users," IJERPH, MDPI, vol. 17(18), pages 1-21, September.
    8. Mitja Vrdelja & Sanja Vrbovšek & Vito Klopčič & Kevin Dadaczynski & Orkan Okan, 2021. "Facing the Growing COVID-19 Infodemic: Digital Health Literacy and Information-Seeking Behaviour of University Students in Slovenia," IJERPH, MDPI, vol. 18(16), pages 1-16, August.
    9. Romain Cadario & Chiara Longoni & Carey K. Morewedge, 2021. "Understanding, explaining, and utilizing medical artificial intelligence," Nature Human Behaviour, Nature, vol. 5(12), pages 1636-1642, December.
    10. Mechanic, David & Meyer, Sharon, 2000. "Concepts of trust among patients with serious illness," Social Science & Medicine, Elsevier, vol. 51(5), pages 657-668, September.
    11. Karan Singhal & Shekoofeh Azizi & Tao Tu & S. Sara Mahdavi & Jason Wei & Hyung Won Chung & Nathan Scales & Ajay Tanwani & Heather Cole-Lewis & Stephen Pfohl & Perry Payne & Martin Seneviratne & Paul G, 2023. "Large language models encode clinical knowledge," Nature, Nature, vol. 620(7972), pages 172-180, August.
    12. Hanmei Fan & Reeva Lederman & Frantz Rowe & Sabine Matook, 2018. "Online health communities: how do community members build the trust required to adopt information and form close relationships?," European Journal of Information Systems, Taylor & Francis Journals, vol. 27(1), pages 62-89, January.
    13. Paul Crawford & Brian Brown & Marit Kvangarsnes & Paul Gilbert, 2014. "The design of compassionate care," Journal of Clinical Nursing, John Wiley & Sons, vol. 23(23-24), pages 3589-3599, December.
    14. Mahmud, Hasan & Islam, A.K.M. Najmul & Ahmed, Syed Ishtiaque & Smolander, Kari, 2022. "What influences algorithmic decision-making? A systematic literature review on algorithm aversion," Technological Forecasting and Social Change, Elsevier, vol. 175(C).
    15. Gursoy, Dogan & Chi, Oscar Hengxuan & Lu, Lu & Nunkoo, Robin, 2019. "Consumers acceptance of artificially intelligent (AI) device use in service delivery," International Journal of Information Management, Elsevier, vol. 49(C), pages 157-169.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Hermann, Erik & Puntoni, Stefano, 2024. "Artificial intelligence and consumer behavior: From predictive to generative AI," Journal of Business Research, Elsevier, vol. 180(C).
    2. Chen, Changdong, 2024. "How consumers respond to service failures caused by algorithmic mistakes: The role of algorithmic interpretability," Journal of Business Research, Elsevier, vol. 176(C).
    3. Maxime Griot & Coralie Hemptinne & Jean Vanderdonckt & Demet Yuksel, 2025. "Large Language Models lack essential metacognition for reliable medical reasoning," Nature Communications, Nature, vol. 16(1), pages 1-10, December.
    4. Ali Nemati & Mohammad Assadi Shalmani & Qiang Lu & Jake Luo, 2025. "Benchmarking Large Language Models from Open and Closed Source Models to Apply Data Annotation for Free-Text Criteria in Healthcare," Future Internet, MDPI, vol. 17(4), pages 1-27, March.
    5. Wang, Xun & Rodrigues, Vasco Sanchez & Demir, Emrah & Sarkis, Joseph, 2024. "Algorithm aversion during disruptions: The case of safety stock," International Journal of Production Economics, Elsevier, vol. 278(C).
    6. Yang, Yikai & Zheng, Jiehui & Yu, Yining & Qiu, Yiling & Wang, Lei, 2024. "The role of recommendation sources and attribute framing in online product recommendations," Journal of Business Research, Elsevier, vol. 174(C).
    7. Brüns, Jasper David & Meißner, Martin, 2024. "Do you create your content yourself? Using generative artificial intelligence for social media content creation diminishes perceived brand authenticity," Journal of Retailing and Consumer Services, Elsevier, vol. 79(C).
    8. Cheng-Yi Li & Kao-Jung Chang & Cheng-Fu Yang & Hsin-Yu Wu & Wenting Chen & Hritik Bansal & Ling Chen & Yi-Ping Yang & Yu-Chun Chen & Shih-Pin Chen & Shih-Jen Chen & Jiing-Feng Lirng & Kai-Wei Chang & , 2025. "Towards a holistic framework for multimodal LLM in 3D brain CT radiology report generation," Nature Communications, Nature, vol. 16(1), pages 1-14, December.
    9. Roshni Raveendhran & Nathanael J. Fast, 2024. "When and why consumers prefer human-free behavior tracking products," Marketing Letters, Springer, vol. 35(3), pages 395-408, September.
    10. Tingmingke Lu, 2025. "Maximum Hallucination Standards for Domain-Specific Large Language Models," Papers 2503.05481, arXiv.org.
    11. Deng, Shichang & Zhang, Jingjing & Lin, Zhengnan & Li, Xiangqian, 2024. "Service staff makes me nervous: Exploring the impact of insecure attachment on AI service preference," Technological Forecasting and Social Change, Elsevier, vol. 198(C).
    12. Zheng, Shuwen & Pan, Kai & Liu, Jie & Chen, Yunxia, 2024. "Empirical study on fine-tuning pre-trained large language models for fault diagnosis of complex systems," Reliability Engineering and System Safety, Elsevier, vol. 252(C).
    13. Zhou, Zhen & Gu, Ziyuan & Qu, Xiaobo & Liu, Pan & Liu, Zhiyuan & Yu, Wenwu, 2024. "Urban mobility foundation model: A literature review and hierarchical perspective," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 192(C).
    14. Wang, Cuicui & Li, Yiyang & Fu, Weizhong & Jin, Jia, 2023. "Whether to trust chatbots: Applying the event-related approach to understand consumers’ emotional experiences in interactions with chatbots in e-commerce," Journal of Retailing and Consumer Services, Elsevier, vol. 73(C).
    15. Qingyu Chen & Yan Hu & Xueqing Peng & Qianqian Xie & Qiao Jin & Aidan Gilson & Maxwell B. Singer & Xuguang Ai & Po-Ting Lai & Zhizheng Wang & Vipina K. Keloth & Kalpana Raja & Jimin Huang & Huan He & , 2025. "Benchmarking large language models for biomedical natural language processing applications and recommendations," Nature Communications, Nature, vol. 16(1), pages 1-16, December.
    16. Zhenjia Chen & Zhenyuan Lin & Ji Yang & Cong Chen & Di Liu & Liuting Shan & Yuanyuan Hu & Tailiang Guo & Huipeng Chen, 2024. "Cross-layer transmission realized by light-emitting memristor for constructing ultra-deep neural network with transfer learning ability," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    17. Wei Wei & Jie Sun & Wei Miao & Tong Chen & Hanchu Sun & Shuyuan Lin & Chao Gu, 2024. "Using the Extended Unified Theory of Acceptance and Use of Technology to explore how to increase users’ intention to take a robotaxi," Palgrave Communications, Palgrave Macmillan, vol. 11(1), pages 1-14, December.
    18. Yujin Oh & Sangjoon Park & Hwa Kyung Byun & Yeona Cho & Ik Jae Lee & Jin Sung Kim & Jong Chul Ye, 2024. "LLM-driven multimodal target volume contouring in radiation oncology," Nature Communications, Nature, vol. 15(1), pages 1-14, December.
    19. Luiz Philipi Calegari & Guilherme Luz Tortorella & Diego Castro Fettermann, 2023. "Getting Connected to M-Health Technologies through a Meta-Analysis," IJERPH, MDPI, vol. 20(5), pages 1-33, February.
    20. Chen Gao & Xiaochong Lan & Nian Li & Yuan Yuan & Jingtao Ding & Zhilun Zhou & Fengli Xu & Yong Li, 2024. "Large language models empowered agent-based modeling and simulation: a survey and perspectives," Palgrave Communications, Palgrave Macmillan, vol. 11(1), pages 1-24, December.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:teinso:v:79:y:2024:i:c:s0160791x24002744. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: https://www.journals.elsevier.com/technology-in-society.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.