Authors
Listed:
- Cheng, Qiang
- Dai, Yuting
- Liu, Xin
- Peng, Shuang
Abstract
The prevalence of AI hallucinations in generative artificial intelligence poses a serious challenge to human-AI collaboration, which relies heavily on trustworthy interactions. Hallucinated output can lead to flawed decision-making and a crisis of trust. While existing research has highlighted the associated risks, there remains a lack of systematic empirical investigation and theoretical explanation of how hallucinations influence human-AI collaborative behavior through psychological mechanisms, and of the nuanced role of task-technology fit (TTF) in this process. To address this gap, this study develops an integrated theoretical model incorporating human-machine trust and TTF, aiming to uncover the underlying pathways and boundary conditions through which AI hallucinations affect collaboration. Drawing on a survey of 310 users of AI-augmented work tools, analyzed with partial least squares structural equation modeling (PLS-SEM) and artificial neural network (ANN) techniques, the study finds that AI hallucinations exert a significant negative effect on human-AI collaboration. Specifically, they impair collaborative outcomes indirectly by eroding users' cognitive and emotional trust in AI systems. Moreover, task-technology fit plays a bidirectional moderating role in this relationship. This study provides systematic empirical evidence quantifying the direct effect of AI hallucinations on human-AI collaboration. Furthermore, it advances theoretical understanding by identifying human-machine trust as a core mediating mechanism, thereby challenging the simplistic "hallucination → behavior" direct-path assumption. It also refines the conventional view of task-technology fit as a uniformly positive construct by uncovering its multifaceted roles in technology-related risk scenarios. These findings provide empirical grounding and practical insights for designing more robust and trustworthy human-AI collaborative systems, and offer theoretical guidance for formulating effective hallucination risk management strategies and trust-calibration mechanisms in high-stakes application environments.
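The mediation and moderation structure summarized above can be illustrated with a short analysis sketch. The Python snippet below is hypothetical: the variable names (halluc, ttf, trust, collab), the simulated data, and the OLS-plus-MLP stand-ins are assumptions of this illustration, not the authors' PLS-SEM/ANN procedure or their survey data.

# Illustrative sketch of the model in the abstract: hallucination -> trust ->
# collaboration, with task-technology fit (TTF) as a moderator.
# All data are simulated; variable names and coefficients are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 310  # matches the reported sample size; the records themselves are simulated
halluc = rng.normal(size=n)          # perceived AI hallucination severity
ttf = rng.normal(size=n)             # task-technology fit
trust = -0.5 * halluc + 0.3 * ttf + rng.normal(scale=0.5, size=n)
collab = 0.6 * trust - 0.2 * halluc * ttf + rng.normal(scale=0.5, size=n)
df = pd.DataFrame({"halluc": halluc, "ttf": ttf, "trust": trust, "collab": collab})

# Path a: hallucination -> trust, with a TTF interaction term.
m_trust = smf.ols("trust ~ halluc * ttf", data=df).fit()
# Paths b and c': trust -> collaboration, controlling for the moderated
# direct effect of hallucination.
m_collab = smf.ols("collab ~ trust + halluc * ttf", data=df).fit()
print(m_trust.params, m_collab.params, sep="\n\n")

# A small neural network as a stand-in for the ANN stage, which the study
# uses alongside PLS-SEM to assess predictor importance.
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
ann.fit(df[["halluc", "ttf", "trust"]], df["collab"])
print("ANN R^2:", ann.score(df[["halluc", "ttf", "trust"]], df["collab"]))

In the study itself the structural paths are estimated with PLS-SEM and the ANN ranks predictors; the OLS regressions and MLP here merely mirror that two-stage design.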
Suggested Citation
Cheng, Qiang & Dai, Yuting & Liu, Xin & Peng, Shuang, 2026.
"The trust crisis in artificial intelligence: AI hallucinations and human-AI collaboration,"
Technology in Society, Elsevier, vol. 86(C).
Handle:
RePEc:eee:teinso:v:86:y:2026:i:c:s0160791x26000758
DOI: 10.1016/j.techsoc.2026.103286
Download full text from publisher
As access to this document is restricted, you may want to search for a different version of it.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:eee:teinso:v:86:y:2026:i:c:s0160791x26000758. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Catherine Liu (email available below). General contact details of provider: https://www.journals.elsevier.com/technology-in-society .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.