Authors
Listed:
- Xia, Hui
- Yang, Yuqing
- Duan, Jialing
Abstract
In an era in which millions seek mental health support, many paradoxically turn to nonsentient algorithms for their most intimate confessions. Why do some people feel safer disclosing their deepest fears to a machine than to a fellow human? Challenging the traditional axiom that physical proximity fosters psychological intimacy, this research employs privacy calculus theory to investigate how psychotherapist type (offline human, online human, and artificial intelligence, hereafter AI) influences privacy disclosure intention. Across five vignette-based experiments with Chinese online participants (Ntotal = 1,461), we demonstrate that participants report significantly higher disclosure intentions toward AI psychotherapists than toward both offline and online human psychotherapists. This effect is mediated by reduced fear of negative evaluation, is particularly salient among individuals with low authenticity (both state and trait), and generalizes to a male psychotherapist. We identify a critical boundary condition: proactively disclosing a privacy security policy attenuates the higher disclosure intentions toward the AI psychotherapist, revealing a nuanced trade-off between mitigating social judgment risk and activating data privacy risk. Our findings suggest that AI can serve as a valuable supplemental tool to lower disclosure barriers for stigma-sensitive populations, but caution that standard transparency practices may paradoxically suppress candor in this unique context. We extend privacy calculus theory by identifying fear of negative evaluation as a central social-evaluative cost and by distinguishing social from technical dimensions of privacy risk. This research offers guidance for designing ethically responsible AI psychotherapeutic tools that lower disclosure barriers for stigma-sensitive users while maintaining transparency and accountability.
Suggested Citation
Xia, Hui & Yang, Yuqing & Duan, Jialing, 2026.
"Unveiling the digital confidant: How artificial intelligence psychotherapists surpass human counterparts in enhancing privacy disclosure intention,"
Technology in Society, Elsevier, vol. 86(C).
Handle:
RePEc:eee:teinso:v:86:y:2026:i:c:s0160791x26000527
DOI: 10.1016/j.techsoc.2026.103263