Printed from https://ideas.repec.org/a/nat/natcom/v16y2025i1d10.1038_s41467-025-62060-x.html

Large-vocabulary forensic pathological analyses via prototypical cross-modal contrastive learning

Authors

Listed:
  • Chen Shen

    (Xi’an Jiaotong University)

  • Chunfeng Lian

    (Xi’an Jiaotong University
    Pazhou Lab (Huangpu))

  • Wanqing Zhang

    (Xi’an Jiaotong University)

  • Fan Wang

    (Xi’an Jiaotong University)

  • Jianhua Zhang

    (Academy of Forensic Science)

  • Shuanliang Fan

    (Xi’an Jiaotong University)

  • Xin Wei

    (Xi’an Jiaotong University)

  • Gongji Wang

    (Xi’an Jiaotong University)

  • Kehan Li

    (Xi’an Jiaotong University)

  • Hongshu Mu

    (Xian’yang Public Security Bureau)

  • Hao Wu

    (Xi’an Jiaotong University)

  • Xinggong Liang

    (Xi’an Jiaotong University)

  • Jianhua Ma

    (Pazhou Lab (Huangpu)
    Xi’an Jiaotong University)

  • Zhenyuan Wang

    (Xi’an Jiaotong University)

Abstract

Forensic pathology plays a vital role in determining the cause and manner of death through macroscopic and microscopic post-mortem examinations. However, the field faces challenges such as variability in outcomes, labor-intensive processes, and a shortage of skilled professionals. This paper introduces SongCi, a visual-language model tailored for forensic pathology. Leveraging advanced prototypical cross-modal self-supervised contrastive learning, SongCi improves the accuracy, efficiency, and generalizability of forensic analyses. Pre-trained and validated on a large multi-center dataset comprising over 16 million high-resolution image patches, 2,228 vision-language pairs from post-mortem whole slide images, gross key findings, and 471 unique diagnostic outcomes, SongCi demonstrates superior performance over existing multi-modal models and computational pathology foundation models in forensic tasks. It matches experienced forensic pathologists’ capabilities, significantly outperforms less experienced practitioners, and offers robust multi-modal explainability.
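The abstract does not spell out the training objective, but prototypical cross-modal contrastive pre-training builds on the standard symmetric InfoNCE loss between paired image and text embeddings. A minimal NumPy sketch of that loss is shown below; the function names, shapes, and the idea of one pooled prototype embedding per whole-slide image are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Scale rows of x to unit L2 norm."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def cross_modal_contrastive_loss(img_protos, txt_embs, temperature=0.07):
    """Symmetric InfoNCE loss over paired image-prototype / text embeddings.

    img_protos: (N, D) pooled prototype embedding per whole-slide image
    txt_embs:   (N, D) embedding of the matching report text
    Matched pairs sit on the diagonal of the similarity matrix and are
    pulled together; all off-diagonal (mismatched) pairs are pushed apart.
    """
    v = l2_normalize(img_protos)
    t = l2_normalize(txt_embs)
    logits = v @ t.T / temperature            # (N, N) scaled cosine similarities
    labels = np.arange(len(v))

    def ce(lg):                               # row-wise cross-entropy vs. the diagonal
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the image->text and text->image directions
    return 0.5 * (ce(logits) + ce(logits.T))
```

When the two modalities are perfectly aligned (identical embeddings), the diagonal dominates and the loss approaches zero; mismatched pairs drive it up, which is what forces the image prototypes and report text into a shared embedding space.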

Suggested Citation

  • Chen Shen & Chunfeng Lian & Wanqing Zhang & Fan Wang & Jianhua Zhang & Shuanliang Fan & Xin Wei & Gongji Wang & Kehan Li & Hongshu Mu & Hao Wu & Xinggong Liang & Jianhua Ma & Zhenyuan Wang, 2025. "Large-vocabulary forensic pathological analyses via prototypical cross-modal contrastive learning," Nature Communications, Nature, vol. 16(1), pages 1-20, December.
  • Handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-62060-x
    DOI: 10.1038/s41467-025-62060-x

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41467-025-62060-x
    File Function: Abstract
    Download Restriction: no

    File URL: https://libkey.io/10.1038/s41467-025-62060-x?utm_source=ideas
LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Jiaxin Bai & Ning Li & Hua Ye & Xu Li & Li Chen & Junbo Hu & Baochuan Pang & Xiaodong Chen & Gong Rao & Qinglei Hu & Shijie Liu & Si Sun & Cheng Li & Xiaohua Lv & Shaoqun Zeng & Jing Cai & Shenghua Ch, 2025. "AI-assisted cervical cytology precancerous screening for high-risk population in resource-limited regions using a compact microscope," Nature Communications, Nature, vol. 16(1), pages 1-13, December.
    2. Peng Xue & Le Dang & Ling-Hua Kong & Hong-Ping Tang & Hai-Miao Xu & Hai-Yan Weng & Zhe Wang & Rong-Gan Wei & Lian Xu & Hong-Xia Li & Hai-Yan Niu & Ming-Juan Wang & Zi-Chen Ye & Zhi-Fang Li & Wen Chen , 2025. "Deep learning enabled liquid-based cytology model for cervical precancer and cancer detection," Nature Communications, Nature, vol. 16(1), pages 1-10, December.
    3. Pengzhi Zhang & Weiqing Chen & Tu N. Tran & Minghao Zhou & Kaylee N. Carter & Ibrahem Kandel & Shengyu Li & Xen Ping Hoi & Yuxing Sun & Li Lai & Keith Youker & Qianqian Song & Yu Yang & Fotis Nikolos , 2025. "Thor: a platform for cell-level investigation of spatial transcriptomics and histology," Nature Communications, Nature, vol. 16(1), pages 1-22, December.
    4. Ertunc Erdil & Anton S. Becker & Moritz Schwyzer & Borja Martinez-Tellez & Jonatan R. Ruiz & Thomas Sartoretti & H. Alberto Vargas & A. Irene Burger & Alin Chirindel & Damian Wild & Nicola Zamboni & B, 2024. "Predicting standardized uptake value of brown adipose tissue from CT scans using convolutional neural networks," Nature Communications, Nature, vol. 15(1), pages 1-14, December.
    5. Junhan Zhao & Shih-Yen Lin & Raphaël Attias & Liza Mathews & Christian Engel & Guillaume Larghero & Dmytro Vremenko & Ting-Wan Kao & Tsung-Hua Lee & Yu-Hsuan Wang & Cheng Che Tsai & Eliana Marostica &, 2025. "Uncertainty-aware ensemble of foundation models differentiates glioblastoma from its mimics," Nature Communications, Nature, vol. 16(1), pages 1-16, December.
    6. Zhaochang Yang & Ting Wei & Ying Liang & Xin Yuan & RuiTian Gao & Yujia Xia & Jie Zhou & Yue Zhang & Zhangsheng Yu, 2025. "A foundation model for generalizable cancer diagnosis and survival prediction from histopathological images," Nature Communications, Nature, vol. 16(1), pages 1-16, December.
    7. Ruixue Zhang & Huate Zhu & Qinglin Chang & Qirong Mao, 2025. "A Comprehensive Review of Digital Twins Technology in Agriculture," Agriculture, MDPI, vol. 15(9), pages 1-25, April.

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-62060-x. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.